
Which commodity technologies offer useful neuroimaging data?

Do any commodity technologies exist that offer useful neuroimaging data? By commodity I mean lowish cost and not requiring specialist technicians to run. By useful I mean able to reliably provide some signal at different times from the same subject, even if it is of a limited nature. If so, what are the capabilities of the technology?

For instance, do the cheap EEG machines provide useful data or not?

Are any such technologies likely to become available in the foreseeable future?


The question is a little vague, but here is a partial answer.

There are a number of EEG systems available that are low-cost. Examples include Emotiv's EPOC System and the OpenBCI project. The former has been available in various models for some years now and it has been established that it records real EEG signals, as opposed to just recording signals generated in the muscles and other artifacts.

The first major validation document is a white paper by Ekanayake, which has since been updated (here). As I understand it, these documents are neither peer reviewed nor conference presentations, but the work is quite good.

For the collection of ERP (event-related potential) data, there are two substantial peer-reviewed papers by one research group: Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children by Badcock et al. (2015) and Validation of the Emotiv EPOC EEG gaming system for measuring research quality auditory ERPs, also by Badcock et al. (2013). While these papers describe ERP experiments, their results suggest that other collection styles, such as continuous collection, are likely to be OK as well.

Additionally, Liu et al. (2012), Implementation of SSVEP Based BCI with Emotiv EPOC, is suggestive but appears to be an unreviewed conference paper. As a counterpoint, there is Performance of the Emotiv Epoc headset for P300-based applications, by Duvinage et al. (2013). Searching Google Scholar for these topics shows a trend that more recent papers are generally more positive, but I haven't done the statistics to show that this is more than a quick impression.

As an aside, our laboratory has been using the Emotiv system for approximately a year and a half and collecting data for about six months, and for continuous (non-ERP) EEG collection we are replicating the expected EEG phenomena. Hopefully we'll be added to the peer-reviewed literature above soon.


AMA Journal of Ethics

Psychiatric Diagnosis and Brain Imaging

The biological revolution in psychiatry, which started in the 1960s, has so thoroughly transformed the field that the phrase “biological psychiatry” now seems redundant. A huge literature exists on the biological correlates of psychiatric illness, including thousands of published research studies using functional neuroimaging methods such as SPECT, PET, and fMRI. In addition, most psychiatric treatment is biological in that it directly affects the brain through medication, stimulation, or surgery. Even “talking therapies” are now understood to change the brain in ways that have been visualized by neuroimaging [1].

Diagnoses in psychiatry, however, are based entirely on behavioral, not biological, criteria [2]. Depression is diagnosed by asking patients how they feel and whether their sleeping, eating, and other behaviors have changed. Attention deficit hyperactivity disorder (ADHD) is diagnosed by asking the patient, family members, and others about the patient’s tendency to get distracted, act impulsively, and so on. For these and all other psychiatric illnesses described by the Diagnostic and Statistical Manual of the American Psychiatric Association, findings from brain imaging do not appear among the diagnostic criteria. Aside from its use to rule out potential physical causes of a patient’s condition, for example a brain tumor, neuroimaging is not used in the process of psychiatric diagnosis.

In this article we review the current status of brain imaging for psychiatric diagnosis. Among the questions to be addressed are: why has diagnostic neuroimaging not yet found a place in psychiatric practice? What are its near-term and longer-term prospects? What obstacles block the use of such methods? The answers to these questions involve the nature of imaging studies and of psychiatric diagnosis.

Sensitivity, specificity and standardization in psychiatric brain imaging. The vast majority of psychiatric neuroimaging studies aggregate data from groups of subjects for analysis, whereas diagnosis must be made for individuals, not groups. Structural and functional studies reveal a high degree of variability within groups of healthy and ill subjects, often with considerable overlap between the distributions of the two groups [3]. In the language of diagnostic tests, imaging studies are generally not highly sensitive to the difference between illness and health.

Another current limitation concerns the specificity of candidate diagnostic markers from imaging. Most psychiatric imaging studies involve subjects from only two categories—patients from a single diagnostic category and people without any psychiatric diagnosis. The most that can be learned from such a study is how brain activation in those with a particular disorder differs from brain activation in those without a disorder. The dilemma faced by a diagnosing clinician, on the other hand, is rarely “Does this person have disorder X or is she healthy?” Rather, it is typically “Does this person have disorder X, Y, or Z?” The pattern that distinguishes people with disorder X from healthy people may not be unique to X but shared with a whole alphabet of other disorders.

Indeed, there is considerable similarity in the abnormalities noted in brain activation across different diagnoses. A meta-analysis of neuroimaging studies of anxiety disorders reported common areas of activation (amygdala, insula) across posttraumatic stress disorder, social phobia, and specific phobia—suggesting that neuroimaging has yet to reveal patterns of neural activity that are unique to specific anxiety disorders [4]. Abnormalities of amygdala activation also have been reported consistently in neuroimaging studies of depression [5], bipolar disorder [6], schizophrenia (a disorder primarily of thought rather than of mood) [7], and psychopathy (which shares features with the DSM diagnosis of antisocial personality disorder) [8].

More sophisticated methods of image analysis may hold promise for discerning the underlying differences among the many disorders that feature similar regional abnormalities, including the “usual suspects,” limbic hyperactivity and prefrontal hypoactivity. In addition, new multivariate statistical approaches to image analysis make it possible to discover spatial and temporal patterns that correspond to performance of specific tasks and specific diagnoses [9]. These methods have only begun to be applied to clinical disorders but show promise for increasing the specificity of brain imaging markers for psychiatric illness [10, 11].

Standardization is relevant in light of the many ways in which protocols differ from study to study, particularly among functional imaging studies. The patterns of activation obtained in studies of psychiatric patients depend strongly on the tasks performed by the subjects and the statistical comparisons examined by the researchers afterwards. Although the results of psychiatric imaging research are often summarized by stating that certain regions are under- or overactive or more or less functionally connected, such summaries are fundamentally incomplete unless they include information about what task evoked the activation in question: were the patients resting, processing emotional stimuli, trying not to process emotional stimuli, or engaged in effortful cognition? The fact that any imaging study’s conclusions are relative to the tasks performed adds further complexity to the problem of seeking consistently discriminating patterns of activation for healthy and ill subjects.

Reliability and validity of current diagnostic categories. Other reasons why progress toward diagnostic imaging in psychiatry has been slow stem from the nature of the diagnostic categories themselves. The categories of the DSM are intended to be both reliable and valid. That is, they are intended to be usable in consistent ways by any appropriately trained clinician, so that different diagnosticians arrive at the same diagnosis for each patient (reliability) and to correspond to the true categories of psychiatric illness found in the population, that is, to reflect the underlying psychological and biological commonalities and differences among different disorders (validity). Good, or at least improved, reliability was one of the signal achievements of the DSM-III, and was carried over to DSM-IV. Unfortunately, validity continues to be more difficult to achieve.

As an illustration of how far from being necessarily valid our current diagnostic categories are, consider the criteria for one of the more common serious psychiatric conditions, major depressive disorder. According to the DSM-IV-TR, patients must report either depressed mood or anhedonia and at least four of eight additional symptoms. It is therefore possible for two patients who do not share a single symptom to both receive a diagnosis of major depressive disorder. There are also commonalities of symptoms between categories. For example, impulsivity, emotional lability, and difficulty with concentration each occurs in more than one disorder. To the extent that our psychiatric categories do not correspond to “natural kinds,” we should probably not expect correspondence with brain physiology as revealed by imaging. Taken together, the fact that (a) different exemplars of a category can share no symptoms and (b) exemplars of two different categories may share common symptoms raises questions about the validity of the current diagnostic categories.

The Present and Future of Diagnostic Brain Imaging in Psychiatry

A defiant minority now use brain imaging for psychiatric diagnosis. Despite the challenges just reviewed, a small number of psychiatrists offer diagnostic neuroimaging to patients in their clinics. The imaging method used is single photon emission computed tomography (SPECT), which measures regional cerebral blood flow by detecting a gamma-emitting tracer in the blood. The best known of these clinics are the four Amen Clinics, founded by the psychiatrist and self-help author Daniel Amen. Others include the Clements Clinic, Cerescan, Pathfinder Brain SPECT, and Dr. Spect Scan. The use of brain imaging appears to be a selling point for these clinics: their websites generally feature brain images prominently, and the names of the last three leave no doubt about the emphasis they place on imaging.

These clinics promise to diagnose and treat a wide range of psychiatric disorders in children and adults based on patient history and examination along with the results of SPECT scans. The Amen Clinics use a system of diagnosis that does not correspond to the standard diagnostic categories defined by the American Psychiatric Association’s Diagnostic and Statistical Manual. For example, anxiety and depression are combined into a single superordinate category and then divided into 7 subtypes with names such as “temporal lobe anxiety and depression” and “overfocused anxiety and depression” [12]. Attention deficit hyperactivity disorder is also reconceptualized as having 6 subtypes, with names such as “limbic ADD” and “ring of fire ADD” [13].

The Amen Clinics website states that they have performed almost 50,000 scans [14], a huge number that, combined with associated clinical data including outcomes, could provide important evidence on the value of SPECT scanning in diagnosis and the efficacy of Amen’s approach to psychiatric care. Unfortunately, no such studies have been reported. The lack of empirical validation has led many to condemn the use of diagnostic SPECT as premature and unproven [15-18].

Why do people pay for an unproven, even dubious, diagnostic test? Brain imaging has a high-tech allure that suggests advanced medical care. People may assume that the treatments available at these clinics, as well as the diagnostic methods, are cutting-edge. In addition, there is a strong allure to the idea that imaging can give visual proof that psychological problems have a physical cause. The Amen Clinics cite several ways in which patients and their families may find this evidence helpful, including the reduction of stigma and guilt [14]. Of course, these considerations do not address the question of whether diagnosis is improved by the use of SPECT scans.

Diagnostic neuroimaging: prospects for the near-term and longer-term future. Few believe that brain imaging will play a role in psychiatric diagnosis any time soon. The forthcoming DSM-5, expected in May of 2013, will include reference to a variety of biomarkers for psychiatric disease, including those visible by brain imaging, but their role is expected to be in the validation of the categories themselves rather than in the criteria for diagnosing an individual patient [19].

In the long term, there is reason for optimism concerning the contribution of brain imaging to psychiatric diagnosis. This may happen first for differential diagnosis, particularly for diagnostic distinctions that are difficult to make on the basis of behavioral observations alone. In such cases potentially distinctive patterns of brain activation identified through imaging will be especially useful. For example, Brotman et al. have studied the patterns of brain activation evoked in the performing of various tasks with pictures of faces and found differences between the neural responses of children diagnosed with severe mood dysregulation and those with ADHD or bipolar disorder [20]. They and others [21] suggest that this finding could provide the basis for the future development of diagnostic imaging.

Diagnostic imaging in psychiatry could emerge from basic research on psychopathology, as in the example just cited. Alternatively, the relatively atheoretical multivariate statistical approach mentioned earlier could provide the first candidate neural signatures of psychiatric disorders. By whatever method the candidate neural signatures are identified, large-scale validation trials will be needed before they can enter routine clinical use. This process promises to be lengthy and expensive and could easily fill the interval between two or more editions of the DSM.

Coevolution of diagnostic methods and diagnostic categories. Whether the path to imaging-based diagnosis involves translation of newly discovered mechanisms of pathophysiology, brute-force number crunching, or both, we cannot assume that it will preserve current nosology. Indeed, given the overlap of imaging findings between diagnostic categories and the heterogeneity within categories mentioned earlier, it seems likely the widespread incorporation of imaging into diagnostic criteria will force our nosology to change. If the mismatch between imaging markers and diagnostic categories is not drastic, the DSM categories may change incrementally, for example by revisions of individual diagnostic criteria for specific disorders. However, if brain imaging reveals a radically different pattern of “natural kinds,” and if these kinds are proven to have clinical utility (e.g., enabling better treatment decisions), then imaging may prompt a radical reconceptualization of psychiatric diagnosis and entirely new diagnostic categories may emerge.

There are, however, strong arguments for conservatism. The current system of diagnostic categories is valuable in part simply because we have used it for so long, and therefore much of our clinical knowledge is defined in relation to this system. DSM diagnoses have so far changed in a gradual and piecemeal manner through multiple editions of the manual, with most disorders retaining their defining criteria and only a minority being subdivided, merged, added, or eliminated in light of new research findings. In keeping with this approach, the future influence of brain imaging on psychiatric diagnosis is likely to be more evolutionary than revolutionary.

An attempt to reconcile the need for consistency with the promise of more neurobiologically based classifications can be found in the Research Domain Criteria (RDoC) for psychiatry research proposed by the U.S. National Institute of Mental Health. This is “a long-term framework for research… [with] classifications based on genomics and neuroscience as well as clinical observation, with the goal of improving treatment outcomes” [22]. The RDoC system, still under construction at the time of writing [23], is meant to be used, in parallel with DSM categories, for research that may ultimately lead to more valid diagnostic categories, which might also be more consistent with the use of imaging as a diagnostic test.

Conclusions

Brain imaging will probably enter clinical use in other roles before it serves as a diagnostic laboratory test. For example, imaging has already guided clinical researchers in the development of new therapies [24] and in the customization of therapy for individual patients [25]; it shows promise as a predictor of vulnerability [26] and treatment response [27], and it has even been used as a therapy itself [28].

While some physicians insist that they are able to use brain imaging now for psychiatric diagnosis, there is currently no reliable evidence supporting this view. On the contrary, there are many reasons to doubt that imaging will play a role in psychiatric diagnosis in the near future. As argued here, much psychiatric imaging research remains to be done to achieve sensitivity, specificity, and standardization of imaging protocols.

In addition, the nature of current psychiatric diagnosis may not even correspond to the categories of brain dysfunction that imaging reveals. Finally, the practical value of maintaining continuity in diagnostic classifications requires a cautious and incremental approach to redrawing diagnostic classifications on the basis of imaging research.



Using fMRI Data to Predict Autism Diagnoses with Various Machine Learning Models and Cross-Validation Methods

Contributors: Emily Chen, Andréanne Proulx, Mikkel Schöttner

This repository contains contributions from the ABIDE team made during The BrainHack School 2020. This project uses resting state fMRI data from the ABIDE dataset to train machine learning models and is licensed under a Creative Commons Zero v1.0 Universal license.

Please feel free to reach out to any of us if you have any questions or comments!

Hello! I am an (incoming) fourth year undergraduate student at McGill University studying computer science and urban health geography with a minor in cognitive science. I am a research assistant with Isabelle Arseneau-Bruneau in the Zatorre Lab and have been learning a lot about neuroscience research while (hopefully) lending some of my technical skills to Isabelle's PhD work exploring the effect of musical training on FFR.

When joining the BrainHack School, I was looking forward in particular to working on a project at the intersection of neuroscience and CS because my previous classes rarely focused on the application of the abstract concepts we learned. Through the BrainHack School, I had the opportunity to learn from and collaborate with talented neuroscience researchers and participants, write a machine learning script using Python and neuroscience data, and gain hands-on practice in reproducibility efforts and good project management.

You can find me on GitHub at emilyemchen and on Twitter at @emilyemchen.

Hi! I am an incoming master's student in Psychology at the University of Montreal. My background is in cognitive neuroscience and my career objective is to work on research projects aiming to discover new ways of characterizing the brain in its pathological states. At the moment, I am working with Sébastien Jacquemont and Pierre Bellec, and the focus of our research is on investigating the effect of genetic mutations on functional and structural brain phenotypes. More precisely, I have been working with resting-state functional connectivity measures in carrier populations with developmental disorders.

By joining the Brainhack School, I hoped to strengthen my computational skills and my knowledge in the applications of machine learning to the field of neuroimaging. Not only did I get to work on a machine learning problem specific to my field, but I also got to learn about useful tools such as GitHub. More importantly, I also met a community of brilliant/aware researchers and got to collaborate with other students, even across the world!

Hey! I am a master's student in Cognitive Neuroscience at the University of Marburg. My background is in psychology. I am especially interested in the social brain and how humans infer thoughts and emotions from the actions of others. Right now, I am doing my master's thesis on how the capacities of Theory of Mind and empathy interact in the brain, using fMRI. I am also very interested in research methods and how we can use statistics and data science techniques on neuroimaging data.

When getting into Brainhack School, I hoped to get a better understanding of machine learning, and how it is useful for analyzing fMRI data. I learned a lot about collaborative project organization, how to make pretty interactive plots and how to write better Python scripts. Brainhack School was really an awesome experience getting to know a lot of talented and motivated people and being able to collaborate with them across the Atlantic!

You can find me on GitHub at mschoettner and on Twitter @mikkelschoett.

We expected to use the following tools, technologies, and libraries for this project:

  • Git
  • GitHub
  • Visual Studio Code
  • Docker
  • Jupyter Notebook
  • HPC/Compute Canada
  • Python libraries: numpy , pandas , matplotlib , scikit-learn , nilearn , seaborn , pyplot
  • venv

The goal of this project was to compare different machine learning models and cross-validation methods and see how well each is able to predict autism from resting state fMRI data. We used the preprocessed open source ABIDE database, which contains structural, functional, and phenotypic data of 539 individuals with autism and 573 typical controls from 20 different research sites.

By the end of The BrainHack School, we aimed to have the following:

  • README.md file
  • requirements.txt file that outlines the packages needed to run the script
  • Jupyter notebooks with code and explanations
  • GitHub repository documenting project workflow
  • Presentation showing project results

Several studies have found an altered connectivity profile in the default mode network of subjects with Autism Spectrum Disorder, or ASD (Anderson, 2014). Based on these findings, resting state fMRI data have been used to predict autism by training a classifier on the multi-site ABIDE data set (Nielsen et al., 2013). This project's scientific aim is to replicate these findings and extend the literature by comparing the effects of differing cross-validation methods on various classification algorithms.
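The basic setup, training a classifier on vectorized connectivity features to predict diagnosis, can be sketched as follows. This is a minimal illustration, not the project's actual code: the features are simulated random data standing in for real ABIDE connectivity matrices, and the subject counts are arbitrary.

```python
# Sketch: predict a binary diagnosis label from connectivity features.
# Features are SIMULATED stand-ins for the real ABIDE data
# (64 BASC ROIs -> 64*63/2 = 2016 unique connectivity edges).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 64 * 63 // 2
X = rng.normal(size=(n_subjects, n_edges))   # fake connectivity features
y = rng.integers(0, 2, size=n_subjects)      # 0 = control, 1 = ASD

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

With random labels, accuracy hovers around chance; the interesting question for the real data is how far above chance the different models and cross-validation schemes climb.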

The three of us joined forces when we realized that we shared many similar learning goals and interests. With such similar project ideas, we figured we would accomplish more working together, with each of us taking on a different cross-validation method to train various machine learning models.

We all shared a common interest in making our project as reproducible as possible. This goal involved creating a transparent, collaborative workflow that could be tracked at any time by anyone. To achieve this objective, we utilized the many features that GitHub has to offer, all of which you can see in action at our shared repository here.

  • Branches: used to work simultaneously on our own parts and then push changes to the master branch
  • Pull requests: created when making changes to the master branch
  • Issues: used to communicate with each other and keep track of tasks
  • Tags: used to keep issues organized
  • Milestones: used to keep our main goals in mind (Week 4 presentation and final deliverable)
  • Projects: used to track various aspects of our work

Standardized Data Preparation and Jupyter Notebooks

The data are processed in a standardized way using a Python script that prepares the data for the machine learning classifiers. Several Jupyter notebooks then implement different models and cross-validation techniques which are described in detail below.

Tools, Technologies, and Libraries Learned

Many of these contribute to open science practices!

  • Jupyter Notebook: Write code in a virtual environment
  • Visual Studio Code: Edit files such as README.md
  • Python libraries: numpy , pandas , nilearn , matplotlib , plotly , seaborn , sklearn (dimensionality reduction, test-train split, grid search, cross-validation methods, performance evaluation)
  • Git: Track file changes
  • GitHub: Organize team workflow and project
  • Machine learning: Apply ML concepts and tools to the neuroimaging field
  • venv : Make a requirements.txt file for a reproducible virtual environment

Deliverable 1: Jupyter notebooks (x3)

Leave-group-out cross-validation

This notebook contains code to run a linear support vector classification to predict autism from resting state data. It uses leave-group-out cross-validation with site as the group variable. The results give a good estimate of how stable the model is: prediction works above chance level for most sites, but for some, autism is predicted only at or even below chance level.
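A minimal sketch of this scheme, assuming simulated data and fake site labels in place of the real ABIDE features, looks like this:

```python
# Sketch: leave-group-out cross-validation with acquisition site as the
# group variable. Data and site labels are SIMULATED; the number of
# sites (5) is arbitrary, not the real ABIDE count.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
y = rng.integers(0, 2, size=120)
sites = np.repeat(np.arange(5), 24)   # 5 fake sites, 24 subjects each

# Each fold holds out every subject from one site, so the spread of the
# per-site scores shows how stable the model is across sites.
scores = cross_val_score(LinearSVC(max_iter=5000), X, y,
                         groups=sites, cv=LeaveOneGroupOut())
print(scores)
```

Because no subject from the held-out site is seen at training time, this is the strictest test of whether a model generalizes to data from a new scanner and protocol.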

K-fold and leave-one-out cross-validation

This notebook contains the code to run linear support vector classification, k-nearest neighbors, decision tree, and random forest algorithms on the ABIDE dataset. The models are trained and evaluated using k-fold and leave-one-out cross-validation methods. We obtain accuracy scores that represent how well each model predicts the labels of unseen data. Leave-one-out cross-validation gives more accurate predictions than k-fold cross-validation. The accuracy values range from 55.8% to 69.2%.
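The two schemes can be compared side by side with scikit-learn; this sketch uses simulated data and one of the notebook's models (k-nearest neighbors), so the particular scores are not meaningful:

```python
# Sketch: compare k-fold and leave-one-out cross-validation on the same
# model. Data are SIMULATED stand-ins for the ABIDE features.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = rng.integers(0, 2, size=100)

knn = KNeighborsClassifier(n_neighbors=5)

# k-fold: k train/test splits, each fold held out once.
kfold_acc = cross_val_score(
    knn, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)).mean()

# leave-one-out: n splits, each single subject held out once.
loo_acc = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()

print(f"10-fold: {kfold_acc:.3f}, leave-one-out: {loo_acc:.3f}")
```

Leave-one-out trains on nearly all the data each time, which tends to reduce bias at the cost of running n fits instead of k.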

Group k-folds cross-validation

This notebook implements a 10-fold iterator variant that splits the data into non-overlapping groups. It then runs four classification algorithms on the data: linear support vector machine ( LinearSVC ), k-nearest neighbors ( KNeighborsClassifier ), decision tree ( DecisionTreeClassifier ), and random forest ( RandomForestClassifier ). The most accurate classifier, LinearSVC , has an average accuracy score of 63.5%, while the least accurate, RandomForestClassifier , averages 52.6%, not far off from the averages of the other two classifiers. These scores are not particularly impressive, and one reason could be the data the models are trained on: each site differed in its processes, methods, and amount of data collected, and this heterogeneity may limit how well a model trained across sites generalizes. Optimizing the model parameters would be the logical next step.

Table 1: Group 10-fold cross validation average accuracy scores (percentages) per classifier algorithm

Algorithm Average Score Maximum Score Minimum Score
LinearSVC 63.5% 72.0% 52.3%
KNeighborsClassifier 55.2% 66.7% 48.7%
DecisionTreeClassifier 54.3% 61.6% 44.9%
RandomForestClassifier 52.6% 60.5% 39.7%
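The comparison summarized in Table 1 can be sketched as follows; this is a minimal illustration with simulated data and fake site labels, so the printed scores will not match the table:

```python
# Sketch: group k-fold cross-validation over the four classifiers
# compared in Table 1. Data and site labels are SIMULATED.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)
sites = np.repeat(np.arange(20), 10)   # 20 fake sites, as in ABIDE

classifiers = {
    "LinearSVC": LinearSVC(max_iter=5000),
    "KNeighborsClassifier": KNeighborsClassifier(),
    "DecisionTreeClassifier": DecisionTreeClassifier(random_state=0),
    "RandomForestClassifier": RandomForestClassifier(random_state=0),
}

# GroupKFold keeps all subjects from a site in the same fold, so the
# 10 folds contain non-overlapping groups of sites.
cv = GroupKFold(n_splits=10)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, groups=sites, cv=cv)
    print(f"{name}: mean {scores.mean():.3f}, "
          f"max {scores.max():.3f}, min {scores.min():.3f}")
```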

Deliverable 2: Data preparation script

The prepare_data.py script:

  • downloads the data
  • extracts the time series of 64 regions of interest defined by the BASC brain atlas
  • computes the correlations between time series for each participant
  • uses a principal component analysis for dimensionality reduction
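The core of this pipeline (time series to correlations to PCA) can be sketched without the download step; here the per-subject ROI time series are simulated, whereas the real script extracts them from fMRI using the 64-region BASC atlas:

```python
# Sketch of the preparation pipeline: per-subject ROI time series ->
# vectorized correlation matrix -> PCA. Time series are SIMULATED.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_rois = 50, 150, 64

features = []
for _ in range(n_subjects):
    ts = rng.normal(size=(n_timepoints, n_rois))  # one subject's ROI signals
    corr = np.corrcoef(ts.T)                      # 64 x 64 correlation matrix
    iu = np.triu_indices(n_rois, k=1)             # unique upper-triangle edges
    features.append(corr[iu])
X = np.asarray(features)                          # shape (50, 2016)

# Reduce the 2016-dimensional edge vectors to a handful of components.
X_reduced = PCA(n_components=20).fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```

The symmetric correlation matrix is redundant, which is why only the upper triangle (64 * 63 / 2 = 2016 edges) is kept as the feature vector.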

To fetch and prepare the data set, call the prepare_data.py script in either of the following ways:

./prepare_data.py data_dir output_dir

python prepare_data.py data_dir output_dir

where

  • data_dir is the directory where you want to save the data, or where it is already saved, and
  • output_dir is the directory where you want to store the outputs the script generates.

The notebooks also call the prepare_data function from the preparation script.

Deliverable 3: requirements.txt file

The requirements.txt file increases reproducibility by helping to ensure that the scripts run correctly on any machine. To make sure that all scripts work correctly, you can create a virtual environment using Python's built-in venv library. To do this, follow these steps:

  1. Clone repo and navigate to folder in a shell
  2. Create a virtual environment in a folder of your choice: python -m venv /path/to/folder
  3. Activate it (bash command, see here how to activate in different shells): source /path/to/folder/bin/activate
  4. Install all necessary requirements from requirements file: pip install -r requirements.txt
  5. Create kernel for jupyter notebooks: ipython kernel install --user --name=abide-ml
  6. Open a jupyter notebook: jupyter-notebook , then click the notebook you want to run
  7. Select different kernel by clicking Kernel -> Change Kernel -> abide-ml
  8. Run the code!

Deliverable 4: Data visualizations (Week 3)

Deliverable 5: Presentation (Week 4)

The presentation slides can be viewed here on Canva, which is the platform we used to create the slides. A video of our presentation can be viewed on this project page. We presented our work to The BrainHack School on June 5, 2020 using the RISE integration in a Jupyter notebook, which can be found here.

Deliverable 6: Overview of the project and results in the README.md file

This README.md file contains the content that will be shown on The BrainHack School website project page.

Conclusion and Acknowledgement

First and foremost, we would like to thank our Weeks 3 and 4 peer clinic mentor and co-organizer of the Brainhack School, Pierre Bellec. A special thank you as well to our Week 2 mentors Désirée Lussier-Lévesque, Alexa Pichet-Binette, and Sebastian Urchs.

Thank you also to The BrainHack School 2020 leaders and co-organizers Jean-Baptiste Poline, Tristan Glatard, and Benjamin de Leener, as well as the instructors and mentors Karim Jerbi, Elizabeth DuPre, Ross Markello, Peer Herholz, Samuel Guay, Valerie Hayot-Sasson, Greg Kiar, Jake Vogel, and Agâh Karakuzu.

Thanks to all of you, this school was an amazing learning experience!

Benchmarking functional connectome-based predictive models for resting-state fMRI (for classification estimator inspiration) https://hal.inria.fr/hal-01824205

Scientific articles:

Anderson, J. S., Patel, V. B., Preedy, V. R., & Martin, C. R. (2014). Cortical underconnectivity hypothesis in autism: evidence from functional connectivity MRI. Comprehensive Guide to Autism, 1457-1471.

Nielsen, J. A., Zielinski, B. A., Fletcher, P. T., Alexander, A. L., Lange, N., Bigler, E. D., … & Anderson, J. S. (2013). Multisite functional connectivity MRI classification of autism: ABIDE results. Frontiers in Human Neuroscience, 7, 599.




Neuroimaging Technological Advancements for Targeting in Functional Neurosurgery

Ablations and particularly deep brain stimulation (DBS) of a variety of CNS targets are established therapeutic tools for movement disorders. Accurate targeting of the intended structure is crucial for optimal clinical outcomes. However, most targets used in functional neurosurgery are sub-optimally visualized on routine MRI. This article reviews recent neuroimaging advancements for targeting in movement disorders.

Recent Findings

Dedicated MRI sequences can often visualize to some degree anatomical structures commonly targeted during DBS surgery, including at 1.5-T field strengths. Due to recent technological advancements, MR images using ultra-high magnetic field strengths and new acquisition parameters allow for markedly improved visualization of common movement disorder targets. In addition, novel neuroimaging techniques have enabled group-level analysis of DBS patients and delineation of areas associated with clinical benefits. These areas might diverge from the conventionally targeted nuclei and may instead correspond to white matter tracts or hubs of functional networks.

Summary

Neuroimaging advancements have enabled improved direct visualization-based targeting as well as optimization and adjustment of conventionally targeted structures.


How to Reduce Barriers to Data Sharing?

The Push for a More Open Data Model

A number of recent examples point to a general trend to make information, particularly governmental or administrative data, open to the public, within the limits of privacy protections. Last year, The Economist reported that Barack Obama “issued a presidential memorandum ordering the heads of federal agencies to make available as much information as possible […]”, and that “Providing access to data creates a culture of accountability” (“The Open Society,” 2010). The US government 27 and New York City 28 websites release a great amount of information and data to the public. Public transportation systems make their data available, and private developers use these data to produce transit-tracking applications; the European Union also funds research on this theme (see “The Open Data Open Society Report” 29 ). Closer to the concerns of the neuroimaging community, the British parliament released a report on the importance “of reviewing the underlying data behind research and how those data should be managed” 30 . Individual researchers' choices as well as institution-wide policies will be influenced by this societal trend toward more open data. The current rapid expansion of social networking sites is a good reflection of how quickly people can adopt new habits, and how society evolves with these profound technological changes.

Funding Agencies and Journals

It has become clear that cost reduction and maximizing the impact of funding in the community will also shape tool development for sharing data, as exemplified by recent requirements from major funding agencies (NIH, Wellcome Trust, UK Medical Research Council), and more generally their shift in commitment to initiatives that help the community rather than lab-specific projects. As early as 2003, the “Final NIH Statement on Sharing Research Data” 31 stated that the “NIH expects and supports the timely release and sharing of final research data from NIH-supported studies for use by other researchers,” and NIH grants above a certain funding level are also required “to include a plan for data sharing” in their proposals.

Journals will play a key role in this process by requiring that data be made available to reviewers or to article readers. The Journal of Cognitive Neuroscience was a pioneer in this context. This initiative established a neuroimaging data repository (the fMRI Data Center, or fMRIDC) that was an unprecedented success in the field, with many works published based on re-analyzed data obtained from the repository. Its success was, however, limited by the lack of standardized formats, paradigm descriptions, and analyses, as well as the limited availability of tools to query and download well-formatted data. The idea that data should be accessible remains in several high-ranked journals, with more and more supplementary material made available for reviewers and for the community [however, see The Journal of Neuroscience's recent decision not to accept supplementary material (Maunsell, 2010)]. In the future, both data and computational tools may be made available in some new form of data warehouse to help track data provenance (e.g., see the Provenance Interchange Working Group Charter 32 ) and enable reproducibility, notwithstanding the associated technical difficulties and costs. Proceedings of the National Academy of Sciences, for instance, now requires that data be made available to manuscript reviewers. This again points to the need for software tools able to capture and release data at different stages of their acquisition and analysis.

Increased Citations and Visibility by Releasing Data

Researchers can receive more citations if their data are re-used. Rather than counting the number of publications, h-indices are increasingly used as a metric of output (Ball, 2007). There are now several successful examples of this re-use, such as ADNI, which has yielded many publications since its launch. One specific requirement of ADNI's use agreement is that the ADNI consortium be listed on every related publication's author list. This is a very powerful means of gaining visibility, and ADNI has benefited from it (its funding was renewed with ADNI2), but this policy may not meet the standards of authorship of scientific publication (Rohlfing and Poline, 2011), and the ADNI requirements are generally seen as too demanding by the community 33 .
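The h-index mentioned above is simple to compute: it is the largest h such that an author has at least h papers with at least h citations each. A minimal illustration in Python (the citation counts are invented):

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # the i most-cited papers all have >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

Note that the metric rewards sustained citation of many outputs (including re-used datasets, once data papers are citable) rather than one highly cited paper.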

It has become apparent that papers on data would be a great way to credit researchers who share data. By being cited in the same way that a piece of software or a method is cited, data acquirers could get recognition within the current academic evaluation methods for individuals (e.g., number of citations, h-index). Additionally, the peer review process will help to ensure the quality of the data and accompanying metadata. This, however, requires that journals accept “data papers” and develop principles for how such papers should be structured and reviewed. It is also necessary that authors and journal editors consider data acquisition and sharing as an important aspect of research, on par with software or methods development. Recently, both Neuroinformatics (Kennedy et al., 2011) and BioMedCentral (Hrynaszkiewicz, 2010) have announced their intention to publish such articles. DataCite 34 gathers institutions from around the world and provides them with resources to address the “why” and “how” of data citation. By working with these organizations and participating in data publishing, the neuroimaging community can help ensure appropriate credit is given for shared data.

Provide Guidelines to Help Researchers with the Ethical and Legal Aspects of Data Sharing

There is a need for localized (specific to a country or funding body) guidelines on how to prepare ethics applications and anonymize data in order to share them freely, or as freely as possible. It is recommended that research protocols and informed consent documents submitted to ERBs/IRBs consider possible further re-use of the data: consent forms should state that while a subject's data will certainly be used to study the specific research question outlined in the form, limited data may also be shared (anonymously) with other researchers for different purposes, and that subjects should not participate if they are uncomfortable with such sharing. A recent survey of the UK general public found that while the large majority of respondents would be willing to take part in neuroimaging studies for the purpose of scientific or medical research, only a small number would be willing to undergo scans for other purposes, such as advertising research or insurance coverage (Wardlaw et al., 2011).

Illes and colleagues (Illes et al., 2010) have noted that many researchers feel distrust and confusion when dealing with IRBs, whose “original mission … to ensure the ethical conduct of neuro-research, may be acting as a barrier to science due to time delays, lack of expertise in protocol evaluation by these boards and inconsistent guidelines on the preparation of participant-intended documentation” (Kehagia et al., 2011). In such cases, a few individual researchers cannot single-handedly reform the approach used by their local ERB/IRB; funding agencies, institutions, and the broader scientific community need to work together on providing better information and even outreach materials. Kehagia and colleagues noted that researchers would welcome “the development and dissemination of best practices and standardized ethics review for minimally invasive neuroimaging protocols.” INCF plans to gather and make available such material.

The conditions under which data may be licensed vary across countries and often depend on how the data were acquired (Uhlir and Schröder, 2007). Creative Commons has done outstanding work in promoting licenses that are compatible with the broader open data movement and which in no way affect a subject's privacy rights. Some examples: all Public Library of Science (PLoS) content is published under a Creative Commons Attribution license, and the MNI's widely used brain template ICBM-152 is compatible with this license. Note that Creative Commons itself has only one data-oriented license, CC0 (“CCZero”), which is a dedication to the public domain, while the other licenses are intended for artistic work. For further reading on open data licensing, see the Open Definition 35 and Open Data Commons 36 , as well as Stodden (2009) for a review of copyright and licensing issues in a scientific context.

There is also an important interaction between the technical and ethical aspects of data sharing—what constitutes a data set that is safe to share? The degree of anonymization necessary (removing all metadata? just the name and date of birth of the subject? defacing volumetric scans?) might vary within country, region, and institution. The same concern applies to the way subjects will be informed about how their data might be used in the future. Providing clear guidelines and ready to use document templates will encourage and maximize data sharing. These guidelines and documents could be tagged with respect to their data sharing characteristics (“very open” to “very restricted”).
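As a sketch of how such tagged guidelines (“very open” to “very restricted”) could translate into tooling, the snippet below strips metadata fields according to a declared sharing level. The field names and tiers are purely hypothetical; real anonymization requirements vary by country, region, and institution:

```python
# Hypothetical sharing tiers: each level lists the fields that MAY be shared.
# Real policies differ per jurisdiction and institution.
SHARING_LEVELS = {
    "very_restricted": {"subject_id"},
    "restricted": {"subject_id", "age", "sex"},
    "very_open": {"subject_id", "age", "sex", "scan_date", "site"},
}

# Direct identifiers are removed regardless of the chosen level.
DIRECT_IDENTIFIERS = {"name", "date_of_birth", "address"}

def anonymize(metadata, level):
    """Return a copy of the metadata keeping only fields allowed at the
    requested sharing level; direct identifiers are always dropped."""
    allowed = SHARING_LEVELS[level] - DIRECT_IDENTIFIERS
    return {k: v for k, v in metadata.items() if k in allowed}

record = {"name": "J. Doe", "subject_id": "sub-01", "age": 34,
          "sex": "F", "date_of_birth": "1988-01-01", "scan_date": "2012-05-02"}
print(anonymize(record, "restricted"))
```

Encoding the policy as data rather than code is what makes the “tagging” of consent documents mechanically checkable.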

The Need to Share the Tools: The NeuroDebian Approach

Even after the legal and technical problems of data capture and sharing are resolved, there are further obstacles to address to make collaborative data analysis efficient. Typically, analysis pipelines for neuroimaging studies vary significantly across labs. They use different data formats, prefer different pre-processing schemes, require different analysis toolkits and favor different visualization techniques. Efficient collaboration in, for example, a multi-center study requires a software platform that can cope with this heterogeneity, allows for uniform deployment of all necessary research tools, and nevertheless remains easy to maintain. However, compatibility differences across software vendors and tedious installation and upgrade procedures often hinder efficiency.

Turning data sharing into efficient collaboration requires sharing of tools (Ince et al., 2012). Ideally, neuroimaging research would be based on a computing platform that can easily be shared as a whole. On one hand, this would significantly lower the barrier to exploring new tools and re-using existing analysis workflows developed by other groups. On the other hand, it would make sharing of software easier for the respective developers, as consolidation on a standard platform reduces demand for maintenance and support. Today, the NeuroDebian project 37 is the most comprehensive effort aimed at creating a common computing platform for neuroimaging research and providing all necessary software from data capture to analysis. NeuroDebian's strategy is to integrate all relevant software into the Debian GNU/Linux operating system, which offers some unique advantages in the context of neuroimaging research: it runs on virtually any hardware platform (including mobile devices), it offers the most comprehensive archive of readily usable and integrated software, it is developed as a collaborative effort by experts in their respective fields, and it is free to be used, modified, and re-distributed for any purpose. Integration into Debian allows for deploying any software through a uniform and convenient interface, promotes software interoperability by a proven policy, and facilitates software maintenance via (automated) quality assurance efforts covering all integrated software. By means of hardware virtualization, the advantages of this platform also benefit many users of commercial systems, such as Windows and Mac OS X (Hanke and Halchenko, 2011). For example, a NeuroDebian virtual appliance with a pre-configured XNAT neuroimaging data management platform 38 allows users on any system to start using XNAT within minutes.

The Role of International Coordination

The INCF 39 was established through the Global Science Forum of the OECD to develop an international neuroinformatics infrastructure, which promotes the sharing of data and computing resources to the international research community. A larger objective of the INCF is to help develop scalable, portable, and extensible applications that can be used by neuroscience laboratories worldwide. The INCF Task Force on Neuroimaging Datasharing (part of a broader scientific program on data sharing in neuroscience research 40 ) has recently formed to address challenges in databasing and metadata that hinder effective data sharing in the neuroimaging community, and to develop standards for archiving, storing, sharing, and re-using neuroimaging data. The initial focus of this group is MRI data. Representatives from several major efforts around the world are involved.

While the neuroimaging community acknowledges the need for standards in data exchange, the definition of those standards and their acceptance and use is a difficult task involving social engineering and the coordinated efforts of many. What details are necessary to share these data and results, to reproduce the results, and to use the data for other investigations? Through feedback from the neuroimaging community via workshops and informal discussion groups, the Task Force found that while there is enthusiasm for data sharing, the average neuroimaging researcher, particularly in a small lab setting, often experiences several technical barriers that impede effective data sharing. This finding has been noted in other surveys of barriers to biomedical data sharing (Anderson et al., 2007). While certain large research groups have made great strides in establishing federated databases and metadata schemas, these solutions often still involve in-house scripts and specialized software tools, tailored to the particular workflow of a specific lab. As noted previously, a lack of standards, recommendations, and interoperable and easy-to-use tools hinder the degree to which data sharing could be adopted more generally. In an attempt to improve the tools broadly available to the community, the Task Force identified four specific projects to be carried out during 2011 and 2012:

  1. Creation of a “One-Click Share Tool” to allow researchers to upload MRI data (in DICOM or NIfTI format) to an XNAT database hosted at INCF. Once the data are on the central server, a quality control (QC) check is launched and the report is sent to the researcher. The raw data, metadata, and QC data are stored in the database. The QC will initially be derived from FBIRN recommendations (Glover et al., 2012) and generalized to other methods. “One-click” may only be the idealized operation of the system, but the term does express the principle that the system should be trivial to use: metadata are captured from the uploaded data themselves to the extent possible, and the researcher is prompted for missing information. The system encourages best practices of electronic data capture (EDC) from data acquisition onward, but fills in the gaps with its own EDC and management.
  2. Establishment of a neuroimaging data description schema and common API to facilitate communication among databases. A number of efforts have already made progress toward that goal, of which XCEDE is probably the most well-known (Gadde et al., 2011). This standard description would be used to mediate between databases with different data models, or as a recommendation to expose a minimal set of metadata elements and a common vocabulary. Eventually, this will be linked to a set of ontologies to allow for semantic searches and reasoning.
  3. Introduction of a mechanism to capture related data under a single container. For example, diffusion data requires additional information in the form of diffusion gradient vectors and b-values. Most DICOM conversion utilities will write these out as two separate files. We will attempt to use the Connectome File Format (CFF) Container to store this data (Gerhard et al., 2011). This solution could be applied to other cases such as multi-echo data acquired from a single acquisition that are not handled natively by any major data format (e.g., FreeSurfer's .mgz format captures such information).
  4. Automatic storage of the metadata and results of processing streams to a database. Using the QC workflows as a starting point, ensure that output of these workflows can be pushed to, initially, an XNAT database. We will augment the processing to use the common application programming interface (API), extended XCEDE schema, and CFF container technology to capture processed data and metadata, in particular provenance. The first trials could be performed with Nipype (Gorgolewski et al., 2011) and PyXNAT (Schwartz et al., submitted). Recently, first steps have been taken to implement the provenance (“PROV”) data model developed through the World Wide Web Consortium (W3C) in neuroimaging software, to include the provenance information within the XCEDE schema, and to create a vocabulary for neuroimaging that can be used to annotate data. This will allow direct and automatic capture of standardized provenance information that could then be used to maintain appropriate metadata for processed images and in data queries. Machine-readable annotations of the metadata would also increase the effectiveness of search engines such as NIF in their ability to aggregate data from disparate sources.
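The container idea in item 3 can be sketched with nothing more than a zip archive plus a manifest. This is not the actual CFF implementation (CFF defines an XML manifest and specific content types); it only illustrates keeping a diffusion image and its gradient files under one roof, with metadata that travels with the data:

```python
import io
import json
import zipfile

def bundle(files, metadata):
    """Pack related files plus a JSON manifest into one zip container.
    (Illustrative only: the real CFF uses an XML manifest.)"""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(metadata))
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

# Diffusion data needs the image plus b-values and gradient vectors,
# which most DICOM converters write out as separate files.
container = bundle(
    {"dwi.nii": b"...image bytes...",
     "dwi.bval": b"0 1000 1000",
     "dwi.bvec": b"0 0 0\n1 0 0\n0 1 0"},
    {"modality": "DWI", "subject": "sub-01"},
)
with zipfile.ZipFile(io.BytesIO(container)) as zf:
    print(sorted(zf.namelist()))
```

A single container of this kind is what lets a processing pipeline, or a database upload, treat the acquisition as one object instead of three loose files.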

What Makes New Social Media Different?

I grew up in the era of MSN messenger, chat rooms, message boards, and MySpace. I remember feeling heavily drawn to connecting with my friends online, often communicating more online than in person. But these online social platforms are fundamentally different from today’s platforms.

Today’s social platforms are more than a neutral space to communicate with friends. They are miniature broadcasting platforms.

I still remember a time before the Facebook ‘like’ button. The button was introduced in 2009 and made the platform about more than simply commenting and sharing. In 2016, the ‘reactions’ button came out, further allowing users to share their emotional reactions to your content.

The ‘like’ button amplified the potential for social comparison. In an article featuring an interview with Dr. Max Blumberg, he states:

“…you’re making yourself vulnerable to the thoughts of others, so it’s not surprising that if it doesn’t elicit the reaction you’d hoped for your pride takes a hit. We’re seeking approval from our peers and it’s not nice when we don’t get it – you want people to think your ‘content’ is funny/interesting/likeable. If you have low self-esteem and you don’t do well on social media, you’re going to feel particularly bad.”

We are all miniature media companies, in a sense. The responsibility to manage one’s reputation online has skyrocketed since the development of these advanced technologies.

The thought of having a career as a social media celebrity was unthinkable not too long ago. Now, social media influencers are a key aspect of mainstream marketing.

We’re all having to become our own marketers online. This is something I think about quite a bit, given the fact that I created this website for the purpose of sharing my ideas in a non-traditional way.

Although you can use social media to further your professional career and build connections with like-minded people, it is important to be mindful of when it is having a negative impact on your life.

If you suspect your internet use may be having a negative impact, you can try the free Internet Addiction Test here.


Revealing the Neural Substrate of Psychiatric Disorders

Use of anatomic and functional brain imaging has had a great effect on psychiatry, providing some of the most robust and direct demonstrations of morphologic and functional brain alterations in disorders that many had considered psychologic in nature. Over time, these approaches have yielded information on common and overlapping neural mechanisms of psychiatric disorders, including MDD, schizophrenia, BD, ADHD, PTSD, and autism spectrum disorder (Table E1 [online]). Most of the following cited articles used DSM-IV as diagnostic criteria, but there have been no drastic changes in DSM criteria for the major disorders since DSM-III.

Major Depressive Disorder

Anatomic and functional deficits are revealed in multiple brain regions in patients with MDD, especially prefrontal-limbic circuits (Fig 1). Our recent meta-analysis of voxel-based morphometry studies of medication-free patients with MDD identified robust gray matter decreases in prefrontal and limbic regions, mainly including the bilateral superior frontal gyrus, lateral middle temporal and inferior frontal gyri, and bilateral parahippocampal gyrus and hippocampus (11). Subcortical brain alterations have included smaller hippocampal volumes in patients with MDD, which appear to be moderated by age of onset and to be greater in patients with recurrent episodes than in those experiencing their first episode (12). Notably, the interesting finding of smaller hippocampal volume has been linked to the “neurotrophic hypothesis of depression,” which proposes that elevated glucocorticoid levels associated with chronic hyperactivity of the hypothalamic-pituitary-adrenal axis in patients with MDD may induce brain atrophy via remodeling and downregulation of growth factors, including brain-derived neurotrophic factor (13). Moreover, these anatomic changes differ (a) between patients experiencing their first episode and those with chronic disease (14) and (b) between remitted patients with MDD and those who are currently depressed (15). The variable pattern of increased and decreased cortical thickness across the brain suggests that a profile analysis of changes across the brain may be more useful for diagnosis than measured changes in any particular brain region.

White matter deficits, especially those revealed by DT imaging, have been observed in patients with mood disorders, mainly within emotion regulation circuitry (Fig 1); this finding is consistent with gray matter findings. For example, patients with depressive disorder exhibited substantially lower fractional anisotropy (FA) values in the white matter of the right middle frontal gyrus, the left lateral occipitotemporal gyrus, and the angular gyrus of the right parietal lobe than did healthy comparison subjects. Thus, white matter abnormalities may contribute to disruptions in neural circuits involved in mood regulation and therefore may contribute to the neuropathology of MDD (16). Furthermore, decreased FA in the left anterior limb of the internal capsule may reflect disease in its frontostriatal and frontothalamic projections, which could increase risk for impulsive and emotionally disinhibited behavior, such as suicide (17,18). If replicated, these findings may provide an objective biomarker for suicide risk, which is highly elevated in patients with depression or other major psychiatric disorders and represents the only major cause of mortality in psychiatry. Neuroimaging biomarkers might help identify the specific patients in need of early preventive intervention and intensive monitoring to reduce their suicide risk.

Functional and metabolic studies have provided further evidence demonstrating abnormalities in multiple distributed neural circuits in patients with depression, especially those supporting emotion regulation and reward processing. The most consistent findings involve two patterns of distinct functional abnormalities: (a) those in serotonergically modulated implicit emotion regulation neural circuitry, including the amygdala and regions in the medial prefrontal cortex, and (b) those in dopaminergically modulated reward processing circuitry, including the ventral striatum and medial prefrontal cortex (19). Previous studies used graph theory–based approaches and found that depression was characterized by lower path length and higher global efficiency, implying a shift toward randomization in brain networks that could contribute to disturbances in mood and cognition in patients with MDD (20). In addition, by using resting-state functional connectivity analysis, we discovered that treatment-refractory depression is associated with disrupted functional connectivity mainly in thalamocortical circuits, while nonrefractory depression is associated with more distributed decreased connectivity in the limbic-striatal-pallidal-thalamic circuit. These results suggest that nonrefractory and refractory depression may represent two distinct MDD subtypes characterized by distinct functional deficits in distributed brain networks (21). Such approaches could be of importance in the early detection of patients who are not likely to respond to first-line treatments and who require adjunctive medical and psychosocial therapies.
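Global efficiency, one of the graph metrics mentioned above, is the mean inverse shortest path length over all node pairs: the shorter the paths, the higher the efficiency. A toy, pure-Python illustration for unweighted graphs (this is not the cited studies' pipeline, where nodes would be brain regions and edges thresholded functional connections):

```python
from collections import deque
from itertools import combinations

def shortest_path_lengths(adj, source):
    """Breadth-first search distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest path length over all node pairs;
    unreachable pairs contribute zero."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for u, v in combinations(nodes, 2):
        d = shortest_path_lengths(adj, u).get(v)
        if d:
            total += 1.0 / d
        pairs += 1
    return total / pairs if pairs else 0.0

# Toy 4-node network: adding a long-range "shortcut" edge raises efficiency,
# the kind of shift toward randomization described in the text.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
shortcut = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(global_efficiency(chain), global_efficiency(shortcut))
```

In the toy example the extra edge shortens average paths, so the second value exceeds the first, mirroring the "higher global efficiency" finding described above.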

Besides diagnosis, studies of treatment response prediction in depression indicate an important role for analyses focused on the amygdala. One study of cognitive behavioral therapy reported that greater pretreatment amygdala activity predicted better outcome (24), while a study of the rapid antidepressant ketamine reported the opposite effect (25), suggesting potential use of amygdala activity to guide treatment with psychosocial approaches instead of with antidepressant medication. Another study reported that greater amygdala activation in response to emotional facial expressions predicted greater reduction in depressive symptoms 8 months after different types of treatment (26). Besides the amygdala, other regions within brain circuitry supporting emotion processing, including the dorsal anterior cingulate cortex (27) and the ventrolateral prefrontal cortex (28,29), were found to have potential in predicting clinical outcome in patients with MDD (30).

Schizophrenia

Schizophrenia is another common psychiatric disorder, affecting approximately 1% of the population, and it is characterized by delusions, flat affect, hallucinations, social withdrawal, and bizarre behavior. Studies of patients with treatment-naive first-episode schizophrenia have revealed brain deficits at the onset of illness (8,31,32). Furthermore, these anatomic deficits may affect functional networks, which subsequently may represent the proximate cause of the clinical symptoms (8). Findings from longitudinal studies of first-episode schizophrenia and comparison of findings in patients with first-episode schizophrenia and those with chronic disease suggest considerable variability in patterns of anatomic changes at the early phase of illness, smaller deficits in patients with first-episode schizophrenia than in those with chronic disease, and some regional progression of brain changes over the longer term course of illness (33). Some of these changes appear to relate to clinical manifestations. For example, patients with prominent negative symptoms, such as affective flattening, avolition, and apathy, have been reported to show greater reductions of gray matter volume in the temporal lobe (31). Regions such as the medial prefrontal cortex, striatum, and thalamus are within the dopamine pathway, which is both a treatment target and a system implicated in the pathogenesis of schizophrenia (Fig 2). There are other regions where illness effects have been seen, including the parietal and occipital regions, that do not receive prominent dopaminergic innervation (Fig 2). This suggests a complex pathophysiologic process with diverse brain effects.

Similarly, studies of white matter in patients with first-episode schizophrenia indicate widespread abnormalities across white matter tracts (34), with evidence for reductions in FA in the uncinate (35), cingulum (36), fornix (37), corpus callosum (38), and inferior longitudinal fasciculus (39); however, negative results also have been reported (40). The inconsistent localization of findings is reflected in a relatively recent meta-analysis of FA findings in patients with schizophrenia, as reported in 23 published articles, in which nonoverlapping and scattered findings were seen throughout white matter tracts (41). As with morphometric studies, this variability is likely due to factors such as differences in image acquisition and analysis, small sample sizes, variable duration of illness, and illness heterogeneity.

Findings also suggest that the most robust changes in brain function occur primarily in regions different from those where anatomic findings have been identified (42–44). Hypofunction of the medial prefrontal cortex and hyperactivity of the hippocampus and striatum were reported in patients with first-episode schizophrenia before treatment and may in time provide biomarkers for the disorder and targets for treatment (42). While apparently beneficial and adverse functional changes have been seen after antipsychotic treatment in some brain regions, such as increased activity of the medial prefrontal cortex and disrupted connectivity within the prefrontal-parietal network (45), other alterations in anatomy and function appear to remain relatively stable early in the course of illness after treatment and clinical stabilization, as do cognitive deficits (46). While progression of brain deficits in patients with schizophrenia is not substantial in the early course of illness, it does appear to occur in some brain regions in the decades after illness onset (33). Recent evidence is beginning to identify associations between neuroimaging findings and genetic factors, and the clarification of these associations is among the most important directions for future research in this area (47).

Some relatively recent studies have examined the potential role of imaging features in the clinical diagnosis of schizophrenia. Three studies (48–50) have shown that volume reduction in prefrontal and temporal regions was the main anatomic difference between patients and control subjects, separating groups with a classification accuracy of 75%–90%. Although these findings are encouraging, the interpretation of the results is hindered by potential confounding effects of drug treatments, long duration of illness, and small sample sizes. MR imaging studies have shown that antipsychotic drugs can cause gray matter loss in the neocortex, although potential compensatory effects involving increased striatal volumes have been observed (51,52). Although the effects of different antipsychotic drugs on brain anatomy and function are not well established, these imaging changes have been used to predict the treatment response (53–56).
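As a schematic of how such group classification from anatomic features works, the sketch below separates two synthetic groups with a simple nearest-centroid rule. The cited studies used more sophisticated classifiers and real morphometric data; every number here is invented, and accuracy is reported on the training data only:

```python
import random

def nearest_centroid_fit(X, y):
    """Compute one centroid (feature-wise mean) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the closest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Synthetic "regional gray matter volumes": patients drawn with slightly
# lower prefrontal/temporal means than controls (all values invented).
random.seed(0)
controls = [[random.gauss(10, 1), random.gauss(8, 1)] for _ in range(50)]
patients = [[random.gauss(8.5, 1), random.gauss(6.5, 1)] for _ in range(50)]
X = controls + patients
y = ["control"] * 50 + ["patient"] * 50

model = nearest_centroid_fit(X, y)
accuracy = sum(nearest_centroid_predict(model, x) == lab
               for x, lab in zip(X, y)) / len(y)
print(round(accuracy, 2))
```

The overlap between the two synthetic distributions is what keeps accuracy below 100%, which is analogous to why real studies report 75%–90% rather than perfect separation.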

Bipolar Disorder

Consistent with the anatomic findings, functional brain deficits in patients with BD also have been reported; these mainly include reductions in activation in the right ventral lateral prefrontal cortex, amygdala, and anterior cingulate cortex (Table E1 [online]). The amygdala dysfunction may represent a state marker of bipolar illness, whereas ventral lateral prefrontal cortex dysfunction may be independent of mood state and may represent a trait marker of the illness (64); however, this pattern is less evident in pediatric patients (65). The functional connectivity between the posterior anterior cingulate cortex and the amygdala in patients with BD also has been found to be disrupted during emotion processing, which may be caused by disruptions in the white matter connecting these regions (66).

MR imaging has proven to have high value in understanding and evaluating brain alterations in pediatric patients with psychiatric disorders. Task-based functional MR imaging studies have identified emotion processing problems in pediatric patients with BD, mainly involving the activity of the amygdala and prefrontal regions (67,68). Apart from illness-related deficits, treatment effects also have been revealed. For example, Pavuluri et al (69) found that second-generation antipsychotics enhanced prefrontal and temporal lobe activity in adolescent patients with BD. These deficits hold promise as potential biomarkers for the clinical diagnosis of and treatment planning for BD (70).

Attention Deficit Hyperactivity Disorder

As for functional changes, patients with ADHD have been shown to have abnormal activity in the orbitofrontal, middle temporal, and dorsolateral prefrontal cortex (Fig 4). Hypoactivation in the inferior frontal cortex in patients with ADHD has been related to impaired behavioral response inhibition, which is a central neurobehavioral feature of ADHD (78). Hyperactivation in the striatum and mediotemporal areas during working memory tasks suggests that atypical corticostriatal circuitry may be an important component in the pathophysiology of ADHD (7). Besides cerebral functional changes, default mode network regions in the cerebellum have shown positive functional connectivity (in contrast to the typically negative functional connectivity seen in control subjects) with widespread regions of the salience, dorsal attention, and sensorimotor networks. This provides evidence of impaired cerebellar default mode network coupling with cortical networks in patients with ADHD and highlights a role of cerebrocerebellar interactions in patients with this disease. This circuitry has been proposed as a target for therapeutic interventions in patients with ADHD (79). By using multivariate analysis, one previous study applied pattern classification to task-based functional MR imaging of behavioral inhibition to accurately identify 77% of patients with ADHD (80).

Posttraumatic Stress Disorder

Functional studies have shown reduced activation of the thalamus, anterior cingulate gyrus, and medial frontal gyrus relative to those in healthy control subjects (91). In contrast, increased activation has been seen in the left hippocampus (92), amygdala (93), and visual cortex; these findings are positively correlated with re-experiencing trauma or avoidance symptoms in patients with PTSD (94). Furthermore, our team used relevance vector regression to examine the relationship between resting-state functional MR imaging data and symptom scores and found that accurate identification of patients with PTSD was based on functional activation in a number of prefrontal, parietal, and occipital regions; this finding enabled us to confirm that PTSD is a disorder specific to the frontolimbic networks (95). The pattern of observed network alterations largely overlapped with the salience, central executive, and default mode networks (96,97), as well as with the somatomotor, auditory, and visual networks (98) (Fig 5). We also found, in individuals examined shortly after major traumatic experiences, substantial alterations in brain function that are similar in many ways to those observed in patients with PTSD, highlighting the need for early evaluation and intervention in survivors of trauma (99). However, we also found long-term changes in neural networks involved in core aspects of self-processing and cognitive and emotional functioning in disaster survivors that were independent of anxiety symptoms (100).

These imaging features also showed potential value in prediction of illness onset. For example, Shin et al (101) reported that hyperresponsivity in the dorsal anterior cingulate appears to be a familial risk factor for the development of PTSD after psychologic trauma. In one of our previous studies, we found that patterns of neuroanatomic alterations could be used to distinguish trauma survivors with PTSD from those without (102). Volume reductions in the hippocampus may hold promise as an approach to monitor therapeutic outcomes (103).


Discussion

Recent years have witnessed an emerging interest in the neurobiological systems that support intrinsic motivational processes. Although this area of inquiry is young, conceptual and empirical evidence points to the role of dopaminergic systems in supporting intrinsically motivated behaviors. Across different mammalian species, there appear to be linkages between dopamine and the positive experiences associated with exploration, new learning and interest in one’s environment (Panksepp, 1998; Panksepp and Biven, 2012). Building on Bromberg-Martin et al.’s (2010) distinction between dopaminergic value- and salience-coding and on previous work respectively mapping these systems onto the phenomenology of competence (Tricomi and DePasque, 2016) and interest (DeYoung, 2013), we propose that intrinsic motivation entails both types of dopaminergic transmission. Because these dopamine systems entail distinct neural structures, future neuroimaging studies have a strong conceptual basis for specifying distinct a priori regions of interest. Beyond that, evidence suggests that intrinsic motivation involves alternations between the neural networks of salience detection, attentional control, and self-referential cognition (Menon and Uddin, 2010; Menon, 2015). Better understanding of these large-scale neural dynamics may provide greater resolution of the processes that support high quality learning and performance.

Despite the clear conceptual relationship between intrinsic motivation and dopaminergic transmission, only two existing studies provide direct evidence of an association between these two processes (de Manzano et al., 2013; Gyurkovics et al., 2016). The bulk of existing research provides indirect support for the hypothesis that dopamine is a substrate of intrinsic motivation in that the core regions innervated by dopamine neurons are activated during intrinsic motivation. Pharmacological manipulations of dopamine thus represent an important new research direction. Indeed, such manipulations have already been fruitfully applied in the study of dispositional traits (e.g., Wacker and Smillie, 2015) and their application in the study of motivational states would seem a natural extension. Pharmacological manipulations of dopamine may, for example, allow researchers to more precisely decode the neural mechanisms that mediate the undermining effect of externally contingent rewards on intrinsic motivation.

The link between dopaminergic systems and intrinsic motivation may also prove useful for developmental roboticists for whom the topic of intrinsic motivation has recently fallen into purview (e.g., Gottlieb et al., 2013, 2016). The stated goal of developmental robotics is to design embodied agents that self-organize their development by constructing sensorimotor, cognitive, and social skills over the course of their interactions with the environment. Roboticists have proposed that in order for embodied agents to be capable of intrinsic motivation, they must not only be outfitted with computational systems that orient them toward novel, surprising, or uncertain stimuli, but also with meta-monitoring processes that track their learning progress in their investigation of such stimuli (Gottlieb et al., 2013, 2016). Without meta-monitoring processes that track learning, agents will likely get trapped investigating stimuli that are random or otherwise unlearnable, precluding the possibility for self-directed development. The existence of salience- and value-coding dopaminergic systems, respectively capable of tracking novelty and rewarding feedback, may partially represent an organic instantiation of the type of computational system that Gottlieb et al. (2013, 2016) hypothesize to be a requirement for intrinsic motivation. We believe that roboticists are well-positioned to discover the types of computational problems that need to be solved for a full understanding of the neural substrates of intrinsic motivation. We thus hope that some of the present ideas will help to spur robotics research on intrinsic motivation.

Future studies are also needed to directly test the hypothesis that intrinsically motivated states entail dynamic switching between the salience, central executive and default mode networks. Beyond traditional fMRI analyses comparing activity in a priori regions across intrinsically and non-intrinsically motivated states, this hypothesis specifically encourages the use of connectivity analyses and the adoption of chronometric techniques that can provide information about the dynamics and directionality of activity across large-scale networks (e.g., Sridharan et al., 2008). This research direction may help to not only elucidate the neural basis of intrinsic motivation but also to identify the neural mechanisms through which intrinsic motivation enhances learning and performance outcomes, especially on tasks that require depth of processing and high-quality engagement.

Beyond Exploration, Curiosity and Mastery: Intrinsically Motivated Social Play

SDT uses intrinsic motivation as a broad term for a diversity of activities that are inherently rewarding and growth promoting (Ryan and Deci, 2017). This is a large class of behaviors, minimally including curious exploration and mastery tendencies, on the one hand, and social play, on the other (Ryan and Di Domenico, 2016). To date, human neuroscience studies have focused on intrinsic motivation associated with curious exploration and mastery, rather than social play, and we accordingly based our review on this subset of intrinsically motivated behaviors. Yet, comparative affective neuroscience suggests that exploration and social play have both distinct and overlapping neurobiological and phenomenological underpinnings, the former being subserved by the SEEKING system and the latter by the PLAY system (Panksepp, 1998; Panksepp and Biven, 2012). The subcortical PLAY system governs the rough-and-tumble (R&T) interactions of mammals, energizing them to develop and refine their physical, emotional, and social competencies in a safe context (Panksepp, 1998; Pellis and Pellis, 2007; Trezza et al., 2010; Panksepp and Biven, 2012). In early mammalian development, R&T play constitutes a type of embodied social cognition that provides a basis for cooperation and the adaptive self-regulation of aggression (Peterson and Flanders, 2005). Humans are of course also capable of more sophisticated forms of play beyond R&T such as common playground games, sports play and friendly humor, but such human play may nonetheless be organized around basic PLAY motivations (Panksepp, 1998; Panksepp and Biven, 2012).

We might therefore regard play as intrinsically motivated socialization (Ryan and Di Domenico, 2016), an expression of people’s complementary tendencies toward autonomy and sociality in development (Ryan, 1995; Ryan et al., 1997). Indeed, research in SDT suggests that in addition to competence and autonomy, people have a basic psychological need for relatedness, the sense of feeling meaningfully connected with others (Ryan and Deci, 2017). Whereas strong associations between exploratory intrinsic motivations and satisfactions of competence and autonomy have been clearly demonstrated, relatedness is usually seen to play a more distal role in the expression of these intrinsic motivations. Specifically, relatedness satisfactions provide people (especially children) with a sense of safety, a secure base from which their exploratory tendencies can be more robustly expressed (Ryan and Deci, 2017). Recognition of social PLAY signifies the centrality of the need for relatedness in some intrinsically motivated activities.

Interest in the overlaps and contrasts between intrinsically motivated exploration and play is thus an important agenda for future studies, and both are relevant to intrinsic motivation as it is studied within SDT (Ryan and Di Domenico, 2016). Behavioral models of human intrinsic motivation have generally conflated exploration and play because these activities share common features such as an internal perceived locus of causality and perceived competence or mastery. Indeed, functional distinctions between intrinsically motivated exploration and object or manipulative play are subtle and suggest that, for many activities recognized as “playful”, the conflation is appropriate and productive. For example, Wilson (2000) suggested that “In passing from exploration to play, the animal or child changes its emphasis from ‘What does this object do?’ to ‘What can I do with this object?”’ (p. 165). In fact, intrinsically motivated object play, manipulative play, and solitary gaming likely arise from the activity of the SEEKING system (Panksepp, 1998; Panksepp and Biven, 2012). Clearly, more empirical work is needed to differentiate these types of intrinsic motivation in humans.

Methodological Suggestions

Our principal intent in this review article is to stimulate increasing integration between social behavioral research on intrinsic motivation and the neuroscience of motivation. We see many new and promising pathways opening up. At the same time, methodological issues persist that warrant serious consideration. We list but a few of these.

First, intrinsic motivation and the associated undermining effect of rewards on these behaviors pertain only to tasks that are interesting and enjoyable in the first place. Thus, researchers should pilot test target activities to ensure that the activities are suitable for examining the undermining effect. This is especially important in neuroscience, where contemporary methods such as fMRI often involve procedures that limit how interesting experimental tasks can be. Researchers should also use multi-method assessments of intrinsic motivation to validate their measures and to ensure that the correct behavioral phenomena are being tapped. For example, in an attempted (and failed) replication and extension of Murayama et al.’s (2010) fMRI study on the undermining effect, Albrecht et al. (2014) utilized a picture-discrimination task for which participants may not have been intrinsically motivated in the first place (pilot testing was not reported) and for which free-choice behavior was not examined as a dependent variable. In the absence of these important design characteristics, it is difficult to draw decisive conclusions from their experiment. Incidentally, we note that Albrecht et al.’s (2014) study did show that competence feedback increased participants’ self-reported fun and that it was also associated with increased activations within the midbrain, striatum, and lateral PFC, findings that are consistent with the idea that competence is associated with dopamine-related activity.

Second, replicability is a central concern, as it is throughout the social and personality neurosciences (Allen and DeYoung, 2016). Most studies to date have been small-sample investigations, and larger samples are needed if we are to derive foundational conclusions. A priori hypotheses concerning regions of interest will also add confidence to the interpretation of findings. Toward that end, the present review ought to provide future studies with a useful reference for making clear predictions about the neural basis of intrinsic motivation.

Conclusion

Intrinsic motivation is a topic of interest within both basic behavioral science and applied translational studies and interventions (Ryan and Deci, 2000, 2017). Yet important to the progress of empirical research on intrinsic motivation is integrating what is known from phenomenological and behavioral studies with neuroscience studies. As we suggested at the outset, neuroscience holds potential for testing existing models of the situational and social determinants of intrinsic motivation as well as for providing greater resolution on the affective and cognitive processes that underpin such activities. Movement toward consilience is a central concern of SDT, and our hope is that the current synthesis provides some broad-stroke encouragement for that agenda.



Using fMRI Data to Predict Autism Diagnoses with Various Machine Learning Models and Cross-Validation Methods

Contributors: Emily Chen, Andréanne Proulx, Mikkel Schöttner

This repository contains contributions from the ABIDE team made during The BrainHack School 2020. This project uses resting state fMRI data from the ABIDE dataset to train machine learning models and is licensed under a Creative Commons Zero v1.0 Universal license.

Please feel free to reach out to any of us if you have any questions or comments!

Hello! I am an (incoming) fourth year undergraduate student at McGill University studying computer science and urban health geography with a minor in cognitive science. I am a research assistant with Isabelle Arseneau-Bruneau in the Zatorre Lab and have been learning a lot about neuroscience research while (hopefully) lending some of my technical skills to Isabelle's PhD work exploring the effect of musical training on FFR.

When joining the BrainHack School, I was looking forward in particular to working on a project at the intersection of neuroscience and CS because my previous classes rarely focused on the application of the abstract concepts we learned. Upon completion of the BrainHack School, I had the opportunity to learn from and collaborate with talented neuroscience researchers and participants, write a machine learning script using python and neuroscience data, and gain hands-on practice in reproducibility efforts and good project management.

You can find me on GitHub at emilyemchen and on Twitter at @emilyemchen.

Hi! I am an incoming master's student in Psychology at the University of Montreal. My background is in cognitive neuroscience and my career objective is to work on research projects aiming to discover new ways of characterizing the brain in its pathological states. At the moment, I am working with Sébastien Jacquemont and Pierre Bellec, and the focus of our research is on investigating the effect of genetic mutations on functional and structural brain phenotypes. More precisely, I have been working with resting-state functional connectivity measures in carrier populations with developmental disorders.

By joining the Brainhack School, I hoped to strengthen my computational skills and my knowledge in the applications of machine learning to the field of neuroimaging. Not only did I get to work on a machine learning problem specific to my field, but I also got to learn about useful tools such as GitHub. More importantly, I also met a community of brilliant/aware researchers and got to collaborate with other students, even across the world!

Hey! I am a master's student in Cognitive Neuroscience at the University of Marburg. My background is in psychology. I am especially interested in the social brain and how humans infer thoughts and emotions from the actions of others. Right now, I am doing my Masters thesis on how the capacities of Theory of Mind and empathy interact in the brain using fMRI. I am also very much interested in research methods and how we can use statistics and data science techniques on neuroimaging data.

When getting into Brainhack School, I hoped to get a better understanding of machine learning, and how it is useful for analyzing fMRI data. I learned a lot about collaborative project organization, how to make pretty interactive plots and how to write better Python scripts. Brainhack School was really an awesome experience getting to know a lot of talented and motivated people and being able to collaborate with them across the Atlantic!

You can find me on GitHub at mschoettner and on Twitter @mikkelschoett.

We expected to use the following tools, technologies, and libraries for this project:

  • Git
  • GitHub
  • Visual Studio Code
  • Docker
  • Jupyter Notebook
  • HPC/Compute Canada
  • Python libraries: numpy , pandas , matplotlib , scikit-learn , nilearn , seaborn , pyplot
  • venv

The goal of this project was to compare different machine learning models and cross-validation methods and see how well each is able to predict autism from resting state fMRI data. We used the preprocessed open source ABIDE database, which contains structural, functional, and phenotypic data of 539 individuals with autism and 573 typical controls from 20 different research sites.

By the end of The BrainHack School, we aimed to have the following:

  • README.md file
  • requirements.txt file that outlines the packages needed to run the script
  • Jupyter notebooks with code and explanations
  • GitHub repository documenting project workflow
  • Presentation showing project results

Several studies have found an altered connectivity profile in the default mode network of subjects with Autism Spectrum Disorder, or ASD (Anderson, 2014). Based on these findings, resting state fMRI data have been used to predict autism by training a classifier on the multi-site ABIDE data set (Nielsen et al., 2013). This project's scientific aim is to replicate these findings and extend the literature by comparing the effects of differing cross-validation methods on various classification algorithms.

The three of us joined forces when we realized that we shared many similar learning goals and interests. With such similar project ideas, we figured we would accomplish more working together by each taking on a different cross-validation method to train various machine learning models.

We all shared a common interest in making our project as reproducible as possible. This goal involved creating a transparent, collaborative workflow that could be tracked at any time by anyone. To achieve this objective, we utilized the many features that GitHub has to offer, all of which you can see in action at our shared repository here.

  • Branches: used to work simultaneously on our own parts and then push changes to the master branch
  • Pull requests: created when making changes to the master branch
  • Issues: used to communicate with each other and keep track of tasks
  • Tags: used to keep issues organized
  • Milestones: used to keep our main goals in mind (Week 4 presentation and final deliverable)
  • Projects: used to track various aspects of our work

Standardized Data Preparation and Jupyter Notebooks

The data are processed in a standardized way using a Python script that prepares the data for the machine learning classifiers. Several Jupyter notebooks then implement different models and cross-validation techniques which are described in detail below.

Tools, Technologies, and Libraries Learned

Many of these contribute to open science practices!

  • Jupyter Notebook: Write code in a virtual environment
  • Visual Studio Code: Edit files such as README.md
  • Python libraries: numpy , pandas , nilearn , matplotlib , plotly , seaborn , sklearn (dimensionality reduction, test-train split, gridsearch, cross-validation methods, evaluate performance)
  • Git: Track file changes
  • GitHub: Organize team workflow and project
  • Machine learning: Apply ML concepts and tools to the neuroimaging field
  • venv : Make requirement.txt file for a reproducible virtual environment

Deliverable 1: Jupyter notebooks (x3)

Leave-group-out cross-validation

This notebook contains code to run a linear support vector classification to predict autism from resting state data. It uses leave-group-out cross-validation with site as the group variable. The results give a good estimate of how stable the model is: while the prediction works above chance level for most sites, for some, autism is predicted only at or even below chance level.
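The leave-group-out setup described above can be sketched with scikit-learn's LeaveOneGroupOut splitter. The data here are random stand-ins for the real ABIDE connectivity features, so the accuracy values are meaningless; only the mechanics (one held-out site per fold) are illustrated.

```python
# Sketch: leave-group-out CV with acquisition site as the group variable.
# X, y, and sites are synthetic stand-ins for the ABIDE features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))       # 120 subjects, 50 features
y = rng.integers(0, 2, size=120)     # autism vs. control labels
sites = np.repeat(np.arange(6), 20)  # 6 sites, 20 subjects each

logo = LeaveOneGroupOut()
scores = cross_val_score(LinearSVC(dual=False), X, y, groups=sites, cv=logo)
# One accuracy score per held-out site; their spread shows how stably
# the model generalizes to data from a site it never saw in training.
print(scores.shape)
```

Each of the six scores answers "how well does a model trained on the other five sites predict this one?", which is exactly the stability question the notebook addresses.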

K-fold and leave-one-out cross-validation

This notebook contains the code to run linear support vector classification, k-nearest neighbors, decision tree, and random forest algorithms on the ABIDE dataset. The models are trained and evaluated using k-fold and leave-one-out cross-validation methods. We obtain accuracy scores that represent how skilled the model is at predicting the labels of unseen data. Leave-one-out cross-validation gives more accurate predictions than k-fold cross-validation. The accuracy values range from 55.8% to 69.2%.
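The two splitting schemes differ only in the `cv` argument passed to `cross_val_score`. A minimal sketch on synthetic data (real features would come from the prepared ABIDE matrix):

```python
# Compare k-fold and leave-one-out CV for the same classifier.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))     # 60 subjects, 20 features (synthetic)
y = rng.integers(0, 2, size=60)

# 10-fold: 10 scores, each from a held-out tenth of the subjects
kfold_scores = cross_val_score(
    KNeighborsClassifier(), X, y,
    cv=KFold(n_splits=10, shuffle=True, random_state=0))

# Leave-one-out: one score (0 or 1) per held-out subject
loo_scores = cross_val_score(KNeighborsClassifier(), X, y, cv=LeaveOneOut())

print(kfold_scores.mean(), loo_scores.mean())
```

Leave-one-out trains on nearly the whole sample each time, which tends to lower bias at the cost of many more fits (here 60 instead of 10).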

Group k-folds cross-validation

This notebook implements a 10-fold iterator variant to split the data into non-overlapping groups. It then runs four classification algorithms on the data: linear support vector machines ( LinearSVC ), k-nearest neighbors ( KNeighborsClassifier ), decision tree ( DecisionTreeClassifier ), and random forest ( RandomForestClassifier ). The most accurate classifier, LinearSVC , has an average accuracy score of 63.5% across folds, while the least accurate classifier, RandomForestClassifier , has an average accuracy score of 52.6%. This latter percentage is not far off from the average accuracy scores of the other two classifiers. These scores are not particularly impressive, and one reason could be the data the models are trained on: each site differed in its collection processes, methods, and amounts of data, and this heterogeneity may limit how well a model trained across sites generalizes. Future work into optimizing the model parameters would be the logical next step.

Table 1: Group 10-fold cross validation average accuracy scores (percentages) per classifier algorithm

Algorithm Average Score Maximum Score Minimum Score
LinearSVC 63.5% 72.0% 52.3%
KNeighborsClassifier 55.2% 66.7% 48.7%
DecisionTreeClassifier 54.3% 61.6% 44.9%
RandomForestClassifier 52.6% 60.5% 39.7%
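A table like the one above can be produced by looping the four classifiers over a `GroupKFold` splitter. This sketch uses synthetic data and 20 hypothetical sites, so the printed scores will not match the table; it only shows the structure of the comparison.

```python
# Run four classifiers under group 10-fold CV and summarize fold scores.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))          # synthetic features
y = rng.integers(0, 2, size=200)        # synthetic labels
groups = np.repeat(np.arange(20), 10)   # 20 sites, 10 subjects each

cv = GroupKFold(n_splits=10)            # folds never split a site
classifiers = {
    "LinearSVC": LinearSVC(dual=False),
    "KNeighborsClassifier": KNeighborsClassifier(),
    "DecisionTreeClassifier": DecisionTreeClassifier(random_state=0),
    "RandomForestClassifier": RandomForestClassifier(random_state=0),
}
results = {}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, groups=groups, cv=cv)
    results[name] = scores
    print(f"{name}: mean={scores.mean():.3f} "
          f"max={scores.max():.3f} min={scores.min():.3f}")
```

Because `GroupKFold` keeps all subjects from a site in the same fold, the mean/max/min per classifier correspond directly to the three columns of the table.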

Deliverable 2: Data preparation script

The prepare_data.py script:

  • downloads the data
  • extracts the time series of 64 regions of interest defined by the BASC brain atlas
  • computes the correlations between time series for each participant
  • uses a principal component analysis for dimensionality reduction
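The last two steps (per-subject correlation matrices, then PCA) can be sketched with plain numpy and scikit-learn. The time series below are random stand-ins; the real script obtains them from the ABIDE data via the 64-region BASC atlas.

```python
# Sketch of the feature-extraction steps with synthetic ROI time series.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_subjects, n_timepoints, n_regions = 30, 100, 64

features = []
for _ in range(n_subjects):
    ts = rng.normal(size=(n_timepoints, n_regions))  # one subject's ROI series
    corr = np.corrcoef(ts.T)                         # 64 x 64 connectivity matrix
    iu = np.triu_indices(n_regions, k=1)             # upper triangle, no diagonal
    features.append(corr[iu])                        # vectorize: 64*63/2 = 2016 values
X = np.array(features)

X_reduced = PCA(n_components=10).fit_transform(X)    # dimensionality reduction
print(X.shape, X_reduced.shape)  # (30, 2016) (30, 10)
```

Keeping only the upper triangle avoids duplicating the symmetric correlation matrix and the trivial diagonal of ones before PCA is applied.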

To fetch and prepare the data set, you can call the prepare_data.py script in either of these ways:

./prepare_data.py data_dir output_dir

python prepare_data.py data_dir output_dir

  • data_dir is the directory where you want to save the data (or where it is already saved), and
  • output_dir is the directory where you want to store the outputs the script generates.

The notebooks also call the prepare_data function from the preparation script.

Deliverable 3: requirements.txt file

This file increases reproducibility by helping to ensure that the scripts run correctly on any machine. To make sure that all scripts work correctly, you can create a virtual environment using Python's built-in venv library. To do this, follow these steps:

  1. Clone repo and navigate to folder in a shell
  2. Create a virtual environment in a folder of your choice: python -m venv /path/to/folder
  3. Activate it (bash command, see here how to activate in different shells): source /path/to/folder/bin/activate
  4. Install all necessary requirements from requirements file: pip install -r requirements.txt
  5. Create kernel for jupyter notebooks: ipython kernel install --user --name=abide-ml
  6. Open a jupyter notebook: jupyter-notebook , then click the notebook you want to run
  7. Select different kernel by clicking Kernel -> Change Kernel -> abide-ml
  8. Run the code!

Deliverable 4: Data visualizations (Week 3)

Deliverable 5: Presentation (Week 4)

The presentation slides can be viewed here on Canva, which is the platform we used to create the slides. A video of our presentation can be viewed on this project page. We presented our work to The BrainHack School on June 5, 2020 using the RISE integration in a Jupyter notebook, which can be found here.

Deliverable 6: Overview of the project and results in the README.md file

This README.md file contains the content that will be shown on The BrainHack School website project page.

Conclusion and Acknowledgement

First and foremost, we would like to thank our Weeks 3 and 4 peer clinic mentor and co-organizer of the Brainhack School, Pierre Bellec. A special thank you as well to our Week 2 mentors Désirée Lussier-Lévesque, Alexa Pichet-Binette, and Sebastian Urchs.

Thank you also to The BrainHack School 2020 leaders and co-organizers Jean-Baptiste Poline, Tristan Glatard, and Benjamin de Leener, as well as the instructors and mentors Karim Jerbi, Elizabeth DuPre, Ross Markello, Peer Herholz, Samuel Guay, Valerie Hayot-Sasson, Greg Kiar, Jake Vogel, and Agâh Karakuzu.

Thanks to all of you, this school was an amazing learning experience!

Benchmarking functional connectome-based predictive models for resting-state fMRI (for classification estimator inspiration) https://hal.inria.fr/hal-01824205

Scientific articles:

Anderson, J. S., Patel, V. B., Preedy, V. R., & Martin, C. R. (2014). Cortical underconnectivity hypothesis in autism: evidence from functional connectivity MRI. Comprehensive Guide to Autism, 1457–1471.

Nielsen, J. A., Zielinski, B. A., Fletcher, P. T., Alexander, A. L., Lange, N., Bigler, E. D., ... & Anderson, J. S. (2013). Multisite functional connectivity MRI classification of autism: ABIDE results. Frontiers in Human Neuroscience, 7, 599.


AMA Journal of Ethics

Psychiatric Diagnosis and Brain Imaging

The biological revolution in psychiatry, which started in the 1960s, has so thoroughly transformed the field that the phrase “biological psychiatry” now seems redundant. A huge literature exists on the biological correlates of psychiatric illness, including thousands of published research studies using functional neuroimaging methods such as SPECT, PET, and fMRI. In addition, most psychiatric treatment is biological in that it directly affects the brain through medication, stimulation, or surgery. Even “talking therapies” are now understood to change the brain in ways that have been visualized by neuroimaging [1].

Diagnoses in psychiatry, however, are based entirely on behavioral, not biological, criteria [2]. Depression is diagnosed by asking patients how they feel and whether their sleeping, eating, and other behaviors have changed. Attention deficit hyperactivity disorder (ADHD) is diagnosed by asking the patient, family members, and others about the patient’s tendency to get distracted, act impulsively, and so on. For these and all other psychiatric illnesses described by the Diagnostic and Statistical Manual of the American Psychiatric Association, findings from brain imaging do not appear among the diagnostic criteria. Aside from its use to rule out potential physical causes of a patient’s condition, for example a brain tumor, neuroimaging is not used in the process of psychiatric diagnosis.

In this article we review the current status of brain imaging for psychiatric diagnosis. Among the questions to be addressed are: why has diagnostic neuroimaging not yet found a place in psychiatric practice? What are its near-term and longer-term prospects? What obstacles block the use of such methods? The answers to these questions involve the nature of imaging studies and of psychiatric diagnosis.

Sensitivity, specificity, and standardization in psychiatric brain imaging. The vast majority of psychiatric neuroimaging studies aggregate data from groups of subjects for analysis, whereas diagnosis must be made for individuals, not groups. Structural and functional studies reveal a high degree of variability within groups of healthy and ill subjects, often with considerable overlap between the distributions of the two groups [3]. In the language of diagnostic tests, imaging studies are generally not highly sensitive to the difference between illness and health.
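The consequence of overlapping group distributions can be made concrete with a toy simulation. The sketch below uses entirely synthetic "activation scores" (not real imaging data) for a healthy and an ill group whose means differ by one standard deviation, a group difference that would count as a large effect in the imaging literature, and applies a simple threshold rule to classify individuals:

```python
import random

random.seed(0)

# Synthetic "activation scores" for two groups whose distributions
# overlap heavily: means one standard deviation apart.
healthy = [random.gauss(0.0, 1.0) for _ in range(1000)]
ill = [random.gauss(1.0, 1.0) for _ in range(1000)]

threshold = 0.5  # midpoint rule: score above threshold => classify as "ill"

# Sensitivity: fraction of ill subjects correctly flagged (true positive rate).
sensitivity = sum(x > threshold for x in ill) / len(ill)
# Specificity: fraction of healthy subjects correctly cleared (true negative rate).
specificity = sum(x <= threshold for x in healthy) / len(healthy)

print(f"sensitivity ~ {sensitivity:.2f}, specificity ~ {specificity:.2f}")
```

Even with this sizable group difference, roughly three in ten individuals in each group are misclassified, which illustrates why a statistically robust group finding need not translate into a usable individual diagnostic test.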

Another current limitation concerns the specificity of candidate diagnostic markers from imaging. Most psychiatric imaging studies involve subjects from only two categories—patients from a single diagnostic category and people without any psychiatric diagnosis. The most that can be learned from such a study is how brain activation in those with a particular disorder differs from brain activation in those without a disorder. The dilemma faced by a diagnosing clinician, on the other hand, is rarely “Does this person have disorder X or is she healthy?” Rather, it is typically “Does this person have disorder X, Y, or Z?” The pattern that distinguishes people with disorder X from healthy people may not be unique to X but shared with a whole alphabet of other disorders.

Indeed, there is considerable similarity in the abnormalities noted in brain activation across different diagnoses. A meta-analysis of neuroimaging studies of anxiety disorders reported common areas of activation (amygdala, insula) across posttraumatic stress disorder, social phobia, and specific phobia—suggesting that neuroimaging has yet to reveal patterns of neural activity that are unique to specific anxiety disorders [4]. Abnormalities of amygdala activation also have been reported consistently in neuroimaging studies of depression [5], bipolar disorder [6], schizophrenia (a disorder primarily of thought rather than of mood) [7], and psychopathy (which shares features with the DSM diagnosis of antisocial personality disorder) [8].

More sophisticated methods of image analysis may hold promise for discerning the underlying differences among the many disorders that feature similar regional abnormalities, including the “usual suspects,” limbic hyperactivity and prefrontal hypoactivity. In addition, new multivariate statistical approaches to image analysis make it possible to discover spatial and temporal patterns that correspond to performance of specific tasks and specific diagnoses [9]. These methods have only begun to be applied to clinical disorders but show promise for increasing the specificity of brain imaging markers for psychiatric illness [10, 11].

Standardization is relevant in light of the many ways in which protocols differ from study to study, particularly among functional imaging studies. The patterns of activation obtained in studies of psychiatric patients depend strongly on the tasks performed by the subjects and the statistical comparisons examined by the researchers afterwards. Although the results of psychiatric imaging research are often summarized by stating that certain regions are under- or overactive or more or less functionally connected, such summaries are fundamentally incomplete unless they include information about what task evoked the activation in question: were the patients resting, processing emotional stimuli, trying not to process emotional stimuli, or engaged in effortful cognition? The fact that any imaging study’s conclusions are relative to the tasks performed adds further complexity to the problem of seeking consistently discriminating patterns of activation for healthy and ill subjects.

Reliability and validity of current diagnostic categories. Other reasons why progress toward diagnostic imaging in psychiatry has been slow stem from the nature of the diagnostic categories themselves. The categories of the DSM are intended to be both reliable and valid. That is, they are intended to be usable in consistent ways by any appropriately trained clinician, so that different diagnosticians arrive at the same diagnosis for each patient (reliability) and to correspond to the true categories of psychiatric illness found in the population, that is, to reflect the underlying psychological and biological commonalities and differences among different disorders (validity). Good, or at least improved, reliability was one of the signal achievements of the DSM-III, and was carried over to DSM-IV. Unfortunately, validity continues to be more difficult to achieve.

As an illustration of how far our current diagnostic categories may be from valid, consider the criteria for one of the more common serious psychiatric conditions, major depressive disorder. According to the DSM-IV-TR, patients must report either depressed mood or anhedonia and at least four of eight additional symptoms. It is therefore possible for two patients who do not share a single symptom to both receive a diagnosis of major depressive disorder. There are also commonalities of symptoms between categories. For example, impulsivity, emotional lability, and difficulty with concentration each occurs in more than one disorder. To the extent that our psychiatric categories do not correspond to “natural kinds,” we should probably not expect correspondence with brain physiology as revealed by imaging. Taken together, the fact that (a) different exemplars of a category can share no symptoms and (b) exemplars of two different categories may share common symptoms raises questions about the validity of the current diagnostic categories.
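The zero-overlap claim follows from the polythetic structure of the criteria, because several criterion items can be satisfied by opposite presentations (insomnia or hypersomnia, weight loss or gain, agitation or retardation). A small sketch makes this concrete; the symptom names are paraphrased from DSM-IV-TR and the checker is a simplification, not the manual's full algorithm:

```python
# Map fine-grained presentations to the criterion item they satisfy.
# Opposite poles (e.g. insomnia vs. hypersomnia) count toward the same item.
ITEM = {
    "depressed mood": "mood", "anhedonia": "anhedonia",
    "weight loss": "appetite/weight change", "weight gain": "appetite/weight change",
    "insomnia": "sleep disturbance", "hypersomnia": "sleep disturbance",
    "agitation": "psychomotor change", "retardation": "psychomotor change",
    "fatigue": "fatigue", "worthlessness/guilt": "guilt",
    "poor concentration": "concentration", "suicidal ideation": "suicidality",
}
CORE_ITEMS = {"mood", "anhedonia"}

def meets_mdd_criteria(symptoms):
    # Simplified rule from the text: at least one core item (depressed mood
    # or anhedonia) plus enough additional items for five in total.
    items = {ITEM[s] for s in symptoms}
    return bool(items & CORE_ITEMS) and len(items) >= 5

patient_a = {"depressed mood", "weight loss", "insomnia",
             "agitation", "worthlessness/guilt"}
patient_b = {"anhedonia", "weight gain", "hypersomnia",
             "retardation", "suicidal ideation"}

assert meets_mdd_criteria(patient_a)
assert meets_mdd_criteria(patient_b)
assert patient_a & patient_b == set()  # not one symptom in common
```

Both hypothetical patients qualify for the same diagnosis while presenting in almost opposite ways, which is exactly the heterogeneity that complicates the search for a single neural signature per category.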

The Present and Future of Diagnostic Brain Imaging in Psychiatry

A defiant minority now use brain imaging for psychiatric diagnosis. Despite the challenges just reviewed, a small number of psychiatrists offer diagnostic neuroimaging to patients in their clinics. The imaging method used is single photon emission computed tomography (SPECT), which measures regional cerebral blood flow by detecting a gamma-emitting tracer in the blood. The best known of these clinics are the four Amen Clinics, founded by the psychiatrist and self-help author Daniel Amen. Others include the Clements Clinic, Cerescan, Pathfinder Brain SPECT, and Dr. Spect Scan. The use of brain imaging appears to be a selling point for these clinics: their websites generally feature brain images prominently, and the names of the last three leave no doubt about the emphasis they place on imaging.

These clinics promise to diagnose and treat a wide range of psychiatric disorders in children and adults based on patient history and examination along with the results of SPECT scans. The Amen Clinics use a system of diagnosis that does not correspond to the standard diagnostic categories defined by the American Psychiatric Association’s Diagnostic and Statistical Manual. For example, anxiety and depression are combined into a single superordinate category and then divided into 7 subtypes with names such as “temporal lobe anxiety and depression” and “overfocused anxiety and depression” [12]. Attention deficit hyperactivity disorder is also reconceptualized as having 6 subtypes, with names such as “limbic ADD” and “ring of fire ADD” [13].

The Amen Clinics website states that they have performed almost 50,000 scans [14], a huge number that, combined with associated clinical data including outcomes, could provide important evidence on the value of SPECT scanning in diagnosis and the efficacy of Amen’s approach to psychiatric care. Unfortunately, no such studies have been reported. The lack of empirical validation has led many to condemn the use of diagnostic SPECT as premature and unproven [15-18].

Why do people pay for an unproven, even dubious, diagnostic test? Brain imaging has a high-tech allure that suggests advanced medical care. People may assume that the treatments available at these clinics, as well as the diagnostic methods, are cutting-edge. In addition, there is a strong allure to the idea that imaging can give visual proof that psychological problems have a physical cause. The Amen Clinics cite several ways in which patients and their families may find this evidence helpful, including the reduction of stigma and guilt [14]. Of course, these considerations do not address the question of whether diagnosis is improved by the use of SPECT scans.

Diagnostic neuroimaging: prospects for the near-term and longer-term future. Few believe that brain imaging will play a role in psychiatric diagnosis any time soon. The forthcoming DSM-5, expected in May of 2013, will include reference to a variety of biomarkers for psychiatric disease, including those visible by brain imaging, but their role is expected to be in the validation of the categories themselves rather than in the criteria for diagnosing an individual patient [19].

In the long term, there is reason for optimism concerning the contribution of brain imaging to psychiatric diagnosis. This may happen first for differential diagnosis, particularly for diagnostic distinctions that are difficult to make on the basis of behavioral observations alone. In such cases potentially distinctive patterns of brain activation identified through imaging will be especially useful. For example, Brotman et al. have studied the patterns of brain activation evoked by various tasks involving pictures of faces and found differences between the neural responses of children diagnosed with severe mood dysregulation and those with ADHD or bipolar disorder [20]. They and others [21] suggest that this finding could provide the basis for the future development of diagnostic imaging.

Diagnostic imaging in psychiatry could emerge from basic research on psychopathology, as in the example just cited. Alternatively, the relatively atheoretical multivariate statistical approach mentioned earlier could provide the first candidate neural signatures of psychiatric disorders. By whatever method the candidate neural signatures are identified, large-scale validation trials will be needed before they can enter routine clinical use. This process promises to be lengthy and expensive and could easily fill the interval between two or more editions of the DSM.

Coevolution of diagnostic methods and diagnostic categories. Whether the path to imaging-based diagnosis involves translation of newly discovered mechanisms of pathophysiology, brute-force number crunching, or both, we cannot assume that it will preserve current nosology. Indeed, given the overlap of imaging findings between diagnostic categories and the heterogeneity within categories mentioned earlier, it seems likely that the widespread incorporation of imaging into diagnostic criteria will force our nosology to change. If the mismatch between imaging markers and diagnostic categories is not drastic, the DSM categories may change incrementally, for example by revisions of individual diagnostic criteria for specific disorders. However, if brain imaging reveals a radically different pattern of “natural kinds,” and if these kinds are proven to have clinical utility (e.g., enabling better treatment decisions), then imaging may prompt a radical reconceptualization of psychiatric diagnosis and entirely new diagnostic categories may emerge.

There are, however, strong arguments for conservatism. The current system of diagnostic categories is valuable in part simply because we have used it for so long and therefore much of our clinical knowledge is defined in relation to this system. DSM diagnoses have so far changed in a gradual and piecemeal manner through multiple editions of the manual, with most disorders retaining their defining criteria and only a minority being subdivided, merged, added, or eliminated in the light of new research findings. In keeping with this approach, the future influence of brain imaging on psychiatric diagnosis is likely to be more evolutionary than revolutionary.

An attempt to reconcile the need for consistency with the promise of more neurobiologically based classifications can be found in the Research Domain Criteria (RDoC) for psychiatry research proposed by the U.S. National Institute of Mental Health. This is “a long-term framework for research… [with] classifications based on genomics and neuroscience as well as clinical observation, with the goal of improving treatment outcomes” [22]. The RDoC system, still under construction at the time of writing [23], is meant to be used, in parallel with DSM categories, for research that may ultimately lead to more valid diagnostic categories, which might also be more consistent with the use of imaging as a diagnostic test.

Conclusions

Brain imaging will probably enter clinical use in other roles before it serves as a diagnostic laboratory test. For example, imaging has already guided clinical researchers in the development of new therapies [24] and in the customization of therapy for individual patients [25]; it shows promise as a predictor of vulnerability [26] and treatment response [27], and has even been used as a therapy itself [28].

While some physicians insist that they are able to use brain imaging now for psychiatric diagnosis, there is currently no reliable evidence supporting this view. On the contrary, there are many reasons to doubt that imaging will play a role in psychiatric diagnosis in the near future. As argued here, much psychiatric imaging research remains to be done to achieve sensitivity, specificity, and standardization of imaging protocols.

In addition, the nature of current psychiatric diagnosis may not even correspond to the categories of brain dysfunction that imaging reveals. Finally, the practical value of maintaining continuity in diagnostic classifications requires a cautious and incremental approach to redrawing diagnostic classifications on the basis of imaging research.


Stephan F. Taylor

STEPHAN F. TAYLOR is a professor of psychiatry and Associate Chair for Research and Research Regulatory Affairs in the Department of Psychiatry and an adjunct professor of psychology.

His work uses brain mapping and brain stimulation to study and treat serious mental disorders such as psychosis, refractory depression, and obsessive-compulsive disorder. Data science techniques are applied in the analysis of high-dimensional functional magnetic resonance imaging datasets and meso-scale brain networks, using supervised and unsupervised techniques to interrogate brain-behavior correlations relevant to psychopathological conditions. Clinical-translation work with brain stimulation, primarily transcranial magnetic stimulation, is informed by mapping meso-scale networks to guide treatment of conditions such as depression. Future work seeks to use machine learning to identify treatment predictors and match individual patients to specific treatments.

