What's the name of the task that quantifies preference for a visual stimulus by varying its size or distance compared with a reference?

Say that you enjoy looking at two beautiful paintings (A and B), but you really can't tell which one you like more, and you want to determine this quantitatively.

So, you compare them both to your favorite painting (C), which you know you really like to look at (adjusting for novelty).

In order to do this, you vary the size of your favorite painting C on a computer screen (or its distance from you in real life), and find the point at which C is so much smaller that you'd rather look at A. You do the same for painting B.

You then attempt to quantify how much you prefer A to B by asking: for which painting (A or B) was C larger when you first preferred to look at that painting rather than C? This is the one that you prefer more, and the distance between the C "breakpoints" gives some measure of your relative preference.

Of course this difference will be at best an interval measurement, not a ratio one, but that's usually better than no measurement.
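
For what it's worth, one way to make the procedure concrete is a simple adaptive staircase on the size of C. The sketch below is purely illustrative: the function prefers_c, the step size, and the stopping rule are all made up for the example and are not an established protocol.

```python
# Hypothetical sketch of the breakpoint procedure described above: shrink
# reference painting C while it is still preferred, grow it when it is not,
# and treat the sizes at which preference flips as the breakpoint.

def find_breakpoint(prefers_c, start_size=1.0, step=0.05, trials=40):
    size, last, reversals = start_size, None, []
    for _ in range(trials):
        choice = prefers_c(size)           # True if C is still preferred at this size
        if last is not None and choice != last:
            reversals.append(size)         # preference flipped: record the size
        size += -step if choice else step  # shrink C while preferred, else grow it
        last = choice
    if not reversals:
        return float("nan")                # preference never flipped
    tail = reversals[-6:]                  # average the last few reversals
    return sum(tail) / len(tail)

# breakpoint_a = find_breakpoint(lambda s: you_prefer_c_over("A", s))
# breakpoint_b = find_breakpoint(lambda s: you_prefer_c_over("B", s))
# The painting with the larger breakpoint is the one you prefer more, and the
# difference between breakpoints gives the interval-scale measure of preference.
```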

Here is my question: what is this procedure (or some variant of it) called? This seems like a basic enough test that it must already be in use, and I am curious to hear more about it.

Related question: How is the size of a video related to its perceived quality?


Dude. This is an awesome experiment.

Emotional Design is written by my favorite cognitive psychologist, Donald Norman. It's full of research on taste. In it, he says taste is highly subjective and based on culture.

C is probably equally distant in both cases, at the point where it becomes unreadable. Unless at that point you'd rather look at C and say, "Hey, what's that?!" Like a ship's sails far away in the ocean.

One last point. We have no idea why we behave the way we do. We just rationalize it afterward. What we say we like and the way we behave are totally different. Girls just say they want to be with a nice guy.

There was a Harvard study involving a painting class. Each class painted 5 paintings (or something). At the end of the semester, in one class the teacher simply gave each student one of their paintings back; in the other class, the students chose which painting to take home. Six months later (or something) they interviewed the students. The students who were simply given a painting LOVED it. They had it proudly displayed somewhere. The students who were given a choice were unhappy. They kept wondering what-if. They never felt right about their painting, and most of them hid it away, maybe under a bed.

There's good evidence that if you want to stay happily in a relationship with someone, DON'T THINK ABOUT WHY YOU LIKE THEM. It will end badly. There's a whole study on this as well. It got ugly.


1. Motivation

Functional near-infrared spectroscopy (fNIRS) is a noninvasive, easy-to-use, and portable brain imaging technology that enables studies of normal brain function and alterations that arise in disease, both in the laboratory and in real-world settings. 1,4 In 1977, Jöbsis used the technique for the first time to noninvasively assess changes in human brain oxygenation due to hyperventilation. 5 Since then, the tool has evolved into an established noninvasive brain imaging modality and has been applied to a wide range of populations and research questions.

The volume of fNIRS research has dramatically increased over the last two decades 1 in parallel with the growing availability of commercial fNIRS systems. This rapid growth has resulted in a great diversity of methodological practices, data processing methods, and statistical analyses. 6 While the diversification of research methods is expected and welcomed in such a fast-growing field, it can present challenges in the interpretation, comparison, and replication of different fNIRS studies. The lack of standardized pipelines in the analysis of neuroimaging data, and the resulting differences in study results, is not unique to fNIRS; similar concerns have been raised by the functional magnetic resonance imaging (fMRI) community. 7 This problem is exacerbated by poor reporting practices that can considerably hinder or bias the review process and dramatically reduce a given paper's impact and subsequent replicability. The purpose of this paper is to offer researchers guidelines on how to report fNIRS studies in a comprehensive, transparent, and accessible way. These guidelines are not intended as standards; rather, they are best practices for reporting an fNIRS study so that the full impact of the findings is achieved.

This paper follows the structure of a typical fNIRS research paper, and each section (Introduction, Methods, etc.) discusses the guidelines relevant to that section. We provide a comprehensive checklist in the Appendix (Table 1) with references to the relevant sections in order to facilitate revisiting the text for more details. It is worth noting that, for the sake of brevity, instrument-related guidelines presented here focus on continuous wave NIRS (CW-NIRS) technology and only briefly refer to the other existing NIRS technologies [frequency-domain NIRS (FD-NIRS), time-domain NIRS (TD-NIRS), and diffuse correlation spectroscopy (DCS)].

Table 1

The following checklist is provided as a means to summarize the guidelines in this article to help the reader cross-check whether s/he can further improve the manuscript before submission. Each question refers to a numbered section in the main text that can be consulted again for more detail.

Topic and checklist questions:
2.1.1 Choosing a good title. Is the title short, specific, and informative about the results?
2.1.2 Structured abstract: Clarity and consistency. Is the most relevant information described in a motivating way? Can you reduce the abstract further to improve clarity? Is the abstract structured similarly to the structure of the main body of the paper? Are the data in the abstract and main manuscript consistent and complete?
2.2.1 Scope, context, significance, and aim of the work. Are the scope, context, and significance of the work established? Has the previous work been described and cited properly? Are the aim and hypothesis clearly defined?
3.1.1 Human participants. Are all relevant demographic, clinical, and other characteristics described? Are all participant and data inclusion/exclusion criteria clearly defined? Are all ethical issues and procedures discussed? Is approval from the local ethics committee clearly addressed? Are excluded participants disclosed and well justified?
3.1.2 Sample size and statistical power analysis. In cases where no effect is observed: was a power analysis performed? Were the selection of sample size, power, alpha levels, and effect size reported and justified? A post hoc power analysis may state the sample size needed to achieve statistical significance in case the study was underpowered.
3.2.1 Experimental design (or "study design"). Is the following information provided for the study design? All studies: the duration of recording; the environment in which the participant is placed (e.g., lighting conditions, auditory conditions, objects or displays in their visual field, etc.). Specific to block- and event-related designs: the number of conditions; the number of blocks or trials per condition; the order in which the blocks or trials are presented; the duration of each block or trial; and the duration of interblock or intertrial intervals. A diagram that provides details of the timing of stimuli, and images of the stimuli themselves.
3.2.2 Participant instructions, training, and interactions. Were incentives, instructions, and feedback to the participants clearly outlined? What experimental conditions could have influenced the participants' performance?
3.3.1 fNIRS device and acquisition parameters description. Are the acquisition setup and instrumentation sufficiently described (system, wavelengths, sample rate, number of channels, and other parameters)?
3.3.2 Optode array design, cap, and targeted brain regions. Is the description of the optode array design, cap, and targeted brain regions complete?
3.3.3 For publications on instrumentation/hardware development. Are all crucial hardware and software performance characteristics and validation steps reported? Are the architecture and all crucial components (light source, detector, and multiplexing strategies) sufficiently described? What standards/norms were followed, and what safety regulations were considered (i.e., maximum permissible skin exposure)? For instrumentation or methods development papers: is phantom-based performance characterization reported? For application studies: are regular system quality checks reported?
3.4.1 fNIRS signal quality metrics and channel rejection. How was the signal quality of fNIRS channels checked, and were bad channels rejected?
3.4.2 Motion artifacts. How were motion artifacts identified and removed?
3.4.3 Modified Beer-Lambert law, parameters, and corrections. What assumptions, parameters, and models were selected to derive concentrations from the raw fNIRS signals using the mBLL? How were estimation errors corrected, and what are the signals' units? (A minimal mBLL conversion sketch follows this table.)
3.4.4 Impact of confounding systemic signals on fNIRS. How did your study distinguish between the variety of physiological processes that comprise fNIRS signal changes? Have you considered all factors of possible physiological confounds?
3.4.5 Strategy for statistical tests and removal of confounding signals. Have the overall preprocessing and statistical testing strategies been clearly identified and outlined?
3.4.6 Filtering and drift regression. How were confounding signals outside the main fNIRS band of interest tackled (high-/low-pass filtering, GLM drift regression)?
3.5.1 Strategies for enhancing the reliability of brain activity measurements. What strategies were pursued to correct for physiological confounds and changes in the extracerebral tissue compartment? How were confounding signals identified and separated, and what was done to reduce the likelihood of false positives/negatives?
3.5.2 Strategy 1: Enhance depth sensitivity through instrumentation and signal processing. How was depth sensitivity achieved? If multidistance measurements were performed, what source-detector separations were used? What signal processing methods were applied to remove confounding physiological components in the fNIRS signals? How are the limitations discussed?
3.5.3 Strategy 2: Signal processing without intrinsic depth-sensitive measurements. If no depth-sensitive/multidistance measurements are available: what signal processing methods were applied to minimize confounding physiological components? How are the limitations discussed?
3.5.4 Strategy 3: Incorporating measurements of changes in systemic physiology in the fNIRS signal processing. If other physiological signals were used for the removal of confounding signals in the fNIRS signals, which ones? Are all relevant parameters and steps sufficiently described?
3.6.1 Hemodynamic response function estimation: Block averaging versus general linear model. What is the effective number of trials used for HRF estimation? In GLM approaches: what confounding signal regressors were used, and how were they modeled? What method was used to estimate regressor weights?
3.6.2 HRF estimation: Selection of the HRF regressor in GLM approaches. In GLM approaches: how was the HRF modeled? What shape/function was used for the HRF regressor? What are the parameters? If a fixed shape was used, what is the justification?
3.6.3 Statistical analysis: General remarks. What statistical tests were performed, and are all corresponding parameters (e.g., assumed distribution, degrees of freedom, p-values, etc.) reported? Is the effect size stated?
3.6.4 Statistical analysis of GLM results. What regressors were included in the GLM to explain effects of interest and confounds for fNIRS data? What statistical model and methods were used for testing the hypothesis at the first and second levels?
3.6.5 Statistical analysis: Multiple comparisons problem. If statistical analysis was performed on multiple regions/voxels/network components, were family-wise errors corrected? What correction method was applied?
3.6.6 Specific guidelines for data processing in clinical populations. Are clinical variability and expected alterations of behavioral, neuronal, and vascular responses considered when interpreting the results?
3.6.7 Specific guidelines for data processing in neurodevelopmental studies. How were the increased noise and artifacts handled specifically for developmental populations? Is the artifact rejection procedure well documented in the manuscript?
3.6.8 Connectivity analysis. What correlation indices were used? How were the statistical thresholds determined?
3.6.9 Image reconstruction. What head anatomy was used, and how was coregistration between optical elements and head geometry performed? How was the head anatomy segmented, and into what tissue types? How was the head mesh generated? What optical properties were used for each tissue type? What model/approach was used for the generation of sensitivity profiles and image reconstruction?
3.6.10 Single trial analysis and machine learning. What efforts were undertaken to understand and interpret the classifier weights and outputs? What were the training and test set sizes, and how were (hyper)parameters selected? Were training and test data strictly separated, especially in approaches that use learned filters, regressors, or the GLM? Was cross-validation performed, and if so, what kind?
3.6.11 Multimodal fNIRS integration. Was the sensor coplacement/localization/registration sufficiently described? What methods were used for data fusion and multimodal analysis?
4.1 Figures and visualization. Were the measurement setup, optode array configuration and placement, and experimental protocol visualized? Is a sensitivity analysis included? If the processing pipeline is complex, is it depicted in a simplified block diagram? Are both brain maps and time courses provided? Are results linked to anatomical locations? Are both HbO2 and Hb reported? Are higher-order statistics of the data visualized as well?
4.2 Concise text and rigor. Are the results presented in a concise and well-organized manner? What efforts were undertaken to minimize confirmation bias? Are negative results reported, if present?
5.1 Discussion of the results in light of existing studies: Strengths, limitations, and future work. Are all relevant results discussed? Is any part of the discussion based on results that were not presented? Were caveats from confounding physiology sufficiently addressed? Is the presented work sufficiently compared and contextualized with existing studies? Are strengths and weaknesses clearly outlined and discussed? Are potential next steps discussed?
5.2 Conclusion. Are 3 to 5 conclusions drawn that summarize the main findings of the study in a concise way? Do they include the significance of the results? Are the conclusions based on the results of the study?
6.1 Proper citations. Do all statements that reference an original work agree with the information provided therein?
7.1 Preregistration, data, and code sharing. Are data/code made available to other researchers to reproduce the results? Are data shared in a common data format that the community supports (e.g., SNIRF)?
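
Checklist item 3.4.3 refers to the modified Beer-Lambert law (mBLL) conversion. To make the reportable parameters concrete, here is a minimal sketch for one continuous-wave channel with two wavelengths; the extinction coefficients and differential pathlength factors (DPF) below are illustrative placeholders, not tabulated constants, and a real pipeline would take them from the literature or an established toolbox.

```python
import numpy as np

# Minimal mBLL sketch for one CW-NIRS channel with two wavelengths.
# All numeric constants below are placeholders for illustration only.

eps = np.array([[1.5, 0.6],    # placeholder [eps_Hb, eps_HbO2] at wavelength 1
                [0.8, 1.2]])   # placeholder [eps_Hb, eps_HbO2] at wavelength 2
dpf = np.array([6.0, 5.5])     # placeholder differential pathlength factors
separation_cm = 3.0            # source-detector separation

def mbll(intensity, baseline):
    """intensity, baseline: arrays of shape (2, n_samples), one row per wavelength."""
    delta_od = -np.log10(intensity / baseline)  # change in optical density
    pathlength = separation_cm * dpf            # effective pathlength per wavelength
    rhs = delta_od / pathlength[:, None]
    # Solve eps @ [dHb; dHbO2] = delta_od / pathlength for every sample.
    delta_hb, delta_hbo2 = np.linalg.solve(eps, rhs)
    return delta_hb, delta_hbo2
```

Reporting the actual wavelengths, the source of the extinction coefficients, the DPF values, and the source-detector separation is what makes such a conversion reproducible.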

Quantifying responses

Responses were computed from the mean firing rate averaged over the full number of stimulus presentations. Typically, we presented 3–5 stimulus cycles of each stimulus condition, repeated over 5–20 trials. Cells were regarded as orientation biased if the ratio of the response to the optimal versus the nonoptimal orientation was 3:2 or greater, and as orientation selective if the ratio was 3:1 or greater. We calculated a patch suppression index using the formula [1 − (Rplat/Ropt)] × 100, where Ropt and Rplat denote the responses to optimal and plateau stimuli, respectively (see Jones et al. 2001).
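
Expressed as code, the classification rules and the suppression index above might look like the following sketch; the function and variable names are ours, assuming firing rates in spikes/s.

```python
def orientation_class(r_opt, r_nonopt):
    # Classify a cell from its responses to optimal vs. nonoptimal orientations.
    ratio = r_opt / r_nonopt
    if ratio >= 3.0:
        return "orientation selective"   # 3:1 or greater
    if ratio >= 1.5:
        return "orientation biased"      # 3:2 or greater
    return "unselective"

def patch_suppression_index(r_opt, r_plat):
    # [1 - (Rplat / Ropt)] * 100: 0 = no suppression, 100 = complete suppression.
    return (1.0 - r_plat / r_opt) * 100.0
```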

We compared the responses evoked by iso-oriented and orientation contrast stimulus configurations. Cells were regarded as showing “orientation pop-out” if the response to an optimally oriented center stimulus in the presence of an orthogonal outer stimulus (±30°) was significantly larger than the response when both stimuli were present at the optimal orientation (P < 0.05, paired t-test). In some cases, responses to orientation contrast stimuli exceeded the response to the center stimulus alone. Cells were only classed as “orientation contrast facilitation cells” if the response to the orientation contrast configuration was significantly larger (P < 0.05) than the responses to the iso-orientation configuration and the inner stimulus alone, and if the response enhancement evoked by the orientation contrast condition (normalized with respect to the response to the center stimulus alone) exceeded 10%. Cells were classed as “nonorientation specific surround cells” if there was no significant difference between the responses to iso-orientation and orientation contrast stimulus conditions (P ≥ 0.05). Cells were classed as “orientation contrast suppression cells” if the response to orientation contrast was significantly smaller (P < 0.05) than to the iso-orientation stimulus.

We graphically represented the data collected with bipartite concentric stimuli using two-dimensional iso-response contour maps and three-dimensional surface maps. The distance between contours was defined as (Rmax − Rmin)/(1 + number of levels); Rmin defined the first level, and we used a spline-fitting algorithm to interpolate between responses. We used the same procedure to represent the data from the two-patch experiments.
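
Under one plausible reading of the contour rule above (evenly spaced levels starting at Rmin), the levels could be computed as follows; the function name and example values are ours.

```python
import numpy as np

def contour_levels(r_min, r_max, n_levels):
    # Spacing rule from the text: (Rmax - Rmin) / (1 + number of levels),
    # with Rmin defining the first level.
    step = (r_max - r_min) / (1 + n_levels)
    return r_min + step * np.arange(n_levels + 1)

# Example: responses between 5 and 45 Hz with 7 levels.
print(contour_levels(5.0, 45.0, 7))  # [ 5. 10. 15. 20. 25. 30. 35. 40.]
```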

To explore the location of zones driving orientation contrast facilitation, we adapted methodology previously used in area MT (Xiao et al. 1997; see also Jones et al. 2001). We calculated the strength of facilitation for each surround stimulus location according to the formula F = [(Rcs/Rc) − 1] × 100, where F is the enhancement elicited by a surround stimulus location, Rc is the response to the center stimulus, and Rcs is the response to the combination stimulus. We then computed two selectivity indices (FIs) by calculating the length of the mean vector.

We identified the most effective location by calculating the mean vector angle (Batschelet 1981). Thus the optimal angle OPA = arctan[Σ(i=1..n) Fi·sin(αi) / Σ(i=1..n) Fi·cos(αi)]. For bilaterally symmetric surround cells, the OPA represents the angle of the axis through the two optimal surround locations.
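
A sketch of the facilitation and mean-vector computations above, with our own names; note that atan2 is used in place of a plain arctan of the quotient so that the correct quadrant is preserved.

```python
import numpy as np

def facilitation(r_c, r_cs):
    # F = [(Rcs / Rc) - 1] * 100: percent enhancement from a surround location.
    return (r_cs / r_c - 1.0) * 100.0

def optimal_angle(f, alpha_deg):
    # Mean vector angle of facilitation values f at surround angles alpha_deg.
    a = np.radians(np.asarray(alpha_deg))
    f = np.asarray(f)
    return np.degrees(np.arctan2(np.sum(f * np.sin(a)),
                                 np.sum(f * np.cos(a))))
```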


Characteristics of a Measurement Scale

Identity

Identity refers to the assignment of numbers to the values of each variable in a data set. Consider, for instance, a questionnaire that asks for a respondent's gender with the options Male and Female. The values 1 and 2 can be assigned to Male and Female, respectively.

Arithmetic operations cannot be performed on these values, because they serve identification purposes only. This is the defining characteristic of a nominal scale.

Magnitude

Magnitude is the property of a measurement scale by which the numbers (the identities) have an inherent order from least to greatest. Values are usually represented on the scale in ascending or descending order. Positions in a race, for example, are ordered from 1st, 2nd, and 3rd down to last.

This example is measured on an ordinal scale because it has both identity and magnitude.

Equal intervals

Equal intervals means that the scale is standardized: the difference between adjacent levels on the scale is the same. This is not the case for the ordinal scale example highlighted above.

Adjacent positions do not differ by equal intervals. In a race, the runner in 1st place may finish in 20 seconds, the one in 2nd place in 20.8 seconds, and the one in 3rd place in 30 seconds.

A variable that has identity, magnitude, and equal intervals is measured on an interval scale.

Absolute zero

Absolute zero is a feature that is unique to a ratio scale. It means that the scale has a true zero point, defined by the absence of the variable being measured (e.g., no qualification, no money, does not identify as any gender, etc.).
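
To make the four characteristics concrete, here is a small illustrative sketch using the examples above, plus temperature in Celsius as a standard interval-scale example.

```python
# Nominal: identity only; arithmetic on the codes (e.g., 1 + 2) is meaningless.
gender_codes = {"Male": 1, "Female": 2}

# Ordinal: identity + magnitude; positions are ordered, but the gaps between
# adjacent positions (0.8 s vs. 9.2 s here) are not equal.
race_positions = [1, 2, 3]
finish_times = [20.0, 20.8, 30.0]

# Interval: equal intervals but no absolute zero; 20 C is not "twice as hot"
# as 10 C, although the 10-degree differences are comparable.
temperatures_c = [0.0, 10.0, 20.0]

# Ratio: absolute zero exists, so ratios are meaningful; 10 is twice 5.
money = [0.0, 5.0, 10.0]
```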


ORIGINAL RESEARCH article

  • 1 Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
  • 2 Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany

Older adults show larger dual-task performance decrements than younger adults. While this is assumed to be related to reductions in attentional capacity, precisely which functions are affected has not been specified. Such specification is, however, possible based on the "theory of visual attention" (TVA), which allows for modeling of distinct attentional capacity parameters. Furthermore, it is unclear whether older adults show qualitatively different attentional effects or whether they show the same effects as younger adults experience under more challenging conditions. By varying the complexity of the secondary task, it is possible to address this question. In our study, participants performed a verbal whole report of briefly presented letter arrays. TVA-based fitting of report performance delivered parameters of visual threshold t0, processing speed C, and visual short-term memory (VSTM) storage capacity K. Furthermore, participants performed a concurrent motor task consisting of continuous tapping of a (simple or complex) sequence. Both TVA and tapping tasks were performed under single- and dual-task conditions. Two groups of 30 younger adults each performed either the simple or the complex tapping, and a group of 30 older adults performed the simple tapping condition. In older participants, VSTM storage capacity declined under dual-task conditions. While no such effect was found in younger subjects performing the simple tapping sequence under dual-task conditions, the younger group performing the complex tapping task under dual-task conditions also showed a significant VSTM capacity reduction. Generally, no significant effect on other TVA parameters or on tapping accuracy was found. Comparable goodness-of-fit measures were obtained for the TVA modeling data in single and dual tasks, indicating that tasks were executed in a qualitatively similar, continuous manner, although quantitatively less efficiently under dual- compared with single-task conditions. Taken together, our results show that the age-specific effects of motor-cognitive dual-task interference are reflected in a stronger decline of VSTM storage capacity. They support an interpretation of VSTM as a central attentional capacity that is shared across visual uptake and concurrent motor performance. Capacity limits are reached earlier, and already at lower motor task complexity, in older compared with younger adults.


Beauty and the beholder: the role of visual sensitivity in visual preference


Branka Spehar 1* , Solomon Wong 1 , Sarah van de Klundert 1 , Jessie Lui 1 , Colin W. G. Clifford 1 and Richard P. Taylor 2
  • 1 School of Psychology, UNSW Australia, Sydney, NSW, Australia
  • 2 Department of Physics, University of Oregon, Eugene, OR, USA

For centuries, the essence of aesthetic experience has remained one of the most intriguing mysteries for philosophers, artists, art historians and scientists alike. Recently, views emphasizing the link between aesthetics, perception and brain function have become increasingly prevalent (Ramachandran and Hirstein, 1999; Zeki, 1999; Livingstone, 2002; Ishizu and Zeki, 2013). The link between art and the fractal-like structure of natural images has also been highlighted (Spehar et al., 2003; Graham and Field, 2007; Graham and Redies, 2010). Motivated by these claims and our previous findings that humans display a consistent preference across various images with fractal-like statistics, here we explore the possibility that observers' preference for visual patterns might be related to their sensitivity to such patterns. We measure sensitivity to simple visual patterns (sine-wave gratings varying in spatial frequency and random textures with varying scaling exponent) and find that sensitivities are highly correlated with the visual preferences exhibited by the same observers. Although we do not attempt to offer a comprehensive neural model of aesthetic experience, we demonstrate a strong relationship between visual sensitivity and preference for simple visual patterns. Broadly speaking, our results support assertions that there is a close relationship between aesthetic experience and the sensory coding of natural stimuli.
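
As a rough illustration of the stimulus family described above, one common way to synthesize a random texture with a controllable spectral scaling exponent is to shape white noise in the Fourier domain. This is our sketch, not the authors' stimulus code, and all parameter names are assumptions.

```python
import numpy as np

def scaling_texture(n=256, alpha=1.0, seed=0):
    # Random texture whose amplitude spectrum falls off as 1/f**alpha.
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fy))   # radial spatial frequency
    f[0, 0] = np.inf                     # avoid division by zero at DC
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    spectrum = (1.0 / f ** alpha) * np.exp(1j * phase)
    img = np.fft.ifft2(spectrum).real
    return (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]
```

Varying alpha trades off coarse against fine spatial structure, which is the manipulation the abstract describes for its random textures.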


Materials and Methods

Animals.

A total of 196 P. kuhlii (Kuhl's pipistrelles) and 167 R. aegyptiacus (Egyptian fruit bats) performed the experiments. The bats were caught at roosts housing several thousand bats in central Israel. They were brought to Tel Aviv University in groups of 16 to 30, tested within a few days, and then released at their roost. Most Kuhl's pipistrelles were ringed for future identification. In all other bats (all Egyptian fruit bats and some Kuhl's pipistrelles that were not ringed), the fur was slightly cut so that we could avoid reusing the same individuals during that season. During their stay in the laboratory, the bats were housed on a natural day/night cycle and were provided with water and food ad libitum.

All experiments were performed with permission of the Israeli National Park Authority and the Tel-Aviv University IACUC (permit number L15-007). All bats were tested (see below) within 1 to 4 d from capture except bats that participated in the learning experiment, which were released within 11 d. The number of bats that took part in each experiment is specified in Table 1.

Experimental Setup and Procedure.

The experiments took place in an acoustic room in the zoological garden at Tel Aviv University (2.5 × 4 × 2.5 m3). A corridor (0.9 × 3 × 1.9 m3) was set up in the middle of the room (SI Appendix, Fig. S1). The corridor's walls and ceiling were made of white reflective tarpaulin and did not allow the bats to land on them. Different obstacles, varying in their acoustic reflectivity and sonar aperture, were placed in the middle of the corridor, and the bats' reaction to them was tested (colliding or not colliding; see Video and Audio Analysis). A high-speed infrared camera (125 fps; OptiTrack, NaturalPoint) was situated near the target (0.3 m above ground) facing upward to document the bats' interaction with the obstacle. An ultrasonic microphone (UltraSoundGate CM16/CMPA, Avisoft) was placed near the camera pointing 45° upward, facing the release point of the bats. The microphone was connected to an A/D converter (Hm116, Avisoft) and recorded audio at a sampling rate of 250 kHz for Egyptian fruit bats and 375 kHz for Kuhl's pipistrelles. The A/D converter was connected to a desktop computer located in a control room outside the experimental room (SI Appendix, Fig. S1).

All experiments were conducted at a light level of <10−7 lux. (All light from electronic devices was abolished by covering them with black felt.) Light levels were measured inside the corridor with a light detector (SPM068 with ILT1700, International Light Technologies) with a resolution of 10−7 lux. The light level in the corridor was therefore below the threshold of the device and below the visual threshold of Egyptian fruit bats (59). Kuhl's pipistrelles have much smaller eyes, providing low visual sensitivity and acuity (60); therefore, it is safe to assume that they could not see at this light level either. We thus refer to this light level as complete darkness. Night-vision goggles were used by the experimenter throughout the experiments.

The bats were kept in a dark carrying cage for 15 min before the experiment to allow their eyes to adapt to the dark. They were then released one at a time, 1.5 m above ground, by an experimenter seated on a chair at the corridor entry. Each bat was tested in a single trial without any training or accommodation (except in the learning, spheres, and virtual reality experiments; see below). The microphone and camera were triggered by another experimenter sitting in the control room.

Experimental Treatments.

Reflective walls.

In this experiment, the obstacle completely blocked the corridor (SI Appendix, Fig. S1). The obstacle was a wall of identical area but varying acoustic reflectivity, hung from the ceiling. The target strengths of the walls were −30 dB (3-cm foam), −25 dB (3-cm foam with a 2-mm plastic board), −15 dB (a 2-mm plastic board covered with felt), and −7 dB (a 2-mm plastic board; see Acoustic Measurements). Both Kuhl's pipistrelles and Egyptian fruit bats were tested in this experiment.

Foliage.

To test whether a low-reflectivity wall might be perceived by the bats as something they can pass through, we tested Kuhl’s pipistrelles with a plastic mesh (3-mm diameter, 10-mm opening) covered with Tamarix branches and leaves (SI Appendix, Fig. S2). This “wall” was hung from the ceiling in the same way as the reflective walls. The wall’s target strength was measured to be −25 dB, the same as one of our least reflective walls.

Learning.

A subset of Kuhl’s pipistrelles first used for the reflective walls experiment was then flown daily, one trial every day, with a low-reflectivity wall (−25 dB), to document their learning.

Spheres.

To tile the intensity–aperture space, we flew Kuhl’s pipistrelles with three targets with smaller aperture: a 20-cm-diameter plastic sphere with −25 dB reflectivity, a 20-cm-diameter foam sphere with −45 dB reflectivity, and a 40-cm-diameter plastic sphere with −19 dB reflectivity. Each bat was tested in many trials, and the percentage of collisions was calculated per bat (the chances of colliding on a single trial were very low because of the small object size, so hundreds of naïve bats would be required if we did not reuse the same individuals).

The two 20-cm-diameter spheres were tested in the same corridor as the wall. The 40-cm-diameter sphere was tested in a slightly bigger corridor (1.5 × 3.5 × 2.1 m), the same corridor used for the virtual reality experiment (see below).

The sphere was placed at the center of the corridor (hanging on a 0.4-mm fishing wire; SI Appendix, Fig. S1) at four different heights in a random order. At each height, the bat was released from the same position as noted above, and its number of collisions in ∼10 flights past the sphere (back or forth in the corridor) was estimated. Each bat performed three rounds of these four heights (12 rounds in total). The height of the sphere was changed while the bats were in a fabric bag.

In this experiment, a second high-speed infrared camera was used. It was placed behind the take-off location facing the sphere to allow a better estimate of collisions (along with the camera below the sphere, which was present in all experiments).

Virtual reality.

Kuhl's pipistrelles were flown with a playback system that allowed recording the bat's echolocation signals and playing them back (while controlling their intensity) in real time. The delay of the system was 1 μs, thus placing the virtual object at the same distance from the bat as the physical speaker emitting it (i.e., there was virtually no delay between the real echo from the speaker and the played-back echo). The system included a condenser ultrasound microphone (CM16/CMPA40-5V, Avisoft) and an ultrasonic speaker (Vifa) connected to an amplifier. It was controlled by a NUCLEO-F446RE processor (STMicroelectronics). The program controlling the system was written in an Mbed compiler workspace. We accounted for the frequency response of the system by high-pass filtering the output according to the inverse of the system's frequency response. Note that it is very hard to obtain a "flat" frequency response, but, as we were comparing three playback conditions among each other (see below) and they all used the same system, this should have been a fair comparison for our purpose.

In these experiments, the corridor was replaced with a slightly bigger corridor (1.5 × 3.5 × 2.1 m) to avoid resonances caused by echoes reflected from the walls and picked up by the playback system. The tarpaulin wall at the rear end of the corridor was replaced with fine mesh to enable bats to land after crossing the corridor. Since the speaker of the system is directional, we wanted the bats to fly as little as possible in the wrong direction. The playback speaker (6.5 cm) was placed on a pole in the center of the corridor at 1.60 m height, facing the release point. The playback microphone was attached 0.25 m below the speaker, tilted upward ∼25° (SI Appendix, Fig. S1). The target strength of the entire system (microphone and speaker) was −28 dB (at 1 m), thus representing a small, weakly reflecting object. All other parts of the system were situated on the floor of the corridor and covered with black felt. Two cameras were used: one was placed on a small tripod at 0.3 m height below the playback system facing upward, and another on a tripod behind the release point at 1.5 m height facing the corridor. Two ultrasonic microphones were used (in addition to the playback's microphone): one situated on the camera behind the release point facing the corridor (to record the playback emission) and the other at the same height on the other end of the corridor behind the mesh wall (to record the bat). These microphones were used to ensure the playback system was working during the experiments.

The virtual reality experiment had three conditions. 1) In the playback condition, the bat's own calls were recorded, amplified, and played back, imitating real echoes returning from a highly reflective object. The intensity of the playback system was calibrated to emit echoes with the target strength of a −7-dB wall at 1 m, and it behaved like a real target: echoes were weaker when the bat was far from the playback system and stronger when it was closer. Since the playback was of the bat's own calls, if the bat changed its signal (e.g., shortening it), the playback changed as well. 2) In the noise condition, a time-reversed synthesized Kuhl's pipistrelle signal was played back to the bats in response to every echolocation call. This signal always had the same intensity (target strength of −7 dB at 1 m), regardless of the distance of the bat, and it was fixed (i.e., always the same synthesized signal with the same spectrotemporal characteristics). This condition allowed us to examine whether an observed response was merely a reaction to the sound produced by the system. 3) In the silence condition, the system was turned off to test the bats' response to the natural echoes coming back from the playback system.

All bats flew in all conditions. The bats were assigned to three groups, each performing the conditions in a different order. The different conditions were counterbalanced throughout the day. To get an accurate measurement, each bat was flown for many trials, and the distance was averaged across trials. The bats were flown until they performed at least 10 flights in which they passed the playback system in direct flight. There was no statistical difference between naïve bats performing a condition for the first time and bats performing their second or third condition in any of the parameters tested (2D distance from the playback system: U > 18, P > 0.05 for all conditions, Mann-Whitney test; crossing rate on the first trial: P > 0.15 for all conditions, Fisher's exact test; see Video and Audio Analysis). This was true for all three conditions, and we therefore pooled the data for each condition from all bats together.

A subset of animals from the reflective walls experiment was also trained and tested in a landing task with the playback system. Experiments took place in a large acoustic room in the zoological garden (5.6 × 4.5 × 2.5 m3) at a light level of 9 × 10−3 lux. This light level was the minimal limit of the tracking system used (see below), but since Kuhl's pipistrelles have small eyes and low acuity even at high light levels [for comparison, closely related species from the Pipistrellus genus were found to have acuities of 0.9 at >3,000 lx, while humans have an acuity of 1.3 at 5 × 10−4 lux (30)] and rely extensively on echolocation, it is safe to assume that vision was not used. The bats were trained two or three times a week for 4 wk to land on a wooden cubic target (15 × 15 × 15 cm3) placed on a pole to receive a food reward. We tested whether and how long it would take bats to land on a playback target with the same reflectivity as the wooden target. We compared landing time between three conditions: 1) the wooden target (−7 dB), 2) a foam cubic target of the same size (−26 dB; referred to as the "foam target"), and 3) a foam cubic target that amplified its echoes using the playback system (−7 dB; referred to as the "playback target"). Both the foam target and the playback target had a speaker embedded inside the foam and a microphone attached to the pole 0.3 m below, but only in the playback target was the speaker playing. Note that reflectivity was much higher with the speaker playing. The location of the target (wood, foam, or playback) was randomly moved in the room after every trial. The target was placed with the speaker and microphone directed toward the direction from which the bat approached.

A tracking system with 16 Raptor 1,280 × 1,024-pixel cameras and 4 Raptor-12 4,096 × 3,072-pixel cameras (Motion Analysis) was used to track the bats' flight. The system operated at 200 frames per second. The bat's center of mass was tracked by gluing three spherical reflective facial markers (3×3 Designs) to the bat's back with mounting tape. Each bat performed 15 trials per day, 5 in each condition in a random order, for 2 consecutive days. We found that the bats landed on all targets (they readily landed on the virtual target) and that there were no differences in the time it took them to find and land on each of the targets from the moment of release, suggesting that they perceived the playback target as a real one (P = 0.22, df = 2, Friedman test).

Egyptian fruit bat pups (6 to 7 wk old) that were born in the laboratory were also tested in the reflective walls experiment with a −30-dB wall. These bats had not experienced foam walls before being tested.

Acoustic Measurements.

We ensonified the different objects to estimate their target strength. All ensonifications were conducted in a quiet anechoic room. A speaker (Vifa) connected to an UltraSoundGate Player 116 device (Avisoft) was placed on a tripod at 1.3 m height. It played a 3-ms-long 40-kHz sine wave and a kuhlii-like synthesized signal (a 90- to 30-kHz down-sweep with peak intensity at 40 kHz), at a sampling rate of 500 kHz. Recordings were performed using a 46DD-FV 1/8-inch CCP-calibrated microphone (GRAS) placed on top of the speaker and digitized using an UltraSoundGate 116Hm device (Avisoft). The recording sampling rate was 375 kHz.

For target strength measurements, the different targets were placed in front of the speaker and microphone at 1-m distance, and the relative sound pressure of the echo reflecting from them was recorded. Then, the microphone was removed and placed in the position of the targets to measure the relative incident sound pressure. We then calculated the target strength by dividing the peak intensity of the echo by that of the incident sound for each target. The measurements for the −19-dB sphere were conducted the same way but from a 1.5-m distance, and the calculation was corrected to 1 m. To ease reading for nonexperts, we refer to target strength as reflectivity. Both signals (the sine and the kuhlii-like signal) gave similar results. The measurements from multiple ensonifications were averaged per target.
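
For the 1-m reference distance, the calculation described above reduces to a one-line dB conversion; the variable names and example values below are ours, and the distance correction applied to the 1.5-m measurement is omitted.

```python
import numpy as np

def target_strength_db(p_echo_peak, p_incident_peak):
    # Target strength at the 1-m reference: echo-to-incident
    # peak-pressure ratio, expressed in decibels.
    return 20.0 * np.log10(p_echo_peak / p_incident_peak)

# Example: an echo at 4.5% of the incident peak pressure gives about -27 dB.
print(target_strength_db(0.045, 1.0))
```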

Although the target strength reported throughout the paper is the target strength calculated as described above for a peak frequency of 40 kHz, we also ensonified the four wall targets with a 25- to 45-kHz down-sweep to calculate their target strength at 30 kHz (the peak intensity of Egyptian fruit bats' echolocation calls). The results were only slightly different (40-kHz signal: −7, −15, −25, and −30 dB; 30-kHz signal: −5, −19, −25, and −27 dB).

For power spectra and impulse response measurements, we used the same setup and played a 2-ms-long linear chirp that sweeps from 90 to 30 kHz. All targets were measured at 1 m distance except for the two low-reflectivity walls (−25 dB and −30 dB) that were very weak and therefore measured at 0.4 m distance in order to better represent the higher and weaker frequencies. For each target, 24 recordings were averaged and normalized by the peak intensity, thus making any differences caused by the different distances of the targets negligible.

Hearing Directionality Measurements.

Ear width was measured on three Kuhl's pipistrelles and three Egyptian fruit bats with a caliper. Ear width was 12 and 6 mm in Egyptian fruit bats and Kuhl's pipistrelles, respectively. We estimated the ratio of ear width to echolocation signal wavelength (the main factor influencing horizontal hearing directionality) near the frequency with the most energy in each species (25 kHz and 40 kHz in Egyptian fruit bats and Kuhl's pipistrelles, respectively).

Video and Audio Analysis.

Each trial of the experiments (except for virtual reality trials) was behaviorally classified as a collision or noncollision trial based on the video recording. In the case of the walls, noncollision trials included attempting to land on the target or turning back when near it. Bats whose videos did not enable us to see the moment of contact with the wall were omitted from the study (13 Egyptian fruit bats and 10 Kuhl's pipistrelles). Three independent observers classified each video. The scores of the observers (i.e., the proportion of colliding) were then averaged.

In the case of the first two spheres (the −25-dB plastic sphere and the −45-dB foam sphere; targets 6 and 7 in Fig. 1B), noncollision trials included avoiding the target (flying through the tunnel without touching the sphere), attempting to land on the sphere, or turning back when near it. In this experiment, touching the target was considered a collision (i.e., everything from a full-body collision to touching it with the tip of the wing, apart from landing attempts). Two independent observers classified each video. For each bat, the number of collisions was divided by the number of times the bat flew in the tunnel. These proportions were then averaged across bats. For the third, larger sphere (the −19-dB plastic sphere; target 8 in Fig. 1B), we scored collisions during the experiment itself, with one observer viewing the sphere from below and one from the side, both noting collisions. Because collisions were extremely rare, there was no point in watching all of the videos again.

For the virtual reality experiment, one observer classified the videos as crossing the playback speaker or not crossing, which included turning back or attempting to land on the walls or on the playback speaker. For trials in which the bat passed the playback system, the 2D position of the bat in relation to the playback speaker at the moment of passing was manually marked on the movie frame (using in-house Matlab software; MathWorks). We then calculated the Euclidean distance between the passing points and the playback speaker and compared the different conditions.

For audio analysis, all recordings of Egyptian fruit bats were manually examined using Saslab (Avisoft) to ensure that the bats echolocated (unlike Kuhl’s pipistrelles, Egyptian fruit bats can stop echolocating entirely).

We analyzed the echolocation behavior of a subset of Kuhl's pipistrelles tested in the reflective walls experiment (30, 21, 8, and 9 bats for the −30-dB, −25-dB, −15-dB, and −7-dB walls, respectively; half of the bats turned back when approaching a −7-dB wall, and their audio files could not be used). Audio and video files were first synchronized using an event visible in both the audio and video recordings (a finger snap). We then detected and analyzed the echolocation signals emitted by the bats along the approach, aiming to examine changes in repetition rate (interpulse intervals), using in-house Matlab software. The interpulse intervals were binned into 150-ms bins with 100 ms of overlap between bins to see how they changed over time as the bat approached the target. This analysis was not conducted on Egyptian fruit bats because, in complete darkness, they operate at their maximal echolocation rate and thus cannot increase it when approaching the target (61).
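
A sketch of the overlapping-bin analysis above: 150-ms windows advancing in 50-ms steps (i.e., 100 ms of overlap). The function name and the NaN convention for empty bins are our choices, not the authors' code.

```python
import numpy as np

def binned_ipi(call_times_ms, window=150.0, step=50.0):
    # Mean interpulse interval (IPI) in overlapping time windows.
    t = np.asarray(call_times_ms, dtype=float)
    ipis = np.diff(t)                 # interpulse intervals
    marks = t[1:]                     # time stamp assigned to each IPI
    starts = np.arange(t[0], t[-1] - window, step)
    means = []
    for s in starts:
        in_bin = (marks >= s) & (marks < s + window)
        means.append(ipis[in_bin].mean() if in_bin.any() else np.nan)
    return starts, np.array(means)
```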

The same analysis was conducted on Kuhl’s pipistrelles approaching −25-dB and −45-dB spheres (for seven of the eight bats in each condition). Since, in these conditions, the bats flew multiple times and the sphere was placed at four different heights throughout each bat’s session, we analyzed the first collision for each height (hence, four recordings per bat when possible). The interpulse intervals were first averaged per bat and then between bats.

Statistical Analysis.

In the reflective walls experiment, we compared collision rates between walls of different reflectivity using a χ2 test for independence. Post hoc analysis included testing all possible comparisons with an FDR correction. When the assumptions of the χ2 test were not met, Fisher's exact test was used instead. The same tests were used for comparing collision rates between foliage and the −25-dB wall (foliage experiment) and to compare pups and adults that were flown with a −30-dB wall (pups experiment). In the spheres experiment, each bat was flown for multiple trials, and a collision rate was calculated over all these trials. We then compared collision rates between bats that were flown with different spheres using a Kruskal-Wallis test due to the small sample size (n = 8 or 9 per group). Post hoc analysis included testing all possible comparisons between spheres with a Mann-Whitney test with an FDR correction. In the playback experiment, we averaged the distance at which the bat passed the playback speaker over 10 flights. Since all bats participated in all conditions, the average distance was compared between conditions with a Friedman test. We also tested whether the rate of bats crossing vs. not crossing the playback speaker on their first flight differed between conditions with Cochran's Q test for related samples. In the learning experiment, we tested the correlation between trial day and collision rate with Pearson's correlation.

We used a generalized linear mixed-effects model with a logit link to explain perception across all aperture-intensity experiments (reflective walls, spheres, and virtual reality). We used the proportion of behavioral responses that suggest perceiving the object as an obstacle (e.g., turning around and landing) as a proxy for perception (we term these two behaviors "perception" in the following discussion). The behavioral response (i.e., perception) was therefore the dependent variable, while reflectivity and sonar aperture were used as independent fixed factors and bat ID as a random factor (a simplified sketch of this model follows below). Sonar aperture was calculated as the percentage of the cross section that the object covered out of the entire cross section of the corridor (with the walls being 1).
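
For orientation, here is a sketch of the fixed-effects part of this model as a plain logistic regression. Note that this deliberately omits the random bat-ID intercept of the authors' mixed model, and the column names and toy data are our assumptions, not theirs.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy stand-in data: 1 = "perceived" (turned back or landed), 0 = crossed.
df = pd.DataFrame({
    "perceived":    [1, 0, 1, 1, 0, 1],
    "reflectivity": [-7, -30, -15, -7, -25, -25],          # dB
    "aperture":     [1.0, 1.0, 1.0, 0.02, 0.02, 1.0],      # fraction of corridor cross section
})

# Logit-link GLM on the fixed factors only (no random effect for bat ID).
model = smf.glm("perceived ~ reflectivity + aperture", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())
```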

Because the spheres and playback speaker covered only a small section of the corridor, the probability of encountering them (i.e., flying directly toward them) by chance was much lower than with the wall, and we thus had to normalize the bats' performance in these conditions. We used the response rate to the −25-dB plastic sphere as a reference for the same-sized −45-dB foam sphere, and the playback speaker was used as a reference for the silent-speaker condition. Even if this assumption is partially wrong, it alters the results in a direction that strengthens our conclusions, because it underestimates the number of random encounters, so it is a fair assumption. For example, the bats perceived the foam sphere in 7% of the trials and the plastic sphere in 12%. The normalized perception rate for the foam sphere was thus 7/12 × 100% ≈ 58%. Overall, we used the behavior in the following conditions in the model: −30-dB wall, −25-dB wall, −15-dB wall, −7-dB wall, silent speaker, and foam sphere.

In the playback condition, we used only the first flight of each bat in the corridor. Bats that turned back before reaching the speaker or landed on the walls before the speaker were considered as “perceiving.” Normalization was performed as for the spheres. The perception rate for the playing-back speaker was taken as the encounter rate, and the perception rate with the silent speaker was normalized accordingly.


Reviewing Editor: Tatyana Sharpee, The Salk Institute for Biological Studies

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Davide Zoccolan

The study reports a set of very systematic and comprehensive measurements of the impact of different kinds of anesthetics on the activity of different cell types in the brain. The results will have an important impact on the field. However, both reviewers felt strongly, and I agree, that the presentation needs substantial improvements, including figure rearrangements. The sections that were especially unclear are those on rank correlations. The detailed comments are below. Also, some discussion is needed as to the choice of the particular anesthetics used, and some additional detail to characterize the "sessile" state. Further, the work by Churchland et al., Nature Neuroscience 2010, is relevant to the results described on lines 425-432.

In this study, the authors compare some neuronal response properties of rat V1 neurons in the anesthetized and awake states. These properties include the magnitude of the spontaneous and evoked responses, the latency of the response, and the temporal pattern of firing of ensembles of simultaneously recorded neurons. The main strength of the study is the fact that it is based on chronic recordings from implanted animals, which were tested in either the awake or anesthetized state, thus allowing the same neurons to be probed under both states. In addition, three different kinds of anesthetics were used. On the other hand, the main weakness of the study is that the writing is quite poor, especially in the Results section, where not only are analyses and results described approximately, but the material presented in the figures is not described in sequential order: the authors jump back and forth among different panels of a figure or even among different figures, sometimes mislabeling the figures they refer to in the main text. This has made the Results extremely difficult and tiring to read, with the consequence of considerably lowering the understanding of the authors' conclusions. As a consequence, I found myself unable to fully gauge the solidity and originality of the findings reported by the authors. On top of that, I also found that the discussion of similar results from previous studies is too slim and does not quite cover the existing literature.

In any event, given that the experiments the authors carried out seem to be solid enough (although not perfect in their design: see the lack of tracking of head direction in the awake recordings), and given the fact that the topic is of some interest (not many papers have followed the same neurons across awake and anesthetized states), I think that the manuscript deserves the chance to be revised and reconsidered. But this revision needs to be very deep and careful: 1) the order of the figures should be totally revised, making it sequential and aligned with the description provided in the text; 2) the metrics used in the analysis should be completely described in the Methods (it is not enough to refer to previous studies that have used them) and also described again (briefly) in the Results, in the first instance in which they are used; and 3) the whole text should be revised by some colleague of the authors, ideally a native speaker, who should critically read the full manuscript and honestly point out to the authors what she/he does and does not understand. Only after this is done, and the manuscript has been considerably improved, will it be possible to truly assess the solidity and impact of the manuscript.

Below are my specific comments.

1. Introduction and Discussion: the recent article by Durand et al. (J Neurosci, 2016) should be mentioned somewhere in the Introduction and then discussed at length in the Discussion, since that article addresses many of the same questions investigated by the authors. The authors should compare the two studies and explain what similarities and differences are found across them.

2. Lines 210-211: "Evoked firing rates were calculated as the maximum firing rate of each unit across all bins following stimulus presentation". What bins are the authors referring to here? I guess they binned the time axis and computed the average firing rate (FR) in consecutive time bins, but neither the width of these bins nor the fact that the time axis was binned is mentioned. The authors cannot simply assume that the reader will know what they are talking about. They need to provide all the details of their methodological approach, and possibly in a sequential, logical order.

3. It was not clear at which cortical depth the presented data were collected. The authors state in the Methods that tetrodes were implanted "into the primary visual cortex, about 300 um below the dura mater at an angle of 30-40 degrees in the lateral to medial direction" (L102). Then they point out that the electrodes could be easily moved up and down and that "the recording location was extrapolated from the deepest trace identified by histological inspection of the sections and the tetrode-tuning log" (L 157-168). Thus it is not clear whether the recorded neurons are all from the same layer, or are presented independently of their depth. From what I understood, the recordings most likely come from layer 2/3. This issue should be explained by the authors as, especially in the anesthetized state, it could affect the results. More importantly, if the authors sampled multiple layers, they should try to segregate their data in a layer-by-layer fashion. If they cannot do so, they should explain why.

4. In the Methods, the authors state that the isoflurane concentration was set at 1.5% for the "Isoflurane only" condition because this was the "minimum alveolar concentration for adult rats" (Mazze et al. 1985), while for Isoflurane/Dormicum, "conditions were initially tested with pilot studies to achieve the lightest possible level of anesthesia" (starting from line 120, to line 130). As the authors also seem to be aware (because they adjust the concentrations), the correspondence between depth of anesthesia and % of isoflurane varies a lot across animals, and across studies in the literature. For example, 1.5% is reported as "light anesthesia" in Hudetz et al 2003, while it is defined as "deep" in Silva et al 2010 (Silva and colleagues define light anesthesia at 0.8% isoflurane). I would specify this in the Methods and report a wider range of literature, including some more recent studies.

5. The authors report a "low signal-to-noise ratio" as a criterion for excluding units from the analysis (L 208). This should be specified better. What was the threshold for exclusion? Was the SNR calculated on spontaneous or stimulus-evoked activity?

6. Is the unit selection reported at lines 229-232 the same as in point 4? Please clarify.

7. Lines 214-215: "The ratio between maximum evoked rate (R1) and spontaneous rate (R2) was calculated as (R1-R2)/(R1+R2) for each unit". This should not be called a ratio. It is confusing. A ratio would be R1/R2. If you call it a ratio and then you put labels as those in Fig. 1K-M, the reader can easily be misled. At first, I was very confused by looking at those panels; I did not understand why the ratio was always < 1, even though the evoked activity should have been larger than the spontaneous activity. Then I remembered that by "ratio" the authors did not mean a real ratio, but the index defined here in the Methods. But this is terribly confusing. This is not a ratio, it is an index. The authors should call it an "evoked rate index" or whatever they like, but they should not call it a ratio. More importantly, they should write in the labels of Fig. 1K-M the name they come up with, and not something like "Ev/Sp Awake", because otherwise the reader will think that the axis reports what is written on it: the ratio Ev/Sp, which, unfortunately, is not what the axis shows.
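
To illustrate the reviewer's distinction numerically (values are arbitrary):

```python
def evoked_rate_index(r_evoked, r_spont):
    # (R1 - R2) / (R1 + R2): bounded in (-1, 1], hence always < 1 in Fig. 1K-M.
    return (r_evoked - r_spont) / (r_evoked + r_spont)

def evoked_spont_ratio(r_evoked, r_spont):
    # A true ratio R1 / R2 exceeds 1 whenever evoked > spontaneous.
    return r_evoked / r_spont

print(evoked_rate_index(10.0, 2.0))   # 0.667
print(evoked_spont_ratio(10.0, 2.0))  # 5.0
```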

8. Line 225. It is not said explicitly, but it looks like the authors take the time of the peak of the response (relative to stimulus onset, I guess; again, this is not specified explicitly) as the response latency. This is unusual, since neurons obviously start to fire before the FR reaches its peak. This is why most studies take as the response latency the time at which the driven (i.e., background-subtracted) FR increases by a given percentage, or by a given number of SDs, relative to the background. See, for instance, these articles (Brincat and Connor, 2006; Zoccolan et al., 2007) for monkey studies and this article for a rat V1 study (Vermaercke et al., 2014). I would advise the authors to do the same, otherwise comparisons with latencies reported in other studies (e.g., Vermaercke et al., 2014) would be difficult.
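For concreteness, here is a minimal sketch of the threshold-crossing latency recommended above (Python/numpy; the 3-SD threshold is an arbitrary example, not a value taken from the cited studies):

```python
import numpy as np

def response_latency(psth, t, baseline_mask, n_sd=3.0):
    """First time after stimulus onset (t = 0) at which the driven
    (baseline-subtracted) rate exceeds n_sd baseline SDs."""
    base_mean = psth[baseline_mask].mean()
    base_sd = psth[baseline_mask].std()
    driven = psth - base_mean
    crossings = np.where((t > 0) & (driven > n_sd * base_sd))[0]
    return t[crossings[0]] if crossings.size else np.nan
```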

9. Lines 283-288. Here is one of the many places where the writing and the logic of the description must be improved. The authors first refer to panel D, where data from a single session with 17 simultaneously recorded cells are shown; then they comment on panel C (where the full data set is shown, as far as I can tell); and then, in their final sentence, they comment again on the variety of behaviors cells may display within a single ensemble. This is confusing. They should follow the logic of their figure: comment first on panel C (full data set), then on panel D (single ensemble of 17 cells).

10. Line 308: “All recorded units revealed a bimodal distribution (Figure 1F)”. It looks as though the assessment of bimodality was done only qualitatively, by visual inspection. It would be better if the authors used a more quantitative and objective approach. I suggest either running a clustering algorithm (e.g., k-means with k = 2) or fitting the density of cells on the plane defined by the axes of Fig. 1F with two 2-D Gaussians, and then assigning each cell to one of the Gaussians based on its proximity to their centers (in units of SD); see the sketch below.
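A sketch of the second option (assuming scikit-learn; the data here are synthetic stand-ins for the per-unit coordinates of Fig. 1F). Comparing the BIC against a one-component fit additionally gives a simple check that two modes are actually warranted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),   # stand-in for mode 1
               rng.normal(4.0, 1.0, (100, 2))])  # stand-in for mode 2

gmm2 = GaussianMixture(n_components=2, random_state=0).fit(X)
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(X)
labels = gmm2.predict(X)  # hard assignment of each unit to one mode
print('two modes favored (lower BIC):', gmm2.bic(X) < gmm1.bic(X))
```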

11. Line 321: “The GABAergic antagonistic anesthetic regimes (Isoflurane and Isoflurane/Dormicum)”. This should be an agonistic (not antagonistic) regime, as far as I understand.

12. Lines 322-323. Here, for the first time, the authors segregate their results based on the kind of anesthetic they used, and they report radical differences. But what about the previous figures (panels A-G)? It is not specified whether the reduction of FR reported in those panels was obtained with isoflurane or ketamine. Should we assume that those data were obtained with isoflurane? If so, this should be specified clearly when describing panels A-G. Similarly, do all the panels in Figure 1 up to panel J refer to evoked or spontaneous FR? This is not stated. At some point (line 333) a full section starts about the difference between spontaneous and evoked activity, but what about the data shown before? To which part of the response do they refer?

13. Line 337: the authors should avoid jumping ahead of their figures, making a quick reference to Fig. 2F (without explaining what it shows) while still describing Fig. 1. This is extremely confusing.

14. Lines 344-346: “our results support the proposed effect from previous investigations that an effect of anesthesia on unit activity is mainly a reduction in spontaneous activity”. This is all very confusing. The explanation of these panels of Fig. 1 is hard to follow because too many pieces of information are missing or are scattered across too many paragraphs. The issues are:

- In panels A-G we are told that anesthesia produces a reduction of FR.

- In panel H we learned that not all anesthetics produce such a reduction: only those based on isoflurane? But then I wonder: was the reduction of FR shown in panels A-G present because those data refer to isoflurane anesthesia? Impossible to know, because this information is missing.

- Finally, in panels K-M, we learn that there is a differential effect of the anesthetics on spontaneous vs. evoked activity. But then I wonder: did the data shown in panels A-J refer to spontaneous or evoked FR? Impossible to know, because this information is missing.

To complicate things even more, in panel M we see that there is a significant effect of anesthesia also for ketamine: a larger ev/sp ratio than in the awake state. But didn't ketamine fail to produce any reduction of FR (panel H)?

Finally, references are missing in the sentence quoted above.

15. Line 366: “after trough (awake: 168 ± 49 ms, anesthesia: 307 ± 140 ms, p=0.003, Wilcoxon).” Make reference to Fig. 2B here!

16. A very important issue of this study is that large parts of the figures are left without any comment or description in the main text. An example is Fig. 2E and Fig. 2F (barely mentioned). But here it would be important to understand what the figures mean and imply. For instance, in the case of Fig. 2F, there are many issues:

- The y label says “# units”, but then the scale goes up to 50,000. From the legend I understood that each line is actually one trial of one unit, and that all trials of all units are shown. Is that correct? If so, this should be stated more explicitly in the text.

- There seems to be some ranking of the units, but according to what exactly? Overall FR? Spontaneous FR? Evoked FR? This should be written somewhere.

- Finally, and most importantly: we are told in Fig. 1 that ketamine does not significantly reduce FR. But here it looks like it does dramatically reduce FR, more than any other anesthetic, in both the spontaneous and evoked phases of the response. How is this incongruence possible? Is it possible, by any chance, that the authors have mislabeled the figure, and the green dots actually refer to isoflurane while the blue dots refer to ketamine?

17. Lines 378-380. It looks like the authors made reference to Fig. 4B when they should have referenced Figs. 2F and G, and made reference to Fig. 4C when they should have referenced Fig. 2H. Well, this is my guess at least.

18. Lines 379-380: “normalizing the firing rates to baseline levels”. How exactly was this normalization done? Why do the curves in H start from negative values?
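One convention that would explain curves dipping below zero is simple baseline subtraction, in which suppression below the pre-stimulus rate becomes negative; this is only a guess at what the authors did, not their actual procedure:

```python
import numpy as np

def normalize_to_baseline(psth, baseline_mask):
    """Subtract the mean pre-stimulus rate from a PSTH (spikes/s);
    rates below the pre-stimulus baseline come out negative."""
    return psth - psth[baseline_mask].mean()
```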

19. Line 391: “latency under anesthesia was largely independent of latency in the awake state.” It is OK to report this result for the latency of the peak of the response, but this analysis should also be performed on the actual latency of the response (see my comment #4).

20. Section starting at line 407. This comment applies here and to the many other places in the manuscript where the authors do not comment on some of the panels of their figures. If you have decided to show Fig. 3A, you must describe it in the text. You cannot simply jump to Fig. 3B. If you do not feel the need to describe Fig. 3A, then why show this panel at all? Either drop it, or describe and explain it to the reader.

21. Outliers were removed from Figure 3C, as mentioned at line 412. What was the method for outlier identification? It should usually be based on the SD of the distribution (for example, all data exceeding 10 SD are classified as outliers); see the sketch below.
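A sketch of the kind of explicit criterion being requested (the 10-SD cutoff is just the example value mentioned above, not the authors' actual choice):

```python
import numpy as np

def remove_outliers(x, k=10.0):
    """Drop values more than k standard deviations from the mean."""
    x = np.asarray(x, dtype=float)
    z = np.abs((x - x.mean()) / x.std())
    return x[z <= k]
```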

22. Line 427. If Fig. 3F is the next figure you want to describe, then call it Fig. 3D. But do not jump to describing panel F without having described panels D and E. This is very bad writing.

23. Line 450: “We find a higher CV in the awake state (Figure 3G)”. Here, as in many other figures where the anesthetic is not specified: do these data refer to neurons recorded under all three anesthesia conditions, or just one specific anesthetic? The figure labels, the legends, and the main text should all be revised so that the reader can immediately and effortlessly know under which kind of anesthesia the data were recorded.

24. Section titled “Preservation of temporal sequences”. The preservation of the temporal sequence of neuronal firing within the awake and anesthetized states, and between the two, is presented as one of the main findings of the paper. I agree that this can potentially be an interesting result, but, although I was able to follow the overall meaning of this section and of Figure 4, many of the details of the analyses are unclear. This section needs to be fully revised to make it intelligible, by describing the panels of Fig. 4 step by step (in sequential order), and explaining carefully how the analysis shown in each panel differs from those shown in the previous and following ones. Also, it is not enough to cite the literature when explaining your method/metric; you must describe it in full.

For instance, from the description of Fig. 4C provided in the main text, it seems that the bar plots refer to CCs between the latencies of the units within an ensemble in two different states/sessions. But how is this different from what the authors later call MSL? And why, in the legend of Fig. 4C, are these CCs called MSL? And are the MSLs in Fig. 4C the same as those shown in Fig. 4B? And why, again and again, is Fig. 4C described before Fig. 4B? Do we need Fig. 4B to understand Fig. 4C? Is there any reason why one panel comes before the other?

To summarize: this is, as usual, very confusing. I do not have the patience or the strength to try to figure out something so obscure. I can only suggest two things to the authors:

- provide a detailed explanation of the metrics you compute (here as everywhere else), possibly with the aid of some dedicated figures.

- find one or more colleagues (the more the better) willing to read through your manuscript and honestly tell you whether they understand what you are trying to explain. Engage in long discussions with them until you manage to put together some intelligible text.

25. Lines 511-513. “More populations are included in this measure versus the single population- MSL rank correlations because the high demand of unit number for MSL individual spike times in sequence relevant correlation comparisons.” This is simply one of the most unclear sentences I have ever happened to read!

26. Lines 521-522. “As expected, for all measures the shuffling resulted in no skewness and normal distribution of events”. Is that really true? At least for the first and third panels (especially the third), the yellow bar plots seem positively skewed. The authors should quantify the skewness of the yellow bar plots, compare it to the skewness of the gray bar plots, and compare the two distributions statistically; see the sketch below. My impression is that a fraction of the sequence effect is preserved after shuffling.
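The requested quantification could be as simple as the following sketch (scipy assumed; `observed` and `shuffled` are synthetic stand-ins for the gray and yellow distributions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed = rng.gamma(2.0, 1.0, 500)  # stand-in for the gray bars
shuffled = rng.gamma(2.0, 1.0, 500)  # stand-in for the yellow bars

print('skewness (observed):', stats.skew(observed))
print('skewness (shuffled):', stats.skew(shuffled))
print(stats.ks_2samp(observed, shuffled))  # do the two distributions differ?
```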

27. Lines 548-549. “Each grating cycle size was estimated at the location furthest and closest from the screen to the rat's eyes”. Why the need to estimate a maximum and minimum SF? If the authors tracked the rat's head, I assume they tracked not just its position but also the direction in which the head was pointing. This should allow a precise estimate of the actual SF of the stimuli. To do so, it is only necessary to track two points (LEDs). Did the authors really track only a single LED, knowing the kind of analysis they planned to do?

28. Lines 561-564. I can see that Fig. 5D shows a preservation of activity for neurons preferring lower SF, but not high TF; however, I do not understand how this figure relates to the claim of this sentence.

29. Lines 572-573: “unit preferences was found between awake and anesthesia (Figure 5H,I)”. Again, if panels H and I are what you want to describe next, why don't you just name them panels E and F?

30. Section titled “Modelling awake-anesthesia data”. I don't feel that the modeling adds anything to this study. It is too simplistic a model of visual cortex to be meaningful. A model is useful if it shows an emergent property. Is this the case? Is it at all unexpected that increasing the inhibition in the network lowers the spontaneous and evoked responses? I don't think so. What do we learn about processing in the network? My opinion is that the whole modeling work should be dropped, unless the authors can convincingly show that we can learn something from it.

31. There are a few things reported in the results that are not properly discussed.

- The latencies in the results (lines 365, 383-387) are discussed in the paragraph starting at line 701, but the focus is only on the relative increase/decrease across brain states. No comparison of absolute values during anesthesia and wakefulness is made with values from other studies. Are the obtained values comparable with what is reported for visual neurons in the literature?

- No comparison with the literature is reported for the delays measured in the ketamine/xylazine condition (L725).

- Still on latencies: the studies cited in the paragraph starting at line 727 are from different species (Angel et al., 1973 on rabbits; Ter-Mikaelian et al., 2007 on Mongolian gerbils; Cheung et al., 2001 on cats). First of all, this should be specified; moreover, the findings should be compared with studies on visual (and other sensory) stimuli in anesthetized rats, as there are studies that report latencies in rat V1.

- At lines 283-284, the authors state that “neighboring neurons may respond differently to the same level of anesthesia.” This claim should be properly discussed and framed within the wide literature on the topic.

- The same goes for the statement at line 369 (“the LFP signature is not only shifted in time, but appears to last longer during anesthesia”) and at line 430 (“activity more correlated when there is no stimulus”). These are all topics that have been studied extensively and should be discussed at somewhat greater length.

- A suggestion for the authors of a paper on neural correlation and synchrony across areas under different levels of anesthesia in rats: Bettinardi et al., NeuroImage, 2015.

32. General comment for the Discussion: refer again to the specific figure panels when discussing specific results.

33. Lines 713-714. “on separate pools of an unknown number of units.” What does “unknown” mean? Is the number of units really not reported in this paper? Please double-check.

34. Minor problems in the figures:

3A - colorbar label missing

3D, left panel - time and amplitude units missing

4B - colorbar label missing

4C - x axis label and color-code label missing

1. Why did the experimenters choose these three types of anesthesia? For a systematic approach relating to the host of previously generated data under anesthesia, at least a fentanyl condition and also urethane should be added.

2. The stimulus control is worrisome. The authors compare conditions in which an anesthetized animal was simply placed in front of a screen with situations in which a mobile awake animal was roaming in front of the screens. They chose to pick the intervals in which the animal was sessile. However, only head position was assessed for this purpose. What about other behaviorally important parameters? What were the animals doing while being sessile? Sleeping, drowsing, grooming, etc.? In none of the reported situations was eye/gaze position monitored. Nevertheless, the authors present firing rates at a high temporal resolution (milliseconds). What is the effect of different spatial frequencies of the stimulus, and of receptive field position on the grating, on the latency and pattern estimates like the ones presented in Figs. 2 and 4? Why wasn't at least the distance varied in the anesthetized experiments?

3. Figure 1: What is meant by “anesthesia”? An average of all three types? I seriously doubt that this makes sense, as several significant differences between these regimes are shown in the rest of the paper. I strongly recommend removing all averages across anesthesia regimes from the manuscript!

4. Figure 4 and related text: This whole section needs to be reworked completely. It is barely intelligible. The analysis related to patterning has to be succinctly explained here. What is an “average mean (!) spike latency”?

5. In this section the impact of the lack of stimulus control is most critical. The authors need to be very meticulous here and argue carefully which of their results and interpretations still stand when they consider (perhaps systematic) differences in stimulus conditions between the awake and the anesthetized case (eye/whole-body movements, distance/angle to the screen, etc.).

1. What is a 16-electrode tetrode array? 4 tetrodes?

2. Figure 1: What is the n, e.g., in panels B and G? Trials? Spikes? Why is the n given for only one group in panel B?

4. Fig. 1G: Show the median and IQR (or similar) as well!

5. Spike types: Show example waveforms from all three types (BS, NS, triphasic).

6. Line 346: This statement warrants references!

7. Figure 3D: The abbreviation BS meant broad-spiking earlier. What is the purpose of this panel anyway? In a burst-suppression situation, enhanced correlation is highly expected!

8. Fig. 4 is inaccessible! Referencing other papers is not good enough here. Explain all the analysis strategies here in the text!

Additional minor changes that I found:

Lines 355-357: completely unclear. Please revise. It may also be acceptable to remove them entirely and proceed directly to the section on “Visually evoked latencies are delayed under anesthesia”.

Lines 347-353 seem inconsistent with lines 289-297, which state that only sessile sessions were analyzed.

Line 280: “the firing rates were …” (instead of “was”).

Lines 292-293: the text is not clear, and seems to disagree with the Niell and Stryker results.

Line 329: consider wording such as “had the most stable response dynamics” instead of “has the lowest changes in firing rate”.

Line 381: “… stimulus onset was …”.

Line 459: “compared to awake state.”



Evolutionary ancestry has hard-wired humans to have affective responses to certain patterns and traits. These predispositions lend themselves to responses when looking at certain visual arts as well. Identification of subject matter is the first step in understanding the visual image. Being presented with visual stimuli creates initial confusion. Being able to comprehend figure and background creates closure and triggers the pleasure centers of the brain by remedying the confusion. Once an image is identified, meaning can be created by accessing memory relative to the visual stimuli and associating personal memories with what is being viewed. [4]

Other methods of stimulating initial interest that can lead to emotion involve pattern recognition. Symmetry is often found in works of art, and the human brain unconsciously searches for symmetry for a number of reasons. Potential predators were bilaterally symmetrical, as were potential prey. Bilateral symmetry also exists in humans, and a healthy human is typically relatively symmetrical. This attraction to symmetry was therefore advantageous, as it helped humans recognize danger, food, and mates. Art containing symmetry is therefore typically approached and positively valenced by humans. [4]

Another example is that paintings or photographs of bright, open landscapes often evoke a feeling of beauty, relaxation, or happiness. This connection to pleasant emotions exists because it was advantageous to humans, before today's society, to be able to see far into the distance in a brightly lit vista. Similarly, visual images that are dark and/or obscure typically elicit emotions of anxiety and fear, because an impeded visual field puts a human at a disadvantage in defending themselves. [5]

Meta-emotions

The optimal visual artwork creates what Noy & Noy-Sharav call "meta-emotions". These are multiple emotions that are triggered at the same time. They posit that what people see when immediately looking at a piece of artwork are the formal, technical qualities of the work and its complexity. Works that are well-made but lacking in appropriate complexity, or works that are intricate but lacking in technical skill, will not produce "meta-emotions". [6] For example, seeing a perfectly painted chair (technical quality but no complexity) or a sloppily drawn image of Christ on the cross (complex but no skill) would be unlikely to stimulate deep emotional responses. However, beautifully painted works of Christ's crucifixion are likely to make people who can relate to, or who understand, the story behind them weep.

Noy & Noy-Sharav also claim that art is the most potent form of emotional communication. They cite examples of people being able to listen and dance to music for hours without getting tired, and of literature being able to take people to faraway, imagined lands inside their heads. Art forms give humans a higher satisfaction in emotional release than simply managing emotions on their own. Art allows people to have a cathartic release of pent-up emotions either by creating work or by witnessing and pseudo-experiencing what they see in front of them. Instead of leaving people passive recipients of actions and images, art intends for them to challenge themselves and work through the emotions they see presented in the artistic message. [6]

There is debate among researchers as to what types of emotions works of art can elicit: whether these are defined emotions such as anger, confusion, or happiness, or a general feeling of aesthetic appreciation. [8] The aesthetic experience seems to be determined by liking or disliking a work of art, placed along a continuum of pleasure–displeasure. [8] However, other diverse emotions can still be felt in response to art, which can be sorted into three categories: Knowledge Emotions, Hostile Emotions, and Self-Conscious Emotions. [8]

Liking and comprehensibility

Pleasure elicited by works of art can also have multiple sources. A number of theories suggest that enjoyment of a work of art is dependent on its comprehensibility, or ability to be understood easily. [9] Therefore, when more information about a work of art is provided, such as a title, description, or artist's statement, researchers predict that viewers will understand the piece better and demonstrate greater liking for it. [9] Experimental evidence shows that the presence of a title for a work increases perceived understanding, regardless of whether that title is elaborative or descriptive. [9] Elaborative titles also affected aesthetic responses to the work, suggesting that viewers do not create alternative explanations for a work when an explanatory title is given. [9] Descriptive or random titles do not show any of these effects. [9]

Furthering the thought that pleasure in art derives from its comprehensibility and processing fluency, some authors have described this experience as an emotion. [10] The emotional feeling of beauty, or an aesthetic experience, does not have a valenced emotional undercurrent; rather, it is general cognitive arousal arising from the fluent processing of a novel stimulus. [10] Some authors believe that aesthetic emotion is enough of a unique and verifiable experience that it should be included in general theories of emotion. [10]

Knowledge emotions

Knowledge emotions deal with reactions to thinking and feeling, such as interest, confusion, and surprise. [8] They often stem from self-analysis of what the viewer knows, expects, and perceives. [8] [12] This set of emotions also spurs actions that motivate further learning and thinking. [8]

Interest

Interest in a work of art arises from perceiving the work as new, complex, and unfamiliar, as well as understandable. [8] [12] This dimension is studied most often by aesthetics researchers, and can be equated with aesthetic pleasure or an aesthetic experience. [8] This stage of art experience usually occurs as the viewer understands the artwork they are viewing, and the art fits into their knowledge and expectations while providing a new experience. [12]

Confusion

Confusion can be viewed as an opposite to interest, and serves as a signal to the self to inform the viewer that they cannot comprehend what they are looking at, and confusion often necessitates a shift in action to remedy the lack of understanding. [8] [12] Confusion is thought to stem from uncertainty, and a lack of one's expectations and knowledge being met by a work of art. [12] Confusion is most often experienced by art novices, and therefore must often be dealt with by those in arts education. [8]

Surprise

Surprise functions as a disruption of current action to alert a viewer to a significant event. [8] The emotion is centered around the experience of something new and unexpected, and can be elicited by sensory incongruity. [8] Art can elicit surprise when expectations about the work are not met, but the work changes those expectations in an understandable way.

Hostile emotions

Hostile emotions toward art are often very visible in the form of anger or frustration, and can result in censorship, but are less easily described by a continuum of aesthetic pleasure-displeasure. [8] These reactions center around the hostility triad: anger, disgust, and contempt. [8] These emotions often motivate aggression, self-assertion, and violence, and arise from perception of the artist's deliberate trespass onto the expectations of the viewer. [8]

Self-conscious emotions

Self-conscious emotions are responses that reflect upon the self and one's actions, such as pride, guilt, shame, regret and embarrassment. [8] These are much more complex emotions, and involve assessing events as agreeing with one's self-perception or not, and adjusting one's behavior accordingly. [8] There are numerous instances of artists expressing self-conscious emotions in response to their art, and self-conscious emotions can also be felt collectively. [8]

Sublime feelings

Researchers have investigated the experience of the sublime, viewed as similar to aesthetic appreciation, which causes general psychological arousal. [13] The sublime feeling has been connected to a feeling of happiness in response to art, but may be more related to an experience of fear. [13] Researchers have shown that feelings of fear induced before looking at artwork result in more sublime feelings in response to those works. [13]

Aesthetic chills

Another common emotional response is that of chills when viewing a work of art. The feeling is predicted to be related to similar aesthetic experiences such as awe, feeling touched, or absorption. [14] Personality traits along the Big 5 Inventory have been shown to be predictors of a person's experience of aesthetic chills, especially a high rating on Openness to Experience. [14] Experience with the arts also predicts someone's experience of aesthetic chills, but this may be due to them experiencing art more frequently. [14]

Effects of expertise

The fact that art is analyzed and experienced differently by those with artistic training and expertise than by those who are artistically naive has been shown numerous times. Researchers have tried to understand how experts interact with art so differently from the art-naive, as experts tend to like more abstract compositions and show a greater liking for both modern and classical types of art. [15] Experts also exhibit more arousal when looking at modern and abstract works, while non-experts show more arousal to classical works. [15]

Other researchers predicted that experts find more complex art interesting because they have changed their appraisals of art to create more interest, or are possibly making completely different types of appraisals than novices. [16] Experts described works rated high in complexity as easier to understand and more interesting than did novices, possibly as experts tend to use more idiosyncratic criteria when judging artworks. [16] However, experts seem to use the same appraisals of emotions that novices do, but these appraisals are at a higher level, because a wider range of art is comprehensible to experts. [16]

Expertise and museum visits

Because most art is in museums and galleries, most people have to make deliberate choices to interact with it. Researchers are interested in what types of experiences and emotions people are looking for when going to experience art in a museum. [17] Most people respond that they visit museums to experience 'the pleasure of art' or 'the desire for cultural learning', but when broken down, visitors of museums of classical art are more motivated to see famous works and learn more about them. [17] Visitors to contemporary art museums were motivated more by an emotional connection to the art, and went more for the pleasure than for a learning experience. [17] Predictors of who would prefer which type of museum lay in education level, art fluency, and socio-economic status. [17]

Researchers have offered a number of theories to describe emotional responses to art, often aligning with the various theories of the basis of emotions. Authors have argued that the emotional experience is created explicitly by the artist and mimicked in the viewer, or that the emotional experience of art is a by-product of the analysis of that work. [1] [2]

Appraisal theory

The appraisal theory of emotions centers on the assumption that it is the evaluation of events, and not the events themselves, that causes emotional experiences. [1] Emotions are then created by the different groups of appraisal structures through which events are analyzed. [1] When applied to art, appraisal theories argue that various artistic attributes, such as complexity, prototypicality, and understandability, serve as appraisal structures, and that works exhibiting more typical art principles will create a stronger aesthetic experience. [1] Appraisal theories suggest that art is experienced as interesting after being analyzed through a novelty check and a coping-potential check, which assess the art's newness of experience for the viewer and the viewer's ability to understand the new experience. [1] Experimental evidence suggests that art is preferred when the viewer finds it easier to understand, and that interest in a work can be predicted from the viewer's ability to process complex visual works, which supports the appraisal theory. [1] People with higher levels of artistic expertise and knowledge often prefer more complex works of art. Under appraisal theory, experts have a different emotional experience of art due to a preference for more complex works that they can understand better than a naive viewer. [1]

Appraisal and negative emotions

A newer take on this theory focuses on the consequences of the emotions elicited by art, both positive and negative. The original theory argues that positive emotions are the result of a biobehavioral reward system, where a person feels a positive emotion when they have completed a personal goal. [18] These emotional rewards create actions by motivating approach toward or withdrawal from a stimulus, depending on whether the object is positive or negative for the person. [18] However, these theories have not often focused on negative emotions, especially negative emotional experiences of art. [18] These emotions are central to experimental aesthetics research, which seeks to understand why people have negative, rejecting, condemning, or censoring reactions to works of art. [18] By showing research participants controversial photographs, rating their feelings of anger, and measuring their subsequent actions, researchers found that participants who felt hostile toward the photographs displayed more rejection of the works. [18] This suggests that negative emotions toward a work of art can create a negative action toward it, and points to the need for further research on negative reactions to art. [18]

Minimal model

Other psychologists believe that emotions are of minimal functionality, and are used to move a person towards incentives and away from threats. [19] Therefore, positive emotions are felt upon the attainment of a goal, and negative emotions when a goal has failed to be achieved. [19] The basic states of pleasure or pain can be adapted to aesthetic experiences by a disinterested buffer, where the experience is not explicitly related to the goal-reaching of the person, but a similar experience can be analyzed from a disinterested distance. [19] These emotions are disinterested because the work of art or artist's goals are not affecting the person's well-being, but the viewer can feel whether or not those goals were achieved from a third-party distance.

Five-step aesthetic experience

Other theorists have focused their models on the disrupting and unique experience that comes from interacting with a powerful work of art. An early model focused on a two-part experience: facile recognition and meta-cognitive perception, or the experience of the work of art and the mind's analysis of that experience. [20] A further cognitive model develops this idea into a five-part emotional experience of a work of art. [20] As this five-part model is new, it remains only a theory, as not much empirical evidence for the model has been gathered yet.

Part one: Pre-expectations and self-image

The first stage of this model focuses on the viewer's expectations of the work before seeing it, based on their previous experiences, their observational strategies, and the relation of the work to themselves. [20] Viewers who tend to appreciate art, or know more about it will have different expectations at this stage than those who are not engaged by art. [20]

Part two: Cognitive mastery and introduction of discrepancy

After viewing the work of art, people will make an initial judgment and classification of the work, often based on their preconceptions of the work. [20] After initial classification, viewers attempt to understand the motive and meaning of the work, which can then inform their perception of the work, creating a cycle of changing perception and the attempt to understand it. [20] It is at this point any discrepancies between expectations and the work, or the work and understanding arise. [20]

Part three: Secondary control and escape

When an individual finds a discrepancy in their understanding that cannot be resolved or ignored, they move to the third stage of their interaction with a work of art. [20] At this point, interaction with the work has switched from lower-order and unconscious processes to higher-order cognitive involvement, and tension and frustration starts to be felt. [20] In order to maintain their self-assumptions and to resolve the work, an individual will try to change their environment in order for the issue to be resolved or ignored. [20] This can be done by re-classifying the work and its motives, blaming the discrepancy on an external source, or attempting to escape the situation or mentally withdraw from the work. [20]

Part four: Meta-cognitive reassessment

If viewers cannot escape or reassess the work, they are forced to reassess the self and their interactions with works of art. [20] This experience of self-awareness through a work of art is often externally caused, rather than internally motivated, and starts a transformative process to understand the meaning of the discrepant work, and edit their own self-image. [20]

Part five: Aesthetic outcome and new mastery

After the self-transformation and change in expectations, the viewer resets their interaction with the work, and begins the process anew with deeper self-understanding and cognitive mastery of the artwork. [20]

In order to research emotional responses to art, researchers often rely on behavioral data. [21] But new psychophysiological methods of measuring emotional response are beginning to be used, such as the measurement of pupillary response. [21] Pupil responses have been predicted to indicate image pleasantness and emotional arousal, but can be confounded by luminance, and by confusion between an emotion's positive or negative valence, requiring an accompanying verbal explanation of emotional state. [22] Pupil dilations have been found to predict emotional responses and the amount of information the brain is processing, measures important in testing the emotional response elicited by artwork. [21] Further, the existence of pupillary responses to artwork can be used as an argument that art does elicit emotional responses with physiological reactions. [21]

Pupil responses to art

After viewing Cubist paintings of varying complexity, abstraction, and familiarity, participants' pupil responses were greatest when viewing aesthetically pleasing artwork, and highly accessible art, or art low in abstraction. [21] Pupil responses also correlated with personal preferences of the cubist art. [21] High pupil responses were also correlated with faster cognitive processing, supporting theories that aesthetic emotions and preferences are related to the brain's ease of processing the stimuli. [21]

Left-cheek biases

These effects are also seen when investigating the Western preference for left-facing portraits. This skew toward the left cheek is found in the majority of Western portraits, and such portraits are rated as more pleasing than other portrait orientations. [23] Theories for this preference suggest that the left side of the face is more emotionally descriptive and expressive, which lets viewers connect with this emotional content better. [23] Pupil-response tests were used to test emotional responses to different types of portraits, left- or right-cheek, and pupil dilation was linearly related to the pleasantness of the portrait, with increased dilation for pleasant images and constriction for unpleasant images. [23] Left-facing portraits were rated as more pleasant even when mirrored to appear right-facing, suggesting that people are more attracted to more emotional facial depictions. [23]

This research was continued, using portraits by Rembrandt featuring females with a left-cheek focus and males with a right-cheek focus. [22] Researchers predicted Rembrandt chose to portray his subjects this way to elicit different emotional responses in his viewers related to which portrait cheek was favored. [22] In comparison to previous studies, increased pupil size was only found for male portraits with a right-cheek preference. This may be because the portraits were viewed as domineering, and the subsequent pupil response was due to unpleasantness. [22] As pupil dilation is more indicative of strength of emotional response than the valence, a verbal description of emotional responses should accompany further pupillary response tests. [22]

Art is also used as an emotional regulator, most often in Art Therapy sessions. Studies have shown that creating art can serve as a method of short-term mood regulation. [24] [25] This type of regulation falls into two categories: venting and distraction. [24] Artists in all fields of the arts have reported emotional venting and distraction through the creation of their art. [24] [25]

Venting

Venting through art is the process of using art to attend to and discharge negative emotions. [24] However, research has shown venting to be a less effective method of emotional regulation. Research participants asked to draw either an image related to a sad movie they just watched, or a neutral house, demonstrated less negative mood after the neutral drawing. [24] Venting drawings did improve negative mood more than no drawing activity. [24] Other research suggests that this is because analyzing negative emotions can have a helpful effect, but immersing in negative emotions can have a deleterious effect. [25]

Distraction

Distraction is the process of creating art to oppose, or in spite of, negative emotions. [24] This can also take the form of fantasizing, or creating an opposing positive to counteract a negative affect. [25] Research has demonstrated that distractive art-making activities improve mood more than venting activities. [24] Distractive drawings were shown to decrease negative emotions more than venting drawings or no drawing task, even after participants were asked to recall their saddest personal memories. [24] These participants also experienced an increase in positive affect after a distractive drawing task. [24] The change in mood valence after a distractive drawing task is even greater when participants are asked to create happy drawings to counter their negative mood. [25]


Conclusions

The concept of similarity is a pivotal aspect of many psychological studies, particularly those pertaining to visual search (Duncan & Humphreys, 1989; Treisman & Gelade, 1980). MDS is a simple and robust way of quantifying similarity scores between stimuli on any dimension(s). Data collection methods for MDS are easy to implement (Hout, Goldinger & Ferguson, 2013), and the analysis tools for MDS are already included in many popular statistical programs (e.g., SPSS, R, or the statistics toolbox for Matlab). Although combining MDS with standard behavioral measurements (e.g., RTs) is useful, we feel that the inclusion of eye-tracking is a particularly fruitful avenue for future research. Eye-tracking analyses have the ability to uncover more nuanced effects of similarity in visual search (Hout, Goldinger & Brady, 2014; Maxfield & Zelinsky, 2012; Stroud et al., 2012). For instance, by deconstructing the search RT into periods of scanning behavior (i.e., how efficiently the eyes are guided to the target) and decision-making behavior (i.e., how quickly targets are identified following fixation, or how quickly distractors are rejected), researchers can better elucidate the manner in which similarity relationships affect one's ability to perform a search. Moreover, computational approaches to similarity also hold a great deal of promise: by combining human ratings with those derived from computer vision, researchers have the potential to greatly inform theories of visual search. Computational approaches may even someday prove to be better predictors of human behavior, and thereby supplant the need for human ratings. For now, more generally, we hope that the use of MDS in psychological studies will soon be commonplace, ensuring the ease and accuracy of quantifying similarity to better examine theories in visual search and beyond.

Finally, throughout the present review, we have focused on outlining how MDS ratings can be used to quantify the visual similarity between objects. It also is possible to use MDS to go beyond the visual modality and compute the similarity between other forms of stimuli, objects, concepts, and information. For example, in a recent study, Montez, Thompson, and Kello (2015) asked participants to recall as many animal names as possible within a given time period. The purpose in doing so was to examine memory recall. Participants arranged the animal names on a whiteboard and were then asked to categorize groups of animal names. Again, although there was a visual component to this task, the core usage of MDS was that it enabled the researchers to quantify the semantic rather than visual similarity between items. This demonstrates that MDS is not just beneficial in the visual domain, but in other domains and modalities as well. Indeed, it may be the case that MDS will ultimately prove to be highly beneficial beyond the visual modality, enabling researchers to capture, for example, both the visual and semantic similarity between objects and items together.
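As a minimal illustration of the workflow described above (a sketch only: scikit-learn's MDS implementation is assumed, and the dissimilarity matrix is simulated where, in a real study, it would come from pairwise human ratings):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
ratings = rng.uniform(0.0, 1.0, (8, 8))  # simulated pairwise dissimilarities
D = (ratings + ratings.T) / 2.0          # dissimilarity matrices must be symmetric
np.fill_diagonal(D, 0.0)                 # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(D)            # one (x, y) point per stimulus
print(coords)
```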

