Posters will be displayed throughout the day: odd-numbered posters in the morning, even-numbered posters in the afternoon.
The floorplan is here: Floorplan
Monday, 9:30 AM - 10:30 AM
Reconstructing visual and subjective experience from the brain
Yukiyasu Kamitani (Kyoto University and ATR Computational Neuroscience Laboratories)
Yukiyasu Kamitani, Ph.D., is Professor at Kyoto University and Department Head and ATR Fellow at ATR Computational Neuroscience Laboratories. He received his B.A. in Cognitive Science from the University of Tokyo and his Ph.D. in Computation and Neural Systems from the California Institute of Technology. He continued his research in cognitive and computational neuroscience at Harvard Medical School and Princeton University. In 2004 he joined ATR Computational Neuroscience Laboratories, and since 2015 he has been Professor at Kyoto University. He is a pioneer in the field of "brain decoding", which combines neuroimaging and machine learning to translate brain signals into mental contents. He was named a Research Leader in Neural Imaging on the 2005 “Scientific American 50” and has received many awards, including the Tsukahara Memorial Award (2013), the JSPS Prize (2014), and the Osaka Science Prize (2015). In 2018 he was selected as an ATR Fellow. He has recently been engaged in contemporary art projects with Pierre Huyghe (“UUmwelt” at the Serpentine Galleries, London, 2018) and other artists.
Moderator: Shin'ya Nishida (Kyoto University, NTT)
Tuesday, 5:15 PM - 6:15 PM
Pluripotent stem cell-derived photoreceptor transplantation
Masayo Takahashi (Center for Biosystems Dynamics Research, RIKEN)
We are now investigating how to increase the number of synapses and the efficacy of transplantation. However, immunohistochemical characterization of synapses in the degenerated or grafted retina is often challenging, as the traits are not as clear as in the wild-type retina. Therefore, using postnatal wild-type mouse retina as training data, we developed a new method to objectively count synapses. Using this method, we evaluated the synapses of iPSC-retina after transplantation into rd1 mice. The number of synapses increased by 30 days after transplantation, whereas we could not find any synapse formation in in vitro retinal organoids. Synapse numbers were higher in a light/dark-cycle environment than in a completely dark one.
I will talk about the future strategy of outer retinal transplantation.
Masayo Takahashi, M.D., Ph.D., is Project Leader at the Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research. She received her Ph.D. from Kyoto University. After serving as an assistant professor at Kyoto University Hospital, she moved to the Salk Institute in the U.S., where she discovered the potential of stem cells as a tool for retinal therapy. She joined RIKEN in 2006.
Moderator: Ichiro Fujita (Osaka University)
Wednesday, 5:15 PM - 6:15 PM
A motor theory of sleep-wake control
Yang Dan (University of California, Berkeley)
Yang Dan, Ph.D. She studied physics as an undergraduate student at Peking University and received her Ph.D. training in Biological Sciences at Columbia University. She did her postdoctoral research at Rockefeller University and Harvard Medical School. She is currently Paul Licht Distinguished Professor in the Department of Molecular and Cell Biology and an investigator of the Howard Hughes Medical Institute at the University of California, Berkeley.
Moderator: Izumi Ohzawa (Osaka University)
Keynote 4 / free public lecture
Thursday, 11:15 AM - 12:15 PM
ISETBIO: Software for the foundations of vision science
Brian Wandell (Stanford University)
Collaborative work with David Brainard, Nicolas Cottaris, Trisha Lian, Zhenyi Liu, Joyce Farrell, Haomiao Jiang, Fred Rieke, and James Golden.
Brian A. Wandell, Ph.D., is the first Isaac and Madeline Stein Family Professor at Stanford University and a member, by courtesy, of Electrical Engineering and Ophthalmology. He is Director of the Center for Cognitive and Neurobiological Imaging and Deputy Director of the Wu Tsai Neuroscience Institute. He graduated from the University of Michigan in 1973 with a B.S. in mathematics and psychology and earned a Ph.D. in social science from the University of California, Irvine in 1977. After a year as a postdoctoral fellow at the University of Pennsylvania, he joined the Stanford Psychology faculty in 1979, was promoted to associate professor with tenure in 1984, and became a full professor in 1988.
Moderator: Kaoru Amano (CiNet, NICT)
Physiological, psychological, and computational foundations of scene understanding
Yukako Yamane (Osaka University, Okinawa Institute of Science and Technology)
Ko Sakai (University of Tsukuba)
Charles E. Connor (Johns Hopkins University)
Mark Lescroart (University of Nevada, Reno)
Ernst Niebur (Johns Hopkins University)
Steven W. Zucker (Yale University)
Vision science has revealed much about the nature of human vision and its functions in cognition, yet how humans understand an entire scene remains a challenging problem. How does the visual system segregate images into meaningful parts and then assemble those parts into informative representations of the outside world? How do those representations support our immediate, intuitive knowledge about where we are, what things are present, and how those things relate to each other and to the overall scene? How do they tell us what just happened in the scene, what might happen next, and how we should react? The recent rapid advancement of machine learning algorithms has enabled the identification, description, and even generation of scenes; however, these algorithms are still incapable of providing clues for understanding a scene as we humans do. We invite world-leading scientists to discuss the physiological, psychological, and computational foundations of scene understanding.
Objects, Scenes, and Gravity in Ventral Pathway Visual Cortex
Alexandriya Emonds, Siavash Vaziri, Charles E. Connor (Krieger Mind/Brain Institute, Johns Hopkins University)
It has long been thought that the ventral pathway is dedicated exclusively to visual object processing, while scene understanding is primarily a dorsal pathway function. However, we recently reported that many neurons in the macaque monkey ventral pathway, including a majority in the TEd channel, are far more responsive to large-scale scene structure. These neurons are particularly tuned for the tilt and slant of planar surfaces and edges, in ways that suggest they represent the direction of gravity. In new experiments, we have begun to examine how object information and scene information are integrated in the ventral pathway. Early results show that individual neurons can be congruently selective for ground tilt, object tilt, and object balance (distribution of mass with respect to points of ground contact). This is consistent with the theory that visual cortex serves as an intuitive physics engine for understanding the natural world, in particular the energetic potentials and constraints imposed on objects by the ubiquitous force of gravity.
Human Scene-selective Areas Represent the Orientation of and Distance to Large Surfaces
Mark Lescroart (University of Nevada, Reno)
A network of areas in the human brain—including the Parahippocampal Place Area (PPA), the Occipital Place Area (OPA), and the Retrosplenial Complex (RSC)—responds to images of visual scenes. Long-standing theories suggest that these areas represent the 3D structure of the local visual environment. However, most experiments that study the representation of scene structure have relied on operational or categorical definitions of scene structure—for example, comparing responses to open vs. closed scenes. It is not clear, based on such studies, how these areas might respond to scenes that do not fall into one of the investigated categories. Furthermore, it has been hypothesized that scene-selective areas represent 2D cues for 3D structure rather than 3D structure per se. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. Our 3D structural model uses continuous parameters based on 3D data (surface normals and depth maps) rather than human-assigned categorical labels such as “open” or “closed”. We fit the models to fMRI data recorded while subjects viewed visual scenes. Variance partitioning on the fit models reveals that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independently of low-level features. Individual voxels appear to be tuned for combinations of the orientation of and distance to large surfaces. Principal components analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on our model demonstrate that the model captures unprecedented detail about the 3D structure of the local visual environment from BOLD responses in scene-selective areas.
Perceptual Organization and Attention to Objects
Ernst Niebur (Solomon Snyder Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University)
One of the most important strategies for dealing with the extremely high complexity of the visual signal from a cluttered scene is to organize the input into perceptual objects. The task of scene understanding is then transformed from interpreting ~10^6 rapidly changing input signals into interpreting a much smaller number of spatio-temporal patterns that mostly correspond to structures in the real world and are constrained by physical laws. This task is, however, highly non-trivial and requires grouping those elements of the raw input that correspond to the same object, and segregating them from those corresponding to other objects and the background. We propose that primates solve this perceptual organization task using small populations of dedicated neurons that represent different objects. We note that this segregation process does not require the formation of fully-defined recognizable objects: computational models show that it can be accomplished on perceptual precursors of objects with very simple properties that we call proto-objects. Key features of the models are "grouping" neurons that integrate local features into coherent proto-objects, and excitatory feedback to the same local feature neurons that caused the activation of the associated proto-object's grouping neuron. Organization of the scene into proto-objects thus transforms the seemingly impossible task of scene understanding into manageable sub-tasks. For instance, object recognition can then proceed in a sequential fashion, operating on one proto-object at a time. A more general mid-level task is attention to objects, and the model explains how attention can be directed (top-down) to objects even though the central structures that control top-down attention do not have a representation of the detailed features of these objects.
How Interactions Between Shading and Color Inform Object and Scene Understanding
Steven W. Zucker (Department of Computer Science, Yale University)
What is the shape of an apple, and what color is it?
These are instances of classical questions about object perception: (i) how is it possible that we can infer the three-dimensional shape of an object (e.g., an apple) from its shading, and (ii) how is it possible that we can separate the intensity changes due to material effects (the apple's pigmentation) from intensity changes due to shading? Clearly, mistakes in solving (ii) would impact (i). Importantly, these two problems arise in scene perception as well. Objects are described by surfaces and their parts; scenes are described by surface arrangements and their interactions. For example, when walking along a path through the woods, where is the ground surface and why do shadows not affect its perceived shape? Again, mistakes would impact performance if shadows were interpreted as holes.
We introduce a computational approach to these questions grounded in basic neurophysiology and computational theory. Regarding problem (i), we outline a novel approach to shape-from-shading inferences that is based on visual flows, a mathematical abstraction of how information is represented in visual cortex. Regarding (ii), we introduce a model of color representation (hue flows), also based on visual cortex, which is analogous to the shading flows. Then, given both flows, we can determine when hue co-varies generically with shading, thereby addressing problem (ii) and implying a material effect. We demonstrate psychophysically how hue flows can be designed to alter shape percepts, and we demonstrate computationally how hue flows pass through cast shadows. Taken together, key aspects of our abilities to wander through scenes and to describe and manipulate objects are both supported by the foundational interactions of shading and hue flows.
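The flow comparison described above can be illustrated with a toy computation (not the authors' model): take image gradients as a crude stand-in for cortical shading and hue flows, and check whether the two orientation fields align. The field definitions and all parameter values below are illustrative assumptions.

```python
import numpy as np

def flow_orientation(field):
    """Local gradient orientation (radians) of a 2D scalar field,
    used here as a stand-in for a cortical flow representation."""
    gy, gx = np.gradient(field)
    return np.arctan2(gy, gx)

# A smoothly shaded bump whose hue co-varies generically with shading
# (hue is a monotone function of luminance):
y, x = np.mgrid[-1:1:64j, -1:1:64j]
shading = np.exp(-(x**2 + y**2) * 3)   # luminance field
hue = 0.5 * shading + 0.2              # hue field, co-varying with shading

ang_s = flow_orientation(shading)
ang_h = flow_orientation(hue)

# Where hue is a function of shading, the two flows are parallel,
# so the orientation agreement is ~1 everywhere:
agreement = np.cos(ang_s - ang_h)
```

In this sketch the agreement map is near 1 at every pixel; breaking the functional relation between hue and shading (e.g., an independent pigmentation pattern) would produce regions of low agreement, the signature of a material effect.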
Unpacking cognitive and neural mechanisms underpinning the recognition and representations of unfamiliar and familiar faces and facial expressions: behavioral, eye movement and ERP studies
Kazuyo Nakabayashi (University of Hull)
Sakura Torii (Kobe Shoin Women’s University)
Kazuyo Nakabayashi (University of Hull)
Alejandro Estudillo (University of Nottingham)
Christel Devue (Victoria University of Wellington)
Holger Wiese (Durham University)
This symposium addresses some of the key debates in studies of face recognition and expressions. Here, we provide novel evidence demonstrating perceptual, cognitive and neural mechanisms underlying the representations and recognition of familiar and unfamiliar faces, and facial expressions, across a range of paradigms. One study concerns the role of facial features in detecting facial expressions through manipulation of spatial frequencies. Two studies report perceptual matching of unfamiliar faces, with one study concerning effects of culture (Japanese vs. British) on eye movements, inversion effects and Thatcherization. The other study sheds light on the relative contribution of featural and holistic processing to same- and different-identity matching. The two remaining studies investigate cognitive and neural representations of familiar faces, with one study manipulating the appearance of famous faces and their popularity in order to elucidate the representations of familiar faces in the cognitive system. The other study measures event-related brain potentials (N250) to reveal how semantic and visual information may interact, giving rise to the recognition of familiar faces. The symposium will provide a comprehensive view of how faces and expressions are processed and stored in memory.
Three dimensional configuration and spatial frequency properties of facial expression
Sakura Torii (Kobe Shoin Women’s University)
The features of happy, angry, sad, and surprised faces were studied. For the happy face, three-dimensional analysis revealed distinct characteristic changes around the cheeks, and gray-level analysis showed that the deviation range of gray level was wider than that measured in the other faces. A face recognition experiment using frequency-filtered face images found that only the happy face was recognized more easily than the other faces, even under a low-pass condition. These results suggest that emphasizing the contrast in a wide region of the facial surface increased the three-dimensional features, possibly resulting in the selective enhancement of the “happy face features”. I foresee new make-up foundations that can selectively emphasize and develop the “happy face feature”.
The role of inversion and Thatcherization in matching own- and other-race faces
Kazuyo Nakabayashi (University of Hull)
In the Thatcher illusion, a face in which the eyes and mouth are inverted looks grotesque when shown upright but not when inverted. Two experiments examined the contribution of feature-based and configural processing to matching normal and Thatcherized pairs of isolated face parts (i.e., the eyes and mouth), and how perceptual matching is influenced by the race of the face and by inversion (better performance for upright than inverted faces). White British and native Japanese groups made same/different judgements on isolated face parts. Experiment 1 used same-identity pairs, encouraging feature-based processing, while Experiment 2 used different-identity pairs, which induced configural processing. Across experiments, effects of inversion and Thatcherization were found, but these effects varied depending on the race of the observer and the race of the stimulus face. In addition, eye movements demonstrated increased sampling of inverted compared with upright faces and of Thatcherized compared with normal faces. The findings demonstrate that perceptual biases, shaped by culture-specific strategies and task-based attentional demands, can determine sensitivity to feature-based and local configural information during perceptual encoding.
Matching faces: the facial features are important, but so is the whole face
Alejandro J. Estudillo, Nate Frida (University of Nottingham)
Matching two unfamiliar faces is of paramount importance in forensic scenarios, but the cognitive mechanisms behind this task are poorly understood. In fact, in contrast to the notion that faces are processed at a holistic level, it has been suggested that face matching is driven by featural processing. The present study seeks to shed light on this issue by exploring the role of holistic and featural processing in match trials (i.e., both faces depict the same identity) and mismatch trials (i.e., the faces depict different identities). Across two experiments, observers were asked to decide whether a pair of faces depicted the same identity or two different identities, while their eyes were being tracked. In Experiment 1, a gaze-contingent paradigm was used to manipulate holistic/featural processing. In Experiment 2, in addition to a standard face matching task, observers also performed a part/whole task, which provides an index of holistic processing. Results showed that both holistic and featural processing are important for face matching and that neither of them individually suffices.
When Brad Pitt is more refined than George Clooney: The role of stability in developing parsimonious facial representations
Christel Devue (Victoria University of Wellington)
Most people can recognise large numbers of faces, but the facial information we rely on is unknown despite decades of experimentation. We developed a theory that assumes representations are parsimonious and that different information is more or less diagnostic in individual faces, regardless of familiarity. Diagnostic features are those that remain stable over encounters and so receive more representational weight. Importantly, coarse information is privileged over fine details. This creates cost-effective facial representations that may refine over time if appearance changes. The theory predicts that representations of people with a consistent appearance (e.g., George Clooney) should include stable coarse extra-facial features, and so their internal features need not be encoded with the same high resolution as those of equally famous people who change appearance frequently (e.g., Brad Pitt). In three preregistered experiments, participants performed a recognition task in which we controlled appearance of actors (variable, consistent) and their popularity (higher, lower). Consistent with our theory, in less popular actors, stable extra-facial features helped remember consistent faces compared to variable ones. However, in popular actors, representations of variable actors had become more refined than those of consistent actors. We will discuss broader implications of our theory for the field.
The sustained familiarity effect: a robust neural correlate of familiar face recognition
Holger Wiese, Simone C. Tüttenberg (Durham University), Mike Burton, Andrew Young (University of York)
Humans are remarkably accurate at recognizing familiar faces independent of a particular picture. However, cognitive neuroscience has largely failed to show a robust neural correlate of image-independent familiar face recognition. Here, we examined event-related brain potentials elicited by highly personally familiar (close friends, relatives) and unfamiliar faces. We presented multiple “ambient” images per identity, varying naturally in lighting, viewing angle, expression, etc. Familiar faces elicited more negative amplitudes in the N250 (200-400 ms), reflecting the activation of stored face representations. Importantly, an increased familiarity effect was observed in the subsequent 400-600 ms time range. This Sustained Familiarity Effect (SFE) was reliably detected in 84% of individual participants. Additional experiments revealed that the SFE is smaller for personally familiar but less well-known faces (e.g., university lecturers) and absent for celebrities. Moreover, while the N250 familiarity effect does not strongly depend on attentional resources, the SFE is reduced when attention is directed away from the faces. We interpret the SFE as reflecting the integration of visual with additional person-related (e.g., semantic, affective) information needed to guide potential interactions. We propose that this integrative process is at the core of identifying a highly familiar person.
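The window-based ERP measures described above (N250 at 200-400 ms, SFE at 400-600 ms) amount to averaging voltages within a time window and comparing conditions. The sketch below illustrates that analysis on synthetic epochs; the sampling rate, trial counts, and effect size are arbitrary assumptions, not the authors' data or pipeline.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t0, t1):
    """Mean amplitude per trial within [t0, t1] seconds.

    epochs -- array (n_trials, n_samples) of voltages at one electrode
    times  -- array (n_samples,) of sample times in seconds
    """
    mask = (times >= t0) & (times <= t1)
    return epochs[:, mask].mean(axis=1)

times = np.linspace(-0.1, 0.8, 451)     # 500 Hz sampling, -100 to 800 ms
rng = np.random.default_rng(1)

# Synthetic epochs: familiar faces carry a sustained negative
# deflection in the 400-600 ms window; unfamiliar faces do not.
familiar = rng.standard_normal((40, times.size))
familiar[:, (times >= 0.4) & (times <= 0.6)] -= 2.0
unfamiliar = rng.standard_normal((40, times.size))

# Familiarity effect = familiar minus unfamiliar mean amplitude
# in the SFE window (negative values indicate the effect):
sfe = (mean_window_amplitude(familiar, times, 0.4, 0.6).mean()
       - mean_window_amplitude(unfamiliar, times, 0.4, 0.6).mean())
```

The same function applied with `t0=0.2, t1=0.4` would give the N250 window; a real pipeline would of course add baseline correction, artifact rejection, and statistics across participants.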
Recent studies on the mechanisms of color vision and its role in society.
Ichiro Kuriki (Tohoku University)
Kowa Koida (Toyohashi University of Technology)
Bevil Conway (NIH)
Hisashi Tanigawa (Zhejiang University)
Chihiro Hiramatsu (Kyushu University)
The main aim of this symposium is to introduce recent progress in studies of color representation in the primate (including human) visual cortex, especially *after* the cone-opponent stage, to vision researchers in physiology, psychophysics, art, and computational fields in Japan and other countries. Color vision is one of the fundamental features of primate vision. It is used not only in object search and in the social interactions essential to survival, but sometimes gives a soul-stirring impression in the fine arts. The contribution of color to our life is tremendous. On the other hand, the description of color-vision mechanisms in most textbooks stops at the level of the cone-opponent system, even though it has been pointed out since the early 1990s that the outputs of the cone-opponent system do not directly correspond to pure red, blue, green, and yellow sensations (i.e., unique hues). Indeed, there has been no clear-cut explanation of how our color perception is processed in the brain. However, various possibilities have been proposed for the structure of color-vision and related visual-processing mechanisms in the cortex, which may be related to our color perception (i.e., the subjective experience of color appearance). Three leading studies on this topic will be introduced in this symposium.
First, Dr. Conway will introduce recent studies on cortical structure and function in the processing of color and related high-level visual features in primate cortex. Second, Dr. Tanigawa will introduce their recent study on the neural structure of color-vision processing mechanisms in early visual cortex. Finally, Dr. Hiramatsu will introduce recent studies on the polymorphism of color vision in primates using genetic and behavioral approaches.
Bevil Conway (National Institutes of Health)
What is color for and how are color operations implemented in the brain? I will take up this question, drawing upon neurophysiological recordings in macaque monkeys, fMRI in humans and monkeys, psychophysics, and color-naming in a non-industrialized Amazonian culture. My talk will have three parts. First, I will discuss results showing that the neural implementation of color depends on a multi-stage network that gives rise to a uniform representation of color space within a mid-level stage in visual processing. Second, I will describe work suggesting that color is decoded by a series of stages within higher-order cortex, including inferior temporal cortex and prefrontal cortex (PFC). In a surprising twist, these discoveries reveal a general principle for the organization and operation of inferior temporal cortex and provide evidence for a stimulus-driven functional organization of PFC. Finally, I will describe two recent discoveries prompted by our neurobiological discoveries: a new interaction of color and face perception, which suggests that color evolved to play an important role in non-verbal social communication; and a universal pattern in color naming that reflects the color statistics of those parts of the world that we especially care about (objects). Together, the work supports the provocative idea that basic color categories are an emergent property arising from the needs we place on the brain (including object recognition and the assignment of object valence), rather than a constraint determined by color encoding.
Hue maps of DKL space at columnar resolution in macaque early visual cortex
Hisashi Tanigawa (Zhejiang University)
How cone signals are transformed into perceptual colors in the cortex is a fundamental question in color vision. Previous studies have revealed functional structures for color processing in the early visual cortex, such as CO blobs in V1, CO thin stripes in V2, and color-sensitive domains in V4, and these structures are thought to play an important role in the transformation of cone signals. However, it is not known how hue selectivities based on cone opponency are spatially organized in early visual cortex. Here we performed optical imaging in macaque V1, V2, and V4 to examine the distribution of domains selective for individual hues of the Derrington-Krauskopf-Lennie (DKL) color space, which is based on the cone-opponent axes of the lateral geniculate nucleus (LGN). We presented visual stimuli with isoluminant color/gray or color/color stripes; the hue of the color stripes was chosen from eight evenly spaced directions in an isoluminant plane of the DKL color space, four of which lay along the L-M and S-(L+M) cardinal axes. In this talk, I will first introduce our past results of optical imaging in V4 and then show recent results of imaging in early visual cortex using the DKL color space.
Polymorphic nature of color vision in primates
Chihiro Hiramatsu (Kyushu University)
Although mammals generally have dichromatic color vision, primates have uniquely evolved trichromatic vision. However, not all primates possess uniform trichromatic vision, and diverse color vision is ubiquitous in many Neotropical primates and human populations owing to polymorphisms of red-green visual pigment genes. The biological significance of polymorphic color vision and how it influences differences beyond perception are not fully understood. In this talk, I will present how color vision is polymorphic at genetic and perceptual levels, and how these traits affect behavior and even aesthetic impression. Then, I would like to discuss the potential influences of experience and social interaction, which may modify conscious perception of colors.
Modeling approaches to visual circuit function, pathology, and regeneration.
Katsunori Kitano (Ritsumeikan University)
Masao Tachibana (Ritsumeikan University)
Katsunori Kitano (Ritsumeikan University)
Alexander Sher (University of California, Santa Cruz)
The main topic of this symposium is to show how a combination of highly quantitative measurements and mathematical modeling can lead to insights into higher-order visual processing in the retina, retinal rhythmogenesis, and the mechanisms that underlie the re-establishment of retinal connections. In the first talk, Dr. Tachibana will describe a novel mechanism through which eye movements can dramatically change the group signaling properties of the ganglion cell network, altering its informational content. Second, Dr. Kitano will present a mathematical model for an unexpected oscillatory activity in the degenerating retina. The model suggests a source for the abnormal oscillations and may allow us to devise therapeutic interventions. Finally, Dr. Sher will present a quantitative analysis of how connectivity is reestablished in the adult outer retina following the removal of a subset of photoreceptor (rod and cone) targets.
Rapid and coordinated processing of global motion images by local clusters of retinal ganglion cells
Masao Tachibana (Ritsumeikan University)
Our visually perceived world is stable, despite incessant motion of the retinal image due to movements of the eyes, head, and body. Accumulating evidence indicates that the central nervous system may play a key role in the stabilization of the visual world. Fixational and saccadic eye movements cause jitter and rapid motion of the whole retinal image, respectively. However, it is not yet evident how the retina processes visual information during eye movements. Furthermore, it is not clear whether multiple subtypes of retinal ganglion cells (GCs) send visual information independently or cooperatively. We performed multi-electrode recordings and whole-cell clamp recordings from GCs of the isolated goldfish retina. GCs were physiologically classified into six subtypes (Fast/Medium/Slow, transient/sustained) based on the temporal profile of the receptive field (RF) estimated by the reverse-correlation method. We found that jitter motion of a global random-dot background induced elongation and sensitization of the spatiotemporal RF of the Fast-transient GC (Ft GC). The following rapid global motion induced synchronous firing among local Ft GCs and cooperative firing with precise latencies among adjacent specific GC subtypes. Thus, global motion images that simulated fixational and saccadic eye movements were processed in a coordinated manner by local clusters of specific GCs. The stimulus conditions (duration, area, velocity, and direction of motion) that altered the RF properties were consistent with the characteristics of in vivo goldfish eye movements. Wide-range lateral interaction, possibly mediated by electrical and GABAergic synapses, contributed to the RF alterations. These results indicate that the RF properties of retinal GCs in a natural environment are substantially different from those under simplified experimental conditions.
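The reverse-correlation estimate of a temporal RF profile mentioned above can be sketched with a simulated cell: the spike-triggered average of a white-noise stimulus recovers the cell's temporal filter. The filter shape, spiking nonlinearity, and all parameters below are illustrative assumptions, not the goldfish recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_lags = 20000, 25
stim = rng.standard_normal(n_t)            # white-noise stimulus

# Model cell: biphasic "transient" temporal filter, rectifying
# nonlinearity, Poisson spiking.
t = np.arange(n_lags)
filt = np.exp(-t / 5.0) * np.sin(t / 3.0)
drive = np.convolve(stim, filt)[:n_t]      # causal filtering of the stimulus
rate = np.clip(drive, 0.0, None)
spikes = rng.poisson(0.1 * rate)

# Reverse correlation: average the stimulus segment preceding each
# spike (spike-triggered average); sta[k] estimates filt[k].
sta = np.zeros(n_lags)
for i in range(n_lags, n_t):
    if spikes[i]:
        sta += spikes[i] * stim[i - n_lags + 1:i + 1][::-1]
sta /= spikes[n_lags:].sum()
```

For a Gaussian stimulus, the spike-triggered average is proportional to the linear filter even through the rectifying nonlinearity, which is why the recovered profile can be used to classify cells as transient or sustained.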
Processing of global motion images thus already begins in the retina and may facilitate visual information processing in the brain.
Normal and pathological states generated by dynamical properties of the retinal circuit
Katsunori Kitano (Ritsumeikan University)
The retina contains numerous subtypes of neurons, each of which, when embedded in a circuit, exhibits different dynamical properties. These dynamical properties can influence the output of the retina under normal conditions and may also play a role in establishing aberrant rhythms under pathological conditions. Indeed, an understanding of the dynamical properties of pathological states may help us to understand dynamic neural mechanisms under more normal conditions. Compared to the normal adult retina, which lacks oscillatory activity, the retina of the rd1 (retinal degeneration 1) mouse is known to exhibit spontaneous, low-frequency (<10 Hz) oscillations. Two potential mechanisms for the spontaneous oscillation have been proposed. One involves the properties of a gap-junction network between cone bipolar cells (BCs) and AII amacrine cells (AII ACs) and between AII ACs (Trenholm et al., 2012), whereas in the other, the oscillations arise from the intrinsic properties of AII ACs (Choi et al., 2014). We studied the mechanism of the spontaneous oscillation using a computational model of the AII-AC, BC, and GC network. In particular, to solve the paradoxical phenomenon mentioned above, we incorporated a ribbon synapse model at the BC-GC synapse. Even when bipolar cells are held in a depolarized state, neurotransmitter release is not always enhanced, because of short-term depression (Tsodyks and Markram, 1997). If we assume upregulation of the synapses in the inner plexiform layer of the rd1 retina (Dagar et al., 2014), our model can reproduce both the normal and abnormal neural states in the absence of light stimuli: no response in the normal retina and spontaneous rhythmic activity in the abnormal retina.
Restoration of selective connectivity in adult mammalian retina
Alexander Sher University of California, Santa Cruz
Specificity of synapses between neurons of different types is essential for the proper function of the central nervous system. While we have learned much about the formation of these synapses during development, the degree to which the adult CNS can reestablish specific connections following injury or disease remains largely unknown. I will show that specific synaptic connections within the adult mammalian retina can be reestablished after neural injury. We used selective laser photocoagulation to ablate small patches of photoreceptors in vivo in adult rabbits, ground squirrels, and mice. Functional and structural changes in the retina at different time points after the ablation were probed via electrophysiology and immunostaining accompanied by confocal imaging. We found that deafferented rod bipolar cells located within the region where photoreceptors were ablated restructured their dendrites. New dendritic processes extended towards surrounding healthy photoreceptors and established new functional synapses with them. To test whether specific connectivity can be reestablished, we observed restructuring of deafferented S-cone bipolar cells, which synapse exclusively with S-cone photoreceptors in the healthy retina. We discovered that deafferented S-cone bipolar cells extend their dendrites in random directions within the outer plexiform layer. If the extended dendrite encounters a healthy S-cone, it forms a synapse with it. At the same time, it passes M-cone photoreceptors without making synapses. Finally, we used transgenic mice to investigate the molecular mechanisms behind the observed restructuring. Our results indicate that the adult mammalian retina retains the ability to make new specific synapses, leading to reestablishment of correct neural connectivity.
On the border of implicit and explicit processing
Shao-Min (Sean) Hung (James Boswell Postdoctoral Scholar, California Institute of Technology)
Shinsuke Shimojo (California Institute of Technology)
Shao-Min (Sean) Hung (California Institute of Technology)
Daw-An Wu (California Institute of Technology)
Naotsugu Tsuchiya (Monash University)
Implicit processing plays an important role in maintaining visual functions. After all, at any given moment, our phenomenal experience is inherently limited by various factors, including attention, working memory, etc. In this symposium, we will tackle major questions in the field and challenge intuitions about implicit/unconscious processing. These questions include the fundamental relation between attention and consciousness, the use of the level of visual processing as a delineation between explicit and implicit processing, and how implicit decision making perturbs the explicit sense of agency.
Naotsugu Tsuchiya will show recent findings on how attention tracks a suppressed stimulus under binocular rivalry. Shao-Min Hung will provide evidence from unconscious language processing, substantiating high-level implicit processing. Daw-An Wu will further discuss how TMS alters our attribution of motor decision making. These topics will be integrated by Shinsuke Shimojo, who will provide an overall view of the current challenges and advances in the field, including “postdiction.” Some of these challenges can be better dealt with once we are equipped with more suitable views on implicit processing, such as a dynamic interaction among visual items across time, utilizing both predictive and postdictive factors.
Shinsuke Shimojo Gertrude Baltimore Professor of Experimental Psychology, California Institute of Technology
Can the implicit level of mind execute only simple sensory/cognitive functions? And is the bottleneck to consciousness single- or multi-gated? These questions are elusive, especially considering examples such as implicit semantic priming and the implicit Stroop effect (Hung talk in this symposium). I will aim for a taxonomy and integration of related findings, including my own, to provide a clearer overview. First, there are multiple definitions of implicit processing beyond “subliminal,” as exemplified by causal misattribution in action (Wu talk) and attention vs. consciousness (Tsuchiya talk). Second, the implicit/explicit distinction does NOT map onto lower- vs. higher-level cognitive functions (Hung talk). Rather, there are multiple gates to consciousness, as indicated in the binocular rivalry debate (from the '80s up to now), and there are quick interplays between implicit and explicit processes. Third, the implicit process may be dynamic, spreading over time and operating both predictively and postdictively. The auditory-visual “rabbit” effect is a good example in which an implicit postdictive process leads to a conscious percept (Shimojo talk). The implicit process is also based on separate dynamic sampling frequencies. Some evidence comes from interpersonal bodily and neural synchrony (Shimojo talk), and from the dependence of perceptual changes upon the allocation of attention relying on different temporal frequencies (Tsuchiya talk). Altogether, we may need to abandon several simplistic ideas about implicit processes and instead take a more dynamic and interactive view.
Attention periodically samples competing stimuli during binocular rivalry
Naotsugu Tsuchiya Monash University
The attentional sampling hypothesis suggests that attention rhythmically enhances sensory processing when attending to a single (~8 Hz), or multiple (~4 Hz) objects. Here, we investigated whether attention samples sensory representations that are not part of the conscious percept during binocular rivalry. When crossmodally cued toward a conscious image, subsequent changes in consciousness occurred at ~8 Hz, consistent with the rates of undivided attentional sampling. However, when attention was cued toward the suppressed image, changes in consciousness slowed to ~3.5 Hz, indicating the division of attention away from the conscious visual image. In the electroencephalogram, we found that at attentional sampling frequencies, the strength of inter-trial phase-coherence over fronto-temporal and parieto-occipital regions correlated with changes in perception. When cues were not task-relevant, these effects disappeared, confirming that perceptual changes were dependent upon the allocation of attention, and that attention can flexibly sample away from a conscious image in a task-dependent manner.
Language processing outside the realm of consciousness
Shao-Min (Sean) Hung1,2 1California Institute of Technology, 2Huntington Medical Research Institutes
The concept “Out of sight, out of mind” has been repeatedly challenged by findings showing that visual information biases behavior even without reaching consciousness. However, the depth and complexity of unconscious processing remain elusive. To tackle this issue, we examined whether high-level linguistic information, including syntax and semantics, can be processed without consciousness.
Using binocular suppression, we showed that after a visible sentential context, a subsequent syntactically incongruent word broke suppression and reached consciousness earlier. Critically, when the sentential context was suppressed while participants made a lexical decision on the subsequent visible word, faster responses to syntactically incongruent words were obtained. Further control experiments showed that (1) the effect could not be explained by simple word-word associations, since it disappeared when the subliminal words were flipped, and (2) the effect occurred independently of accurate localization of the subliminal text.
In another study we utilized a “double Stroop” paradigm in which a suppressed colored word served as a prime while participants responded to a subsequent visible Stroop word. In the word-naming task, we showed that word but not color inconsistency slowed the response to the target, suggesting that semantic retrieval was prioritized. However, when participants were asked to name the color, the same effect was obtained only after a significant practice effect on color naming (i.e., a reduction of response time) had occurred, suggesting a competition for attentional resources between the current conscious task and the unconscious stimulus. These findings were later replicated in separate experiments.
Across multiple studies we showed that high-level linguistic information can be processed unconsciously and exert an effect. These findings push the limits of unconscious processing and further show that an interplay between conscious and unconscious processing is crucial for such unconscious effects to occur.
Daw-An Wu California Institute of Technology
We generally assume that intentions and decisions cause our voluntary acts: we form a conscious intention to do something, and then this mental act leads to a bodily act. Neuroscientific research into the timeline of volition faces the challenge of measuring and reconciling events along many unstably related timelines - external, neural, and mental. We use TMS over motor cortex to create a reference event, allowing single-trial temporal order judgements to be meaningful across all the timelines.
1) We use electromyography (EMG) to monitor a muscle, e.g., in the participant's thumb. 2) TMS is targeted to motor cortex so as to elicit an involuntary thumb movement. 3) The participant is asked to relax and, at a time of their own choosing, to flex their thumb (a voluntary movement). When the EMG detects the initiation of this movement, it triggers the TMS to activate.
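The closed-loop logic of steps 1-3 can be sketched as a simple threshold detector on the rectified EMG signal. This is an illustrative simulation only, not the authors' acquisition code; the sampling rate, threshold, and hold duration are assumed values.

```python
import numpy as np

def detect_emg_onset(emg, fs, threshold, hold_ms=5):
    """Return the sample index at which rectified EMG first stays above
    threshold for hold_ms milliseconds (the moment a real system would
    fire the TMS trigger), or None if no burst is found."""
    hold = max(1, int(fs * hold_ms / 1000))
    above = np.abs(emg) > threshold
    count = 0
    for i, a in enumerate(above):
        count = count + 1 if a else 0
        if count >= hold:
            return i - hold + 1   # first sample of the sustained burst
    return None

# Simulated trace: 1 s of resting noise, then a voluntary EMG burst.
fs = 2000                               # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
baseline = rng.normal(0, 0.05, fs)      # resting noise
burst = rng.normal(0, 1.0, fs // 2)     # muscle activity from t = 1.0 s
trace = np.concatenate([baseline, burst])

onset = detect_emg_onset(trace, fs, threshold=0.3)
# The trigger can only fire after the voluntary movement has physically
# begun, which is why the TMS-evoked twitch follows, not precedes, volition.
print(onset / fs)   # onset time in seconds, at or after 1.0 s
```

The key design point, mirrored in the abstract, is that the detector is causal: it cannot respond before the voluntary burst begins, so any reported reversal of order must arise in the participant's inference, not in the hardware.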
In many cases, the participants report that the TMS click and its resulting thumb movement happened prior to their own volition. Some describe it as if the machine were reading their mind: just as they were about to decide to act, the TMS beat them to it. The way we have set up the system, however, the TMS cannot be triggered until the voluntary muscle movement has physically begun.
The initiation of a voluntary act is not a discrete, early event to which we have direct mental access. Instead, it is a process that continues to consolidate after the initiation of movement. Our perception of our intentions depends not only on neural signals generated at initiation onset, but also on the integration of information gathered later. This may be analogous to the role of re-entrant feedback to visual cortex in visual consciousness. Contrary to the Cartesian assumption that our introspective awareness is direct, our sense of agency rests on predictive and postdictive inferences about its most likely cause.
The early development of face and body perception
Jiale Yang (Department of Life Sciences, University of Tokyo)
Yumiko Otsuka (Department of Humanities and Social Sciences, Ehime University)
Naiqi G. Xiao (Princeton University)
Sarina Hui-Lin Chien (China Medical University)
Elena Nava (University of Milan-Bicocca)
Jiale Yang (University of Tokyo)
Masahiro Hirai (Jichi Medical University)
Humans possess remarkable capacities to process face- and body-related signals. Prior studies have consistently reported visual sensitivity to faces and bodies at birth (e.g., Filippetti et al., 2013; Johnson et al., 1991). Moreover, culture-specific experience shapes the visual system to develop expertise for specific types of faces and bodies (e.g., own-race faces and communicative body gestures). Furthermore, it is well known that the development of face and body perception is at the foundation of more complex perceptual and cognitive abilities, such as learning and social skills. In this symposium, we will present five talks focusing on the early development of face and body perception from infancy to childhood, using a broad range of research methods: skin conductance, electroencephalography (EEG), eye tracking, and psychophysical measurements.
Xiao will show how face-race experience determines the early development of infants' social perception, social learning, and stereotype formation. Chien will show that pervasive own-race face experience shapes the development of fine-grained and efficient face perception across childhood, which further links to biased social development in childhood. Nava will examine the development of multisensory integration from early infancy to childhood and its contribution to the development of body representation. Yang will show that tactile information facilitates visual processing in infants, and how body representation modulates this multisensory enhancement. Hirai will talk about infants' perception of body movements and bodily gestures, and their role in social learning.
In sum, this symposium brings together the latest findings regarding face and body perception across various stages of development and in different cultural settings. These studies shed light on the current advances and future directions in the field of early development of face and body perception.
Naiqi G. Xiao Princeton University
Convergent evidence shows that experience shapes early perceptual development. For example, infants who grow up in a mono-racial environment develop biased perceptual capabilities for own- vs. other-race faces. However, the breadth of experiential impacts on early development remains unclear. To address this question, we explored how face-race experience affects early social development.
In three lines of research, we investigated the impact of face-race experience in Asian countries, where people mostly see Asian faces but rarely see faces of other races. The mono-racial environment in Asian countries thus provides an ideal setting in which to examine whether infants' social development is biased by asymmetrical face-race experience.
We first studied how face race affects infants' social perception via their associations of face races with different emotional signals (happy vs. sad music). With increasing age, infants gradually came to associate own-race faces with happy music but other-race faces with sad music, associations that were evident at 9 months of age. To probe how this biased social perception further influences infants' social interactions, we investigated infants' social learning behaviors via learning to follow another's gaze. Seven-month-olds tended to learn from own- but not other-race adults under uncertain situations. Moreover, a similar race-based social learning bias was found when infants learned from multiple own- vs. other-race adults: infants formed a stronger stereotype from a group of other-race adults than from a group of own-race adults. Together, this evidence convergently demonstrates the social consequences of asymmetrical face-race experience in infancy. These findings stress the broad experiential impacts on early development beyond perceptual domains.
Sarina Hui-Lin Chien China Medical University
People are remarkable at processing faces. In a split second, one can recognize a person's identity, gender, age, and race. Importantly, such face processing expertise is not equally prominent for all classes of faces; it works best for faces belonging to one's own racial group. In this talk, I will highlight the development and challenges of becoming a native face expert based on my recent studies with Taiwanese children. First, although many cross-cultural studies have reported an early emergence of the own-race advantage (ORA) in the first year of life, adult-like proficiency in discriminating own-race faces is not fully manifested until late childhood. Second, although encoding of race is fast and automatic, categorizing racially ambiguous faces is biased and cognitively taxing. Adults and children with racial essentialist beliefs tend to categorize ambiguous bi-racial faces as other-race. Third, when do children judge people by their race? We found that a rudimentary race-based social preference emerges in the late preschool years, and the influence of social status becomes increasingly important as children enter elementary school. In sum, the collective findings suggest that our perception of race emerges early in life and continues to develop through childhood. Lastly, the implications for race-based perceptual and social biases and avenues for future research will be discussed.
Multisensory contributions to the development of body representation
Elena Nava University of Milan-Bicocca
The representation of the body and the sense of body ownership are the product of complex mechanisms, and adult studies have suggested that a crucial role is played by multisensory interactions of body-related signals, such as vision, touch, and proprioception. In my talk I will present a series of studies conducted at different stages of development (from infancy to childhood) suggesting that multisensory cues not only shape body representation but play either a facilitating or constraining role depending on age. In particular, I will show that very early in life, infants are able to extract the amodal invariant that is common across senses (e.g., rhythm, tempo), and this predisposes them to be naturally attracted to redundant multisensory stimuli. Infants can also extract the social component conveyed by multisensory stimuli, as observed in a recent study in which we found that 4-month-old infants show less arousing responses (as indexed by skin conductance response) to slow/affective touches coupled with a female face than to multisensory non-social stimuli (a discriminative type of touch coupled with seeing houses). Interestingly, I will show that later in development, children fail to integrate the senses, and this prevents them from being susceptible to classical multisensory body illusions, such as the rubber hand illusion. Finally, I will show that sensory experience, such as vision, contributes to the development of multisensory interactions, and that lack of visual input – as in congenital blindness – prevents blind individuals from developing a typical body representation.
The effect of tactile-visual interactions on body representation in infants
Jiale Yang University of Tokyo
The representation of the body, which is closely related to motor control and self-awareness, relies upon complex multisensory interactions. Newborn humans have been observed to perceive their own bodies (Rochat, 2010), and recent studies showed that visual-tactile interactions facilitate body perception in the early months of life (Filippetti et al., 2013; Freier et al., 2016). In the present study, we used steady-state visually evoked potentials (SSVEPs) to investigate the development of the tactile-visual cortical interactions underlying body representations in infants. In Experiment 1, twelve 4-month-old and twelve 8-month-old infants watched a visual presentation in which a hand was stroked with a metal tube. To elicit the SSVEP, the video flashed at 7.5 Hz. In the tactile-visual condition the infant's own hand was also stroked by a tube while they watched the movie. In the vision-only condition, no tactile stimulus was applied to the infant's hand. We found larger SSVEPs in the tactile-visual condition than in the vision-only condition in 8-month-old infants, but no difference between the two conditions in the 4-month-olds. In Experiment 2, we presented an inverted video to 8-month-old infants. The enhancement of SSVEPs by tactile stimuli was absent in this case, demonstrating that some degree of body-specific information was required to drive the tactile enhancement of visual cortical processing seen in Experiment 1. Taken together, our results indicate that tactile influences on the visual processing of bodily information develop between 4 and 8 months of age.
Development of bodily movement perception in preverbal infants
Masahiro Hirai Jichi Medical University
Understanding another’s actions or behavior is one of the vital abilities that allows us to live in a dynamic and socially fluid world. In this talk, two aspects of body perception in preverbal infants will be discussed. The first aspect concerns the developmental mechanisms that underlie the perception of bodily movements—particularly the visual phenomenon of “biological motion” (Johansson, 1973), whereby our visual system detects various human actions through point–light motion displays. The second aspect concerns the cognitive mechanisms of the communicative aspect of bodily movement. The theory of natural pedagogy (Csibra & Gergely, 2009) proposes that infants use ostensive signals such as eye contact, infant-directed speech, and contingent responsivity to learn from others. However, the role of bodily gestures such as hand-waving in social learning has been largely ignored. We explored whether four-month-old infants exhibited a preference for horizontal or vertical (control) hand-waving gestures. We also examined whether horizontal hand-waving gestures followed by pointing gestures facilitated the process of object learning in nine-month-old infants. Results showed that four-month-old infants preferred horizontal hand-waving gestures to vertical hand-waving gestures. Further, horizontal hand-waving gestures enhanced identity encoding for cued objects, whereas vertical gestures did not. Based on our series of studies on body perception in preverbal infants, I will discuss the developmental model of body perception and its role in social communication.
Novel developmental, metabolic, and signaling mechanisms in the retina
Chieko Koike (Ritsumeikan University)
Steven H. DeVries (Northwestern University)
Wei Li (NIH)
Seth Blackshaw (Johns Hopkins University School of Medicine)
Steven H. DeVries (Northwestern University)
The retina is a laminarly organized, self-contained, and accessible piece of the central nervous system that performs the task of early visual processing. The ability to isolate a piece of the nervous system that remains fully functional allows us to examine intact pathways, including those that underlie development, metabolism, and neural circuits. This symposium will present recent progress in our understanding of mammalian retinal function and development in the areas of cell fate determination, regeneration, synaptic function, and hibernation by vision researchers in the United States. Dr Seth Blackshaw will exploit both transcriptomics and cross-species comparisons to identify the pathways that are essential for retinal cell fate determination and regeneration capacity. Dr Wei Li will focus on the thirteen-lined ground squirrel and describe metabolic pathway adaptations that permit the retina to tolerate long periods of time at near-freezing temperatures during hibernation. Finally, Dr Steven DeVries will focus on the cone circuitry in the cone-dominant retina of the ground squirrel and describe how parallel processing pathways get their start.
Seeing in the cold: vision and hibernation
Wei Li National Institutes of Health
The ground squirrel has a cone-dominant retina and hibernates during the winter. We exploit these two unique features to study retinal biology and adaptations during hibernation. In this talk, I will discuss an optic feature of the ground squirrel retina, as well as several forms of adaptation during hibernation in the retina and beyond. By exploring the mechanisms of adaptation in this hibernating species, we hope to shed light on therapeutic tactics for retinal injury and diseases, which are often associated with metabolic stress.
Building and rebuilding the retina: one cell at a time
Seth Blackshaw Johns Hopkins University School of Medicine
The retina is an accessible system for identifying the molecular mechanisms that control CNS cell fate specification and is a prime target for regenerative therapies aimed at restoring photoreceptors lost to blinding diseases. I will discuss our recent large-scale single-cell RNA-Seq analysis of multiple vertebrate species that is aimed at identifying gene regulatory networks that drive the acquisition of neuronal and glial identity in the developing retina. I will discuss our identification of transcription factors that control both temporal identity and proliferative quiescence, new tools we and our collaborators have developed to identify core evolutionarily-conserved gene regulatory networks controlling retinal development, and mechanisms controlling injury-induced neurogenic competence in retinal glia.
Parallel signal processing at the mammalian cone photoreceptor synapse
Steven H. DeVries Northwestern University
The brain has a massively parallel architecture that supports its prodigious computational abilities. In the visual system, parallel neural processing begins at the cone photoreceptor synapse. At this synapse, an individual cone signals to ~12 anatomically distinct bipolar cell types that comprise two main classes, On and Off, each consisting of about 6 types. To better understand the first steps in parallel visual signaling, we record in voltage clamp from synaptically connected pairs of cones and identified Off bipolar cells in slices from the cone-dominant ground squirrel retina. At the same time, we capture the detailed structure of the recorded synapse using super-resolution microscopy. Our results show how the molecular architecture of the synapse, including the placement of ribbon transmitter release sites, glutamate transporters, and postsynaptic ionotropic glutamate receptors, can enable the flow of different signals to the different bipolar cell types.
Studying attention without relying on behavior
Yaffa Yeshurun (University of Haifa)
Satoshi Shioiri (Tohoku University)
Yaffa Yeshurun (University of Haifa)
Satoshi Shioiri (Tohoku University)
Hsin-I Liao (NTT Communication Science Laboratories)
Hirohiko Kaneko (Tokyo Institute of Technology)
Attention – the selective processing of relevant information at the expense of irrelevant information – has been subject to scientific inquiry for over a century. One fundamental challenge to the study of attention is that most of our current knowledge was established using paradigms that depend on assumptions regarding the fate of unattended information, or that rely in some other way on properties of the participants' responses (e.g., accuracy or response time). Yet the assumptions on which these paradigms are based may not always hold, and in general participants' responses can be influenced by many factors other than attention allocation, including response history, biases, higher-level strategies, experience, and so on. Moreover, response time, which is likely the most prevalent measurement in attention studies, is also linked to motor preparation, not just perception. Fortunately, several recent studies have set out to study attention with novel and exciting methodologies that do not rely on the participant's response, and therefore provide a more objective measure of attentional deployment. Some of these novel methodologies rely on measurements of brain activity (e.g., SSVEP, ERP) instead of accuracy or response time, while other methodologies rely on pupil size or eye movements. The four presentations included in this symposium illustrate how such methodologies can be utilized to overcome obstacles that prevail with more traditional paradigms.
The characteristics of the attentional window when measured with the pupillary response to light
Yaffa Yeshurun University of Haifa
This study explored the spatial distribution of attention with a measurement that is independent of performance - the pupillary light response (PLR) - thereby avoiding various obstacles and biases involved in more traditional measurements of spatial covert attention. Previous studies demonstrated that when covert attention is deployed to a bright area the pupil contracts relative to when attention is deployed to a dark area, even though display luminance levels are identical and central fixation is maintained. We used these attentional modulations of the PLR to assess the spread of attention. Specifically, we examined the minimal size of the attentional window and how it varies as a function of target eccentricity and the nature of other non-target stimuli (i.e., distractors). We found that when the target was surrounded by neutral task-irrelevant disks (i.e., bright/dark disks that did not include response-competing information) the attentional window had a diameter of about 2º. However, when the disks included competing information this size could be further reduced. Interestingly, the size of the attentional window does not seem to vary as a function of eccentricity, but it is affected by stimulus size. Finally, we also examined whether the spatial spread of attention is influenced by perceptual load. Load levels were manipulated by the degree of stimulus heterogeneity or task complexity. We found that the size of the attentional window was larger when load levels were low than when they were high. These findings demonstrate the flexibility and constraints of spatial covert attention.
Differences in attention modulations measured by steady-state visual-evoked potentials and by behavior
Satoshi Shioiri Research Institute of Electrical Communication, Tohoku University
One well-established method for investigating spatial attention in the human visual system is to ask subjects to attend to a location intentionally (endogenous attention). Attentional modulation has been observed with behavioral measures such as reaction time and detection rate, and with objective measures such as electroencephalography (EEG) and fMRI. We have been comparing several aspects of spatial attention between behavioral measures and EEG measures, focusing on the steady-state visual evoked potential (SSVEP). The SSVEP makes it possible to measure attentional effects at unattended locations using stimuli tagged by temporal frequency, and we have succeeded in measuring the spatial and temporal characteristics of visual attention with it. The measurements showed both similarities and differences between the behavioral and EEG measurements, and also between different EEG measures. The EEG measures we compared were the amplitude and phase of the SSVEP as well as the event-related potential (ERP). The time course of spatial attention shifts estimated from detection rate is more similar to that estimated from SSVEP phase than to the other measures, whereas the spatial spread of attention is similar to that of the P3 component of the ERP but very different from that of either SSVEP amplitude or SSVEP phase. Measurements of object-based attention showed a similar object effect between SSVEP and reaction time. These results suggest that attentional modulation occurs not at a single site of visual processing but at multiple processes, and the differing effects of attention may reflect its different roles at different processing stages.
Unified Audio-Visual Spatial Attention Revealed by Pupillary Light Response
Hsin-I Liao NTT Communication Science Laboratories
Recent evidence shows that the pupillary light response (PLR) reflects not only the physical light input to the retina, but also the mind's eye, i.e., where covert visual attention is directed (see review in Mathôt & Van der Stigchel, 2015). While the visual and auditory systems rely on different peripheral mechanisms to represent the locations of distal stimuli in the environment, it remains unclear how the spatial representations of visual and auditory objects are formed in the brain, and what role attention plays there. Do audio- and visual-spatial attention share a common mechanism? To investigate this issue, we examined whether the PLR also reflects spatial attention to an auditory object. In a series of studies, participants paid attention to an auditory object that was defined by a spatial (e.g., sounds presented in the left or right ear) or non-spatial (e.g., voices from a male or female talker) cue. Results showed that the PLR reflected the focus of spatial attention regardless of whether the auditory object was defined by the spatial or the non-spatial cue. Furthermore, the magnitude of the spatial-attention-induced PLR was modulated by the reliability of the spatial information of the auditory object. Cognitive effort (e.g., task difficulty) and physical gaze position could not explain the results. Taken together, these results indicate that the PLR reveals not only the focus of covert visual attention but also that of auditory attention. Auditory objects share a common spatial representation associated with visual spatial attention.
Estimation of attentional location based on the measurement of unconscious eye movements
Hirohiko Kaneko1, Kei Kanari2 1Tokyo Institute of Technology, 2Tamagawa University
Eye position and attentional location are closely related, but they are not always the same. Although many eye-tracking systems have been developed and can be used to roughly estimate the location of attention in a scene, an attention-tracking system that accurately estimates the location of attention has not yet been developed. In a series of studies, we found that relationships between the characteristics of eye movements and stimulus properties can be used to estimate attentional location. One example is the relationship between optokinetic nystagmus (OKN) and motion in the attended area. We presented two areas with different directions of motion, arranged left and right, top and bottom, or center and surround (concentric), in the display. Observers kept their attention on one of the areas by performing an attention task, which was to count targets appearing in that area. The results indicated that attention enhanced the gain and frequency of OKN corresponding to the attended motion. Another example is the small vergence eye movements that occur when paying attention to an approaching or receding object while fixating a stationary object. The magnitude of these eye movements when attending to an area was smaller than when directing the eyes to it, but the relationships between eye-movement characteristics and the stimulus were the same in both cases. Using these relationships, it is possible to determine the attentional location in a visual scene containing objects with various depths and motions. We will also mention some applications of the present method for estimating attentional location from measurements of unconscious eye movements.
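The logic of this kind of estimation can be sketched in code. The following is a minimal illustration, not the authors' actual implementation: the function name and the correlation-based scoring are assumptions. Given a smooth eye-velocity trace and the motion signal shown in each candidate region, the attended region is taken to be the one whose motion best correlates with the eye trace, on the assumption that OKN gain is enhanced for the attended motion.

```python
import numpy as np

def estimate_attended_region(eye_velocity, region_motions):
    """Guess the attended region from smooth eye velocity.

    eye_velocity: 1-D array of horizontal eye velocity (deg/s).
    region_motions: dict mapping region name -> 1-D array of the
        stimulus motion velocity shown in that region (deg/s).
    Returns the region whose motion correlates best with the eye
    trace (illustrative proxy for enhanced OKN gain).
    """
    scores = {
        name: np.corrcoef(eye_velocity, motion)[0, 1]
        for name, motion in region_motions.items()
    }
    return max(scores, key=scores.get)
```

In a real setting the eye trace would first be segmented into OKN slow phases; this sketch skips that step for clarity.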
Neural oscillations and behavioral oscillations
Kaoru Amano Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT)
Rufin VanRullen (Centre de Recherche Cerveau et Cognition (CerCo), CNRS)
Kaoru Amano (CiNet, NICT)
Ryohei Nakayama (The University of Sydney, CiNet, NICT)
Nai Ding (Zhejiang University)
Huan Luo (Peking University)
Rufin VanRullen (Centre de Recherche Cerveau et Cognition (CerCo) , CNRS)
Neural oscillations, such as delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), and gamma (30–100 Hz), are widespread across cortical areas and are related to feature binding, neuronal communication, and memory. Accumulating evidence suggests that alpha oscillations correlate with various aspects of visual processing. Typically, the amplitude of intrinsic alpha oscillations is predictive of performance on visual or memory tasks, while the frequency of intrinsic occipital alpha oscillations is reflected in the temporal properties of visual perception. Other lines of evidence suggest that behavioral performance, such as detection thresholds, oscillates at theta or alpha frequencies. While the connection between neural oscillations and behavior appears tight, the mechanisms underlying these phenomena are not fully understood.
In this symposium, five researchers will present their recent studies on neural and/or behavioral oscillations and will discuss the possible functional roles of these oscillations. Dr. Amano will show a causal relationship between intrinsic alpha oscillations and a visual illusion called the motion-induced spatial conflict, possibly suggesting cyclic processing at the frequency of alpha oscillations. Dr. Nakayama will report discretized motion perception at 4–8 Hz, which may reflect a slow attentional process. Dr. Ding will present on the relation of temporal attention to sensorimotor processes. Dr. Luo will present evidence suggesting a causal role of temporally ordered reactivations in mediating sequence memory. Finally, Dr. VanRullen will present perceptual echoes originating from alpha oscillations and their relation to predictive coding.
Kaoru Amano Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT)
Although accumulating evidence suggests that alpha oscillations correlate with various aspects of visual processing, few studies have proven their causal contribution to visual perception. Here we report that illusory visual vibrations are consciously experienced at the frequency of intrinsic alpha oscillations. We employed an illusory jitter percept termed the motion-induced spatial conflict, which originates from the cyclic interaction between motion and shape processing. Comparison between the perceived frequency of illusory jitter and the peak alpha frequency (PAF) measured using magnetoencephalography (MEG) revealed that inter- and intra-participant variations in the PAF are mirrored in the perceived jitter frequency. More crucially, psychophysical and MEG measurements during amplitude-modulated current stimulation showed that the PAF can be artificially manipulated, resulting in a corresponding change in the perceived jitter frequency. These results suggest a causal contribution of neural oscillations at the alpha frequency to the temporal characteristics of visual perception. Our results suggest that cortical areas, in this case dorsal and ventral visual areas, interact at the frequency of alpha oscillations. Possible neuroanatomical bases of the inter-individual differences in the PAF and the peak alpha power (PAP) will also be discussed.
Temporal continuity of vision and periodic feature binding
Ryohei Nakayama1,2 1The University of Sydney, 2Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT)
Psychophysical and physiological evidence reveals that sensory information is processed periodically, despite the subjective continuity of perception over time. How does the visual system accomplish subjectively smooth transitions across perceptual moments? To address this issue, we analyzed a novel illusion: a continuously moving Gabor pattern appears temporally discrete when its spatial window moves over a carrier grating that remains stationary or drifts in the opposite direction. This discretization depends on the speed difference between window and grating, but the apparent rhythm is constant at 4–8 Hz regardless of the stimulus speeds (Nakayama, Motoyoshi & Sato, 2018). In light of recent studies reporting theta-rhythmic functioning of attention, we hypothesized that different dimensional features of window and grating, whose positional estimates are biased by opposite directions of motion, are bound in a periodic manner. Accordingly, we found that temporal binding of visual features is performed periodically at ~8 Hz between spatially separated locations, and depends on pre-stimulus neural oscillatory phases locked by voluntary action (Nakayama & Motoyoshi, under review). (As one would expect from previous studies on pre-attentive binding, such periodicity was not observed between superimposed locations.) Periodic attention therefore serves to bind sensory information across different dimensions and locations to produce unified perception, subserved by neural oscillations in synchrony with action. These combined results imply that a slow attentional process can cause discretized perception, while a fast automatic process may underlie the temporal continuity of vision.
Temporal attention requires sensorimotor mechanisms during visual and auditory processing
Nai Ding Zhejiang University
We live in a dynamic world in which sensory information arrives rapidly and overwhelmingly. Temporal attention provides a mechanism to preferentially process the moments that carry the most critical sensory information. It has been proposed that the motor system is critical for implementing temporal attention, and here I will present recent evidence that temporal attention involves sensorimotor processes. We show that blinks and related eyelid activity are synchronized to temporal attention. When participants process a visual sequence, a task forces them to attend to specific moments in time. We find that blinks are suppressed at the attended moment and that the blink rate rebounds about 700 ms afterwards. This phenomenon can be interpreted as a form of active sensing that avoids the loss of important visual information caused by blinks. Further evidence from the auditory modality shows that attention-related eyelid activity is a more general intrinsic property of the brain: blinks are similarly modulated by temporal attention in auditory tasks. Even when the eyes are closed, eyelid activity measured by EOG is still suppressed at the attended moment. Furthermore, when listening to speech and performing a comprehension task that does not explicitly require temporal attention, eyelid activity is synchronized to the spoken sentences. Taken together, these results suggest that the motor cortex is activated when attention is allocated in time, and that motor cortical activity is reflected in eyelid activity.
Serial, compressed memory-related reactivation in human sequence memory: neural and causal evidence
Huan Luo Peking University
Storing temporal sequences of events in short-term memory (i.e., sequence memory) is fundamental to many cognitive functions. However, it is unknown how sequence order information is maintained and represented in humans. I will present two studies from our lab that address this question. First, using electroencephalography (EEG) recordings in combination with a temporal response function (TRF) approach, we probed item-specific reactivation during the delay period while subjects held a sequence of items in working memory. We demonstrate that serially remembered items are successively reactivated in a backward, time-compressed manner. Moreover, this fast backward replay is strongly associated with the recency effect, a typical behavioral index of sequence memory, supporting an essential link between item-by-item replay and behavior. Building on these neural findings, we developed a "behavioral temporal interference" approach to manipulate the item-specific reactivations during retention, aiming to disrupt and change subsequent recall behavior. Our results show that temporal manipulation of the replay patterns (synchronization and order reversal) successfully alters the serial position effect typically revealed in sequence memory behavior. Taken together, these results constitute converging evidence for a causal role of temporally ordered reactivations in mediating sequence memory. We also provide a promising, efficient approach for rapidly manipulating the temporal structure of multiple items held in working memory.
Alpha oscillations, travelling waves and predictive coding
Rufin VanRullen Centre de Recherche Cerveau et Cognition, CNRS
Alpha oscillations are not strictly spontaneous, like an idling rhythm, but can also respond to visual stimulation, giving rise to perceptual "echoes" of the stimulation sequence. These echoes propagate across the visual and cortical space with specific and robust phase relations. In other words, the alpha perceptual cycles are actually travelling waves. The direction of these waves depends on the state of the system: feed-forward during visual processing, top-down in the absence of inputs. I will tentatively relate these alpha-band echoes and waves to back-and-forth communication signals within a predictive coding system.
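As a rough illustration of how perceptual echoes are typically measured (a sketch under assumptions, not Dr. VanRullen's actual pipeline; the function name is hypothetical): a random luminance sequence is presented while EEG is recorded, and the EEG is cross-correlated with the stimulus at a range of lags. For white-noise input, this recovers an estimate of the linear impulse response, whose ~10 Hz ringing constitutes the echo.

```python
import numpy as np

def perceptual_echo(stimulus, eeg, max_lag):
    """Estimate the stimulus-to-EEG impulse response by cross-correlation.

    For a white-noise luminance sequence, the lagged cross-covariance
    divided by the stimulus variance approximates the linear kernel
    (the "perceptual echo").
    stimulus, eeg: equal-length 1-D arrays; max_lag: lags in samples.
    """
    s = stimulus - stimulus.mean()
    y = eeg - eeg.mean()
    n = len(s)
    xcorr = np.array([np.dot(s[:n - lag], y[lag:]) / (n - lag)
                      for lag in range(max_lag)])
    return xcorr / s.var()
```

With multichannel EEG, applying this per electrode yields the lag-by-electrode maps in which the travelling-wave phase relations can be read out.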
Science of facial attractiveness
Tomohiro Ishizu (University College London)
Zaira Cattaneo (University of Milano-Bicocca)
Tomoyuki Naito (Osaka University)
Chihiro Saegusa (Kao Corporation)
Koyo Nakamura (Waseda University)
Tomohiro Ishizu (University College London)
Visual attraction pervades our daily lives; it influences and guides our moods, behaviours, and decisions. Scientists apply psychological and cognitive neuroscientific methods to disentangle the seemingly complex process of attractiveness evaluation, and rigorous scientific findings are accumulating quickly. Facial attractiveness has been a central interest in the science of attraction. In this symposium, we present new insights into attractiveness judgments, with a focus on face perception, drawn from a wide range of methods including behavioural testing, computational modelling, neuroimaging, and brain stimulation. We anticipate that it will engage the interest of APCV attendees and draw a large and lively audience.
First, we present data-driven mathematical modelling that reveals the physical features of a face contributing to attractiveness judgments (Nakamura). We then present a study that visualises the 'mental template' of attractive faces by applying the reverse correlation technique and a deep convolutional neural network (Naito). These first two talks elucidate the physical, measurable features of attractive faces and what contributes to the judgment. Second, we show how facial attractiveness judgments are formed. We present evidence that attractiveness judgment is a dynamic process in which each facial feature (e.g., eyes, nose, hairstyle) is integrated over time to construct a final evaluation (Saegusa). Next, we discuss the brain systems that possibly underlie attractiveness judgments of faces, comparing them with those for non-facial/non-biological stimuli in relation to cortical-subcortical networks (Ishizu). Finally, we demonstrate the causal role of those brain sites in judging the attractiveness of faces and other visual stimuli, using non-invasive brain stimulation techniques (Cattaneo). Understanding attractiveness evaluation and the impact of visual experience is an indispensable part of understanding human interaction with the visual world. This symposium, showcasing diverse methods of approaching the question, will provide new insights into studies of attraction and attractiveness.
Zaira Cattaneo University of Milano-Bicocca
Neuroimaging evidence has shown that beauty appraisal correlates with activity in a complex network of brain areas involved in sensory processing, reward, decision making, attentional control, and the retrieval of information from memory. In my talk, I present an overview of a series of recent experiments I conducted with non-invasive brain stimulation (both transcranial magnetic and transcranial direct current stimulation) that shed light on the causal role of different brain regions in underpinning aesthetic appreciation for different stimulus categories, ranging from faces to paintings. By indicating whether the activation of a particular cortical site is necessary (vs. epiphenomenal) for an ongoing task, brain stimulation critically adds to the correlational evidence provided by neuroimaging techniques, and may be a promising tool in the field of neuroaesthetics.
Transplantation of taste for facial attractiveness of individuals to deep convolutional neural network
Tomoyuki Naito Osaka University
Deep convolutional neural networks (DCNNs) have attracted much attention for image category classification capabilities comparable to those of humans. A recent study reported that a DCNN acquired hierarchical representations and learned the concept of facial attractiveness. However, it remained unclear whether the judgment mechanisms of a DCNN trained on facial attractiveness are similar to those of humans. In this study, we show that, from individuals' facial attractiveness judgment scores, a DCNN learned those individuals' taste with high accuracy. We then reconstructed the visual mental templates of facial attractiveness of both the participants and the DCNN using the psychological reverse correlation technique. For every participant and corresponding DCNN, the visual mental template was successfully reconstructed, and we confirmed that the mental template of the DCNN was significantly correlated with that of the participant who provided the facial attractiveness scores for learning. Our results suggest that a DCNN that learns an individual's taste for facial attractiveness develops judgment mechanisms similar to those of humans.
Judgments of facial attractiveness as a dynamic combination of internal/external parts
Chihiro Saegusa1, Katsumi Watanabe2,3, Shinsuke Shimojo4, Janis Intoy4 1Kao Corporation, 2Waseda University, 3The University of Tokyo, 4California Institute of Technology
Although facial attractiveness has been widely researched, how the attractiveness of internal/external facial parts and of the whole face interact over the time course of an attractiveness judgment is still unclear. In our research, visual information integration in facial attractiveness judgment was investigated in a series of psychological experiments in which the presentation of the facial images to be evaluated was constrained spatially and/or temporally. Attractiveness evaluation of briefly presented facial images demonstrated that 1) the contribution of the eyes to whole-face attractiveness judgment remains high even at exposure durations as short as 20 milliseconds, while the contribution of other facial parts changes over time, and 2) whether the gaze of the face is directed toward or averted from the evaluator affects the dynamic integration of facial part information into judgments of whole-face attractiveness. Separate experiments examining the influence of external features on perceived facial attractiveness revealed a mutual, but not symmetrical, influence between facial attractiveness and hair attractiveness. Together, these findings suggest that facial attractiveness judgment is dynamic: information from internal/external features is integrated over time while being affected by social cues such as the gaze direction of the face.
Data-driven mathematical modeling of facial attractiveness
Koyo Nakamura1,2,3 1Waseda University, 2Japan Society for the Promotion of Science, 3Keio Advanced Research Centers
Facial attractiveness can be assessed in a rapid and spontaneous manner, long before we can tell what features make a face attractive. Faces vary in many different ways, and many of these variations affect facial attractiveness judgments. In this talk, I will present a series of studies examining how people judge facial attractiveness from a combination of multiple cues, such as facial shape and skin properties. To identify important cues to attractiveness, we first sampled many different East Asian faces and collected attractiveness ratings, then built a data-driven mathematical model of how facial features vary with attractiveness. The results revealed that faces with larger eyes, smaller noses, and brighter skin are judged as more attractive, regardless of the sex of the face. Furthermore, our model allows the attractiveness of any arbitrary face to be quantitatively manipulated by transforming such facial features, thus helping to discover as yet unidentified cues to attractiveness. This attractiveness manipulation technique provides a tool for producing well-controlled East Asian face stimuli that differ quantitatively in attractiveness, which can be used to further elucidate the visual processes underlying attractiveness judgments.
Varieties of attractiveness and their brain responses
Tomohiro Ishizu University College London
Over the past decade, the cognitive neuroscience of attractiveness has matured and has found that experiencing something as attractive, such as viewing a beautiful face, engages the brain's reward circuit, namely the medial orbitofrontal cortex/ventromedial prefrontal cortex (mOFC/vmPFC) and structures in the ventral striatum (VS). Interestingly, these core regions are thought to be stimulated by attractiveness regardless of its source, encoding a 'common neural currency' (Levy & Glimcher, 2012). This accords with daily experience: we feel pleasure when we find something attractive. However, if attractiveness is so closely intertwined with pleasure, a question arises: activation of the mOFC/vmPFC and VS during attractiveness experiences may be attributable merely to the pleasurable experience, and have little to do with attractiveness per se. To address this question, I propose to categorise attractiveness into two types: attractiveness derived from biologically based stimuli, such as faces, bodies, or nutritious foods (biological attractiveness), and attractiveness derived from higher cognitive processes, such as art appreciation or a person of good morality (higher-order attractiveness). Stimuli in the former category relate to the fulfilment of biological needs, such as mating and the intake of nutrition (primary rewards), whereas stimuli in the latter category do not. Recent findings and discussions from our lab and others (e.g., Wang et al., 2015; Ishizu & Zeki, 2017) suggest that judgments of both biologically based attractiveness (i.e., facial attractiveness) and higher-order attractiveness (i.e., moral attractiveness) engage the mOFC/vmPFC, but that higher-order attractiveness alone lacks VS activity.
These findings suggest that, even though attractiveness is closely tied to pleasure, the relationship may not be one-to-one, and different types of attractiveness may engage dissociable brain mechanisms.
Two-photon calcium imaging of architecture and computation of primate visual cortex
Ichiro Fujita (Osaka University)
Kristina Nielsen (Johns Hopkins University)
Ian Nauhaus (University of Texas Austin)
Shiming Tang (Peking University)
Ian Nauhaus (University of Texas Austin)
Kristina Nielsen (Johns Hopkins University)
Ichiro Fujita (Osaka University)
Kenichi Ohki (The University of Tokyo)
Nicholas Priebe (University of Texas Austin)
We aim to bring together researchers working actively on the primate visual system with two-photon calcium imaging and related techniques. Two-photon imaging is now a standard tool in systems neuroscience. It provides unique advantages for addressing questions, at multiple levels of function and structure, that are not accessible with other techniques: e.g., the high temporal resolution to detect signals originating from single action potentials and the high spatial resolution to determine the position and distribution of individually monitored neurons. These technical merits have allowed the functional microarchitecture of visual and other cortices to be revealed. Thus far, the vast majority of two-photon imaging studies in the mammalian brain have been performed in rodents. However, the primate is the preferred animal model for linking cells and circuits to more sophisticated sensory, motor, and cognitive functions. This holds especially for visual functions, which have evolved elaborately in primates. Given this merit and potential, there is currently a large push to overcome the technical challenges of performing two-photon imaging in primates. In this symposium, we provide a platform for discussing and sharing the scientific impact, current technical advances, and ideas for future directions in this field. The speakers include researchers from Asia (Fujita, Ohki, Tang) and the USA (Nauhaus, Nielsen, Priebe), working on various stages of the visual cortical hierarchy (V1, V2, V4) in macaque and marmoset monkeys.
Zooming in on neural circuits in macaque visual cortex
Shiming Tang Peking University
My lab focuses on the neural mechanisms of visual object recognition and on developing techniques for neuronal circuit mapping. We have established long-term two-photon imaging in awake monkeys, the first and critical step toward comprehensive circuit mapping, to identify single-neuron functions. We have systematically characterized V1 neuronal responses in awake monkeys with unprecedented detail. We found that a large percentage of V1 neurons exhibit complex pattern selectivity beyond orientation tuning and respond sparsely to natural scenes. This finding suggests an early stage of local complex pattern detection in V1 within the visual object recognition hierarchy. Recently, we performed high-resolution dendritic imaging in macaque monkeys and mapped the fine spatiotemporal organization of excitatory inputs on the dendrites of macaque V1 neurons. Our results suggest that V1 neurons integrate rich and heterogeneous inputs for complex local pattern detection. With these modern imaging technologies now functioning in macaque monkeys, we can untangle the neuronal microcircuits of visual cortex and ultimately uncover fundamental computational principles of visual information processing.
Neural tuning in superficial V1 as a function of scale invariant inputs
Ian Nauhaus University of Texas Austin
Recent imaging studies in macaque primary visual cortex (V1) have revealed maps of spatial frequency (SF) preference that systematically align with maps of orientation preference and ocular dominance. Additional V1 maps are predicted under the assumption of “scale invariance”, whereby scale parameters (RF size, SF bandwidth, SF preference) are proportional to one another. However, prior studies show that scale invariance fails for “complex cells”, which make up the majority of our two-photon recordings in layer 2. Given that scale parameters cannot be predicted from SF preference for complex cells, a general model of the V1 architecture is still lacking for superficial V1. Here, we compared maps of SF preference to maps that would be correlated with them under scale invariance (RF size and SF bandwidth), in addition to maps that would be independent of SF under scale invariance (orientation and phase selectivity). In each case, the data deviated from scale invariance much more strongly than a population of V1 simple cells. Finally, we are fitting a model whereby scale-invariant simple cells converge to build a population of RFs like those in our data. In summary, we have provided a more complete description of V1 architecture in L2/3, the principal output layer, which provides improved constraints for decoding models over basic assumptions of scale invariance.
Clustering of 3D and 2D shape information in area V4
Kristina Nielsen Johns Hopkins University
Along the ventral pathway, image information is converted into object and scene understanding. Area V4, an intermediate stage in this pathway, has previously been shown to represent 2D contour shape. We have recently demonstrated that a substantial fraction of V4 neurons are more responsive to 3D volumetric shape (shape-in-depth) than to 2D shape in the image plane. Here, using two-photon functional microscopy, we investigated the spatial organization of 3D and 2D shape tuning in V4. Our results demonstrate that neurons with 2D and 3D shape tuning form segregated clusters in V4.
We used realistic shading cues to render simple volumetric shapes (Cs and Vs with cylindrical cross-sections and smoothly curved joints and endcaps), presented at a range of 3D orientations. In addition, we measured responses to the 2D silhouettes of the same stimuli. More precisely, for each 3D stimulus there was a matching 2D stimulus that shared the same 2D contours. While in the 3D case these contours appeared as self-occlusion boundaries of volumetric objects, they appeared as sharp edge boundaries of planar shapes in the 2D case.
3D and 2D stimuli were used to probe the responses of V4 neurons in 2-photon experiments in anesthetized animals, in which neurons were labeled with the calcium indicator Oregon Green BAPTA-1AM. In each imaging region, we consistently observed strong local clustering of 3D- and 2D-responsive neurons in separate patches on the order of several hundred microns. At the same time, neighboring 3D and 2D patches were most responsive to congruent 3D and 2D shapes. These results suggest that derivation of 3D volumetric shape from 2D image information is a major constraint on micro-organization in area V4.
Ichiro Fujita Osaka University
Visual texture is an important cue for recognizing objects. Representing a texture requires computing a collection of products between V1-like filter outputs across scales, orientations, and positions (i.e., higher-order image statistics). Previous studies using static texture stimuli demonstrate that neuronal selectivity for higher-order image statistics is not evident in V1 and gradually develops in the mid-level ventral visual areas V2 and V4. Here, we aimed to extend our understanding of texture processing in V4 by examining the functional architecture of texture representation. We recorded the activity of V4 neurons with in vivo two-photon calcium imaging in two immobilized macaque monkeys under opiate analgesia. We presented naturalistic movie stimuli in which higher-order image statistics changed dynamically. We evaluated the contributions of higher-order image statistics to V4 responses using a general encoding-model approach with regularization and cross-validation. Consistent with previous studies, V4 neurons overall preferred higher-order image statistics over low-level image statistics (i.e., V1-like filter outputs), whereas V1 neurons recorded from the same animals preferred spectral stimulus features such as orientation and spatial frequency. The 16 sites we examined in V4 varied in their preference for higher-order image statistics: most contained many neurons preferring higher-order image statistics, while some others abounded with neurons preferring low-level image statistics and were thus indistinguishable from V1 sites. From these results, we conclude that neurons representing higher-order image statistics are locally clustered in V4, with a cluster size between several hundred micrometers (the size of a recording site) and several millimeters (the distance between recording sites).
Together with the known functional structures for color and orientation in V4 at this scale, we suggest that V4 consists of mosaic-like compartments (~mm in size), each responsible for a specific visual feature such as color, orientation, or texture.
Multiscale calcium imaging of the visual cortex in marmoset monkeys
Kenichi Ohki The University of Tokyo
Primate neocortex analyzes visual scenes with a hierarchical neuronal network. To understand how such a network interactively processes visual scenes, we developed a method to monitor neuronal activity at multiple spatial scales. Based on the Tet-off system (Sadakane et al., 2015), we designed new AAV2/9 vectors that contain the TLoop system (Cetin and Callaway, 2014) or two in-tandem copies of GCaMP, and successfully increased the level of GCaMP expression over a large volume of the marmoset neocortex.
Using the improved vectors, we first performed wide-field one-photon calcium imaging. In addition to the orientation map in the primary visual cortex (V1), we found that full-field luminance increments and decrements evoked regular patches of responses in V1 (a luminance polarity map). We then studied cellular activity using two-photon imaging. In addition to orientation-selective cells, we found “non-tuned cells”, which responded to drifting gratings but were not selective for orientation, and “non-responsive cells”. Interestingly, non-tuned cells selectively responded to luminance increments, whereas non-responsive cells selectively responded to luminance decrements.
The present method is applicable to higher visual areas beyond V1. The smooth neocortex of the marmoset allowed us to monitor neuronal activity in multiple brain areas spanning occipital to parietal cortices. These results demonstrate the usefulness of the marmoset brain for studying the visual cortical network.
Nicholas Priebe, Jagruti Pattadkal and Boris Zemelman University of Texas at Austin
Area MT contains neurons that are exquisitely sensitive to visual motion and, based on extracellular recordings, is functionally organized for direction. To uncover the fine-scale functional architecture of area MT and assay the selectivity of inhibitory neurons, we used the marmoset (Callithrix jacchus). These primates have lissencephalic brains that give us access to the activity of large neuronal populations. We used two-photon microscopy to record from several hundred neurons at single-cell resolution over a 1 mm² region of area MT in awake marmosets. GCaMP expression was induced by injecting AAV constructs with promoters that drive specific expression in interneurons within area MT. GCaMP signals from inhibitory neurons revealed degrees of motion selectivity similar to those found in excitatory neurons (median DSI = 0.38, n = 301 cells). Nearby neurons tend to share direction preference, forming a map of direction preference with a period of approximately 300 microns. Finally, we found that orientation selectivity in MT neurons is weaker (median OSI = 0.13) than direction selectivity. In sum, we have revealed the fine functional organization of area MT using two-photon microscopy in awake marmosets and have demonstrated that MT inhibitory neurons are as direction selective as their excitatory counterparts.
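The DSI and OSI values quoted in this abstract follow standard definitions, which can be stated compactly. The sketch below uses common formulations (the exact normalization used by the authors is an assumption): DSI contrasts responses to the preferred and opposite directions, and a global OSI measures the length of the mean response vector in orientation space with angles doubled, since orientation is periodic over 180 degrees.

```python
import numpy as np

def direction_selectivity_index(r_pref, r_null):
    """DSI = (Rpref - Rnull) / (Rpref + Rnull), where Rpref and Rnull
    are responses to the preferred and opposite motion directions."""
    return (r_pref - r_null) / (r_pref + r_null)

def orientation_selectivity_index(responses, orientations_deg):
    """Global OSI: length of the mean resultant vector in orientation
    space. Angles are doubled because orientations 180 deg apart are
    equivalent; OSI = 1 for a perfectly selective cell, 0 for none."""
    theta = np.deg2rad(2.0 * np.asarray(orientations_deg, dtype=float))
    r = np.asarray(responses, dtype=float)
    vec = np.sum(r * np.exp(1j * theta))  # response-weighted vector sum
    return np.abs(vec) / np.sum(r)
```

For example, a cell responding 0.69 to its preferred direction and 0.31 to the opposite direction has DSI = 0.38.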
Color vision in naturalistic objects and environments
Yoko Mizokami (Chiba University)
Michael A. Webster (University of Nevada, Reno)
Maria Olkkonen (Durham University)
Toni Saarela (University of Helsinki)
Takehiro Nagai (Tokyo Institute of Technology)
Yoko Mizokami (Chiba University)
Color perception in real life is both adjustable and stable. Color adaptation and color constancy are good examples of this flexibility of color vision, but recent research has revealed much more complexity and the influence of various factors, such as the color distribution in an environment, naturalistic changes in illumination, material properties, cue integration across different visual dimensions, memory and learning, and the recognized naturalness of objects and scenes. This symposium focuses on the properties of our color vision for real or realistic objects and environments, and also discusses how we should test those properties, introducing the latest research of five researchers who work on complex color vision from different viewpoints but whose interests overlap.
Dr. Webster will talk about environmental influences on color appearance through his extensive work on adaptation to various environments. Dr. Olkkonen will talk about how learning and memory affect our color perception. Dr. Saarela will talk about how cue integration across different visual dimensions helps color and material perception. Dr. Nagai will talk about the effects of specular reflection components on color constancy using various combinations of complex stimuli generated by computer graphics. Dr. Mizokami will talk about color and material perception under different lighting conditions in a real environment.
Michael A. Webster University of Nevada
Blue-yellow variations are a prominent property of the natural environment. For example, different phases of daylight vary along a blue-yellow axis, and the gamut of colors in many natural scenes varies from blue sky to predominantly yellowish or brown terrain. These stimulus biases may have shaped many aspects of human color vision, including chromatic sensitivity and color appearance. Sensitivity can be weaker along the blue-yellow axis, and this may reflect adaptation to the stronger blue-yellow contrasts in scenes. This bias is also manifest in the scaling chosen for many perceptually uniform color spaces. Similarly in color appearance, blue and yellow can seem more pure or “unique,” even though these hues do not clearly reflect special states in the underlying neural code for color. There are also important differences between blue and yellow percepts which suggest they are not treated as two poles of a common underlying dimension. In particular, bluish tints are more likely to be attributed to the illuminant, while yellowish tints are more likely to be associated with surface color. These asymmetries are largely restricted to the blue-yellow axis, and may again be shaped by high-level inferences about the chromatic properties of the world.
Learning and memory in color perception
Maria Olkkonen Durham University
We often have the experience of perceiving the colors and materials of objects in our environment effortlessly. But estimating the material properties of objects is in fact a computationally hard problem for the visual system, because the light signal that reaches our eyes from surfaces in our environment depends not only on the reflectance of the surface, but also on the illumination impinging on the surface. Statistical regularities about surfaces and illuminants, learned through interacting with our environment and through social communication, may contribute to our ability to compensate for changes in illumination when estimating object color. In this talk, I will link human color constancy to a probabilistic framework of perceptual estimation, and will give an overview of experiments testing predictions from this framework. The results so far suggest that human color constancy can be modeled in the probabilistic framework, but more work remains to be done to uncover the computational mechanisms of color constancy in natural scenes. I will end by discussing ongoing research to push the study of color constancy to more realistic scenes and tasks.
Cue integration in color and material perception
Toni Saarela University of Helsinki
Visual features do not occur in isolation, and surfaces and materials in our visual environment differ from each other in several respects: in color, lightness, glossiness, and texture, for example. Integrating information from several such sources, or "cues", is a fundamental property of the visual system and can enable us to perform better in several visual tasks. Visual cue integration can occur "across space", as when integrating color information from distinct spatial locations to estimate the body color of an object. It can also happen "across dimensions", as when integrating different types of spatially overlapping cues, for example color and glossiness when discriminating or identifying materials. I will present examples of both types of integration. First, as an example of spatial integration, I will discuss results from experiments characterizing the sampling of hues when estimating the mean hue of an ensemble of colors. Color discrimination was measured for a range of stimulus sizes, and stimulus colors were perturbed by noise. Through modeling, we derived estimates of the observers' ability to sample the stimulus when discriminating mean color. The estimates far exceed those previously reported in the literature based on a more limited set of experimental conditions. To illustrate spatially local integration of cues from different visual dimensions, I will present results on optimal integration of color, texture, and glossiness cues when evaluating surface properties. Finally, I will highlight the importance of manipulating the extrinsic uncertainty in a psychophysical task when measuring integration, and will show how mandatory integration of visual cues can also have its downsides, as it can in some cases prevent the selection of individual feature dimensions for further processing.
Effects of specular reflection components on color constancy
Takehiro Nagai Tokyo Institute of Technology
The visual system is considered to employ various heuristics on retinal image features that reflect scene illumination colors in order to achieve color constancy. Specular highlights are one such candidate cue, because they typically reflect the spectral components of the illumination directly. Although some previous studies reported small improvements of color constancy in scenes with glossy objects, the degree of improvement differed considerably across studies. We have investigated psychophysically 1) whether the effectiveness of specular highlights holds under different stimulus conditions, and 2) which image features explain such highlight effects on color constancy.
The stimulus consisted of a test sphere and many background objects, which had different levels of specular reflectance (SR) and fixed diffuse reflectance. Observers performed achromatic-setting tasks on the test sphere under a D65, A, or 25,000 K illuminant.
First, the color constancy index increased by a maximum of 30% from minimum to maximum SR under illuminant A. In contrast, SR did not significantly affect the color constancy index under 25,000 K. These results suggest that the conditions under which specular components contribute to color constancy are limited; their role seems merely supportive, not very pronounced. Second, when we performed almost the same experiments except that the images of the background objects were phase-randomized while keeping the luminance-chromaticity histograms unchanged, the improvement of color constancy was dramatically diminished. Also, removing the high-luminance components of the specular reflections completely abolished the improvement of color constancy. High-luminance components that look like specular reflections therefore seem to be crucial for the improvement of color constancy.
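A common way to score achromatic settings like these is a Brunswik-ratio-style constancy index, CI = 1 - b/a, where a is the chromaticity shift required for perfect constancy and b is the observer's residual error. Below is a minimal sketch under that convention; the abstract does not specify which index variant was used, and the chromaticity coordinates are illustrative approximations (roughly D65-like and illuminant-A-like), not data from the study:

```python
import math

def constancy_index(neutral_ref, neutral_ideal, setting):
    """CI = 1 - b/a from an achromatic setting in a 2-D chromaticity plane.

    neutral_ref:   achromatic chromaticity under the reference illuminant
    neutral_ideal: chromaticity a perfectly constant observer would set
                   under the test illuminant
    setting:       the observer's actual achromatic setting under the
                   test illuminant

    CI = 1 means perfect constancy; CI = 0 means no adjustment at all.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    a = dist(neutral_ref, neutral_ideal)  # shift required for perfect constancy
    b = dist(setting, neutral_ideal)      # observer's residual error
    return 1.0 - b / a

# Illustrative (u', v')-like coordinates: a D65-like neutral, an
# illuminant-A-like neutral, and a setting that lands partway between them.
d65_neutral = (0.198, 0.468)
ill_a_neutral = (0.256, 0.524)
observer_setting = (0.238, 0.506)

print(round(constancy_index(d65_neutral, ill_a_neutral, observer_setting), 2))  # 0.68
```

A setting that lands exactly on the perfect-constancy point would give CI = 1, and a setting that ignores the illuminant change entirely would give CI = 0.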
Yoko Mizokami Chiba University
It has been suggested that specular reflections occurring on an object's surface contribute to color and material perception. First, we examined the effect of the surface and specular reflection of objects on color constancy using real vegetables as familiar objects in real space. Observers evaluated stimuli with different glossiness under white and reddish illumination, and we compared their color appearances. In real space, specular reflection hardly affected color constancy, but under a limited-view condition, color constancy was slightly better for the glossy surface than for the matte surface. These results suggest that specular reflection contributes slightly to color constancy under limited conditions. Second, we examined how the color appearance of an object's surface is influenced by the diffuseness of lighting in real miniature rooms. We used two miniature rooms, one illuminated by diffused light and the other by direct light. We presented test samples with a sine-wave surface; both glossy and matte materials in five colors were prepared. Observers judged the color appearance of the samples by selecting corresponding colors. The corresponding colors for the test samples were similar under both diffused and direct lighting conditions, suggesting that the color appearance of object surfaces is quite stable across changes in material and illumination.
Multi-dimensional approach to understand anatomical basis of visual functions
Hiromasa Takemura (Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT))
Toru Takahata (Zhejiang University)
Hiromasa Takemura (CiNet, NICT)
David Lyon (University of California, Irvine)
Toru Takahata (Zhejiang University)
James Bourne (Monash University)
Although the visual system has been widely studied over the last several decades, one major question remains largely unanswered: how the functions of the visual system relate to their underlying anatomical properties. This symposium features investigators addressing this question with cutting-edge approaches, using various methods spanning the molecular and microscale to the macroscale level. The symposium will address how anatomical measurements help us understand disorders, organization, plasticity, and evolution of the visual system.
Understanding major white matter pathways in the visual system: from neuroimaging to neuroanatomy
Hiromasa Takemura1,2 1Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), 2Osaka University
The human and non-human primate visual system comprises a number of geniculo-cortical and cortico-cortical white matter pathways, which support communication between distinct visual areas. This talk describes recent progress in analyzing these pathways to understand disorders, organization, and function of the visual system. First, I will present evidence that a retinal ganglion cell disease, Leber’s Hereditary Optic Neuropathy, causes different types of neurobiological change in different parts of the visual pathways (optic tract and optic radiation), obtained by combining two types of neuroimaging measurement, diffusion MRI (dMRI) and quantitative MRI (qMRI; Mezer et al., 2013). Second, I will describe recent progress in analyzing the vertical occipital fasciculus (VOF; Yeatman et al., 2014) by combining dMRI and anatomical measurements. The VOF is an important white matter tract for understanding visual processing streams because it connects the dorsal and ventral streams (Takemura et al., 2016). To improve our understanding of this pathway, we first analyzed high-resolution dMRI data obtained from non-human primate brains. The analysis reveals inter-species similarities of the VOF across primate species, and also shows that VOF cortical endpoints are consistent between dMRI and previous invasive studies. Furthermore, I will describe an analysis of data obtained using polarized light imaging (PLI; Axer et al., 2011), which provides fiber orientations at micrometer resolution. The PLI data not only support the existence of the VOF, but also help disentangle current controversies about the visual white matter pathways, such as how distinct the VOF is from the pathway connecting occipital and inferotemporal cortex.
Finally, I will discuss how an accurate understanding of white matter pathways helps us understand the organization of extrastriate cortex.
Across the V1 orientation map, long-range lateral inputs onto local inhibitory neurons sharpen orientation tuning of principal neurons
David C. Lyon University of California, Irvine
Specific cell types and their connectivity are a key determinant of neural function and selectivity. Visual cortex is among the most complex and detailed brain structures, and several recent technological advances have enabled more detailed probing of cell-type-specific relationships to connectivity and function. Yet such studies leave many questions unresolved and are largely limited to transgenic mice, which lack the more complex organization found in higher visual species such as cat and monkey. Of particular interest is the role of inhibitory neurons in modulating orientation selectivity. Orientation tuning has been shown to improve dramatically when visual stimuli expand beyond the classical receptive field (CRF) into the extraclassical surround (ECS). Moreover, in addition to sharper orientation tuning, firing rate is also reduced, suggesting a role for inhibition. Long-range horizontal projections, which allow for integration across the visual field within V1 and represent visual space corresponding to parts of the ECS, preferentially connect regions, or domains, of neurons with like orientation preference. Using a novel cell-type-specific rabies virus tracing strategy, we have shown that a major target of these orientation-tuned inputs is local inhibitory neurons (Liu et al., 2013, Curr Biol). We hypothesize that these inputs play a key role in the orientation selectivity of the suppressive effects attributed to the ECS. To test this, we retrogradely delivered light-gated opsins, ChR2 or ArchT, to these long-range inputs through our rabies virus technique. We then measured the effects of their light-mediated activation or blockade, respectively, on single-unit responses to various center-surround visual stimulus conditions. When only a CRF-sized stimulus was shown, ChR2 activation simulated surround suppression, reducing firing rate and sharpening orientation tuning.
Conversely, ArchT-mediated suppression of the long-range inputs under conditions including the ECS blocked the effects of suppression, and orientation tuning broadened. Because the labeled long-range inputs are largely excitatory neurons synapsing onto local inhibitory neurons, these results show that interconnectivity between orientation domains plays a major role in modulating orientation tuning.
Possible parallel visual pathways between the lateral pulvinar and V2 thick/thin stripes in macaques
Toru Takahata Interdisciplinary Institute of Neuroscience and Technology (ZIINT), Zhejiang University
It has been known that primate V2 is subdivided into at least three sub-compartments, thick stripes, thin stripes, and pale stripes, according to their reactivity in cytochrome oxidase (CO) histochemistry. It was later revealed that these histochemical sub-compartments are associated with functional reactivity to distinct types of visual stimuli: thick stripes are more responsive to directional movement and depth coding, thin stripes to color stimuli, and pale stripes to the form and orientation of visual stimuli. Furthermore, these physiological properties are reasonably associated with connectivity from V1, as color-coding neurons in CO blobs preferentially project to thin stripes and neurons in interblobs preferentially project to thick and pale stripes. They are recognized as “parallel visual pathways”: the “P” pathway that goes through geniculate parvocellular layers/V1 CO blobs/V2 thin stripes, and the “M” pathway that goes through geniculate magnocellular layers/V1 interblobs/V2 thick stripes. On the other hand, it was previously revealed that thick and thin stripes receive direct projections from the pulvinar complex of the thalamus, but pale stripes do not. Furthermore, previous electrophysiological studies revealed two distinct visuotopic maps within the lateral pulvinar. Thus, we hypothesized that there is another set of parallel pathways between the pulvinar and V2. To address this possibility, we injected different kinds of tracers (BDA, CTB-Alexa-488, and CTB-Alexa-555) into three consecutive thick/thin stripes in V2 after identifying the V2 stripe maps by intrinsic signal optical imaging, and examined retrograde labeling in the pulvinar of macaques.
As a result, we found a few distinct patches of labeling for each retrograde tracer, and found that thick-stripe-projecting and thin-stripe-projecting compartments are segregated, although they are located next to each other within the lateral pulvinar. Our study indicates the possibility that there are several parallel pathways within the pulvinar-V2 projection, similar to the geniculo-striate projections.
Development of pulvino-cortical circuits: implications for visual behaviours and disorders
James Bourne Australian Regenerative Medicine Institute, Monash University
The pulvinar is the largest collection of nuclei in the thalamus of primates, including humans, comprising three nuclei and further subdivisions. Even though it has been demonstrated to be embedded within sensory systems and to connect with the majority of the neocortex, its function remains unclear. Over the past decade, my group has been instrumental in demonstrating, in the marmoset monkey, the role of the medial subdivision of the inferior pulvinar in the development of the dorsal-stream visual cortex, and the manifestations of a lesion to this region of the brain in early life. We now know that this area plays an important role in the development of the visual cortex and the establishment of visuomotor behaviours, such as reaching and grasping. Furthermore, we have evidence that the pulvinar can route visual information to the visual cortex following a lesion of the geniculostriate pathway in early life, in both monkeys and humans. Collectively, these data demonstrate an essential role for the inferior pulvinar thalamic nuclei in early life. Finally, although it had been suggested that thalamocortical circuits are ‘hardwired’ by birth, we now have evidence of their inherently plastic nature early in life and their ability to reroute sensory information.
Presentation list ver2
If you find any inconvenience, please contact email@example.com ASAP.
Abstracts are here: abstracts.pdf
The first Asia-Pacific Contest of Visual Illusion (APCV-i) will be held as one of the programs of the 15th Asia-Pacific Conference on Vision (APCV2019) in Osaka, Japan. The committee of the APCV-i is seeking submissions of dramatic, interesting, provocative, educational, and entertaining demonstrations of visual illusions/phenomena.
Demo presentations and voting take place at the banquet on Wednesday night (OIC cafeteria).
Presenters
"Fragmented gold effect"
Gouki Okazawa1,2 and *Hidehiko Komatsu1,3
1) National Institute for Physiological Sciences, 2) Center for Neural Science, New York University, 3) Brain Science Institute, Tamagawa University
"Temporal fusion illusion of human color vision"
Kunming University of Science and Technology
Kentaro Usui and Akiyoshi Kitaoka
College of Comprehensive Psychology, Ritsumeikan University
Takahiro Kawabe, On behalf of APCV-i committee
The Center for Information and Neural Networks (CiNet) is an interdisciplinary neuroscience research institute based in Osaka, located on the Suita Campus of Osaka University. CiNet is offering a tour of the facility for APCV2019 attendees just after the conference, on the afternoon of Aug. 1.
Please register for the tour using...
CiNet Tour for APCV2019 attendees registration form
Space is limited to 40 visitors. Priority is given to visitors from other countries. We will let you know via your registered email address whether or not your registration has been accepted.
Registration Deadline: July 15, 2019
Date and Time of the tour: Thursday, August 1, 2019 14:30~16:00
Please gather at 14:30 in Conference Room A&B, Floor 1, CiNet Building, Osaka University Suita Campus. Address: 1-4 Yamadaoka, Suita City, Osaka, 565-0871
A free round-trip bus from Ritsumeikan University to CiNet is available!
Questions regarding this tour should be sent to firstname.lastname@example.org.
We look forward to seeing you in Osaka.
Izumi Ohzawa, Chair