


Introduction to Computational Neuroscience

Time: Mondays, as indicated below
Location: Friedrich Miescher Institute, Room 5.30 (access information below)

Organizers: Alexander Attinger, Jan Gründemann,
                     Milica Markovic, Adrian Wanner

Outstanding researchers from all over Europe will present central topics in computational and theoretical neuroscience. Each session begins with a general introduction to the topic in the first lecture hour, followed by a research talk in the second.

Contact us at: computational.neuroscience@fmi.ch

Final lecture: Monday 26.01., FMI 5.30

Peter Latham | Gatsby Institute, UCL


Talk: 17:00 - 18:00

Learning and Bayes: probabilities are important at the nanometer scale

Organisms face a hard problem: based on noisy sensory input, they must set a large number of synaptic weights. However, they do not receive enough information in their lifetime to learn the correct, or optimal, weights (i.e., the weights that ensure the circuit, system, and ultimately organism functions as effectively as possible). Instead, the best they could possibly do is compute a probability distribution over the optimal weights. Based on this observation, we hypothesise that synapses represent probability distributions over weights, in contrast to the widely held belief that they represent point estimates. From this hypothesis, we derive learning rules for supervised, reinforcement and unsupervised learning. These rules introduce a new feature: the more uncertain the brain is about the optimal weight of a synapse, the more plastic it is. This is consistent with current data, and introduces several testable predictions.
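As a loose illustration of the uncertainty-scaled-plasticity idea (a toy sketch, not the speaker's actual derivation; the setup and all numbers here are invented), a synapse that stores a mean and a variance over its optimal weight can update like a one-dimensional Kalman filter, with the effective learning rate growing with its uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: estimate one "optimal" synaptic weight w_true
# from noisy scalar observations y = w_true + noise.
w_true, obs_var = 0.8, 0.5

# The synapse stores a distribution (mean, variance), not a point estimate.
mean, var = 0.0, 1.0

for _ in range(200):
    y = w_true + rng.normal(0.0, np.sqrt(obs_var))
    gain = var / (var + obs_var)   # more uncertainty -> larger update
    mean += gain * (y - mean)      # plasticity scales with uncertainty
    var = (1.0 - gain) * var       # uncertainty shrinks as evidence accumulates
```

After many observations the variance, and with it the plasticity, has collapsed, mirroring the prediction that well-constrained synapses should be the least plastic.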

Lecture: 16:00 - 17:00

Correlations and computation

Neurons in the brain are correlated: they tend to fire together far more often than would be expected by chance. It is well known that these correlations can have a large effect on both information transmission and computation, but it is not known what that effect is. That's partly because correlations often become important only for large populations, where experiments are notoriously difficult. But it's also because theoretical studies have yielded contradictory results. In particular, the answer to the question "how do correlations affect information?" is the rather unsatisfying "it depends on the correlations". Here we turn that question around, and ask "how does information affect correlations?". Phrased that way, there is a unique, well-defined answer, one that tells us exactly what correlations look like in large populations. Moreover, the answer gives us hints as to why those correlations are useful for almost noise-free computation.


Date Time Speaker/Topic

19.05.2014, 17:00 - 19:00
Jakob Macke | MPI Tübingen
Statistical methods for characterizing neural population dynamics

6.10.2014, 16:00 - 18:00
Gasper Tkacik | IST Austria

How and what can we learn from simultaneous large-scale neural recordings
While neuroscience claims to show an increasing appreciation for the contact between theory and experiment, this intersection often remains rather superficial. Sometimes it consists of only a few theoretically predicted signatures that qualitatively match the data; sometimes even fundamental theoretical ideas remain without direct experimental support for decades. Partly this has been due to experimental limitations, but with the recent expansion of our ability to record simultaneously from many neurons in behaving organisms, these limitations are rapidly disappearing. How do we proceed towards a closer interaction between theory and experiment, to make full use of the new, rich datasets? In this lecture, I will briefly review various data-driven attempts to understand population coding and focus on applications of the maximum entropy framework, which provides interesting links between neuroscience, machine learning, and statistical physics.
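For readers unfamiliar with the maximum entropy framework mentioned above: a pairwise maximum entropy ("Ising") model assigns each binary population pattern sigma a probability proportional to exp(sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j). A brute-force sketch for a toy population (all parameters invented, and the population kept small enough to enumerate exactly):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

n = 5                                       # tiny population, exact enumeration
h = rng.normal(0.0, 0.5, n)                 # biases (invented)
J = np.triu(rng.normal(0.0, 0.3, (n, n)), 1)  # pairwise couplings (invented)

# Enumerate all 2^n binary firing patterns and their Boltzmann weights.
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
logp_unnorm = states @ h + np.einsum("ki,ij,kj->k", states, J, states)
p = np.exp(logp_unnorm)
p /= p.sum()                                # normalize into a distribution

# Moments the model is fit to match: firing rates and pairwise correlations.
rates = p @ states                          # E[sigma_i]
corr = (states.T * p) @ states              # E[sigma_i sigma_j]
```

In practice one would tune h and J so that these model moments match the measured ones; for more than a few dozen neurons the exact enumeration above must be replaced by sampling.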
13.10.2014
lecture: 16:00 - 17:00
talk: 17:00 - 18:00
Simon Thorpe | CERCO Toulouse

Mechanisms for ultra-rapid visual processing
For over 25 years, I have been arguing that the speed of processing in the human visual system puts major constraints on the underlying computational mechanisms. Behavioral and electrophysiological data show that at least some biologically important stimuli (including animals and faces) can be identified and localized in just 100 ms. This is particularly impressive given the limitations of the underlying neural hardware, and suggests that a great deal of processing can be achieved with a single feed-forward sweep through the many successive stages of the visual system. For a long time, it was thought that the ability of the human visual system to process complex natural scenes would remain forever beyond the capacity of artificial vision systems, but it is now clear that the state of the art in computer vision is starting to catch up. Interestingly, the best artificial systems use processing architectures built on simple feed-forward mechanisms that look remarkably similar to those used in the primate visual system. However, the procedures used for training these artificial systems are very different from the mechanisms used in biological vision. In this talk, I will discuss the possibility that spike-based processing and learning mechanisms may allow future models to combine the remarkable efficiency of the latest computer vision systems with the flexible and rapid learning seen in human scene processing.

How can our brains store visual (and auditory) memories that last a lifetime?
People in their 50s and 60s can recognize images and sounds that they have not re-experienced for several decades. How does the brain manage to keep these memory traces intact? I will describe some experimental and modeling work that supports the radical suggestion that these extremely long-term memories involve the formation of highly selective neurons, tuned to stimulus patterns that were presented repeatedly at some time in the past - effectively "grandmother cells". Such highly selective neurons can be produced using a simple learning rule based on spike-timing-dependent plasticity (STDP) that leads neurons to become selective to spatiotemporal patterns of input spikes that occur repeatedly. Even more radical is the suggestion that the neocortex may contain a substantial proportion of totally silent neurons - a sort of neocortical dark matter. Such neurons would effectively remain silent until the original trigger stimulus is shown again. If STDP-like rules apply, the absence of firing would mean that the sets of synaptic weights could remain intact virtually indefinitely.
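The selectivity mechanism sketched above can be caricatured in a few lines. This is a generic STDP-flavoured toy, not the speaker's model: the pattern, threshold and learning rates are all invented. A neuron that potentiates inputs active when it fires, and depresses the rest, ends up responding to a repeated pattern and ignoring noise:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in = 100
pattern = np.zeros(n_in, dtype=bool)
pattern[:20] = True                   # fixed input pattern that recurs over time

w = np.full(n_in, 0.5)                # synaptic weights, initially uniform
theta = 8.0                           # firing threshold (arbitrary)

for step in range(500):
    # Half the time the stored pattern recurs, otherwise random noise.
    x = pattern if step % 2 == 0 else (rng.random(n_in) < 0.2)
    if w @ x > theta:
        # STDP caricature: potentiate active inputs, depress silent ones.
        w = np.clip(w + np.where(x, 0.02, -0.01), 0.0, 1.0)

# The neuron ends up selective: strong drive from the pattern,
# sub-threshold drive (on average) from noise inputs.
```

The same rule extended to spike timing, rather than this rate-style caricature, is what produces selectivity to spatiotemporal input patterns.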
20.10.2014, 16:00 - 18:00
Benedikt Grothe | LMU München

Dissecting neuronal function: an evolutionary perspective on circuits underlying spatial hearing
Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present-day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as on the coding strategies, between avian and mammalian model systems. We will argue that the mammalian and avian phylogeny of spatial hearing is characterized by convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals will be highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we will combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs.
3.11.2014
lecture: 11:00 - 12:00
talk: 17:00 - 18:00
Tiago Branco | Cambridge University

Single cell synaptic integration in dendrites

Synaptic integration in a circuit controlling feeding and innate escape behaviour
26.01.2015
lecture: 16:00 - 17:00
talk: 17:00 - 18:00
Peter Latham | Gatsby Institute, UCL

Correlations and computation (abstract above)

Learning and Bayes: probabilities are important at the nanometer scale (abstract above)

FMI ACCESS
Please enter the FMI through the reception (Porte) at Maulbeerstrasse 66. Visitors should state their name and affiliation; the reception staff will check the name, ask visitors to write their last name on a sticker, and let them through. Take the elevators to the seminar room on the 5th floor. To avoid a last-minute crunch and delays to the seminar, please arrive at the Porte about 10 minutes before the seminar begins.