Computational Neuroscience Lectures and Workshops
Previous events 2014-2020 (FMI)
In this lecture series, outstanding researchers presented central topics in computational and theoretical neuroscience. All speakers gave both a research talk and a general introduction to a computational neuroscience topic.
Questions, suggestions?
CNIB contact address: computational.neuroscience@fmi.ch
Organizers 2019/2020: Arjun Bharioke, Claire Meissner-Bernard, Fiona Muellner
20/21 February 2020
Nao Uchida | Harvard, USA
Seminar: A normative perspective on the diversity of dopamine signals
Workshop: Reinforcement learning and dopamine
» Poster for the event
4/5 December 2019
Elad Schneidman | Weizmann Institute of Science, Rehovot, Israel
» Poster for the event
27 November 2019
Wulfram Gerstner | Ecole Polytechnique Fédérale de Lausanne, Switzerland
Eligibility traces and three-factor learning rules
» Poster for the event
2/3 May 2019
Robert Gütig | Charité - Universitätsmedizin, Berlin
Seminar: Spike-timing based neuronal processing: Applications to vision and speech
Workshop: Where and when is the next spike? Gradient learning in spiking neurons
» Poster for the event
9/10 April 2019
Zhaoping Li | Max Planck Institute for Biological Cybernetics, Tübingen
Seminar: A new path to understanding vision: theory and experiments
Workshop: Computational modeling of neuro-circuits: case studies in vision, olfaction and locomotion
» Poster for the event
15/16 November 2018 (Thursday, Friday)
Christian Machens | Champalimaud Centre for the Unknown, Lisbon, Portugal
Seminar: Robust coding with spiking neural networks
Workshop: Spikes – and the headaches they have caused
» More about the speaker
Organizers 2017/2018: Arjun Bharioke, Aleena Garner, Fiona Muellner, Benjamin Titze
7 May 2018
Alexandre Pouget | University of Geneva, Switzerland
Seminar (11:30 - 12:30): Learning, uncertainty and confidence
Workshop (12:45 - 14:15): Bayesian approach to neural computation
» Poster for the event
18 April 2018
Gilles Laurent | Max Planck Institute for Brain Research, Frankfurt, Germany
Workshop (18:00 - 20:00): Computational Analyses of the Cuttlefish Camouflage Circuitry?
» Poster for the event
8 March 2018
Kevan Martin | Institute of Neuroinformatics, UZH/ETH Zurich, Switzerland
Seminar (11:30 - 12:30): The Cortical Daisy
Workshop (13:00 - 14:30): What's exciting about inhibition?
» Poster for the event
15 December 2017
Walter Senn | University of Bern, Switzerland
Seminar (11:30 - 12:30): Cortical microcircuits that implement error-backpropagation in the brain
Workshop (12:45 - 14:15): Lagrangian mechanics describing the dynamics and learning in cortical microcircuits
» Poster for the event
8 September 2017
Tatyana Sharpee | Salk Institute for Biological Studies, San Diego, CA, USA
Seminar (11:30 - 12:30): Part 1: Optimizing neural information capacity; Part 2: Complex, non-linear feature selectivity and position invariance in visual cortex
Workshop (12:45 - 14:45): Integrating computational and experimental work
» Poster for the event
Organizers 2014: Alexander Attinger, Jan Gründemann, Milica Markovic, Adrian Wanner
19.05.2014, 17:00 - 19:00
Jakob Macke | MPI Tübingen
Statistical methods for characterizing neural population dynamics
6.10.2014, 16:00 - 18:00
Gasper Tkacik | IST Austria
How and what can we learn from simultaneous large-scale neural recordings
While neuroscience claims to show increasing appreciation for the contact between theory and experiment, this intersection often remains rather superficial. Sometimes it consists of only a few theoretically predicted signatures that qualitatively match the data; sometimes even fundamental theoretical ideas remain without direct experimental support for decades. This has been partly due to experimental limitations, but with the recent expansion of our ability to record simultaneously from many neurons in behaving organisms, these limitations are rapidly disappearing. How do we proceed towards a closer interaction between theory and experiment, to make full use of the new rich datasets? In this lecture, I will briefly review various data-driven attempts to understand population coding and focus on the applications of the maximum entropy framework, which provides interesting links between neuroscience, machine learning, and statistical physics.
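For readers who want a concrete handle on the maximum entropy framework the abstract refers to: for binary spike words s ∈ {0,1}^N, the pairwise model is P(s) ∝ exp(Σᵢ hᵢsᵢ + Σ_{i<j} J_ij s_i s_j), with the fields h and couplings J chosen so that the model reproduces the measured firing rates and pairwise correlations. The sketch below is our own illustration, not material from the lecture; the function name and synthetic data are invented, and the exact-enumeration fit is only tractable for small N.

```python
# Minimal sketch of a pairwise maximum entropy (Ising) model for binary
# spike words, fit by gradient ascent with exact enumeration (small N only).
import itertools
import numpy as np

def fit_pairwise_maxent(data, n_iter=2000, lr=0.1):
    """data: (n_samples, N) array of 0/1 spike words; returns fields h, couplings J."""
    n_samples, N = data.shape
    # Empirical statistics the model must reproduce: <s_i> and <s_i s_j>.
    mean_emp = data.mean(axis=0)
    corr_emp = data.T @ data / n_samples
    h = np.zeros(N)
    J = np.zeros((N, N))          # upper-triangular couplings only
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    for _ in range(n_iter):
        # Model distribution P(s) ~ exp(h.s + s.J.s) over all 2^N states.
        energies = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(energies - energies.max())
        p /= p.sum()
        mean_mod = p @ states
        corr_mod = states.T @ (states * p[:, None])
        # Gradient of the log-likelihood: empirical minus model statistics.
        h += lr * (mean_emp - mean_mod)
        J += lr * np.triu(corr_emp - corr_mod, k=1)
    return h, J

rng = np.random.default_rng(0)
spikes = (rng.random((5000, 5)) < 0.2).astype(float)  # synthetic spike words
h, J = fit_pairwise_maxent(spikes)
```

For independent synthetic data as above, the fitted couplings J should come out near zero and the fields h near the log-odds of firing; structure in J only appears when the recorded population is genuinely correlated.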
13.10.2014, lecture: 16:00 - 17:00, talk: 17:00 - 18:00
Simon Thorpe | CERCO Toulouse
Mechanisms for ultra-rapid visual processing
For over 25 years, I have been arguing that the speed of processing in the human visual system puts major constraints on the underlying computational mechanisms. Behavioral and electrophysiological data show that at least some biologically important stimuli (including animals and faces) can be identified and localized in just 100 ms. This is particularly impressive given the limitations of the underlying neural hardware, and suggests that a great deal of processing can be achieved with a single feed-forward sweep through the many successive stages of the visual system. For a long time, it was thought that the ability of the human visual system to process complex natural scenes would remain forever beyond the capacity of artificial vision systems, but it is now clear that the state of the art in computer vision is starting to catch up. Interestingly, the best artificial systems use processing architectures built on simple feed-forward mechanisms that look remarkably similar to those used in the primate visual system. However, the procedures used for training these artificial systems are very different from the mechanisms used in biological vision. In this talk I discuss the possibility that spike-based processing and learning mechanisms may allow future models to combine the remarkable efficiency of the latest computer vision systems with the flexible and rapid learning seen in human scene processing.
How can our brains store visual (and auditory) memories that last a lifetime?
People in their 50s and 60s can recognize images and sounds that they have not re-experienced for several decades. How does the brain manage to keep these memory traces intact? I will describe some experimental and modeling work that supports the radical suggestion that these extremely long-term memories involve the formation of highly selective neurons, tuned to stimulus patterns that were presented repeatedly at some time in the past - effectively "grandmother cells". Such highly selective neurons can be produced using a simple learning rule based on spike-timing-dependent plasticity (STDP) that leads neurons to become selective to spatiotemporal patterns of input spikes that occur repeatedly. Even more radical is the suggestion that the neocortex may contain a substantial proportion of totally silent neurons - a sort of neocortical dark matter. Such neurons will effectively remain silent until the original trigger stimulus is shown again. If STDP-like rules apply, the absence of firing would mean that the sets of synaptic weights could remain intact virtually indefinitely.
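The STDP-based selectivity mechanism described in the second abstract can be sketched in a few lines. The rule below is the standard pair-based exponential STDP window; the time constants, amplitudes, and toy training loop are illustrative choices, not parameters from Thorpe's models.

```python
# Pair-based STDP: synapses active shortly before a postsynaptic spike are
# strengthened (LTP), those active shortly after are weakened (LTD).
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)    # pre before post: LTP
    return -a_minus * np.exp(dt / tau_minus)      # post before pre: LTD

rng = np.random.default_rng(1)
n_syn = 100
t_pre = rng.uniform(0, 100, size=n_syn)  # one fixed spatiotemporal input pattern (ms)
t_post = 50.0                            # postsynaptic spike time (ms)
w = np.full(n_syn, 0.5)
for _ in range(200):                     # repeated presentations of the same pattern
    w += np.array([stdp_dw(t_post - tp) for tp in t_pre])
    np.clip(w, 0.0, 1.0, out=w)
# Synapses whose spikes arrive just before t_post saturate high; the rest are
# driven down, so the neuron becomes selective to this repeated pattern.
```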
20.10.2014, 16:00 - 18:00
Benedikt Grothe | LMU München
Dissecting neuronal function: an evolutionary perspective on circuits underlying spatial hearing
Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present-day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as on the coding strategies of avian and mammalian model systems. We will argue that the mammalian and avian phylogeny of spatial hearing is characterized by convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals will be highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we will combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs.
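As a concrete illustration of one of the binaural cues the abstract mentions: the interaural time difference (ITD) can be estimated by cross-correlating the signals at the two ears, the classical Jeffress-style coincidence reading. The sketch below is our own and deliberately simple; note that the talk's point is precisely that mammals appear to rely on an inhibition-shaped strategy rather than a pure coincidence array.

```python
# Estimating the interaural time difference (ITD) of a pure tone by
# cross-correlation. All signal parameters are illustrative.
import numpy as np

fs = 48000                                # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
itd_true = 300e-6                         # true delay: 300 microseconds
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - itd_true))  # right ear lags the left

lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(right, left, mode='full')
itd_est = lags[np.argmax(xcorr)] / fs     # lag of maximal coincidence
print(f"estimated ITD: {itd_est * 1e6:.0f} microseconds")  # ~300, up to sample resolution
```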
3.11.2014, lecture: 11:00 - 12:00, talk: 17:00 - 18:00
Tiago Branco | Cambridge University
Lecture: Single cell synaptic integration in dendrites
Talk: Synaptic integration in a circuit controlling feeding and innate escape behaviour
26.01.2015, lecture: 16:00 - 17:00, talk: 17:00 - 18:00
Peter Latham | Gatsby Computational Neuroscience Unit, UCL
Correlations and computation
Neurons in the brain are correlated: they tend to fire together far more often than would be expected by chance. It is well known that these correlations can have a large effect on both information transmission and computing, but it is not known what that effect is. That's partly because correlations often become important only for large populations, where experiments are notoriously difficult. But it's also because theoretical studies have yielded contradictory results. In particular, the answer to the question "how do correlations affect information?" is the rather unsatisfying "it depends on the correlations". Here we turn that question around, and ask "how does information affect correlations?". Phrased that way, there is a unique, well-defined answer, one that tells us exactly what correlations look like in large populations. Moreover, the answer gives us hints as to why those correlations are useful for almost noise-free computations.
Learning and Bayes: probabilities are important at the nanometer scale
Organisms face a hard problem: based on noisy sensory input, they must set a large number of synaptic weights. However, they do not receive enough information in their lifetime to learn the correct, or optimal, weights (i.e., the weights that ensure the circuit, system, and ultimately organism function as effectively as possible). Instead, the best they could possibly do is compute a probability distribution over the optimal weights. Based on this observation, we hypothesise that synapses represent probability distributions over weights - in contrast to the widely held belief that they represent point estimates. From this hypothesis, we derive learning rules for supervised, reinforcement and unsupervised learning. This introduces a new feature: the more uncertain the brain is about the optimal weight of a synapse, the more plastic that synapse is. This is consistent with current data, and introduces several testable predictions.
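The "uncertain synapses are plastic synapses" idea in the second abstract can be illustrated with a Kalman-filter-style toy model: a synapse tracks a posterior mean and variance over its optimal weight, and its effective learning rate is set by that variance. This sketch is our own illustration under simple Gaussian assumptions, not the learning rules derived in the talk.

```python
# A "Bayesian synapse" toy model: the synapse infers its optimal weight from
# noisy feedback, and its learning rate scales with its current uncertainty.
import numpy as np

rng = np.random.default_rng(2)
w_true = 0.8          # unknown optimal weight the synapse must infer
mu, var = 0.0, 1.0    # Gaussian prior over the weight
noise_var = 0.5       # variance of the noisy teaching signal

for step in range(100):
    obs = w_true + rng.normal(0, np.sqrt(noise_var))  # noisy feedback sample
    gain = var / (var + noise_var)   # learning rate = relative uncertainty
    mu += gain * (obs - mu)          # large updates early, small updates late
    var = (1 - gain) * var           # uncertainty shrinks with each datum
    if step in (0, 9, 99):
        print(f"step {step:3d}: mean={mu:.3f}, sd={np.sqrt(var):.3f}")
```

The testable feature mentioned in the abstract falls out directly: as the posterior variance shrinks, the gain shrinks with it, so a well-constrained synapse updates far less than an uncertain one given the same error signal.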