Call for Abstracts: NIPS Workshop on Statistical Methods for Understanding Neural Systems

NIPS WORKSHOP 2015 CALL FOR ABSTRACTS
Statistical Methods for Understanding Neural Systems
Friday, December 11th, 2015
Montreal, Canada

--------
Organizers: Allie Fletcher, Jakob Macke, Ryan Adams, Jascha Sohl-Dickstein
--------

Overview:
Recent advances in neural recording technologies, including calcium imaging and high-density electrode arrays, have made it possible to simultaneously record neural activity from large populations of neurons for extended periods of time. These developments promise unprecedented insights into the collective dynamics of neural populations and thereby the underpinnings of brain-like computation. However, this new large-scale regime for neural data brings significant methodological challenges. This workshop seeks to explore the statistical methods and theoretical tools that will be necessary to study these data, build new models of neural dynamics, and increase our understanding of the underlying computation. We have invited researchers across a range of disciplines in statistics, applied physics, machine learning, and both theoretical and experimental neuroscience, with the goal of fostering interdisciplinary insights. We hope that active discussions among these groups can set in motion new collaborations and facilitate future breakthroughs on fundamental research problems.

Call for Papers
We invite high quality submissions of extended abstracts on topics including, but not limited to, the following fundamental questions:

How can we deal with incomplete data in a principled manner?
In most experimental settings, even advanced neural recording methods can only sample a small fraction of all neurons that might be involved in a task, and the observations are often indirect and noisy. As a result, many recordings are from neurons that receive inputs from neurons that are not themselves directly observed, at least not over the same time period. How can we deal with this 'missing data' problem in a principled manner? How does this sparsity of recordings influence what we can and cannot infer about neural dynamics and mechanisms?

How can we incorporate existing models of neural dynamics into neural data analysis?
Theoretical neuroscientists have intensely studied neural population dynamics for decades, resulting in a plethora of models of neural population dynamics. However, most analysis methods for neural data do not directly incorporate any models of neural dynamics, but rather build on generic methods for dimensionality reduction or time-series modelling. How can we incorporate existing models of neural dynamics? Conversely, how can we design neural data analysis methods such that they explicitly constrain models of neural dynamics?

What synergies are there between analyzing biological and artificial neural systems?
The rise of ‘deep learning’ methods has shown that hard computational problems can be solved by machine learning algorithms that are built by cascading many nonlinear units. Although artificial neural systems are fully observable, it has proven challenging to provide a theoretical understanding of how they solve computational problems and which features of a neural network are critical for its performance. While such ‘deep networks’ differ from biological neural networks in many ways, they provide an interesting testing ground for evaluating strategies for understanding neural processing systems. Are there synergies between methods for analyzing biological and artificial neural systems? Has the resurgence of deep learning resulted in new hypotheses or strategies for trying to understand biological neural networks?

Confirmed Speakers:
Matthias Bethge
Mitya Chklovskii
John Cunningham
Surya Ganguli
Neil Lawrence
Guillermo Sapiro
Tatyana Sharpee
Richard Zemel

Workshop Website: https://users.soe.ucsc.edu/~afletcher/neuralsysnips.html
Email: smnips2015@rctn.org

New Podcast: Talking Machines

We are happy to announce the launch of a new podcast (hosted by Ryan Adams and Katherine Gorman) on machine learning and related topics, called Talking Machines. The first episode is up, with a taste of what the first season will bring. We have really interesting interviews with many great machine learning researchers, so definitely check it out. You can find it on iTunes and SoundCloud.

Harvard CRCS Call for Fellows and Visiting Scholars

The Harvard Center for Research on Computation and Society (CRCS) solicits applications for its Postdoctoral Fellows and Visiting Scholars Programs for the 2015-2016 academic year. Postdoctoral Fellows receive an annual salary of approximately $63,000 for one year (with the possibility of renewal) to engage in a program of original research, and are provided with additional funds for travel and research support. Visiting Scholars ordinarily come with their own support, but CRCS can occasionally offer supplemental funding.

We seek researchers who wish to interact with both computer scientists and colleagues from other disciplines, and have a demonstrated interest in connecting their research agenda with societal issues. We are particularly interested in candidates with interests in:

· Economics and Computer Science
· Health Care Informatics
· Privacy & Security
· Technology & Accessibility
· Automation & Reproducibility of Data Analysis

The ideal researcher will have a technical background in an area related to computer science, and a desire to creatively use those skills to address problems of societal importance. CRCS is a highly collaborative environment and we expect Fellows and Scholars to engage with researchers both inside and outside of computer science.

Examples of projects may be found at http://crcs.seas.harvard.edu/research

There are numerous opportunities for CRCS Fellows and Visiting Scholars to engage with Harvard faculty, students, and scholars in computer science and other disciplines, including the bi-weekly CRCS Lunch Seminar series, various informal CRCS lunches, and other research group meetings. Additionally, CRCS has close ties with Harvard’s Berkman Center for Internet and Society, and CRCS Fellows attend the weekly Berkman Fellows' meeting.

Harvard University is an Affirmative Action/Equal Opportunity Employer. We are particularly interested in attracting women and underrepresented groups to participate in CRCS. For further information about the Center and its activities, see http://crcs.seas.harvard.edu/.

Application Procedure

A cover letter, CV, research statement, copies of up to three research papers, and up to three letters of reference should be sent to:

Postdoctoral Fellows and Visiting Scholars Programs
Center for Research on Computation and Society
crcs-apply@seas.harvard.edu

The cover letter should describe what appeals to you about joining CRCS and how you would connect with the existing community. Please also make clear in your cover letter whether you are applying for the Postdoctoral Fellow or Visiting Scholar position, as well as whether you come with your own funding.

Referees for Postdoctoral Fellow applicants should send their letters directly; Visiting Scholar applicants may provide a list of references rather than having letters sent.

The application deadline for full consideration is Monday, December 1, 2014.

Bayesian Optimization Workshop at NIPS

At NIPS this year there will be a workshop on Bayesian Optimization in Academia and Industry, on Friday, December 12, 2014. The announcement is below. We invite abstracts, due October 23, 2014.

Bayesian optimization has emerged as an exciting subfield of machine learning that is concerned with the global optimization of noisy, black-box functions using probabilistic methods. Systems implementing Bayesian optimization techniques have been successfully used to solve difficult problems in a diverse set of applications. There have been many recent advances in the methodologies and theory underpinning Bayesian optimization that have extended the framework to new applications as well as provided greater insights into the behaviour of these algorithms. Bayesian optimization is now increasingly being used in industrial settings, providing new and interesting challenges that require new algorithms and theoretical insights.
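
For readers less familiar with the technique, here is a minimal, purely illustrative sketch of a Bayesian optimization loop: a Gaussian process surrogate (via scikit-learn) paired with an expected-improvement acquisition, applied to a noisy one-dimensional objective. The objective function, candidate grid, and parameter values below are hypothetical stand-ins for illustration, not anything prescribed by the workshop.

# Minimal Bayesian optimization sketch: GP surrogate + expected improvement,
# minimizing a noisy black-box function. Illustrative assumptions throughout.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical noisy black-box function we can only evaluate pointwise.
    return np.sin(3.0 * x) + 0.1 * x**2 + 0.1 * rng.standard_normal(x.shape)

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    # EI for minimization: expected gain over the best value observed so far.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# A few random initial evaluations of the expensive function.
X = rng.uniform(-3.0, 3.0, size=(5, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2, normalize_y=True)
candidates = np.linspace(-3.0, 3.0, 500).reshape(-1, 1)

for _ in range(20):                        # sequential optimization loop
    gp.fit(X, y)                           # refit the probabilistic surrogate
    ei = expected_improvement(candidates, gp, y.min())
    x_next = candidates[[np.argmax(ei)]]   # query the most promising point
    y_next = objective(x_next).ravel()
    X = np.vstack([X, x_next])
    y = np.concatenate([y, y_next])

print("best observed value:", y.min(), "at x =", X[np.argmin(y)])

The acquisition step is where the probabilistic model earns its keep: it trades off exploiting regions the surrogate predicts to be good against exploring regions where the surrogate is uncertain, which is what makes the approach sample-efficient for expensive black-box objectives.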

At last year’s NIPS workshop on Bayesian optimization the focus was on the intersection of “Theory and Practice”. The workshop this year will follow this trend by again looking at theoretical contributions, but also by focusing on the practical side of Bayesian optimization in industry. The goal of this workshop is not only to bring together both practical and theoretical research knowledge from academia, but also to facilitate cross-fertilization with industry. Specifically, we would like to carefully examine the types of problems where Bayesian optimization works well in industrial settings, but also the types of situations where additional performance is needed. The key questions we will discuss are: How can Bayesian optimization be scaled to long time horizons and many observations? How can we tackle high-dimensional data? How can Bayesian optimization be made to work in massive, distributed systems? What kinds of structural assumptions are we able to make? And finally, what can we say about these questions both empirically and theoretically?

The target audience for this workshop consists of both industrial and academic practitioners of Bayesian optimization as well as researchers working on theoretical advances in probabilistic global optimization. To this end we have invited many industrial users of Bayesian optimization to attend and speak at the workshop. We expect this meeting of industrial and academic perspectives to lead to a significant interchange of ideas and a clearer understanding of the challenges and successes of Bayesian optimization as a whole.

A further goal is to encourage collaboration between the diverse set of researchers involved in Bayesian optimization. This includes not only interchange between industrial and academic researchers, but also between the many different sub-fields of machine learning which make use of Bayesian optimization. We are also reaching out to the wider global optimization and Bayesian inference communities for involvement.