Location & dates: EMBL Heidelberg, Germany, 6 - 8 Dec 2018
Deadlines: Registration closed. Abstract submission closed.


In-Conference Workshops

In-conference workshops on Friday 7 December 2018

As shown in the conference programme, two workshop sessions will take place on Friday 7 December at the times below. Workshop assignments are based on the results of the survey completed by conference participants; please check the overview schedule, which will be shared with you and includes the list of participants for each workshop. If you did not complete the survey, you are welcome to join any workshop that still has space available, as indicated on the overview schedule. The schedule is also available on the conference mobile app.

Session 1: 13:30 - 15:00

Session 2: 15:30 - 17:00

Please find below the details for each workshop.

For each workshop, the abstract, learning goals and prerequisites are listed in turn.
ImageJ2
Abstract: In this workshop, we will learn how to create scripts using ImageJ2 functionality for efficient image processing. We will briefly introduce the main foundational parts of ImageJ2: SciJava, ImgLib2 and Ops. Using (1) SciJava script parameters, we can build simple user interfaces to create interactive commands; (2) SciJava services provide a way to interact with the application context, e.g. to open or save files, to display log and status messages, and to run commands and ops; (3) the image processing library ImgLib2 provides the currency for all n-dimensional processing within ImageJ; finally, (4) ImageJ Ops provide an extensible layer on top of ImgLib2 aimed at making image processing operations easier to call. With some example scripts, we will illustrate how to create an image processing workflow built entirely on ImageJ2 data structures.
Learn how to:
- use script parameters to ask for inputs and define outputs
- use SciJava services in scripts to interact with ImageJ (opening/saving images, logging, ops, etc.)
- use ImageJ Ops to do image processing
- use the ImgLib2 ROI library to define and operate on regions of interest
- call other commands/scripts from a script, handling inputs and outputs
- mix and match ImageJ1 and ImageJ2 concepts
Prerequisites:
- Familiarity with ImageJ/Fiji usage and macros.
- Up-to-date installation of Fiji on your local machine (http://fiji.sc/).
- Java knowledge and experience with IDEs such as Eclipse is helpful but not required.
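Script parameters, the first item above, are declared in comment-style `#@` headers that Fiji's Script Editor parses to build an input dialog and to harvest declared outputs. A minimal sketch in Python script form; the try/except fallback is only there so the snippet also runs as plain Python outside Fiji, where no parameter harvesting happens:

```python
#@ Integer (label="Smoothing radius", value=3) radius
#@output String message

# Inside Fiji, SciJava parses the "#@" lines above, shows a dialog for
# `radius`, injects the chosen value as a variable, and collects `message`
# as the script's declared output.
try:
    radius  # injected by SciJava when run in Fiji's Script Editor
except NameError:
    radius = 3  # fallback so this sketch also runs as plain Python

message = "would smooth with radius %d" % radius
print(message)
```

When run inside Fiji, the `#@output` value is displayed or passed on to downstream commands automatically; no explicit `return` is needed.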
CellProfiler
Abstract: Using CellProfiler to segment real-world datasets. In this workshop, we will take real data from a 5-channel Cell Painting assay and use the various channels to identify organelles and cell compartments. We will also explore how to create a pipeline robust enough to perform well even in less-than-ideal conditions, such as empty wells, wells with fluorescent "junk", and cells treated with drugs that significantly change cell size and/or morphology. We will add a number of measurements, such that the downstream data could eventually be used for morphological profiling.
Learning goals: You will learn the basics of how to configure a CellProfiler pipeline, what the major classes of module are, and what sorts of things you can measure about your objects in CellProfiler. You will learn how to apply previously calculated illumination correction functions, allowing you to improve the quality of your data. You will also learn tips and important settings for dealing with the unfortunate real-world messiness of biological experiments, including debris and "ugly" cells.
Prerequisites: Solid understanding of the principles of classical image analysis. Previous use of CellProfiler is not required.
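Retrospective illumination correction, mentioned in the learning goals above, typically divides each raw image by a smooth illumination function estimated from many images. A minimal numpy sketch of the idea (the mean-based estimate and toy array shapes are illustrative assumptions, not CellProfiler's exact implementation):

```python
import numpy as np

def estimate_illumination(images):
    """Estimate an illumination function as the per-pixel mean across
    many images, normalised so that its overall mean is 1."""
    illum = np.mean(images, axis=0)
    return illum / illum.mean()

def correct(image, illum):
    """Divide a raw image by the illumination function."""
    return image / illum

# Toy data: 10 images of true intensity 100, with the right half
# appearing twice as bright due to uneven illumination.
imgs = np.full((10, 4, 4), 100.0)
imgs[:, :, 2:] *= 2.0

illum = estimate_illumination(imgs)
corrected = correct(imgs[0], illum)
# After correction the image is flat again (150 everywhere),
# removing the left/right intensity gradient.
```

In practice the estimate is also smoothed (e.g. with a median or Gaussian filter) so that real cellular structure does not leak into the illumination function.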
KNIME Image Processing
Abstract: KNIME Analytics Platform is an easy-to-use and comprehensive open-source data integration, analysis, and exploration platform designed to handle large amounts of heterogeneous data. In this workshop we will demonstrate KNIME Image Processing, which extends KNIME Analytics Platform's capabilities to tackle (bio)image analysis challenges. Basic concepts of the platform will be introduced while building a pipeline for segmentation of fluorescence images, extraction of (intensity) measurements and visualization of outputs. We will also showcase some more advanced topics such as deep learning for bioimage analysis.
Learning goals: You will learn how to get started with developing simple (bio)image analysis pipelines with KNIME Analytics Platform and KNIME Image Processing (reading images, preprocessing, segmentation, (intensity) measurements and their visualization). You will also get an idea of what is possible with KNIME Image Processing beyond simple segmentation.
Prerequisites: No previous experience with KNIME Analytics Platform required.
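The pipeline stages named above (preprocess, segment, measure) are generic, and the same chain can be sketched outside KNIME in a few lines of Python, with scipy standing in for KNIME nodes. The toy image, threshold value and filter settings below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Toy fluorescence image: dark background with two bright blobs.
img = np.zeros((8, 8))
img[1:3, 1:3] = 100.0   # blob 1
img[5:7, 4:7] = 200.0   # blob 2

# Preprocess: light Gaussian smoothing to suppress pixel noise.
smoothed = ndimage.gaussian_filter(img, sigma=0.5)

# Segment: global threshold, then connected-component labelling.
mask = smoothed > 50.0
labels, n = ndimage.label(mask)

# Measure: mean intensity of the original image per segmented object.
means = ndimage.mean(img, labels=labels, index=np.arange(1, n + 1))
# Two objects are found, with mean intensities 100 and 200.
```

In KNIME each of these steps would be a node in the workflow graph (image reader, filter, thresholder, labelling, feature calculation), wired together interactively instead of written as code.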
OMERO
Abstract: The Open Microscopy Environment (OME) is an open-source software project that develops tools enabling visualization, analysis, sharing and publication of biological image data. We will present OMERO, our software platform for image data management and analysis. In this workshop we will demonstrate several workflows, including data organisation, annotation, searching, image visualization and figure creation. We will show how to transition from manual data processing to automated processing workflows using applications built against the OMERO API, and how to integrate a variety of processing tools, such as ImageJ and CellProfiler, with OMERO. Participants are encouraged to bring laptops so they can try OMERO for themselves.
Prerequisites: This workshop is aimed at all researchers and research students dealing with microscopic image data who wish to learn what an image data storage solution can look like. Prior knowledge of microscopy, scripting and data analysis is not required. Any student or researcher dealing with scientific images is more than welcome to join this workshop.
Imaris XT + SRRF
Abstract:
PART 1: Imaris XT. We will give a general introduction to the Imaris software suite with an emphasis on ImarisXT, a module that extends the features of Imaris and integrates custom modules into the workflow. We will present Bitplane's ImarisOpen portal, where developers can share contributions. We will present several use cases of ImarisXT in the context of image reconstruction and analysis. In particular, we will highlight ImarisXT's capability to complement Imaris with highly specialized custom functionality, and its wide range of supported languages, which require minimal adaptation from a software developer's favourite prototyping or production language.
PART 2: A practical introduction to SRRF and super-resolution image analysis. In this part of the workshop we will introduce the SRRF method, a purely analytical super-resolution approach capable of extracting sub-diffraction information from images acquired on most conventional microscopes. We will introduce the basic principles of SRRF and teach participants how to run the open-source algorithm in ImageJ. As a follow-up, we will also show how the SQUIRREL algorithm can be used to validate the quality of super-resolution images and estimate resolution.
Learning goals: At the end of this workshop, participants will have learned about the possibilities offered by the ImarisXT software modules (from an end-user and developer perspective), how SRRF can be used to extract sub-diffraction information from acquired images, and how SQUIRREL can evaluate the quality of those computed images.
Prerequisites: This workshop does not require any prior knowledge.
Usable Deep Learning Tools
Abstract: Deep learning promises to change the face of automated image analysis. Today, virtually all proposed deep learning methods and workflows require special computational skills to be applied to one's own microscopy data. Only rarely do we find deep learning tools ready to be used by non-experts. In this workshop we will discuss why this is the case and showcase a few examples of deep learning tools that make a notable effort to be user-friendly, easily deployable, and immediately useful.

Learning goals: Participants will learn what makes the deployment of deep learning solutions difficult for developers and will see some examples of bio-image analysis tools that aim to be easy to use and easy to install. The list of tools we will showcase will likely include:
- Cryo-CARE
- Deep Ilastik
- StarDist

Prerequisites: No prior knowledge is required. Some knowledge of scripting in Python or Fiji might help for parts of this workshop but is not strictly required.
New Big Data Plugins for ImageJ
Abstract: We will introduce several related bleeding-edge Fiji/ImageJ plugins. These are not part of core Fiji yet and are in parts still under active development. The unifying theme is the ability to work with potentially large datasets (many terabytes), for example those acquired by light-sheet microscopes. In particular we will cover:
- BigStitcher for fully automatic or interactive alignment, fusion, and deconvolution of multi-tile and multi-angle image datasets.
- LabKit for trainable image segmentation and labeling, well suited for large 3D data.
- Mastodon for automatic, semi-automatic, or manual cell tracking, with the ability to interactively handle millions of annotations.
- SciView for volumetric and virtual-reality visualization of large image data, meshes, and annotations.
These plugins share a common software stack (ImgLib2, BigDataViewer) and file formats. We will illustrate how they play well together.

Learning goals: You will gain a general overview of the tool landscape for handling big image data in Fiji, and a basic understanding of the capabilities of the above-mentioned plugins and the underlying methods and technology. You should be ready to start using them on your own data.

The workshop will mostly focus on concepts and the plugin UIs.
Prerequisites: We assume only general knowledge of Fiji usage. Bonus points if you have used BigDataViewer, Multiview Reconstruction, TrackMate, MaMuT, or any of the plugins mentioned above. Programming experience helps you get the most out of the deepest technical bits, but is not a requirement for participation.
Image Analysis in Notebooks
Abstract: In this workshop, we will learn how to create Jupyter notebooks: shareable documents that contain live code, equations, visualizations and narrative text. We will focus on notebooks that use ImageJ functionality to perform image processing. We will also demonstrate techniques for calling ImageJ from Python code, so that it can be combined with libraries such as numpy, scikit-image and others.

Learn how to:
- use conda to install packages and configure environments
- create Jupyter notebooks that illustrate workflows and highlight results
- develop Java-based notebooks that utilize ImageJ2 and other Java libraries
- develop Python-based notebooks that combine ImageJ2 functionality with other tools including numpy and scikit-image

Prerequisites:
- Basic familiarity with ImageJ2 concepts (see Workshop 1 on ImageJ2)
- Up-to-date installation of Anaconda or Miniconda (https://conda.io/)
- Familiarity with Jupyter Notebook is helpful but not required
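The conda prerequisite above boils down to a reproducible environment, which can be captured in an environment file. A minimal sketch; the environment name and package list are illustrative assumptions, not an official file from the organisers:

```yaml
# environment.yml -- hypothetical environment for this workshop
name: imagej-notebooks
channels:
  - conda-forge
dependencies:
  - python=3
  - jupyter
  - numpy
  - scikit-image
```

Create and activate it with `conda env create -f environment.yml` followed by `conda activate imagej-notebooks`, then launch notebooks with `jupyter notebook`.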
Ilastik
Abstract: The workshop will introduce the workflows of ilastik, a simple tool for interactive machine-learning-based segmentation and tracking. After a quick intro, we will explore the lesser-known parts of the program and workflows more advanced than pixel classification, and we will use ilastik in combination with other tools such as Fiji, KNIME or CellProfiler.
Learning goals: You will become familiar with the more advanced segmentation options of ilastik and with its tracking workflow. We will also cover ways in which ilastik can be used as one module in more complex analysis pipelines.
Prerequisites: Familiarity with the basic use of ilastik is desirable. A quick intro will be given, but there will be no hands-on session for the very basics.
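Pixel classification, the workflow mentioned above, trains a classifier on per-pixel features from a few user-drawn labels and then predicts a class for every pixel. A toy numpy-only sketch of the idea, using a nearest-class-mean classifier on raw intensity as the single feature (ilastik itself uses a random forest on a bank of filter features; everything below is illustrative):

```python
import numpy as np

# Toy image: bright foreground blob on a dark, slightly noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, size=(16, 16))
img[5:11, 5:11] += 50.0

# Sparse "brush strokes": a few labelled pixels per class, as a user
# would paint them in ilastik (background vs. foreground).
bg_samples = img[0, :5]       # five background pixels
fg_samples = img[7, 6:10]     # four foreground pixels

# Train: one intensity mean per class.
class_means = np.array([bg_samples.mean(), fg_samples.mean()])

# Predict: assign every pixel to the class with the closest mean.
dist = np.abs(img[..., None] - class_means)  # shape (16, 16, 2)
segmentation = dist.argmin(axis=-1)          # 0 = background, 1 = foreground
# The bright blob comes out as foreground, the rest as background.
```

The interactive part of ilastik is exactly this loop: paint a few labels, inspect the prediction, and add more labels where the classifier is still wrong.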