
Professor Jens Rittscher

Research Area: Bioinformatics & Stats (inc. Modelling and Computational Biology)
Technology Exchange: Bioinformatics, Computational biology, Drug discovery, In vivo imaging, Microscopy (Confocal), Microscopy (EM) and Microscopy (Video)
Scientific Themes: Physiology, Cellular & Molecular Biology and Cancer Biology
Keywords: Imaging, Image Analysis

The aim of his research is to enhance our understanding of complex biological processes through the analysis of image data acquired at the microscopic scale. Jens Rittscher develops algorithms and methods that enable the quantification of a broad range of phenotypical alterations, the precise localisation of signalling events, and the ability to correlate such events in the context of the biological specimen. This work can be structured into three major areas.

This algorithm development needs to be guided by a firm understanding of the broader application context. Sophisticated algorithms are now necessary to image increasingly complex model systems over extended periods of time. In order to understand the role of certain genetic modifiers, these need to be related to the image-derived measurements and features.

Jens Rittscher was appointed University Research Lecturer in 2013 and holds the first joint academic appointment between the Institute of Biomedical Engineering and the Nuffield Department of Medicine. In particular, his work supports the Target Discovery Institute and the Ludwig Institute for Cancer Research. In addition to his research in the field of biomedical imaging, Jens Rittscher has worked extensively on video surveillance, the automatic annotation of video, and the analysis of volumetric seismic data.

Before coming to Oxford in 2013, Jens Rittscher led the Computer Vision Laboratory at GE Global Research in Niskayuna, NY, USA. He joined GE in 2001 after completing his PhD at the Department of Engineering Science, University of Oxford, where he was part of the Visual Dynamics Group led by Andrew Blake. He received his Diploma in Mathematics and Computer Science from the University of Bonn, Germany. Jens Rittscher has also held a position as an adjunct assistant professor at Rensselaer Polytechnic Institute. He is a member of the IEEE and an elected member of the IEEE SPS Technical Committee on Bio Imaging and Signal Processing.

Please visit the IBME page for additional details.

Name | Department | Institution | Country
Daniel Ebner | Target Discovery Institute | Oxford University, NDM Research Building | United Kingdom
Professor Xin Lu | Oxford Ludwig Institute | Oxford University, Old Road Campus Research Building | United Kingdom
Professor Benedikt M Kessler | Target Discovery Institute | Oxford University, NDM Research Building | United Kingdom
Prof Alison Noble FREng | (MPLS) | Oxford University, |
Professor Sebastian Nijman | Oxford Ludwig Institute | Oxford University, Old Road Campus Research Building | United Kingdom
Dr Olaf Ansorge | (NDCN) Division of Clinical Neurology | Oxford University, |
Nketia TA, Sailem H, Rohde G, Machiraju R, Rittscher J. 2017. Analysis of live cell images: Methods, tools and opportunities. Methods, 115 pp. 65-79. | Show Abstract | Read more

Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The opportunities offered by recent advances in machine learning, especially deep learning, and computer vision are also discussed. The review includes an overview of the available software packages and toolkits.
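
To make the segmentation step discussed above concrete, the sketch below shows a generic nuclei segmentation pipeline (Otsu thresholding, distance transform, marker-controlled watershed) using scikit-image. It is an illustrative baseline under assumed parameter values, not a method from the review; the sample image and the min_distance setting are placeholders.

```python
# Minimal nuclei segmentation sketch: Otsu threshold, distance transform,
# and marker-controlled watershed with scikit-image.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, feature, measure, segmentation

image = data.human_mitosis()                      # sample fluorescence nuclei image
mask = image > filters.threshold_otsu(image)      # foreground/background split
distance = ndi.distance_transform_edt(mask)       # distance to background

# Seed one marker per local maximum of the distance map (ideally one per nucleus)
coords = feature.peak_local_max(distance, min_distance=7, labels=mask)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = segmentation.watershed(-distance, markers, mask=mask)
props = measure.regionprops_table(labels, properties=("label", "area", "eccentricity"))
print(len(props["label"]), "nuclei segmented, mean area", props["area"].mean())
```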

Santamaria-Pang A, Huang Y, Pang Z, Qing L, Rittscher J. 2014. Epithelial cell segmentation via shape ranking Lecture Notes in Computational Vision and Biomechanics, 14 pp. 315-338. | Show Abstract | Read more

© Springer International Publishing Switzerland 2014. We present a robust and high-throughput computational method for cell segmentation using multiplexed immunohistopathology images. The major challenges in obtaining an accurate cell segmentation from tissue samples are due to (i) complex cell and tissue morphology, (ii) different sources of variability including non-homogeneous staining and microscope-specific noise, and (iii) tissue quality. Here we present a fast method that uses cell shape and scale information via unsupervised machine learning to enhance and improve general-purpose segmentation methods. The proposed method is well suited for tissue cytology because it captures the morphological and shape heterogeneity in different cell populations. We discuss our segmentation framework for analysing approximately one hundred images of lung and colon cancer and restrict our analysis to epithelial cells.

Gerdes MJ, Sevinsky CJ, Sood A, Adak S, Bello MO, Bordwell A, Can A, Corwin A, Dinn S, Filkins RJ et al. 2013. Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue. Proc Natl Acad Sci U S A, 110 (29), pp. 11982-11987. | Show Abstract | Read more

Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveals extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics.
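
As a rough, hedged illustration of aligning sequential staining rounds, the sketch below registers two images of the shared nuclear counterstain channel with phase cross-correlation from scikit-image. The published pipeline aligns fiducial points derived from the counterstain; phase correlation and the simulated stage offset here are assumptions for illustration only.

```python
# Hedged sketch: translation-only registration of two staining rounds using the
# shared nuclear counterstain channel.
import numpy as np
from scipy import ndimage as ndi
from skimage import data
from skimage.registration import phase_cross_correlation

round1 = data.human_mitosis().astype(float)        # counterstain channel, round 1
round2 = ndi.shift(round1, shift=(5.0, -3.0))       # simulate a stage offset in round 2

shift, error, _ = phase_cross_correlation(round1, round2, upsample_factor=10)
aligned = ndi.shift(round2, shift=shift)             # bring round 2 back onto round 1
print("estimated shift (rows, cols):", shift)
```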

Santamaria-Pang A, Huangy Y, Rittscher J. 2013. Cell segmentation and classification via unsupervised shape ranking Proceedings - International Symposium on Biomedical Imaging, pp. 406-409. | Show Abstract | Read more

As histology patterns vary depending on tissue type, it is typically necessary to adapt and optimize segmentation algorithms to these tissue-type-specific applications. Here we present an unsupervised method that utilizes cell shape cues to achieve this task-specific optimization by introducing a shape ranking function. The proposed algorithm is part of our Layers™ toolkit for image and data analysis for multiplexed immunohistopathology images. To the best of our knowledge, this is the first time that this type of methodology has been proposed for segmentation and ranking in cell tissue samples. Our new cell ranking scheme takes into account both shape and scale information and provides information about the quality of the segmentation. First, we introduce a cell-shape descriptor that can effectively discriminate cell-type morphology. Second, we formulate hierarchical segmentation as a dynamic optimization problem, where cells are subdivided if doing so improves a segmentation quality criterion. Third, we propose a numerically efficient algorithm to solve this dynamic optimization problem. Our approach is generic, since it does not assume any particular cell morphology and can be applied to different segmentation problems. We show results in segmenting and ranking thousands of cells from multiplexing images and compare our method with well-established segmentation techniques, obtaining very encouraging results. © 2013 IEEE.
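
The toy sketch below illustrates the general idea of unsupervised shape ranking: describe each segmented cell with a few regionprops features and rank cells by their distance from the population median, so atypical (possibly mis-segmented) cells rank last. The feature set, the Otsu/label segmentation and the ranking score are illustrative assumptions, not the Layers™ implementation.

```python
# Toy shape-ranking sketch: rank segmented cells by how typical their shape is.
import numpy as np
from skimage import data, filters, measure

image = data.human_mitosis()
labels = measure.label(image > filters.threshold_otsu(image))

table = measure.regionprops_table(
    labels, properties=("label", "area", "eccentricity", "solidity"))
feats = np.column_stack([table["area"], table["eccentricity"], table["solidity"]])
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)   # z-score features

score = np.linalg.norm(feats - np.median(feats, axis=0), axis=1)     # smaller = more typical
ranking = np.asarray(table["label"])[np.argsort(score)]
print("most typical cell labels:", ranking[:5])
```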

Bilgin CC, Rittscher J, Filkins R, Can A. 2012. Digitally adjusting chromogenic dye proportions in brightfield microscopy images. J Microsc, 245 (3), pp. 319-330. | Show Abstract | Read more

We present an algorithm to adjust the contrast of individual dyes from colour (red-green-blue) images of dye mixtures. Our technique is based on first decomposing the colour image into individual dye components, then adjusting each of the dye components and finally mixing the individual dyes to generate colour images. Specifically, in this paper we digitally adjust the staining proportions of hematoxylin and eosin (H&E) chromogenic dyes in tissue images. We formulate the physical dye absorption process as a non-negative mixing equation, and solve for the individual components using non-negative matrix factorisation (NMF). Our NMF formulation includes camera dark current in addition to the mixing proportions and the individual H and E components. The novelty of our approach is to adjust the dye proportions while preserving the colour of nonlinear dye interactions, such as pigments and red blood cells. Although we present results only for H&E images, our technique can easily be extended to other staining techniques.
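
A minimal sketch of the unmixing idea, assuming a Beer-Lambert conversion to optical density and scikit-learn's generic NMF in place of the authors' formulation (which additionally models camera dark current). The random placeholder image and the 50% eosin rescaling are purely illustrative.

```python
# Hedged sketch of H&E dye unmixing in optical-density space with NMF.
import numpy as np
from sklearn.decomposition import NMF

rgb = np.random.randint(1, 255, size=(256, 256, 3)).astype(float)   # placeholder H&E image
od = -np.log((rgb + 1.0) / 256.0)                 # Beer-Lambert: intensity -> optical density
V = od.reshape(-1, 3)                             # pixels x channels, non-negative

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)                        # per-pixel dye concentrations (H and E)
H = model.components_                             # 2 x 3 dye absorption spectra

# Digitally rescale the second dye's contribution by 50% and remix back to RGB
W_adj = W * np.array([1.0, 0.5])
rgb_adj = 256.0 * np.exp(-(W_adj @ H)).reshape(rgb.shape) - 1.0
print("dye spectra (rows = dyes, cols = R,G,B):\n", H)
```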

Margolis D, Santamaria-Pang A, Rittscher J. 2012. Tissue segmentation and classification using graph-based unsupervised clustering Proceedings - International Symposium on Biomedical Imaging, pp. 162-165. | Show Abstract | Read more

Automated segmentation and quantification of cellular and subcellular components in multiplexed images has allowed a combination of both spatial and protein expression information to become available for analysis. However, performing analyses across multiple patients and tissue types continues to be a challenge, as does the greater challenge of tissue classification itself. We propose a model of tissues as interconnected networks of epithelial cells whose connectivity is determined by their size, specific expression levels, and proximity to other cells. These Biomarker Enhanced Tissue Networks (BETN) reflect both the individual nature of the cells and the complex cell-to-cell relationships within the tissue. A simple analysis of such tissue networks successfully separated epithelial cells from stromal cells across multiple patients and tissue types. Further experiments show that significant information about the structure and nature of tissues can also be extracted through analysis of the networks, moving towards the eventual goal of true tissue classification. © 2012 IEEE.

Padfield D, Rittscher J, Roysam B. 2011. Coupled minimum-cost flow cell tracking for high-throughput quantitative analysis. Med Image Anal, 15 (4), pp. 650-668. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations in a high-throughput, high-content environment. These applications depend on accurate cell tracking of individual cells that display various behaviors including mitosis, merging, rapid movement, and entering and leaving the field of view. Many approaches to cell tracking have been developed in the past, but most are quite complex, require extensive post-processing, and are parameter intensive. To overcome such issues, we present a general, consistent, and extensible tracking approach that explicitly models cell behaviors in a graph-theoretic framework. We introduce a way of extending the standard minimum-cost flow algorithm to account for mitosis and merging events through a coupling operation on particular edges. We then show how the resulting graph can be efficiently solved using algorithms such as linear programming to choose the edges of the graph that observe the constraints while leading to the lowest overall cost. This tracking algorithm relies on accurate denoising and segmentation steps for which we use a wavelet-based approach that is able to accurately segment cells even in images with very low contrast-to-noise. In addition, the framework is able to measure and correct for microscope defocusing and stage shift. We applied the algorithms on nearly 6000 images of 400,000 cells representing 32,000 tracks taken from five separate datasets, each composed of multiple wells. Our algorithm was able to segment and track cells and detect different cell behaviors with an accuracy of over 99%. This overall framework enables accurate quantitative analysis of cell events and provides a valuable tool for high-throughput biological studies.
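
The sketch below poses simple frame-to-frame cell linking as a minimum-cost flow problem with networkx. It covers only one-to-one assignment between consecutive frames; the coupling of edges that handles mitosis and merging in the published method is omitted, and the centroids and cost scaling are made-up values.

```python
# Simplified frame-to-frame cell linking as minimum-cost flow (no mitosis/merge coupling).
import networkx as nx
import numpy as np

frame_a = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])   # centroids at time t
frame_b = np.array([[12.0, 14.0], [43.0, 40.0], [69.0, 18.0]])   # centroids at time t+1

G = nx.DiGraph()
n = len(frame_a)
G.add_node("S", demand=-n)
G.add_node("T", demand=n)
for i in range(n):
    G.add_edge("S", ("a", i), capacity=1, weight=0)
    G.add_edge(("b", i), "T", capacity=1, weight=0)
for i, p in enumerate(frame_a):
    for j, q in enumerate(frame_b):
        cost = int(round(np.linalg.norm(p - q) * 100))    # integer costs for network simplex
        G.add_edge(("a", i), ("b", j), capacity=1, weight=cost)

flow = nx.min_cost_flow(G)
links = [(i, j) for i in range(n) for j in range(n) if flow[("a", i)].get(("b", j), 0) == 1]
print("links (frame t -> t+1):", links)
```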

Doretto G, Sebastian T, Tu P, Rittscher J. 2011. Appearance-based person reidentification in camera networks: problem overview and current approaches Journal of Ambient Intelligence and Humanized Computing, 2 (2), pp. 127-151. | Show Abstract | Read more

Recent advances in visual tracking methods allow following a given object or individual in the presence of significant clutter or partial occlusions in a single or a set of overlapping camera views. The question of when person detections in different views or at different time instants can be linked to the same individual is of fundamental importance to video analysis in a large-scale network of cameras. This is the person reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effectively address the challenges associated with changes in illumination, pose, and clothing appearance variation are discussed. More specifically, the development of a set of models that capture the overall appearance of an individual and can effectively be used for information retrieval is reviewed. Some of them provide a holistic description of a person, and others require an intermediate step where specific body parts need to be identified. Some are designed to extract appearance features over time, and others can operate reliably on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular, it describes very fast procedures for computing co-occurrence matrices by leveraging a generalization of the integral representation of images. The algorithms are deployed and tested in a camera network comprising three cameras with non-overlapping fields of view, where a multi-camera multi-target tracker links the tracks in different cameras by reidentifying the same people appearing in different views. © 2011 Springer-Verlag.
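
The snippet below illustrates the integral-image idea that the paper generalises: after one pass of cumulative sums, the sum over any rectangular region costs four array lookups, which is what makes dense appearance and co-occurrence descriptors cheap to evaluate. The image and the query region are arbitrary placeholders.

```python
# Integral image: constant-time rectangular region sums after one cumulative-sum pass.
import numpy as np

def integral_image(img):
    """Zero-padded 2D cumulative sum so region queries need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.random.rand(480, 640)
ii = integral_image(img)
assert np.isclose(region_sum(ii, 100, 200, 150, 260), img[100:150, 200:260].sum())
```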

Padfield D, Rittscher J, Roysam B. 2011. Quantitative biological studies enabled by robust cell tracking Proceedings - International Symposium on Biomedical Imaging, pp. 1929-1934. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations enabled by cell segmentation and tracking algorithms in a high-throughput, high-content environment. Building upon the tracks generated by such algorithms, we derive biologically relevant features and demonstrate a range of biological studies made possible by such quantitative measures. In the first, we introduce a combination of quantitative features that characterize cell apoptosis and arrest. In the second, we automatically measure the effect of motility-promoting serums. In the third, we show that proper dosage levels can be automatically determined for studying protein translocations. These results provide large-scale quantitative validation of biological experiments and demonstrate that our framework provides a valuable tool for high-throughput biological studies. © 2011 IEEE.

Singh S, Janoos F, Pécot T, Caserta E, Huang K, Rittscher J, Leone G, Machiraju R. 2011. Non-parametric population analysis of cellular phenotypes. Med Image Comput Comput Assist Interv, 14 (Pt 2), pp. 343-351. | Show Abstract

Methods to quantify cellular-level phenotypic differences between genetic groups are a key tool in genomics research. In disease processes such as cancer, phenotypic changes at the cellular level frequently manifest in the modification of cell population profiles. These changes are hard to detect due to the ambiguity in identifying distinct cell phenotypes within a population. We present a methodology which enables the detection of such changes by generating a phenotypic signature of cell populations in a data-derived feature space. Further, this signature is used to estimate a model for the redistribution of phenotypes that was induced by the genetic change. Results are presented on an experiment involving deletion of a tumor-suppressor gene dominant in breast cancer, where the methodology is used to detect changes in nuclear morphology between control and knockout groups.

Singh S, Janoos F, Pécot T, Caserta E, Huang K, Rittscher J, Leone G, Machiraju R. 2011. Non-parametric population analysis of cellular phenotypes Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6892 LNCS (PART 2), pp. 343-351. | Show Abstract | Read more

Methods to quantify cellular-level phenotypic differences between genetic groups are a key tool in genomics research. In disease processes such as cancer, phenotypic changes at the cellular level frequently manifest in the modification of cell population profiles. These changes are hard to detect due to the ambiguity in identifying distinct cell phenotypes within a population. We present a methodology which enables the detection of such changes by generating a phenotypic signature of cell populations in a data-derived feature space. Further, this signature is used to estimate a model for the redistribution of phenotypes that was induced by the genetic change. Results are presented on an experiment involving deletion of a tumor-suppressor gene dominant in breast cancer, where the methodology is used to detect changes in nuclear morphology between control and knockout groups. © 2011 Springer-Verlag.

Lim SN, Doretto G, Rittscher J. 2011. Multi-class object layout with unsupervised image classification and object localization Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6938 LNCS (PART 1), pp. 573-585. | Show Abstract | Read more

Recognizing the presence of object classes in an image, or image classification, has become an increasingly important topic of interest. Equally important, however, is the capability to locate these object classes in the image. In this paper we consider an approach to these two related problems with the primary goal of minimizing the training requirements, so as to allow new object classes to be added easily, as opposed to approaches that favor training a suite of object-specific classifiers. To this end, we provide the analysis of an exemplar-based approach that leverages unsupervised clustering for classification and sliding-window matching for localization. While such an exemplar-based approach is by itself brittle with respect to intraclass and viewpoint variations, we achieve robustness by introducing a novel Conditional Random Field model that facilitates a straightforward accept/reject decision for the localized object classes. Performance of our approach on the PASCAL Visual Object Challenge 2007 dataset demonstrates its efficacy. © 2011 Springer-Verlag.

Rittscher J, Padfield D, Santamaria A, Tu J, Can A, Bello M, Gao D, Sood A, Gerdes M, Ginty F. 2011. Methods and algorithms for extracting high-content signatures from cells, tissues, and model organisms Proceedings - International Symposium on Biomedical Imaging, pp. 1712-1716. | Show Abstract | Read more

While high-content screening is already playing an important role in drug discovery, a growing number of academic laboratories are applying these techniques to conduct a system-level analysis of biological processes. In this context, more complex assays and model systems are being imaged at higher throughput. Examples include co-culture assays, tissues, and entire model systems such as zebrafish. This summary presents examples of methods and algorithms that have been developed at GE Global Research to facilitate the quantitative analysis of such complex specimens. © 2011 IEEE.

Tu J, Laflen B, Liu X, Bello M, Rittscher J, Tu P. 2011. LPSM: Fitting shape model by linear programming 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011, pp. 252-258. | Show Abstract | Read more

We propose a shape model fitting algorithm that uses linear programming optimization. Most shape model fitting approaches (such as ASM and AAM) are based on gradient-descent-like local search optimization and usually suffer from local minima. In contrast, linear programming (LP) techniques achieve globally optimal solutions for linear problems. In [1], a linear programming scheme based on successive convexification was proposed for matching static object shapes in images with cluttered backgrounds and achieved very good performance. In this paper, we rigorously derive the linear formulation of the shape model fitting problem in the LP scheme and propose an LP shape model fitting algorithm (LPSM). In the experiments, we compared the performance of our LPSM with the LP graph matching algorithm (LPGM), ASM, and a CONDENSATION-based ASM algorithm on a test set from the PUT database. The experiments show that LPSM can achieve higher shape fitting accuracy. We also evaluated its performance on the fitting of some real-world face images collected from the internet. The results show that LPSM can handle various appearance outliers and can avoid the local minima problem very well, as the fitting is carried out by LP optimization with an l1-norm robust cost function. © 2011 IEEE.

Singh S, Janoos F, Pécot T, Caserta E, Leone G, Rittscher J, Machiraju R. 2011. Identifying nuclear phenotypes using semi-supervised metric learning. Inf Process Med Imaging, 22 pp. 398-410. | Show Abstract

In systems-based approaches for studying processes such as cancer and development, identifying and characterizing individual cells within a tissue is the first step towards understanding the large-scale effects that emerge from the interactions between cells. To this end, nuclear morphology is an important phenotype to characterize the physiological and differentiated state of a cell. This study focuses on using nuclear morphology to identify cellular phenotypes in thick tissue sections imaged using 3D fluorescence microscopy. The limited label information, heterogeneous feature set describing a nucleus, and existence of subpopulations within cell-types makes this a difficult learning problem. To address these issues, a technique is presented to learn a distance metric from labeled data which is locally adaptive to account for heterogeneity in the data. Additionally, a label propagation technique is used to improve the quality of the learned metric by expanding the training set using unlabeled data. Results are presented on images of tumor stroma in breast cancer, where the framework is used to identify fibroblasts, macrophages and endothelial cells--three major stromal cells involved in carcinogenesis.
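
As a rough analogue of the two ingredients described above, the sketch below combines off-the-shelf scikit-learn components: label propagation to expand a small labelled set, followed by a learned (global, not locally adaptive) metric via Neighbourhood Components Analysis and nearest-neighbour classification. The synthetic features, class counts and parameters are assumptions standing in for real nuclear morphology data.

```python
# Rough analogue: label propagation to expand labels, then a learned metric + k-NN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import LabelPropagation

X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)   # stand-in nuclear features
y_partial = y.copy()
y_partial[150:] = -1                                           # only 150 nuclei are labelled

lp = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y_partial)
y_expanded = lp.transduction_                                  # propagated labels

clf = make_pipeline(NeighborhoodComponentsAnalysis(n_components=4, random_state=0),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y_expanded)
print("training accuracy against true labels:", clf.score(X, y))
```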

Singh S, Janoos F, Pécot T, Caserta E, Leone G, Rittscher J, Machiraju R. 2011. Identifying nuclear phenotypes using semi-supervised metric learning Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6801 LNCS pp. 398-410. | Show Abstract | Read more

In systems-based approaches for studying processes such as cancer and development, identifying and characterizing individual cells within a tissue is the first step towards understanding the large-scale effects that emerge from the interactions between cells. To this end, nuclear morphology is an important phenotype to characterize the physiological and differentiated state of a cell. This study focuses on using nuclear morphology to identify cellular phenotypes in thick tissue sections imaged using 3D fluorescence microscopy. The limited label information, heterogeneous feature set describing a nucleus, and existence of sub-populations within cell-types makes this a difficult learning problem. To address these issues, a technique is presented to learn a distance metric from labeled data which is locally adaptive to account for heterogeneity in the data. Additionally, a label propagation technique is used to improve the quality of the learned metric by expanding the training set using unlabeled data. Results are presented on images of tumor stroma in breast cancer, where the framework is used to identify fibroblasts, macrophages and endothelial cells - three major stromal cells involved in carcinogenesis. © 2011 Springer-Verlag.

Rittscher J. 2010. Characterization of biological processes through automated image analysis. Annu Rev Biomed Eng, 12 (1), pp. 315-344. | Show Abstract | Read more

The systems-level analysis of complex biological processes requires methods that enable the quantification of a broad range of phenotypical alterations, the precise localization of signaling events, and the ability to correlate such signaling events in the context of the spatial organization of the biological specimen. The goal of this review is to illustrate that, when combined with modern imaging platforms and labeling techniques, automated image analysis methods can provide such quantitative information. The article attempts to review necessary image analysis techniques as well as applications that utilize these techniques to provide the data that will enable systems-level biology. The text includes a review of image registration and image segmentation methods, as well as algorithms that enable the analysis of cellular architecture, cell morphology, and tissue organization. Various methods that enable the analysis of dynamic events are also presented.

Gao D, Padfield D, Rittscher J, McKay R. 2010. Automated training data generation for microscopy focus classification. Med Image Comput Comput Assist Interv, 13 (Pt 2), pp. 446-453. | Show Abstract

Image focus quality is of utmost importance in digital microscopes because the pathologist cannot accurately characterize the tissue state without focused images. We propose to train a classifier to measure the focus quality of microscopy scans based on an extensive set of image features. However, classifiers rely heavily on the quality and quantity of the training data, and collecting annotated data is tedious and expensive. We therefore propose a new method to automatically generate large amounts of training data using image stacks. Our experiments demonstrate that a classifier trained with the image stacks performs comparably with one trained with manually annotated data. The classifier is able to accurately detect out-of-focus regions, provide focus quality feedback to the user, and identify potential problems of the microscopy design.
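
The sketch below illustrates the automatic training-data idea under simplifying assumptions: a synthetic focal series is generated by blurring an in-focus image by increasing amounts (standing in for a real image stack), slices are labelled by their blur level, and a random forest is trained on simple sharpness features. The feature set, blur range and focus threshold are placeholders, not the published design.

```python
# Hedged sketch: label slices of a synthetic focal series and train a focus classifier.
import numpy as np
from scipy import ndimage as ndi
from skimage import data
from sklearn.ensemble import RandomForestClassifier

def sharpness_features(img):
    img = img.astype(float)
    lap = ndi.laplace(img)
    gx, gy = np.gradient(img)
    return [lap.var(), np.mean(gx ** 2 + gy ** 2), img.std()]

sharp = data.camera()
X, y = [], []
for sigma in np.linspace(0.0, 6.0, 25):                  # synthetic focal series
    blurred = ndi.gaussian_filter(sharp.astype(float), sigma)
    X.append(sharpness_features(blurred))
    y.append(int(sigma > 2.0))                            # 0 = in focus, 1 = out of focus

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
test = ndi.gaussian_filter(sharp.astype(float), 4.0)
print("predicted out-of-focus?", clf.predict([sharpness_features(test)]))
```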

Gao D, Padfield D, Rittscher J, McKay R. 2010. Automated training data generation for microscopy focus classification Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6362 LNCS (PART 2), pp. 446-453. | Show Abstract | Read more

Image focus quality is of utmost importance in digital microscopes because the pathologist cannot accurately characterize the tissue state without focused images. We propose to train a classifier to measure the focus quality of microscopy scans based on an extensive set of image features. However, classifiers rely heavily on the quality and quantity of the training data, and collecting annotated data is tedious and expensive. We therefore propose a new method to automatically generate large amounts of training data using image stacks. Our experiments demonstrate that a classifier trained with the image stacks performs comparably with one trained with manually annotated data. The classifier is able to accurately detect out-of-focus regions, provide focus quality feedback to the user, and identify potential problems of the microscopy design. © 2010 Springer-Verlag.

Singh S, Raman S, Caserta E, Leone G, Ostrowski M, Rittscher J, Machiraju R. 2010. Analysis of spatial variation of nuclear morphology in tissue microenvironments 2010 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2010 - Proceedings, pp. 1293-1296. | Show Abstract | Read more

We present a study of the spatial variation of nuclear morphology of stromal and cancer-associated fibroblasts in the mouse mammary gland. The work is part of a framework being developed for the analysis of the tumor microenvironment in breast cancer. Recent research has uncovered the role of stromal cells in promoting tumor growth and progression. Specifically, studies have indicated that stromal fibroblasts - formerly considered to be passive entities in the extra-cellular matrix - play an active role in the progression of tumor in mammary tissue. We have focused on the analysis of the nuclear morphology of fibroblasts, which several studies have shown to be a critical phenotype in cancer. An essential component of our approach is that the nuclear morphology is studied within the 3D spatial context of the tissue, thus enabling us to pose questions about how the locus of a cell relates to its morphology, and possibly to its function. In order to make quantitative comparisons between nuclear populations, we build statistical shape models of cell populations and infer differences between the populations through these models. We present our observations on both normal and tumor tissues from the mouse mammary gland. ©2010 IEEE.

Padfield D, Rittscher J, Thomas N, Roysam B. 2009. Spatio-temporal cell cycle phase analysis using level sets and fast marching methods. Med Image Anal, 13 (1), pp. 143-155. | Show Abstract | Read more

Enabled by novel molecular markers, fluorescence microscopy enables the monitoring of multiple cellular functions using live cell assays. Automated image analysis is necessary to monitor such model systems in a high-throughput and high-content environment. Here, we demonstrate the ability to simultaneously track cell cycle phase and cell motion at the single cell level. Using a recently introduced cell cycle marker, we present a set of image analysis tools for automated cell phase analysis of live cells over extended time periods. Our model-based approach enables the characterization of the four phases of the cell cycle G1, S, G2, and M, which enables the study of the effect of inhibitor compounds that are designed to block the replication of cancerous cells in any of the phases. We approach the tracking problem as a spatio-temporal volume segmentation task, where the 2D slices are stacked into a volume with time as the z dimension. The segmentation of the G2 and S phases is accomplished using level sets, and we designed a model-based shape/size constraint to control the evolution of the level set. Our main contribution is the design of a speed function coupled with a fast marching path planning approach for tracking cells across the G1 phase based on the appearance change of the nuclei. The viability of our approach is demonstrated by presenting quantitative results on both controls and cases in which cells are treated with a cell cycle inhibitor.
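
As a rough stand-in for the fast-marching path-planning step, the sketch below builds a cost image in which positions consistent with the cell's appearance are cheap, then extracts the minimum-cost path between the cell's first and last known positions using scikit-image's MCP_Geometric. A true fast-marching solver, and the actual appearance-based speed function, are replaced here by a synthetic kymograph-style cost volume.

```python
# Rough stand-in for fast-marching path planning: minimum-cost path through an x-t cost image.
import numpy as np
from skimage.graph import MCP_Geometric

t, x = np.mgrid[0:100, 0:80]                             # rows = time, columns = position
track = 20 + 0.4 * t                                      # the "true" drifting cell position
costs = 1.0 + 5.0 * np.abs(x - track) / 80.0              # cheap near the cell, expensive elsewhere

mcp = MCP_Geometric(costs)
cumulative, _ = mcp.find_costs(starts=[(0, 20)])          # cell position in the first frame
path = mcp.traceback((99, 60))                            # cell position in the last frame
print("recovered positions in first/last frames:", path[0], path[-1])
```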

Padfield D, Rittscher J, Roysam B. 2009. Coupled minimum-cost flow cell tracking. Inf Process Med Imaging, 21 pp. 374-385. | Show Abstract

A growing number of screening applications require the automated monitoring of cell populations in a high-throughput, high-content environment. These applications depend on accurate cell tracking of individual cells that display various behaviors including mitosis, occlusion, rapid movement, and entering and leaving the field of view. We present a tracking approach that explicitly models each of these behaviors and represents the association costs in a graph-theoretic minimum-cost flow framework. We show how to extend the minimum-cost flow algorithm to account for mitosis and merging events by coupling particular edges. We applied the algorithm to nearly 6,000 images of 400,000 cells representing 32,000 tracks taken from five separate datasets, each composed of multiple wells. Our algorithm is able to track cells and detect different cell behaviors with an accuracy of over 99%.

Padfield D, Rittscher J, Roysam B. 2009. Coupled minimum-cost flow cell tracking Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5636 LNCS pp. 374-385. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations in a high-throughput, high-content environment. These applications depend on accurate cell tracking of individual cells that display various behaviors including mitosis, occlusion, rapid movement, and entering and leaving the field of view. We present a tracking approach that explicitly models each of these behaviors and represents the association costs in a graph-theoretic minimum-cost flow framework. We show how to extend the minimum-cost flow algorithm to account for mitosis and merging events by coupling particular edges. We applied the algorithm to nearly 6,000 images of 400,000 cells representing 32,000 tracks taken from five separate datasets, each composed of multiple wells. Our algorithm is able to track cells and detect different cell behaviors with an accuracy of over 99%. © 2009 Springer Berlin Heidelberg.

Kim SJ, Doretto G, Rittscher J, Tu P, Krahnstoever N, Pollefeys M. 2009. A model change detection approach to dynamic scene modeling 6th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS 2009, pp. 490-495. | Show Abstract | Read more

In this work we propose a dynamic scene model to provide information about the presence of salient motion in the scene, and that could be used for focusing the attention of a pan/tilt/zoom camera, or for background modeling purposes. Rather than proposing a set of saliency detectors, we define what we mean by salient motion, and propose a precise model for it. Detecting salient motion becomes equivalent to detecting a model change. We derive optimal online procedures to solve this problem, which enable a very fast implementation. Promising results show that our model can effectively detect salient motion even in severely cluttered scenes, and while a camera is panning and tilting. © 2009 IEEE.

Tu P, Sebastian T, Doretto G, Krahnstoever N, Rittscher J, Yu T. 2008. Unified crowd segmentation Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5305 LNCS (PART 4), pp. 691-704. | Show Abstract | Read more

This paper presents a unified approach to crowd segmentation. A global solution is generated using an Expectation Maximization framework. Initially, a head and shoulder detector is used to nominate an exhaustive set of person locations and these form the person hypotheses. The image is then partitioned into a grid of small patches which are each assigned to one of the person hypotheses. A key idea of this paper is that while whole body monolithic person detectors can fail due to occlusion, a partial response to such a detector can be used to evaluate the likelihood of a single patch being assigned to a hypothesis. This captures local appearance information without having to learn specific appearance models. The likelihood of a pair of patches being assigned to a person hypothesis is evaluated based on low level image features such as uniform motion fields and color constancy. During the E-step, the single and pairwise likelihoods are used to compute a globally optimal set of assignments of patches to hypotheses. In the M-step, parameters which enforce global consistency of assignments are estimated. This can be viewed as a form of occlusion reasoning. The final assignment of patches to hypotheses constitutes a segmentation of the crowd. The resulting system provides a global solution that does not require background modeling and is robust with respect to clutter and partial occlusion. © 2008 Springer Berlin Heidelberg.
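
The schematic below runs the E-step/M-step loop described above on made-up data: patch centres are softly assigned to person hypotheses using isotropic Gaussian likelihoods, and hypothesis locations and mixing weights are re-estimated. The detector responses, motion and colour cues of the actual system are all replaced by this toy likelihood.

```python
# Schematic E-step/M-step for assigning image patches to person hypotheses.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.uniform(0, 100, size=(200, 2))            # patch centres in the image
hypotheses = np.array([[25.0, 30.0], [70.0, 60.0]])     # nominated person locations
weights = np.ones(len(hypotheses)) / len(hypotheses)

for _ in range(20):
    # E-step: responsibility of each hypothesis for each patch (isotropic Gaussian model)
    d2 = ((patches[:, None, :] - hypotheses[None, :, :]) ** 2).sum(-1)
    resp = weights * np.exp(-d2 / (2 * 15.0 ** 2))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate hypothesis locations and mixing weights
    hypotheses = (resp.T @ patches) / resp.sum(axis=0)[:, None]
    weights = resp.mean(axis=0)

assignment = resp.argmax(axis=1)                         # final patch-to-person segmentation
print("patches per hypothesis:", np.bincount(assignment))
```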

Padfield D, Rittscher J, Roysam B. 2008. Spatio-temporal cell segmentation and tracking for automated screening 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Proceedings, ISBI, pp. 376-379. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations including cell segmentation, tracking, and measurement. We present general methods for cell segmentation and tracking that exploit the spatiotemporal nature of the task to constrain segmentation. The images are de-noised and segmented by combining wavelet coefficients at various levels, thus enabling extraction of cells in images with low contrast-to-noise ratios. Each track of clustered cells resulting from association of nearby cells in the spatio-temporal volume is then split into individual cells by evolving sets of contours from other slices. The hypothesis whether to split or merge objects making up the cluster is tested using learned features trained from single track cells. Due to the difficult nature of generating ground truth, we also present a framework for edit-based validation whereby the user corrects the edits made by the automatic system rather than generating the truth from scratch. The results show the promise of the approach and demonstrate the ability of the algorithms to provide meaningful measurements of cell response to drug treatment in low-dose Hoechst-stained cells. ©2008 IEEE.

Padfield D, Rittscher J, Roysam B. 2008. Methods for monitoring cellular motion and function Conference Record - Asilomar Conference on Signals, Systems and Computers, pp. 47-50. | Show Abstract | Read more

Automated cell phase analysis of live cells over extended time periods requires both novel assays and automated image analysis algorithms. We approach the tracking problem as a spatio-temporal volume segmentation problem, where the 2D slices are stacked into a volume with time as the z dimension. This extended abstract gives an overview of our approach and outlines how a robust tracking system for high-throughput screening can be designed. © 2008 IEEE.

Tu P, Krahstoever N, Rittscher J. 2007. View adaptive detection and distributed site wide tracking 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, AVSS 2007 Proceedings, pp. 57-62. | Show Abstract | Read more

Using a detect and track paradigm, we present a surveillance framework where each camera uses local resources to perform real-time person detection. These detections are then processed by a distributed site-wide tracking system. The detectors themselves are based on boosted user-defined exemplars, which capture both appearance and shape information. The detectors take integral images of both intensity and Sobel responses as input. This data representation enables efficient processing without relying on background subtraction or other motion cues. View-specific person detectors are constructed by iteratively presenting the boosting algorithm with training data associated with each individual camera. These detections are then transmitted from a distributed set of tracking clients to a server, which maintains a set of site-wide target tracks. Automatic calibration methods allow for tracking to be performed in a ground plane representation, which enables effective camera handoff. Factors such as network latencies and scalability will be discussed. © 2007 IEEE.

Wang X, Doretto G, Sebastian T, Rittscher J, Tu P. 2007. Shape and appearance context modeling Proceedings of the IEEE International Conference on Computer Vision, | Show Abstract | Read more

In this work we develop appearance models for computing the similarity between image regions containing deformable objects of a given class in realtime. We introduce the concept of shape and appearance context. The main idea is to model the spatial distribution of the appearance relative to each of the object parts. Estimating the model entails computing occurrence matrices. We introduce a generalization of the integral image and integral histogram frameworks, and prove that it can be used to dramatically speed up occurrence computation. We demonstrate the ability of this framework to recognize an individual walking across a network of cameras. Finally, we show that the proposed approach outperforms several other methods. ©2007 IEEE.

Rittscher J, Krahnstoever N, Galup L. 2007. Multi-target tracking using hybrid particle filtering Proceedings - Seventh IEEE Workshop on Applications of Computer Vision, WACV 2005, pp. 447-454. | Show Abstract | Read more

We address the problem of multi-target tracking based on sequential Monte Carlo filtering for a visual access control application. Sequential Monte Carlo methods are very suitable for approximating posterior distributions for single-target tracking applications. However, tracking multiple targets is more difficult and critically depends on the ability to represent all statistically significant modes with a sufficient number of samples. Even when tracking a single target, controlling the effective sample size of the particle set only crudely estimates how well it approximates the posterior target distribution. In contrast, previous work demonstrates that using a Kalman filter control loop, which monitors the performance of the particle filter, can dramatically improve posterior distribution approximation in a dynamic fashion. This paper extends this principle to multi-target tracking by introducing a technique called mode stratification. In addition, a method to automatically augment and delete the number of modes using local relative entropy measures is introduced. Experiments applying the proposed technique for visual head tracking in an access control application illustrate the effectiveness of the method.

Tu PH, Doretto G, Krahnstoever NO, Perera AGA, Wheeler FW, Liu X, Rittscher J, Sebastian TB, Yu T, Harding KG. 2007. An intelligent video framework for homeland protection Proceedings of SPIE - The International Society for Optical Engineering, 6562 | Show Abstract | Read more

This paper presents an overview of Intelligent Video work currently under development at the GE Global Research Center and other research institutes. The image formation process is discussed in terms of illumination, methods for automatic camera calibration and lessons learned from machine vision. A variety of approaches for person detection are presented. Crowd segmentation methods enabling the tracking of individuals through dense environments such as retail and mass transit sites are discussed. It is shown how signature generation based on gross appearance can be used to reacquire targets as they leave and enter disjoint fields of view. Camera calibration information is used to further constrain the detection of people and to synthesize a top view, which fuses all camera views into a composite representation. It is shown how site-wide tracking can be performed in this unified framework. Human faces are an important feature both as a biometric identifier and as a method for determining the focus of attention via head pose estimation. It is shown how automatic pan-tilt-zoom control, active shape/appearance models and super-resolution methods can be used to enhance the face capture and analysis problem. A discussion of additional features that can be used for inferring intent is given. These include body-part motion cues and physiological phenomena such as thermal images of the face.

Krahnstoever N, Rittscher J, Tu P, Chean K, Tomlinson T. 2007. Activity recognition using visual tracking and RFID Proceedings - Seventh IEEE Workshop on Applications of Computer Vision, WACV 2005, pp. 494-500. | Show Abstract | Read more

Computer vision-based articulated human motion tracking is attractive for many applications since it allows unobtrusive and passive estimation of people's activities. Although much progress has been made on human-only tracking, the visual tracking of people that interact with objects such as tools, products, packages, and devices is considerably more challenging. The wide variety of objects, their varying visual appearance, and their varying (and often small) size makes a vision-based understanding of person-object interactions very difficult. To alleviate this problem for at least some application domains, we propose a framework that combines visual human motion tracking with RFID based object tracking. We customized commonly available RFID technology to obtain orientation estimates of objects in the field of RFID emitter coils. The resulting fusion of visual human motion tracking and RFID-based object tracking enables the accurate estimation of high-level interactions between people and objects for application domains such as retail, home-care, workplace-safety, manufacturing and others.

Padfield D, Rittscher J, Thomas N, Roysam B. 2006. Validation methods for cell cycle analysis algorithms in confocal fluorescence images 2006 IEEE/NLM Life Science Systems and Applications Workshop, LiSA 2006, | Show Abstract | Read more

Automated analysis of live cells over extended time periods requires both novel assays and automated image analysis algorithms. Among other applications, this is necessary for studying the effect of inhibitor compounds that are designed to block the replication of cancerous cells. Due to their toxicity, fluorescent dyes that bind to the nuclear DNA cannot be used to mark nuclei, and traditional non-toxic nuclear markers do not yield information about the cell cycle phases. Instead, a non-toxic cell cycle phase marker can be used. We previously described a set of image analysis methods designed to automatically segment nuclei in such 2D time-lapse images. While the methods show promise, it is necessary to provide a validation framework for these methods. This paper introduces methods for validating the various stages of the algorithm in order to demonstrate their viability for automatic cell cycle analysis. © 2006 IEEE.

Gheissari N, Sebastian TB, Tu PH, Rittscher J, Hartley R. 2006. Person reidentification using spatiotemporal appearance Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2 pp. 1528-1535. | Show Abstract | Read more

In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views. © 2006 IEEE.

Liu X, Chen T, Rittscher J. 2006. Optimal pose for face recognition Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2 pp. 1439-1446. | Show Abstract | Read more

Researchers in psychology have extensively studied the impact of the pose of a face as perceived by humans, and concluded that the so-called 3/4 view, halfway between the front view and the profile view, is the easiest for face recognition by humans. For face recognition by machines, while much work has been done to create recognition algorithms that are robust to pose variation, little has been done in finding the most representative pose for recognition. In this paper, we use a number of algorithms to evaluate face recognition performance when various poses are used for training. The result, similar to findings in psychology that the 3/4 view is the best, is also justified by the discrimination power of different regions on the face, computed from both the appearance and the geometry of these regions. We believe our study is both scientifically interesting and practically beneficial for many applications. © 2006 IEEE.

Sebastian T, Rittscher J, Yu L. 2006. Computing phagocytosis index for high-throughput applications 2006 3rd IEEE International Symposium on Biomedical Imaging: From Nano to Macro - Proceedings, 2006 pp. 546-549. | Show Abstract

Phagocytes defend us against infection by ingesting invading microorganisms. This process is called phagocytosis. Specific assays have been developed to monitor, for example, the effect of immunomodulatory drugs on phagocytosis. High-throughput microscopy is used to analyze entire studies. As it is simply infeasible to manually evaluate the large data sets usually produced by high-throughput microscopes, robust image analysis algorithms are needed. The proposed method is used to compute the phagocytosis index, a quantitative measurement which allows entire concentration studies to be analyzed. The focus of this paper is the development of an automatic approach that does not require any user-interactive parameter settings. In addition, the method is capable of dealing with a wide range of image contrasts, since the quality of the assay itself can vary. Both manual scoring and consistency with biological data are used to validate the results. © 2006 IEEE.

Rittscher J, Tu PH, Krahnstoever N. 2005. Simultaneous estimation of segmentation and shape Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2 pp. 486-493. | Show Abstract | Read more

The main focus of this work is the integration of feature grouping and model-based segmentation into one consistent framework. The algorithm is based on partitioning a given set of image features using a likelihood function that is parameterized on the shape and location of potential individuals in the scene. Using a variant of the EM formulation, maximum likelihood estimates of both the model parameters and the grouping are obtained simultaneously. The resulting algorithm performs global optimization and generates accurate results even when decisions cannot be made using local context alone. An important feature of the algorithm is that the number of people in the scene is not modeled explicitly. As a result, no prior knowledge or assumed distributions are required. The approach is shown to be robust with respect to partial occlusion, shadows and clutter, and can operate over a large range of challenging view angles, including those that are parallel to the ground plane. Comparisons with existing crowd segmentation systems are made and the utility of coupling crowd segmentation with a temporal tracking system is demonstrated. © 2005 IEEE.

Liu X, Tu PH, Rittscher J, Perera A, Krahnstoever N. 2005. Detecting and counting people in surveillance applications IEEE International Conference on Advanced Video and Signal Based Surveillance - Proceedings of AVSS 2005, 2005 pp. 306-311. | Show Abstract | Read more

A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is a need for further research to address the challenges of real-world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts. © 2005 IEEE.

Tu PH, Rittscher J. 2004. Crowd segmentation through emergent labeling Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3247 pp. 187-198. | Show Abstract

As an alternative to crowd segmentation using model-based object detection methods, which depend on learned appearance models, we propose a paradigm that only makes use of low-level interest points. Here the detection of objects of interest is formulated as a clustering problem. The set of feature points are associated with vertices of a graph. Edges connect vertices based on the plausibility that the two vertices could have been generated from the same object. The task of object detection amounts to identifying a specific set of cliques of this graph. Since the topology of the graph is constrained by a geometric appearance model, the maximal cliques can be enumerated directly. Each vertex of the graph can be a member of multiple maximal cliques. We need to find an assignment such that every vertex is only assigned to a single clique. An optimal assignment with respect to a global score function is estimated through a technique akin to soft-assign, which can be viewed as a form of relaxation labeling that propagates constraints from regions of low to high ambiguity. No prior knowledge regarding the number of people in the scene is required. © Springer-Verlag 2004.
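
The sketch below illustrates the clique view of the problem: interest points become graph vertices, edges connect points that could plausibly belong to the same person, and candidate detections are maximal cliques. The soft-assign relaxation that resolves points shared by several cliques is replaced here by a greedy pass, and the plausibility test is a crude distance threshold; both are illustrative assumptions.

```python
# Clique-based crowd segmentation sketch: maximal cliques as candidate person detections.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
points = np.vstack([rng.normal([20, 50], 4, (8, 2)),     # interest points from person A
                    rng.normal([60, 48], 4, (8, 2))])    # interest points from person B

G = nx.Graph()
G.add_nodes_from(range(len(points)))
for i, j in itertools.combinations(range(len(points)), 2):
    if np.linalg.norm(points[i] - points[j]) < 15:        # crude "same person" plausibility
        G.add_edge(i, j)

cliques = sorted(nx.find_cliques(G), key=len, reverse=True)
assigned, people = set(), []
for c in cliques:                                          # greedy one-clique-per-point pass
    fresh = [v for v in c if v not in assigned]
    if len(fresh) >= 4:
        people.append(fresh)
        assigned.update(fresh)
print("detected people (point indices):", people)
```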

Rittscher J, Blake A, Hoogs A, Stein G. 2003. Mathematical modelling of animate and intentional motion. Philos Trans R Soc Lond B Biol Sci, 358 (1431), pp. 475-490. | Show Abstract | Read more

Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article.

Hoogs A, Rittscher J, Stein G, Schmiederer J. 2003. Video content annotation using visual analysis and a large semantic knowledgebase Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2 | Show Abstract

We present a novel approach to automatically annotating broadcast video. To manage the enormous variety of objects, events and scenes in video problem domains such as news video, we couple generic image analysis with a semantic database, WordNet, containing huge amounts of real-world information. Object and event recognition are performed by searching WordNet for concepts jointly supported by image evidence and topic context derived from the video transcript. No object- or event-specific training is required, and only a few object models and detection algorithms are required to label much of the significant content of news video. The hierarchical structure of WordNet yields hierarchical recognition, dynamically tailored to the level of supporting image evidence. The potential of the approach is demonstrated by analyzing a wide variety of scenes in news video.

Cited: 26 (Scopus)
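
As a rough illustration of jointly supporting a concept with image evidence and transcript context, the sketch below scores WordNet senses of a hypothetical detector label against context words and backs off to a hypernym when support is weak. It assumes nltk is installed and the WordNet corpus has been fetched via nltk.download('wordnet'); it is not the system described in the paper.

```python
# Illustrative sketch (not the authors' system): combine a visual detector label
# with transcript context by scoring WordNet senses against context words.
# Assumes the WordNet corpus is available via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def annotate(detector_label, context_words):
    best, best_score = None, 0.0
    for sense in wn.synsets(detector_label, pos=wn.NOUN):
        score = 0.0
        for word in context_words:
            for ctx in wn.synsets(word, pos=wn.NOUN):
                sim = sense.path_similarity(ctx)
                if sim:
                    score = max(score, sim)
        if score > best_score:
            best, best_score = sense, score
    # Back off along the hypernym hierarchy when evidence is weak.
    if best is not None and best_score < 0.2 and best.hypernyms():
        best = best.hypernyms()[0]
    return best

print(annotate("plane", ["airport", "flight", "runway"]))
```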

Rittscher J, Blake A, Roberts SJ. 2002. Towards the automatic analysis of complex human body motions IMAGE AND VISION COMPUTING, 20 (12), pp. 905-916. | Show Abstract | Read more

The classification of human body motion is an integral component for the automatic interpretation of video sequences. In the first part we present an effective approach that uses mixed discrete/continuous states to couple perception with classification. A spline contour is used to track the outline of the person. We show that for a quasi-periodic human body motion, an autoregressive process (ARP) is a suitable model for the contour dynamics. A collection of ARPs can then be used as a dynamical model for mixed-state Condensation filtering, switching automatically between different motion classes. Subsequently this method is applied to automatically segment sequences which contain different motions into subsequences that contain only one type of motion. Tracking the contour of moving people is, however, difficult. This is why we propose to classify the type of motion directly from the spatio-temporal features of the image sequence. Representing the image data as a spatio-temporal or XYT cube and taking the 'epipolar slices' [Workshop on Computer Vision, Representation and Control, Shanty Creek, MI, October (1985) 168] of the cube reveals that different motions, such as running and walking, have characteristic patterns. A new method, which effectively compresses these motion patterns into a low-dimensional feature vector, is introduced. The convincing performance of this new feature extraction method is demonstrated for both the classification and automatic segmentation of video sequences for a diverse set of motions. © 2002 Elsevier Science B.V. All rights reserved.

Cited: 78 (Scopus)
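
The spatio-temporal 'epipolar slice' idea can be illustrated in a few lines: fix an image row, stack it over time, and compress the resulting x-t pattern into a short feature vector. PCA is used below purely as a stand-in for the paper's feature-compression method, and the XYT cube is synthetic.

```python
# Minimal sketch of the XYT "epipolar slice" idea with PCA as a stand-in
# for the paper's feature-compression method. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

def epipolar_slice_features(xyt_cube, row, n_components=8):
    # xyt_cube: array of shape (T, H, W); the x-t slice has shape (T, W)
    slice_xt = xyt_cube[:, row, :].astype(float)
    # Treat each time step as a sample and compress across the x axis.
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(slice_xt)
    return coeffs.mean(axis=0)          # one feature vector per sequence

cube = np.random.rand(60, 128, 160)     # synthetic stand-in for a short video
print(epipolar_slice_features(cube, row=64).shape)
```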

Kato J, Watanabe T, Joga S, Rittscher J, Blake A. 2002. An HMM-based segmentation method for traffic monitoring movies IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (9), pp. 1291-1296. | Show Abstract | Read more

Shadows of moving objects often obstruct robust visual tracking. We propose an HMM-based segmentation method which classifies in real time each pixel or region into three categories: shadows, foreground, and background objects. In the case of traffic monitoring movies, the effectiveness of the proposed method has been proven through experimental results.

Cited: 46 (Scopus)
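
A stripped-down version of per-pixel temporal labelling with a three-state HMM is sketched below: each pixel's intensity trace is decoded into background, shadow or foreground with the Viterbi algorithm. The Gaussian means, variances and transition probabilities are invented for illustration and are not the parameters used in the paper.

```python
# Hedged sketch of per-pixel classification into background / shadow / foreground
# with a 3-state HMM. All model parameters below are made-up illustrative values.
import numpy as np

MEANS = np.array([120.0, 70.0, 200.0])   # background, shadow, foreground intensity
VARS  = np.array([100.0, 150.0, 400.0])
TRANS = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.85, 0.05],
                  [0.10, 0.05, 0.85]])
PRIOR = np.array([0.8, 0.1, 0.1])

def viterbi_labels(intensity_over_time):
    obs = np.asarray(intensity_over_time, float)
    loglik = -0.5 * (obs[:, None] - MEANS) ** 2 / VARS - 0.5 * np.log(2 * np.pi * VARS)
    T, S = loglik.shape
    score = np.log(PRIOR) + loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + np.log(TRANS)   # previous state x next state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + loglik[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                            # 0=background, 1=shadow, 2=foreground

print(viterbi_labels([118, 121, 72, 68, 205, 210, 119]))
```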

Sullivan J, Rittscher J. 2001. Guiding random particles by deterministic search Proceedings of the IEEE International Conference on Computer Vision, 1 pp. 323-330. | Show Abstract | Read more

Among the algorithms developed towards the goal of robust and efficient tracking, two approaches which stand out due to their success are those based on particle filtering [8, 12, 14] and variational approaches [5, 16]. The Bayesian approach led to the development of the particle filter, which performs a random search guided by a stochastic motion model. On the other hand, localising an object can be based on minimising a cost function; this minimum can be found using variational methods. The two methods differ in their search paradigms: one is stochastic and model-driven while the other is deterministic and data-driven. This paper presents a new algorithm that incorporates the strengths of both approaches into one consistent framework. To allow this fusion, a smooth, wide likelihood function is constructed based on a sum-of-squares distance measure, and an appropriate sampling scheme is introduced. Based on low-level information, this scheme automatically mixes the two methods of search and adapts the computational demands of the algorithm to the difficulty of the problem at hand. The ability to effectively track complex motions without the need for finely tuned motion models is demonstrated.

Cited: 128 (Scopus)
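
The mix of stochastic and deterministic search can be caricatured in one dimension: diffuse particles under a random-walk motion model, nudge them downhill on a sum-of-squares cost, then re-weight and resample under a smooth, wide likelihood. Everything below (the cost, the diffusion and descent rates) is an illustrative assumption rather than the paper's construction.

```python
# Rough sketch of mixing stochastic (particle) and deterministic (local descent)
# search for 1D state estimation. Parameters and cost are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def cost(x, observation):
    return (x - observation) ** 2            # sum-of-squares distance in 1D

def track_step(particles, observation, diffusion=1.0, descent_rate=0.3):
    # Stochastic, model-driven move: diffuse particles under a random-walk model.
    particles = particles + rng.normal(0.0, diffusion, size=particles.shape)
    # Deterministic, data-driven move: take a gradient step on the cost surface.
    grad = 2.0 * (particles - observation)
    particles = particles - descent_rate * grad
    # Re-weight with a smooth, wide likelihood and resample.
    weights = np.exp(-0.5 * cost(particles, observation) / 4.0)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0.0, 5.0, size=200)
for obs in [1.0, 1.5, 2.2, 3.0]:
    particles = track_step(particles, obs)
print(particles.mean())
```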

North B, Blake A, Isard M, Rittscher J. 2000. Learning and classification of complex dynamics IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 22 (9), pp. 1016-1034. | Show Abstract | Read more

Standard, exact techniques based on likelihood maximization are available for learning Auto-Regressive Process models of dynamical processes. The uncertainty of observations obtained from real sensors means that dynamics can be observed only approximately. Learning can still be achieved via 'EM-K': Expectation-Maximization (EM) based on Kalman filtering. This cannot handle more complex dynamics, however, involving multiple classes of motion. A problem also arises in the case of dynamical processes observed visually: background clutter, arising for example in camouflage, produces non-Gaussian observation noise. Even with a single dynamical class, non-Gaussian observations put the learning problem beyond the scope of EM-K. For those cases, we show here how 'EM-C', based on the Condensation algorithm, which propagates random 'particle-sets', can solve the learning problem. Here, learning in clutter is studied experimentally using visual observations of a hand moving over a desktop. The resulting learned dynamical model is shown to have considerable predictive value: when used as a prior for estimation of motion, the burden of computation in visual observation is significantly reduced. Multi-class dynamics are studied via visually observed juggling; plausible dynamical models have been found to emerge from the learning process, and accurate classification of motion has resulted. In practice, EM-C learning is computationally burdensome and the paper concludes with some discussion of computational complexity.
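
For the fully observed case mentioned at the start of the abstract, maximum-likelihood learning of an auto-regressive process under Gaussian noise reduces to least squares. The sketch below fits a second-order ARP to a synthetic signal; the EM-K and EM-C procedures for partially observed, cluttered data are not reproduced here.

```python
# Toy sketch of the fully-observed case: fitting a second-order auto-regressive
# process by least squares (equivalent to maximum likelihood under Gaussian noise).
import numpy as np

def fit_ar2(x):
    x = np.asarray(x, float)
    # Model: x[t] ~ a1 * x[t-1] + a2 * x[t-2] + b
    X = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - 2)])
    y = x[2:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                         # a1, a2, offset

t = np.arange(400)
signal = np.sin(0.1 * t) + 0.05 * np.random.randn(400)
print(fit_ar2(signal))
```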

Santamaria-Pang A, Huang Y, Pang Z, Qing L, Rittscher J. 2014. Epithelial cell segmentation via shape ranking Lecture Notes in Computational Vision and Biomechanics, 14 pp. 315-338. | Show Abstract | Read more

We present a robust and high-throughput computational method for cell segmentation using multiplexed immunohistopathology images. The major challenges in obtaining an accurate cell segmentation from tissue samples are due to (i) complex cell and tissue morphology, (ii) different sources of variability including non-homogeneous staining and microscope-specific noise, and (iii) tissue quality. Here we present a fast method that uses cell shape and scale information via unsupervised machine learning to enhance and improve general-purpose segmentation methods. The proposed method is well suited for tissue cytology because it captures the morphological and shape heterogeneity of different cell populations. We discuss our segmentation framework for analysing approximately one hundred images of lung and colon cancer, and we restrict our analysis to epithelial cells. © Springer International Publishing Switzerland 2014.

Gerdes MJ, Sevinsky CJ, Sood A, Adak S, Bello MO, Bordwell A, Can A, Corwin A, Dinn S, Filkins RJ et al. 2013. Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue. Proc Natl Acad Sci U S A, 110 (29), pp. 11982-11987. | Show Abstract | Read more

Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveals extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics.
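
The registration step, aligning sequential staining rounds via fiducial points, can be illustrated by fitting a least-squares affine transform to matched point pairs. The sketch assumes the correspondences are already known (the paper derives them from the nuclear counterstain), and the simulated stage shift is arbitrary.

```python
# Illustrative sketch of registering sequential imaging rounds by fitting a
# least-squares affine transform to matched fiducial points (correspondences assumed).
import numpy as np

def fit_affine(src, dst):
    # src, dst: (N, 2) matched points; solve dst ~ A @ [x, y, 1]
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                            # 2x3 affine matrix

src = np.array([[10, 10], [200, 15], [30, 180], [220, 210]], float)
dst = src + np.array([3.0, -2.0])         # simulated stage shift between rounds
print(np.round(fit_affine(src, dst), 3))
```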

Santamaria-Pang A, Huangy Y, Rittscher J. 2013. Cell segmentation and classification via unsupervised shape ranking Proceedings - International Symposium on Biomedical Imaging, pp. 406-409. | Show Abstract | Read more

As histology patterns vary depending on different tissue types, it is typically necessary to adapt and optimize segmentation algorithms to these tissue type-specific applications. Here we present an unsupervised method that utilizes cell shape cues to achieve this task-specific optimization by introducing a shape ranking function. The proposed algorithm is part of our Layers™ toolkit for image and data analysis for multiplexed immunohistopathology images. To the best of our knowledge, this is the first time that this type of methodology is proposed for segmentation and ranking in cell tissue samples. Our new cell ranking scheme takes into account both shape and scale information and provides information about the quality of the segmentation. First, we introduce a cell-shape descriptor that can effectively discriminate cell-type morphology. Second, we formulate hierarchical segmentation as a dynamic optimization problem, where cells are subdivided if they improve a segmentation quality criterion. Third, we propose a numerically efficient algorithm to solve this dynamic optimization problem. Our approach is generic, since we do not assume any particular cell morphology, and it can be applied to different segmentation problems. We show results in segmenting and ranking thousands of cells from multiplexing images and we compare our method with well-established segmentation techniques, obtaining very encouraging results. © 2013 IEEE.
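
A toy version of ranking segmented cells by shape is shown below using scikit-image region properties; the descriptor (solidity, eccentricity, area) and the scoring weights are illustrative stand-ins for the paper's shape-ranking function.

```python
# Hedged sketch: score segmented regions with simple shape descriptors and rank them.
# The descriptor and weights are illustrative, not the paper's learned ranking.
import numpy as np
from skimage.measure import label, regionprops

def rank_regions(binary_mask):
    labels = label(binary_mask)
    scored = []
    for region in regionprops(labels):
        # Prefer compact, roughly round regions over elongated fragments.
        score = region.solidity - 0.5 * region.eccentricity
        scored.append((score, region.label, region.area))
    return sorted(scored, reverse=True)

mask = np.zeros((64, 64), dtype=bool)
mask[10:20, 10:20] = True                 # compact blob, ranked higher
mask[40:44, 5:60] = True                  # elongated blob, ranked lower
print(rank_regions(mask))
```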

Bilgin CC, Rittscher J, Filkins R, Can A. 2012. Digitally adjusting chromogenic dye proportions in brightfield microscopy images. J Microsc, 245 (3), pp. 319-330. | Show Abstract | Read more

We present an algorithm to adjust the contrast of individual dyes from colour (red-green-blue) images of dye mixtures. Our technique is based on first decomposing the colour image into individual dye components, then adjusting each of the dye components and finally mixing the individual dyes to generate colour images. Specifically, in this paper we digitally adjust the staining proportions of hematoxylin and eosin (H&E) chromogenic dyes in tissue images. We formulate the physical dye absorption process as a non-negative mixing equation, and solve for the individual components using non-negative matrix factorisation (NMF). Our NMF formulation includes camera dark current in addition to the mixing proportions and the individual H and E components. The novelty of our approach is to adjust the dye proportions while preserving the colour of nonlinear dye interactions, such as pigments and red blood cells. Although we present results only for H&E images, our technique can easily be extended to other staining techniques.
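
A simplified sketch of the dye-adjustment pipeline follows: convert RGB to optical density, unmix with scikit-learn's NMF into two components, rescale one component, and remix. The dark-current term from the paper is omitted, the mapping of components to hematoxylin and eosin is left unresolved, and the scale factors are arbitrary.

```python
# Simplified sketch of digital dye adjustment via NMF on optical density.
# Which NMF component corresponds to H or E is not determined here; dark current
# from the paper's formulation is omitted.
import numpy as np
from sklearn.decomposition import NMF

def adjust_dyes(rgb_image, scale_a=1.0, scale_b=0.6):
    pixels = rgb_image.reshape(-1, 3).astype(float)
    od = -np.log((pixels + 1.0) / 256.0)               # Beer-Lambert optical density
    nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    concentrations = nmf.fit_transform(od)             # per-pixel dye amounts
    stains = nmf.components_                           # dye absorption spectra
    concentrations *= np.array([scale_a, scale_b])     # adjust each dye's proportion
    od_new = concentrations @ stains
    out = np.clip(256.0 * np.exp(-od_new) - 1.0, 0, 255)
    return out.reshape(rgb_image.shape).astype(np.uint8)

demo = np.random.randint(60, 220, size=(32, 32, 3), dtype=np.uint8)
print(adjust_dyes(demo).shape)
```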

Margolis D, Santamaria-Pang A, Rittscher J. 2012. Tissue segmentation and classification using graph-based unsupervised clustering Proceedings - International Symposium on Biomedical Imaging, pp. 162-165. | Show Abstract | Read more

Automated segmentation and quantification of cellular and subcellular components in multiplexed images has allowed a combination of both spatial and protein expression information to become available for analysis. However, performing analyses across multiple patients and tissue types continues to be a challenge, as does the greater challenge of tissue classification itself. We propose a model of tissues as interconnected networks of epithelial cells whose connectivity is determined by their size, specific expression levels, and proximity to other cells. These Biomarker Enhanced Tissue Networks (BETN) reflect both the individual nature of the cells and the complex cell-to-cell relationships within the tissue. A simple analysis of such tissue networks successfully distinguished epithelial cells from stromal cells across multiple patients and tissue types. Further experiments show that significant information about the structure and nature of tissues can also be extracted through analysis of the networks, which will hopefully move towards the eventual goal of true tissue classification. © 2012 IEEE.
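
In the spirit of (but far simpler than) the BETN idea, the sketch below builds a cell graph from positions and a single marker value, then labels each cell from a neighbourhood-smoothed marker. The connection radius and threshold are assumed values, and the data are synthetic.

```python
# Loose sketch of a cell graph: cells become nodes, edges link nearby cells,
# and a neighbourhood-smoothed marker value is thresholded. Radius, threshold
# and data are illustrative assumptions.
import numpy as np
import networkx as nx

def build_cell_graph(positions, marker, radius=25.0):
    g = nx.Graph()
    for i, (pos, value) in enumerate(zip(positions, marker)):
        g.add_node(i, pos=pos, marker=float(value))
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                g.add_edge(i, j)
    return g

def classify_epithelial(g, threshold=0.5):
    labels = {}
    for node in g.nodes:
        neighbourhood = [node] + list(g.neighbors(node))
        mean_marker = np.mean([g.nodes[n]["marker"] for n in neighbourhood])
        labels[node] = "epithelial" if mean_marker > threshold else "stromal"
    return labels

pos = np.random.rand(50, 2) * 200
marker = np.random.rand(50)
print(classify_epithelial(build_cell_graph(pos, marker)))
```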

Padfield D, Rittscher J, Roysam B. 2011. Coupled minimum-cost flow cell tracking for high-throughput quantitative analysis. Med Image Anal, 15 (4), pp. 650-668. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations in a high-throughput, high-content environment. These applications depend on accurate cell tracking of individual cells that display various behaviors including mitosis, merging, rapid movement, and entering and leaving the field of view. Many approaches to cell tracking have been developed in the past, but most are quite complex, require extensive post-processing, and are parameter intensive. To overcome such issues, we present a general, consistent, and extensible tracking approach that explicitly models cell behaviors in a graph-theoretic framework. We introduce a way of extending the standard minimum-cost flow algorithm to account for mitosis and merging events through a coupling operation on particular edges. We then show how the resulting graph can be efficiently solved using algorithms such as linear programming to choose the edges of the graph that observe the constraints while leading to the lowest overall cost. This tracking algorithm relies on accurate denoising and segmentation steps for which we use a wavelet-based approach that is able to accurately segment cells even in images with very low contrast-to-noise. In addition, the framework is able to measure and correct for microscope defocusing and stage shift. We applied the algorithms on nearly 6000 images of 400,000 cells representing 32,000 tracks taken from five separate datasets, each composed of multiple wells. Our algorithm was able to segment and track cells and detect different cell behaviors with an accuracy of over 99%. This overall framework enables accurate quantitative analysis of cell events and provides a valuable tool for high-throughput biological studies.

Cited: 122 (Scopus)
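
The core linking step can be posed as a small minimum-cost flow problem, sketched below with networkx for a single pair of frames. Unlike the paper's coupled formulation, no mitosis, merging, or appearance/disappearance edges are modelled, and costs are simply integer-rounded Euclidean distances.

```python
# Compact sketch of frame-to-frame cell linking as a minimum-cost flow problem.
# Much simpler than the coupled formulation in the paper: no mitosis or merging.
import numpy as np
import networkx as nx

def link_frames(cells_t, cells_t1):
    g = nx.DiGraph()
    n = len(cells_t)
    g.add_node("src", demand=-n)
    g.add_node("snk", demand=n)
    for i, p in enumerate(cells_t):
        g.add_edge("src", ("t", i), capacity=1, weight=0)
        for j, q in enumerate(cells_t1):
            cost = int(round(np.linalg.norm(np.asarray(p) - np.asarray(q)) * 10))
            g.add_edge(("t", i), ("t1", j), capacity=1, weight=cost)
    for j in range(len(cells_t1)):
        g.add_edge(("t1", j), "snk", capacity=1, weight=0)
    flow = nx.min_cost_flow(g)
    return [(i, j) for i in range(n) for j in range(len(cells_t1))
            if flow[("t", i)].get(("t1", j), 0) > 0]

print(link_frames([(0, 0), (10, 10)], [(1, 1), (11, 9)]))
```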

Doretto G, Sebastian T, Tu P, Rittscher J. 2011. Appearance-based person reidentification in camera networks: problem overview and current approaches Journal of Ambient Intelligence and Humanized Computing, 2 (2), pp. 127-151. | Show Abstract | Read more

Recent advances in visual tracking methods allow following a given object or individual in the presence of significant clutter or partial occlusions in a single or a set of overlapping camera views. The question of when person detections in different views or at different time instants can be linked to the same individual is of fundamental importance to video analysis in large-scale networks of cameras. This is the person reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effectively address the challenges associated with changes in illumination, pose, and clothing appearance variation are discussed. More specifically, the development of a set of models that capture the overall appearance of an individual and can effectively be used for information retrieval are reviewed. Some of them provide a holistic description of a person, and some others require an intermediate step where specific body parts need to be identified. Some are designed to extract appearance features over time, and some others can operate reliably also on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular it describes very fast procedures for computing co-occurrence matrices by leveraging a generalization of the integral representation of images. The algorithms are deployed and tested in a camera network comprising three cameras with non-overlapping fields of view, where a multi-camera multi-target tracker links the tracks in different cameras by reidentifying the same people appearing in different views. © 2011 Springer-Verlag.
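
The integral-image trick mentioned in the review is easy to demonstrate: precompute cumulative sums once, after which the sum over any rectangular window takes four lookups. The generalization to co-occurrence matrices described in the paper is not reproduced here.

```python
# Small sketch of the integral-image trick: one pass of cumulative sums makes any
# rectangular box sum an O(1) query. Co-occurrence extension not shown.
import numpy as np

def integral_image(img):
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def box_sum(ii, top, left, bottom, right):
    # Inclusive rectangle [top..bottom] x [left..right]
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(25).reshape(5, 5)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3), img[1:4, 1:4].sum())   # both print 108
```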

Padfield D, Rittscher J, Roysam B. 2011. Quantitative biological studies enabled by robust cell tracking Proceedings - International Symposium on Biomedical Imaging, pp. 1929-1934. | Show Abstract | Read more

A growing number of screening applications require the automated monitoring of cell populations enabled by cell segmentation and tracking algorithms in a high-throughput, high-content environment. Building upon the tracks generated by such algorithms, we derive biologically relevant features and demonstrate a range of biological studies made possible by such quantitative measures. In the first, we introduce a combination of quantitative features that characterize cell apoptosis and arrest. In the second, we automatically measure the effect of motility-promoting serums. In the third, we show that proper dosage levels can be automatically determined for studying protein translocations. These results provide large-scale quantitative validation of biological experiments and demonstrate that our framework provides a valuable tool for high-throughput biological studies. © 2011 IEEE.

Singh S, Janoos F, Pécot T, Caserta E, Huang K, Rittscher J, Leone G, Machiraju R. 2011. Non-parametric population analysis of cellular phenotypes. Med Image Comput Comput Assist Interv, 14 (Pt 2), pp. 343-351. | Show Abstract

Methods to quantify cellular-level phenotypic differences between genetic groups are a key tool in genomics research. In disease processes such as cancer, phenotypic changes at the cellular level frequently manifest in the modification of cell population profiles. These changes are hard to detect due to the ambiguity in identifying distinct cell phenotypes within a population. We present a methodology which enables the detection of such changes by generating a phenotypic signature of cell populations in a data-derived feature space. Further, this signature is used to estimate a model for the redistribution of phenotypes that was induced by the genetic change. Results are presented on an experiment involving deletion of a tumor-suppressor gene dominant in breast cancer, where the methodology is used to detect changes in nuclear morphology between control and knockout groups.
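
A much-simplified stand-in for comparing cell-population profiles is shown below: the distribution of a single nuclear-morphology feature is compared between control and knockout groups with a non-parametric two-sample test. The data are synthetic and the feature is hypothetical.

```python
# Toy comparison of a nuclear-morphology feature between two populations with a
# non-parametric two-sample Kolmogorov-Smirnov test. Synthetic data only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
control_area = rng.normal(loc=100.0, scale=15.0, size=500)    # nuclear areas, control
knockout_area = rng.normal(loc=110.0, scale=20.0, size=500)   # nuclear areas, knockout

stat, p_value = ks_2samp(control_area, knockout_area)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```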

Singh S, Janoos F, Pécot T, Caserta E, Huang K, Rittscher J, Leone G, Machiraju R. 2011. Non-parametric population analysis of cellular phenotypes Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6892 LNCS (PART 2), pp. 343-351. | Show Abstract | Read more

Methods to quantify cellular-level phenotypic differences between genetic groups are a key tool in genomics research. In disease processes such as cancer, phenotypic changes at the cellular level frequently manifest in the modification of cell population profiles. These changes are hard to detect due to the ambiguity in identifying distinct cell phenotypes within a population. We present a methodology which enables the detection of such changes by generating a phenotypic signature of cell populations in a data-derived feature space. Further, this signature is used to estimate a model for the redistribution of phenotypes that was induced by the genetic change. Results are presented on an experiment involving deletion of a tumor-suppressor gene dominant in breast cancer, where the methodology is used to detect changes in nuclear morphology between control and knockout groups. © 2011 Springer-Verlag.

Lim SN, Doretto G, Rittscher J. 2011. Multi-class object layout with unsupervised image classification and object localization Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6938 LNCS (PART 1), pp. 573-585. | Show Abstract | Read more

Recognizing the presence of object classes in an image, or image classification, has become an increasingly important topic of interest. Equally important, however, is the capability to locate these object classes in the image. In this paper we consider an approach to these two related problems with the primary goal of minimizing the training requirements, so as to allow for ease of adding new object classes, as opposed to approaches that favor training a suite of object-specific classifiers. To this end, we provide the analysis of an exemplar-based approach that leverages unsupervised clustering for classification purposes, and sliding-window matching for localization. While such an exemplar-based approach is by itself brittle towards intra-class and viewpoint variations, we achieve robustness by introducing a novel Conditional Random Field model that facilitates a straightforward accept/reject decision on the localized object classes. Performance of our approach on the PASCAL Visual Object Challenge 2007 dataset demonstrates its efficacy. © 2011 Springer-Verlag.
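
The sliding-window matching component can be illustrated with scikit-image's normalised cross-correlation; the exemplar here is simply a patch cropped from the image, and the CRF-based accept/reject stage from the paper is not shown.

```python
# Tiny illustration of sliding-window matching: normalised cross-correlation of an
# exemplar patch against an image. The paper's CRF accept/reject stage is omitted.
import numpy as np
from skimage.feature import match_template

image = np.random.rand(100, 100)
exemplar = image[30:46, 40:56]                 # crop a patch to act as the exemplar
response = match_template(image, exemplar)
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)                                    # ~ (30, 40), the patch location
```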

Image-Based Decoding of the Role of Microglia

While neurons have been the primary focus of dementia research, the genetic link to the immune system has refocused attention on the resident immune cells of the brain, microglia. These cells originate during embryogenesis, when immune cells invade the embryonic CNS and populate the otherwise immune-privileged brain. Microglia have been implicated in, and shown to be responsible for, a host of essential surveillance functions in the central nervous system, including synapse maintenance, response to ...
