© 2020, Springer Nature Switzerland AG. Accurate segmentation of cellular structures is critical for automating the analysis of microscopy data. Advances in deep learning have enabled substantial improvements in semantic image segmentation. In particular, U-Net, a model developed specifically for biomedical image data, performs multi-instance segmentation through pixel-based classification. However, U-Net-based approaches tend to merge touching cells in dense cell cultures, resulting in under-segmentation. To address this issue, we propose DeepSplit, a multi-task convolutional neural network architecture in which one encoding path splits into two decoding branches. DeepSplit first learns segmentation masks and then explicitly learns the more challenging cell-cell contact regions. We test our approach on a challenging dataset of cells that are highly variable in shape and intensity. DeepSplit achieves a 90% cell detection coefficient and a 90% Dice Similarity Coefficient (DSC), a significant improvement over the state-of-the-art U-Net, which scored 70% and 84%, respectively.
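
The abstract describes a shared encoding path that splits into two decoding branches, one for segmentation masks and one for cell-cell contact regions. The sketch below is a minimal, hedged illustration of that two-branch idea in PyTorch; the class names, channel sizes, and network depth are illustrative assumptions and are not taken from the authors' published code.

```python
# Minimal sketch of the two-branch idea from the abstract: a shared
# U-Net-style encoder whose features feed two decoders, one predicting
# cell segmentation masks and one predicting cell-cell contact regions.
# All names, channel sizes, and depths are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """Upsampling path with skip connections from the shared encoder."""
    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(c, c // 2, 2, stride=2) for c in channels]
        )
        self.blocks = nn.ModuleList(
            [conv_block(c, c // 2) for c in channels]
        )
        self.head = nn.Conv2d(channels[-1] // 2, 1, 1)  # per-pixel logit

    def forward(self, x, skips):
        # skips are ordered from shallow to deep; consume them deepest first
        for up, block, skip in zip(self.ups, self.blocks, reversed(skips)):
            x = up(x)
            x = block(torch.cat([x, skip], dim=1))
        return self.head(x)


class TwoBranchNet(nn.Module):
    """One encoding path that splits into two decoding branches."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.mask_decoder = Decoder()      # branch 1: segmentation masks
        self.contact_decoder = Decoder()   # branch 2: cell-cell contacts

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        s3 = self.enc3(self.pool(s2))
        b = self.bottleneck(self.pool(s3))
        skips = [s1, s2, s3]
        return self.mask_decoder(b, skips), self.contact_decoder(b, skips)


if __name__ == "__main__":
    net = TwoBranchNet()
    mask_logits, contact_logits = net(torch.randn(1, 1, 128, 128))
    print(mask_logits.shape, contact_logits.shape)  # both (1, 1, 128, 128)
```

In such a multi-task setup the two per-pixel outputs would typically be trained jointly (for example, with a weighted sum of two binary losses), and the predicted contact map can then be subtracted from, or used to split, the predicted mask to separate touching cells; the exact training and post-processing details here are assumptions rather than the paper's method.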

Original publication

DOI

10.1007/978-3-030-52791-4_13

Type

Conference paper

Publication Date

01/01/2020

Volume

1248 CCIS

Pages

155 - 167