Relevance of the Problem

Efforts are underway to build computer-assisted diagnostic tools for cancer diagnosis via image processing. Such tools require image capture, stain color normalization, segmentation of the cells of interest, and classification to count malignant versus healthy cells. This challenge targets robust segmentation of cells, which is the first stage in building such a tool for a plasma cell cancer, namely Multiple Myeloma (MM), a type of blood cancer. We will provide images after stain color normalization. The problem of plasma cell segmentation in MM is challenging for several reasons:

1. The amount of nucleus and cytoplasm varies from one cell to another.
2. Cells may appear in clusters or as isolated single cells.
3. Cells appearing in clusters may present three cases: (a) the cytoplasm of two cells touch each other, (b) the cytoplasm of one cell and the nucleus of another touch each other, or (c) the nuclei of two cells touch each other. Since the cytoplasm and nucleus are stained in different colors, segmenting such cells may pose challenges.
4. Multiple cells may touch each other within a cluster.
5. There may be unstained cells, say a red blood cell underneath the cell of interest, changing its color and shade.
6. The cytoplasm of a cell may be close in color to the image background, making it difficult to identify the boundary of the cell and segment it.

Hence, the problem is very challenging and interesting. The segmentation algorithm designed by the participants should segment cells from clusters as well.

Data usage agreement

Participants may not share the data, use it for any commercial purpose, or publish any paper using the data released through this challenge, except in the challenge proceedings. However, the challenge data may be released publicly within 6 months of the conclusion of the challenge and may thereafter be used for academic and research purposes. Participants are free to use publicly available data, including pre-trained networks, for training purposes.

Code availability

Results from participating teams will be evaluated on the leaderboard by comparing them with the ground truth held by the organizers, using the Instance mean Average Precision (ImAP) metric. The code used to compute this metric will be made available on the challenge page. Participating teams will be required to share their code with the organizers and to upload it to a public platform, such as GitHub, once their papers are accepted in the proceedings.
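To make the evaluation concrete, here is a minimal sketch of one common instance-level mAP formulation used in cell segmentation challenges: predictions are greedily matched to ground-truth instances by mask IoU, a score of TP / (TP + FP + FN) is computed at each IoU threshold from 0.5 to 0.95 in steps of 0.05, and the scores are averaged. This is an illustrative assumption about how ImAP may be defined, not the official metric; the organizers' released code is authoritative.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def score_at_threshold(pred_masks, gt_masks, thresh):
    """TP / (TP + FP + FN) at one IoU threshold, with greedy matching:
    each prediction is matched to at most one unmatched ground truth."""
    matched = set()
    tp = 0
    for pred in pred_masks:
        for i, gt in enumerate(gt_masks):
            if i not in matched and iou(pred, gt) >= thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_masks) - tp   # unmatched predictions
    fn = len(gt_masks) - tp     # missed ground-truth instances
    denom = tp + fp + fn
    return tp / denom if denom > 0 else 0.0

def instance_map(pred_masks, gt_masks,
                 thresholds=np.arange(0.5, 1.0, 0.05)):
    """Average the per-threshold scores over IoU 0.5:0.95 (step 0.05)."""
    return float(np.mean([score_at_threshold(pred_masks, gt_masks, t)
                          for t in thresholds]))
```

For example, a prediction list identical to the ground truth yields a score of 1.0, while an empty prediction list yields 0.0.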

Publication Policy

1. For this challenge, we will submit a "Challenge Report" to a reputed journal in medical imaging. Challenge reports are a common way to publish a challenge summary. A sample published challenge report:

a. https://arxiv.org/pdf/1808.04277.pdf

2. The top five teams on the final leaderboard will be co-authors of this paper. Their methods will be discussed at length, along with their pros and cons.

3. Please note that the data will be made public only after the challenge report is accepted for publication. Until then, neither the data nor any paper based on this data may be published anywhere.

4. To verify the results and ensure reproducibility, all teams must share a report, their code, and their trained models. The leaderboard statistics and scores will be included in the challenge report.