SCCNN

class SCCNN(num_input_channels=3, patch_output_shape=(13, 13), radius=12, min_distance=6, threshold_abs=0.2)[source]

Initialize SCCNN [1].

The following models have been included in tiatoolbox:

  1. sccnn-crchisto:

    This model is trained on the CRCHisto dataset.

  2. sccnn-conic:

    This model is trained on the CoNIC dataset. Centroids of the ground truth masks were used to train this model. The results are reported on the whole test set, including the preliminary and final sets.
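
A minimal sketch of loading one of these pretrained variants follows. The helper get_pretrained_model and its import path are assumptions based on the tiatoolbox package layout and may differ between versions.

  # Hedged sketch: resolve a pretrained model name to a ready-to-use network.
  # `get_pretrained_model` is assumed to live in `tiatoolbox.models.architecture`;
  # check your tiatoolbox version if the import fails.
  from tiatoolbox.models.architecture import get_pretrained_model

  model, ioconfig = get_pretrained_model(pretrained_model="sccnn-crchisto")
  print(type(model).__name__)  # expected: SCCNN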

The original model was implemented in MATLAB and has been reimplemented in PyTorch for Python compatibility. The original model uses HRGB input, where ‘H’ represents the hematoxylin channel; the reimplementation has been modified to take RGB images as input.

The tiatoolbox models should produce the following results on these datasets, using a radius of 8 pixels for a true detection:

SCCNN performance

Model name      Data set  Precision  Recall  F1Score
--------------  --------  ---------  ------  -------
sccnn-crchisto  CRCHisto  0.82       0.80    0.81
sccnn-conic     CoNIC     0.79       0.79    0.79

Parameters:
  • num_input_channels (int) – Number of channels in input. default=3.

  • patch_output_shape (tuple[int, int]) – Defines output height and output width. default=(13, 13).

  • radius (int) – Radius for nucleus detection, default = 12.

  • min_distance (int) – The minimal allowed distance separating peaks. To find the maximum number of peaks, use min_distance=1, default=6.

  • threshold_abs (float) – Minimum intensity of peaks, default=0.20.

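For reference, a minimal construction sketch using the documented defaults is given below; the module path tiatoolbox.models.architecture.sccnn is assumed from the package layout, and no pretrained weights are loaded here.

  from tiatoolbox.models.architecture.sccnn import SCCNN  # module path assumed

  # Construct SCCNN with the documented defaults; weights are randomly
  # initialised (no pretrained checkpoint is loaded).
  model = SCCNN(
      num_input_channels=3,
      patch_output_shape=(13, 13),
      radius=12,
      min_distance=6,
      threshold_abs=0.20,
  )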

References

[1] Sirinukunwattana, Korsuk, et al. “Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images.” IEEE Transactions on Medical Imaging 35.5 (2016): 1196-1206.


Methods

forward

Logic for using layers defined in __init__.

infer_batch

Run inference on an input batch.

postproc

Post-processing script for SCCNN.

preproc

Transform network input to the desired format.

spatially_constrained_layer2

Spatially constrained layer 2.

Attributes

training

forward(input_tensor)[source]

Logic for using layers defined in __init__.

This method defines how the layers are used in the forward operation.

Parameters:
  • input_tensor (torch.Tensor) – Input images as a tensor of shape NCHW.

Returns:

Output map for cell detection. Peak detection should be applied to this output to obtain the final cell positions.

Return type:

torch.Tensor
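
A hedged sketch of a forward pass on a dummy batch follows; the 31 × 31 input patch size is an assumption and should be checked against the pretrained model's IO configuration.

  import torch

  from tiatoolbox.models.architecture.sccnn import SCCNN  # module path assumed

  model = SCCNN()  # documented defaults
  model.eval()

  # Dummy NCHW batch: 4 RGB patches. The 31 x 31 input size is an assumption;
  # check the pretrained model's IO configuration for the exact value.
  dummy = torch.rand(4, 3, 31, 31)
  with torch.no_grad():
      out = model(dummy)  # equivalent to model.forward(dummy)
  print(out.shape)        # output map to which peak detection is applied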

static infer_batch(model, batch_data, *, on_gpu)[source]

Run inference on an input batch.

This contains logic for forward operation as well as batch I/O aggregation.

Parameters:
  • model (nn.Module) – PyTorch defined model.

  • batch_data (numpy.ndarray or torch.Tensor) – A batch of data generated by torch.utils.data.DataLoader.

  • on_gpu (bool) – Whether to run inference on a GPU.

Returns:

Output probability map.

Return type:

list of numpy.ndarray
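
A hedged sketch of calling infer_batch directly on a random batch is shown below; in practice batch_data comes from a DataLoader built by a tiatoolbox engine, and the NCHW float layout used here is an assumption based on the forward() contract above.

  import torch

  from tiatoolbox.models.architecture.sccnn import SCCNN  # module path assumed

  model = SCCNN()

  # Random stand-in for a DataLoader batch. NCHW layout and 31 x 31 patches
  # are assumptions; real batches come from the tiatoolbox data loaders.
  batch_data = torch.rand(8, 3, 31, 31)
  output = SCCNN.infer_batch(model, batch_data, on_gpu=False)
  print(type(output), len(output))  # list of numpy.ndarray probability maps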

postproc(prediction_map)[source]

Post-processing script for SCCNN.

Performs peak detection and extracts coordinates in x, y format.

Parameters:
  • prediction_map (numpy.ndarray) – Model output map as a NumPy array.

Returns:

Detected nucleus coordinates in x, y format.

Return type:

numpy.ndarray
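
Conceptually, the post-processing is local peak detection on the output map driven by the min_distance and threshold_abs settings. The standalone sketch below mirrors that idea with skimage.feature.peak_local_max; using that function here is an assumption about the underlying approach, and this is not the tiatoolbox code itself.

  import numpy as np
  from skimage.feature import peak_local_max

  # Synthetic 13 x 13 output map with a single "nucleus" peak.
  prediction_map = np.zeros((13, 13), dtype=np.float32)
  prediction_map[6, 4] = 0.9

  # Find local maxima (border exclusion disabled for this tiny example),
  # then flip (row, col) to (x, y) as postproc reports.
  peaks_rc = peak_local_max(
      prediction_map, min_distance=6, threshold_abs=0.20, exclude_border=False
  )
  peaks_xy = np.fliplr(peaks_rc)
  print(peaks_xy)  # [[4 6]]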

static preproc(image)[source]

Transform network input to the desired format.

This method is model and dataset specific, meaning it can be replaced by the user’s desired transform function before training/inference.

Parameters:

image (torch.Tensor) – Input images as a tensor of shape NCHW.

Returns:

The transformed input.

Return type:

output (torch.Tensor)
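
Because preproc is a static method, it can be called (or swapped for a custom transform) without instantiating the model; a hedged sketch follows, with the 31 × 31 patch size again an assumption.

  import torch

  from tiatoolbox.models.architecture.sccnn import SCCNN  # module path assumed

  # A single NCHW RGB patch with values in [0, 1]; 31 x 31 is an assumed size.
  raw = torch.rand(1, 3, 31, 31)
  ready = SCCNN.preproc(raw)  # transformed into the network's expected format
  print(ready.shape)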

spatially_constrained_layer2(sc1_0, sc1_1, sc1_2)[source]

Spatially constrained layer 2.

Estimates row, column and height for sc2 layer mapping.

Parameters:
  • sc1_0 (torch.Tensor) – Output of spatially_constrained_layer1 estimating the x position of the nucleus.

  • sc1_1 (torch.Tensor) – Output of spatially_constrained_layer1 estimating the y position of the nucleus.

  • sc1_2 (torch.Tensor) – Output of spatially_constrained_layer1 estimating the confidence in nucleus detection.

Returns:

Probability map using the estimates from spatially_constrained_layer1.

Return type:

torch.Tensor
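
The sketch below is illustrative only and is not the tiatoolbox implementation: it shows one common way to turn an estimated centre (row, column) and confidence (height) into a spatially constrained probability map. The kernel shape, tensor shapes, and cut-off are all assumptions.

  import torch

  def sc_map_sketch(row, col, height, out_shape=(13, 13), radius=12.0):
      """Illustrative map that peaks at (row, col) with amplitude `height`."""
      n_rows, n_cols = out_shape
      ys = torch.arange(n_rows, dtype=torch.float32).view(-1, 1)
      xs = torch.arange(n_cols, dtype=torch.float32).view(1, -1)
      dist2 = (ys - row) ** 2 + (xs - col) ** 2  # squared distance to centre
      prob = height / (1.0 + dist2 / 2.0)        # decay with distance (assumed kernel)
      prob[dist2 > radius ** 2] = 0.0            # suppress responses outside `radius`
      return prob

  prob_map = sc_map_sketch(row=6.0, col=4.0, height=0.9)
  print(prob_map.argmax() // 13, prob_map.argmax() % 13)  # peak at (6, 4)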