HoVerNetPlus

class HoVerNetPlus(num_input_channels=3, num_types=None, num_layers=None, nuc_type_dict=None, layer_type_dict=None)[source]

Initialise HoVerNet+ [1].

HoVerNet+ takes an RGB input image and provides the option to simultaneously segment and classify the nuclei present, as well as to semantically segment different regions or layers in the image. Note that the HoVerNet+ architecture assumes an image resolution of 0.5 mpp, in contrast to HoVerNet at 0.25 mpp.
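
Because HoVerNet+ expects input at 0.5 mpp, patches should be extracted at that resolution. A minimal sketch using tiatoolbox's WSIReader, assuming a hypothetical slide file sample_oed_slide.svs:

>>> from tiatoolbox.wsicore.wsireader import WSIReader
>>> # Hypothetical slide path; HoVerNet+ assumes 0.5 mpp patches,
>>> # unlike HoVerNet, which operates at 0.25 mpp.
>>> reader = WSIReader.open("sample_oed_slide.svs")
>>> patch = reader.read_rect(
...     location=(1000, 1000),  # (x, y) top-left in the baseline (level 0) frame
...     size=(256, 256),        # output size at the requested resolution
...     resolution=0.5,
...     units="mpp",
... )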

The tiatoolbox model should produce the following results on the datasets it was trained on.

HoVerNet+ Performance for Nuclear Instance Segmentation

Model name         Data set   DICE   AJI    DQ     SQ     PQ
hovernetplus-oed   OED        0.84   0.69   0.86   0.80   0.69

HoVerNet+ Mean Performance for Semantic Segmentation

Model name         Data set   F1     Precision   Recall   Accuracy
hovernetplus-oed   OED        0.82   0.82        0.82     0.84

Parameters:
  • num_input_channels (int) – The number of input channels, default = 3 for RGB.

  • num_types (int) – The number of types of nuclei present in the images.

  • num_layers (int) – The number of layers/different region types present.

  • nuc_type_dict (dict | None) – Optional mapping describing the nuclear types (see the sketch after this list).

  • layer_type_dict (dict | None) – Optional mapping describing the layer types (see the sketch after this list).
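
The exact contents of nuc_type_dict and layer_type_dict are not documented here. The sketch below constructs the model with them, assuming (purely for illustration) that each dictionary maps integer class values to readable names; the names and values shown are hypothetical and are not those of the pretrained hovernetplus-oed weights:

>>> from tiatoolbox.models.architecture.hovernetplus import HoVerNetPlus
>>> # Hypothetical label mappings, for illustration only.
>>> nuc_types = {0: "background", 1: "other", 2: "epithelium"}
>>> layer_types = {0: "background", 1: "other", 2: "basal", 3: "epithelium", 4: "keratin"}
>>> model = HoVerNetPlus(
...     num_types=len(nuc_types),
...     num_layers=len(layer_types),
...     nuc_type_dict=nuc_types,
...     layer_type_dict=layer_types,
... )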

References

[1] Shephard, Adam J., et al. “Simultaneous Nuclear Instance and Layer Segmentation in Oral Epithelial Dysplasia.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

Methods

infer_batch

Run inference on an input batch.

postproc

Post-processing script for image tiles.

Attributes

training

static infer_batch(model, batch_data, *, on_gpu)[source]

Run inference on an input batch.

This contains logic for the forward operation as well as batch input/output aggregation.

Parameters:
  • model (nn.Module) – PyTorch defined model.

  • batch_data (ndarray) – A batch of data generated by torch.utils.data.DataLoader.

  • on_gpu (bool) – Whether to run inference on a GPU.

Return type:

tuple
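
A brief sketch of calling infer_batch (model and batch as in the Examples section at the end of this page), assuming the returned tuple holds one batched prediction map per head, in the [np, hv, tp, ls] order consumed by postproc:

>>> # Assumed head ordering: [np, hv, tp, ls], matching postproc below.
>>> np_map, hv_map, tp_map, ls_map = HoVerNetPlus.infer_batch(model, batch, on_gpu=False)
>>> raw_maps = [np_map[0], hv_map[0], tp_map[0], ls_map[0]]  # first image in the batch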

static postproc(raw_maps)[source]

Post-processing script for image tiles.

Parameters:

raw_maps (list(ndarray)) – A list of prediction outputs from each head, assumed to be in the order [np, hv, tp, ls] (matching the output of infer_batch).

Returns:

  • inst_map (ndarray):

    Pixel-wise nuclear instance segmentation prediction.

  • inst_dict (dict):

A dictionary mapping each instance within inst_map to its instance information. It has the following form:

    {
        0: {  # Instance ID
            "box": [
                x_min,
                y_min,
                x_max,
                y_max,
            ],
            "centroid": [x, y],
            "contour": [
                [x, y],
                ...
            ],
            "type": integer,
            "prob": float,
        },
        ...
    }
    

    where the instance ID is an integer corresponding to the instance at the same pixel value within inst_map.

  • layer_map (ndarray):

    Pixel-wise layer segmentation prediction.

  • layer_dict (dict):

A dictionary mapping each segmented layer within layer_map to its information. It has the following form:

    {
        1: {  # Instance ID
            "contour": [
                [x, y],
                ...
            ],
            "type": integer,
        },
        ...
    }
    

Return type:

tuple
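
A short sketch of consuming the post-processed output, following the return structure documented above (raw_maps as produced by infer_batch for a single image):

>>> inst_map, inst_dict, layer_map, layer_dict = HoVerNetPlus.postproc(raw_maps)
>>> # Each inst_dict entry describes one segmented nucleus.
>>> for inst_id, info in inst_dict.items():
...     x_min, y_min, x_max, y_max = info["box"]
...     print(inst_id, info["type"], info["prob"])
>>> # Each layer_dict entry describes one segmented layer/region.
>>> for layer_id, info in layer_dict.items():
...     print(layer_id, info["type"], len(info["contour"]))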

Examples

>>> from tiatoolbox.models.architecture.hovernetplus import HoVerNetPlus
>>> import torch
>>> import numpy as np
>>> # image_patch is a 256x256x3 numpy array (HoVerNet+ expects 0.5 mpp input)
>>> batch = torch.from_numpy(image_patch)[None]
>>> weights_path = "A/weights.pth"
>>> pretrained = torch.load(weights_path)
>>> model = HoVerNetPlus(num_types=3, num_layers=5)
>>> model.load_state_dict(pretrained)
>>> output = model.infer_batch(model, batch, on_gpu=False)
>>> output = [v[0] for v in output]
>>> output = model.postproc(output)
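
As a follow-up, the post-processed output can be unpacked into the four objects described in the postproc Returns section (a sketch, assuming that ordering):

>>> inst_map, inst_dict, layer_map, layer_dict = output
>>> print(len(inst_dict))        # number of segmented nuclei
>>> print(np.unique(layer_map))  # layer labels present in the prediction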