HoVerNet¶
- class tiatoolbox.models.architecture.hovernet.HoVerNet(num_input_channels=3, num_types=None, mode='original', nuc_type_dict=None)[source]¶
Initialise HoVerNet [1].
The tiatoolbox models should produce the following results:
HoVerNet segmentation performance on the CoNSeP dataset [1]¶

Model name                 Data set   DICE   AJI    DQ     SQ     PQ
hovernet-original-consep   CoNSeP     0.85   0.57   0.70   0.78   0.55
HoVerNet segmentation performance on the Kumar dataset [2]¶

Model name                 Data set   DICE   AJI    DQ     SQ     PQ
hovernet-original-kumar    Kumar      0.83   0.62   0.77   0.77   0.60
- Parameters:
num_input_channels (int) – Number of channels in input.
num_types (int) – Number of nuclei types within the predictions. Once defined, a branch dedicated for typing is created. By default, no typing (num_types=None) is used.
mode (str) – Whether to use the architecture defined in the original paper (original) or the one used in the PanNuke paper (fast).
nuc_type_dict (dict | None)
References
[1] Graham, Simon, et al. “HoVerNet: Simultaneous segmentation and classification of nuclei in multi-tissue histology images.” Medical Image Analysis 58 (2019): 101563.
[2] Kumar, Neeraj, et al. “A dataset and a technique for generalized nuclear segmentation for computational pathology.” IEEE Transactions on Medical Imaging 36.7 (2017): 1550-1560.
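A minimal construction sketch; the num_types=6 and mode="fast" values simply mirror the example at the end of this page:

>>> from tiatoolbox.models.architecture.hovernet import HoVerNet
>>> # Original architecture, segmentation only (no typing branch).
>>> seg_model = HoVerNet(num_input_channels=3, mode="original")
>>> # "fast" variant (PanNuke paper) with a dedicated typing branch for 6 nuclei types.
>>> typed_model = HoVerNet(num_input_channels=3, num_types=6, mode="fast")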
Initialize HoVerNet.

Methods

forward              Logic for using layers defined in init.
get_instance_info    To collect instance information and store it within a dictionary.
infer_batch          Run inference on an input batch.
postproc             Post-processing script for image tiles.
Attributes
training
- forward(input_tensor)[source]¶
Logic for using layers defined in init.
This method defines how layers are used in forward operation.
- Parameters:
input_tensor (torch.Tensor) – Input images; the tensor is in NCHW shape.
self (HoVerNet)
- Returns:
A dictionary containing the inference output. The expected format is {decoder_name: prediction}.
- Return type:
dict
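A sketch of the expected call pattern; the 256x256 patch size mirrors the fast-mode example at the end of this page, the random tensor is a stand-in for a real image batch, and the np/hv/tp head names are assumed from the infer_batch description below:

>>> import torch
>>> from tiatoolbox.models.architecture.hovernet import HoVerNet
>>> model = HoVerNet(num_types=6, mode="fast").eval()
>>> dummy_batch = torch.rand(1, 3, 256, 256)  # NCHW batch of one patch
>>> with torch.no_grad():
...     outputs = model(dummy_batch)  # forward(); {decoder_name: prediction}
>>> sorted(outputs)  # one entry per decoder head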
- static get_instance_info(pred_inst, pred_type=None)[source]¶
To collect instance information and store it within a dictionary.
- Parameters:
pred_inst (numpy.ndarray) – An image of shape (height, width) which contains the probabilities of a pixel being a nucleus.
pred_type (numpy.ndarray) – An image of shape (height, width, 1) which contains the probabilities of a pixel being a certain type of nucleus.
- Returns:
A dictionary mapping each instance ID within pred_inst to its instance information. It has the following form:

{
    0: {  # Instance ID
        "box": [x_min, y_min, x_max, y_max],
        "centroid": [x, y],
        "contour": [[x, y], ...],
        "type": integer,
        "prob": float,
    },
    ...
}

where the instance ID is an integer corresponding to the instance at the same pixel value within pred_inst.
- Return type:
dict
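A brief usage sketch, assuming pred_inst is a labelled instance map whose pixel values are instance IDs (as implied by the return description above); the toy array below is purely illustrative, and in practice pred_inst (and pred_type) would come from the post-processed model output:

>>> import numpy as np
>>> from tiatoolbox.models.architecture.hovernet import HoVerNet
>>> pred_inst = np.zeros((12, 12), dtype=np.int32)
>>> pred_inst[1:5, 1:5] = 1     # toy instance 1
>>> pred_inst[6:11, 6:11] = 2   # toy instance 2
>>> info = HoVerNet.get_instance_info(pred_inst)
>>> for inst_id, inst in info.items():
...     print(inst_id, inst["box"], inst["centroid"])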
- static infer_batch(model, batch_data, *, device)[source]¶
Run inference on an input batch.
This contains logic for forward operation as well as batch i/o aggregation.
- Parameters:
model (nn.Module) – PyTorch defined model.
batch_data (ndarray) – A batch of data generated by torch.utils.data.DataLoader.
device (str) – Transfers model to the specified device. Default is “cpu”.
- Returns:
Output from each head. Each head is expected to contain N predictions for N input patches. There are two cases: one with 2 heads (Nuclei Pixels np and HoVer hv) and one with 3 heads (np, hv, and Nuclei Types tp).
- Return type:
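A sketch of running a batch through this static helper, mirroring the batch construction in the Examples at the end of this page; the three-way unpacking assumes the heads come back in the [np, hv, tp] order described under postproc, and random pixels stand in for a real H&E patch:

>>> import numpy as np
>>> import torch
>>> from tiatoolbox.models.architecture.hovernet import HoVerNet
>>> model = HoVerNet(num_types=6, mode="fast")
>>> image_patch = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in patch
>>> batch = torch.from_numpy(image_patch)[None]  # add the batch dimension
>>> np_map, hv_map, tp_map = HoVerNet.infer_batch(model, batch, device="cpu")
>>> # Without a typing branch (num_types=None) only the np and hv heads are returned.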
- static postproc(raw_maps)[source]¶
Post-processing script for image tiles.
- Parameters:
raw_maps (list(numpy.ndarray)) – A list of prediction outputs from each head, assumed to be in the order [np, hv, tp] (matching the output of infer_batch).
- Returns:
Instance map (numpy.ndarray) – Pixel-wise nuclear instance segmentation prediction.
Instance dictionary (dict) – A dictionary mapping each instance ID within inst_map to its instance information. It has the following form:

{
    0: {  # Instance ID
        "box": [x_min, y_min, x_max, y_max],
        "centroid": [x, y],
        "contour": [[x, y], ...],
        "type": 1,
        "prob": 0.95,
    },
    ...
}

where the instance ID is an integer corresponding to the instance at the same pixel location within the returned instance map.
- Return type:
tuple
Examples
>>> from tiatoolbox.models.architecture.hovernet import HoVerNet
>>> import torch
>>> import numpy as np
>>> batch = torch.from_numpy(image_patch)[None]
>>> # image_patch is a 256x256x3 numpy array
>>> weights_path = "A/weights.pth"
>>> pretrained = torch.load(weights_path)
>>> model = HoVerNet(num_types=6, mode="fast")
>>> model.load_state_dict(pretrained)
>>> output = model.infer_batch(model, batch, device="cuda")
>>> output = [v[0] for v in output]
>>> output = model.postproc(output)
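Based on the Returns description of postproc, the post-processed output pairs an instance map with an instance dictionary; a small sketch of reading it back, continuing from the example above and assuming the two values come back together as a tuple:

>>> inst_map, inst_dict = output  # instance map and instance dictionary from postproc
>>> num_nuclei = len(inst_dict)   # number of detected nuclei
>>> for inst_id, inst in inst_dict.items():
...     x_min, y_min, x_max, y_max = inst["box"]
...     cx, cy = inst["centroid"]
...     nuc_type, prob = inst["type"], inst["prob"]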