where TP are the true positives, FP the false positives, and FN the false negatives. We can see that $DC \geq IoU$.
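To see why, write both measures in terms of the counts above:

$$DC = \frac{2\,TP}{2\,TP + FP + FN} \geq \frac{TP}{TP + FP + FN} = IoU,$$

since cross-multiplying the (positive) denominators reduces the inequality to $TP \cdot (FP + FN) \geq 0$, which always holds.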
The Dice coefficient can also be defined as a loss function:

$$DL(p, \hat{p}) = \frac{2 \langle p, \hat{p} \rangle}{\|p\|_2^2 + \|\hat{p}\|_2^2}$$
where $p \in \{0,1\}^n$ are the ground-truth labels and $\hat{p} \in [0,1]^n$ are the predicted probabilities.
Since each entry of $p$ is either 0 or 1, the inner product $\langle p, \hat{p} \rangle$ sums the predicted probabilities $\hat{p}_i$ over exactly the foreground pixels ($p_i = 1$); background pixels ($p_i = 0$) contribute nothing to the numerator.
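As a concrete illustration, here is a minimal soft Dice loss in TensorFlow/Keras. This is a sketch rather than code from the papers: the small constant `eps` is an assumption added to avoid division by zero, and the value is returned as $1 - DC$ so that it can be minimized.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-7):
    # y_true holds hard labels in {0, 1}, y_pred probabilities in [0, 1].
    y_true = tf.cast(y_true, tf.float32)
    numerator = 2.0 * tf.reduce_sum(y_true * y_pred)
    # Squared terms in the denominator, following the V-Net formulation [4].
    denominator = tf.reduce_sum(tf.square(y_true)) + tf.reduce_sum(tf.square(y_pred))
    return 1.0 - numerator / (denominator + eps)
```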
Tversky loss
Tversky loss (TL) is a generalization of Dice loss that adds weights $\alpha$ and $\beta$ to the FP and FN terms:

$$TL(p, \hat{p}) = \frac{\langle p, \hat{p} \rangle}{\langle p, \hat{p} \rangle + \alpha \langle 1 - p, \hat{p} \rangle + \beta \langle p, 1 - \hat{p} \rangle}$$

Here $\langle 1 - p, \hat{p} \rangle$ counts the (soft) false positives and $\langle p, 1 - \hat{p} \rangle$ the false negatives. Setting $\alpha = \beta = 0.5$ yields $\frac{2 \langle p, \hat{p} \rangle}{\|p\|_1 + \|\hat{p}\|_1}$, which is just Dice loss. In the paper [4], the authors square the predicted probability in the denominator ($\|\hat{p}\|_2^2$), but e.g. the paper [5] keeps the term as it is ($\|\hat{p}\|_1$); for hard labels the two agree, since $\|p\|_1 = \|p\|_2^2$ when $p \in \{0,1\}^n$.
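The same pattern gives a Tversky loss sketch (again with an assumed smoothing constant `eps`, returned as $1 - TL$):

```python
import tensorflow as tf

def tversky_loss(y_true, y_pred, alpha=0.5, beta=0.5, eps=1e-7):
    # alpha weights the false positives, beta the false negatives;
    # alpha = beta = 0.5 recovers the Dice loss.
    y_true = tf.cast(y_true, tf.float32)
    tp = tf.reduce_sum(y_true * y_pred)          # soft true positives
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)  # soft false positives
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))  # soft false negatives
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)
```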
Lovász-Softmax
DL and TL simply relax the hard constraint $\hat{p} \in \{0,1\}^n$ in order to obtain a function defined on the domain $[0,1]^n$. The paper [6] instead derives a surrogate loss function for the Jaccard index (IoU).
An implementation of Lovász-Softmax can be found on GitHub (the authors' repository, bermanmaxim/LovaszSoftmax). Note that this loss expects raw logits, i.e. the identity activation in the last layer: a negative value means class A and a positive value means class B.
In Keras, the loss can be wrapped to match the usual `(y_true, y_pred)` signature. The following is a minimal sketch, assuming the `lovasz_hinge` function from the reference TensorFlow implementation (`lovasz_losses_tf.py`) is importable and the model outputs single-channel logits:
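```python
import tensorflow as tf

# lovasz_hinge is taken from the reference implementation
# (lovasz_losses_tf.py in the authors' repository).
from lovasz_losses_tf import lovasz_hinge

def lovasz_loss(y_true, y_pred):
    # The last layer uses the identity activation, so y_pred contains
    # raw logits: negative values mean class A, positive values class B.
    labels = tf.cast(tf.squeeze(y_true, axis=-1), tf.int32)
    logits = tf.squeeze(y_pred, axis=-1)
    return lovasz_hinge(logits, labels, per_image=True)

# Usage: model.compile(optimizer='adam', loss=lovasz_loss)
```

The `per_image=True` flag computes the loss per image and averages the results, instead of computing it over the flattened batch.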
References
[1] S. Xie and Z. Tu. Holistically-Nested Edge Detection, 2015.
[2] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar. Focal Loss for Dense Object Detection, 2017.
[3] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015.
[4] F. Milletari, N. Navab, and S.-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation, 2016.
[5] S. S. M. Salehi, D. Erdogmus, and A. Gholipour. Tversky loss function for image segmentation using 3D fully convolutional deep networks, 2017.
[6] M. Berman, A. R. Triki, and M. B. Blaschko. The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks, 2018.