Sunday, September 29, 2019

Common git usage

Clone a single branch
git clone user@git-server:project_name.git -b branch_name --single-branch /your/folder
Without --single-branch, git still fetches every branch and only checks out branch_name.




Monday, September 16, 2019

Ubuntu boot to initramfs


Problem:
Ubuntu cannot boot normally and drops to initramfs. The reason is that part of the filesystem is corrupted, so we need to repair it.
BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)

Solution: run fsck on the affected partition as shown below, and answer 'y' if the console asks to fix something:
(initramfs) fsck /dev/sda1
or
(initramfs) fsck /dev/sdaX
where X is the number of the disk partition to check.
If you don't want to manually press 'y' every time it asks for a fix, you can also run the command with the -y option.
(initramfs) fsck /dev/sdaX -y

Friday, August 30, 2019

Install opencv_contrib

Note: the versions of opencv and opencv_contrib must match, e.g., opencv-3.4.1 and opencv_contrib-3.4.1.

In the opencv-3.4.1 build directory:

$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_CUDA=OFF -D CMAKE_INSTALL_PREFIX=/home/share/opencv34 -D OPENCV_EXTRA_MODULES_PATH=<opencv_contrib-3.4.1>/modules -DBUILD_opencv_rgbd=OFF ..

$ make -j

Wednesday, June 26, 2019

Segmentation loss

From the original article:
https://lars76.github.io/neural-networks/object-detection/losses-for-segmentation/

IoU loss https://www.cs.umanitoba.ca/~ywang/papers/isvc16.pdf
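
The linked paper approximates the set intersection and union with sums of elementwise products, so the IoU stays differentiable. A minimal TensorFlow sketch of such a soft IoU loss (my own summary of the idea, not code from the paper):

import tensorflow as tf

def soft_iou_loss(y_true, y_pred):
  # Soft counts: intersection = sum(p * p_hat), union = sum(p + p_hat - p * p_hat).
  intersection = tf.reduce_sum(y_true * y_pred)
  union = tf.reduce_sum(y_true + y_pred - y_true * y_pred)
  return 1 - intersection / (union + tf.keras.backend.epsilon())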

Focal loss

Focal loss (FL) [2] tries to down-weight the contribution of easy examples so that the CNN focuses more on hard examples.
FL can be defined as follows:
$$FL(p, \hat{p}) = -\left(\alpha (1 - \hat{p})^{\gamma}\, p \log(\hat{p}) + (1 - \alpha)\, \hat{p}^{\gamma} (1 - p) \log(1 - \hat{p})\right)$$
When γ = 0, FL reduces to the weighted BCE.
This time we cannot use weighted_cross_entropy_with_logits to implement FL in Keras. Instead, we derive our own focal_loss_with_logits function.
Substituting p̂ = σ(x) = 1/(1 + e^{-x}), where x is the logit, gives
$$\begin{aligned}
FL(p, \hat{p}) &= \alpha (1-\hat{p})^{\gamma} p \log(1 + e^{-x}) - (1-\alpha)\, \hat{p}^{\gamma} (1-p) \log\!\left(\frac{e^{-x}}{1 + e^{-x}}\right) \\
&= \alpha (1-\hat{p})^{\gamma} p \log(1 + e^{-x}) - (1-\alpha)\, \hat{p}^{\gamma} (1-p) \left(-x - \log(1 + e^{-x})\right) \\
&= \alpha (1-\hat{p})^{\gamma} p \log(1 + e^{-x}) + (1-\alpha)\, \hat{p}^{\gamma} (1-p) \left(x + \log(1 + e^{-x})\right) \\
&= \log(1 + e^{-x}) \left(\alpha (1-\hat{p})^{\gamma} p + (1-\alpha)\, \hat{p}^{\gamma} (1-p)\right) + x (1-\alpha)\, \hat{p}^{\gamma} (1-p) \\
&= \log\!\left(e^{-x}(1 + e^{x})\right) \left(\alpha (1-\hat{p})^{\gamma} p + (1-\alpha)\, \hat{p}^{\gamma} (1-p)\right) + x (1-\alpha)\, \hat{p}^{\gamma} (1-p) \\
&= \left(\log(1 + e^{x}) - x\right) \left(\alpha (1-\hat{p})^{\gamma} p + (1-\alpha)\, \hat{p}^{\gamma} (1-p)\right) + x (1-\alpha)\, \hat{p}^{\gamma} (1-p) \\
&= \left(\log(1 + e^{-|x|}) + \max(-x, 0)\right) \left(\alpha (1-\hat{p})^{\gamma} p + (1-\alpha)\, \hat{p}^{\gamma} (1-p)\right) + x (1-\alpha)\, \hat{p}^{\gamma} (1-p)
\end{aligned}$$
The last line uses the numerically stable identity log(1 + e^{-x}) = log(1 + e^{-|x|}) + max(-x, 0).
And the implementation is then:
import tensorflow as tf

def focal_loss(alpha=0.25, gamma=2):
  # Numerically stable focal loss, following the derivation above.
  # Written against the TF 1.x API; in TF 2.x use tf.math.log / tf.math.log1p.
  def focal_loss_with_logits(logits, targets, alpha, gamma, y_pred):
    weight_a = alpha * (1 - y_pred) ** gamma * targets
    weight_b = (1 - alpha) * y_pred ** gamma * (1 - targets)

    # (log(1 + e^{-|x|}) + max(-x, 0)) * (weight_a + weight_b) + x * weight_b
    return (tf.log1p(tf.exp(-tf.abs(logits))) + tf.nn.relu(-logits)) * (weight_a + weight_b) + logits * weight_b

  def loss(y_true, y_pred):
    # Recover the logits from the sigmoid output of the model.
    y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
    logits = tf.log(y_pred / (1 - y_pred))

    loss = focal_loss_with_logits(logits=logits, targets=y_true, alpha=alpha, gamma=gamma, y_pred=y_pred)

    return tf.reduce_mean(loss)

  return loss
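
As a usage sketch (the toy model below is a placeholder of mine, not from the original post): the returned callable plugs directly into Keras, and the model must end in a sigmoid because focal_loss converts the probabilities back to logits internally.

import tensorflow as tf

# Hypothetical single-layer model; any network with a sigmoid output works the same way.
model = tf.keras.Sequential([
  tf.keras.layers.Conv2D(1, 1, activation='sigmoid', input_shape=(128, 128, 3))
])
model.compile(optimizer='adam', loss=focal_loss(alpha=0.25, gamma=2))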

Overlap measures

Dice Loss / F1 score

The Dice coefficient is similar to the Jaccard Index (Intersection over Union, IoU):
$$DC = \frac{2\,TP}{2\,TP + FP + FN} = \frac{2 |X \cap Y|}{|X| + |Y|} \qquad IoU = \frac{TP}{TP + FP + FN} = \frac{|X \cap Y|}{|X| + |Y| - |X \cap Y|}$$
where TP are the true positives, FP the false positives and FN the false negatives. We can see that DC ≥ IoU; for example, with TP = 2, FP = 1, FN = 1 we get DC = 4/6 ≈ 0.67 but IoU = 2/4 = 0.5.
The dice coefficient can also be defined as a loss function:
$$DL(p, \hat{p}) = 1 - \frac{2 \langle p, \hat{p} \rangle}{\|p\|_2^2 + \|\hat{p}\|_2^2}$$
where p ∈ {0,1}^n and 0 ≤ p̂ ≤ 1.
def dice_loss(y_true, y_pred):
  # Soft Dice: y_true is 0/1, y_pred is the predicted foreground probability.
  numerator = 2 * tf.reduce_sum(y_true * y_pred)
  # some implementations don't square y_pred
  denominator = tf.reduce_sum(y_true + tf.square(y_pred))

  # Return 1 - DC so that better overlap gives a smaller loss.
  return 1 - numerator / (denominator + tf.keras.backend.epsilon())
Since p is either 0 or 1, only the foreground pixels (p = 1) contribute their predicted probability p̂ to the numerator; background pixels (p = 0) contribute nothing.
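
A tiny numeric check with the dice_loss above (toy tensors of my own choosing, assuming eager execution, e.g. TF 2.x):

import tensorflow as tf

# Hypothetical 4-pixel example: two foreground pixels, two background pixels.
y_true = tf.constant([1., 1., 0., 0.])
y_pred = tf.constant([0.9, 0.6, 0.2, 0.1])

# numerator   = 2 * (0.9 + 0.6) = 3.0 (only the foreground predictions count)
# denominator = (1 + 1) + (0.81 + 0.36 + 0.04 + 0.01) = 3.22
print(dice_loss(y_true, y_pred))  # roughly 1 - 3.0/3.22 ≈ 0.07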

Tversky loss

Tversky loss (TL) is a generalization of Dice loss. TL adds a weight to FP and FN.
$$TL(p, \hat{p}) = 1 - \frac{\langle p, \hat{p} \rangle}{\langle p, \hat{p} \rangle + \beta \langle 1 - p, \hat{p} \rangle + (1 - \beta) \langle p, 1 - \hat{p} \rangle}$$
Let β = 1/2. Then
$$TL(p, \hat{p}) = 1 - \frac{2 \langle p, \hat{p} \rangle}{2 \langle p, \hat{p} \rangle + \langle 1 - p, \hat{p} \rangle + \langle p, 1 - \hat{p} \rangle} = 1 - \frac{2 \langle p, \hat{p} \rangle}{\langle \mathbf{1}, \hat{p} \rangle + \langle \mathbf{1}, p \rangle},$$
which is just the Dice loss. In the paper [4] the authors square the predicted probability in the denominator, whereas e.g. the paper [5] keeps the term as it is.
def tversky_loss(beta):
  def loss(y_true, y_pred):
    numerator = tf.reduce_sum(y_true * y_pred)
    # beta weights the false positives, (1 - beta) weights the false negatives.
    denominator = y_true * y_pred + beta * (1 - y_true) * y_pred + (1 - beta) * y_true * (1 - y_pred)

    # Return 1 - TI so that better overlap gives a smaller loss.
    return 1 - numerator / (tf.reduce_sum(denominator) + tf.keras.backend.epsilon())

  return loss
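
A small illustration of the β trade-off (toy values of my own choosing, using the tversky_loss above under eager execution). In this parameterization β multiplies the false-positive term (1 − p)·p̂ and 1 − β multiplies the false-negative term p·(1 − p̂), so β < 0.5 punishes missed foreground pixels harder:

import tensorflow as tf

# Hypothetical prediction with one badly missed foreground pixel.
y_true = tf.constant([1., 1., 0., 0.])
y_pred = tf.constant([0.9, 0.1, 0.1, 0.1])

print(tversky_loss(beta=0.3)(y_true, y_pred))  # ≈ 0.43, false negatives weighted heavily
print(tversky_loss(beta=0.7)(y_true, y_pred))  # ≈ 0.31, false negatives weighted lightly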

Lovász-Softmax

DL and TL simply relax the hard constraint p̂ ∈ {0,1}^n in order to have a function on the domain [0,1]^n. The paper [6] derives instead a surrogate loss function.
An implementation of Lovász-Softmax can be found on GitHub. Note that this loss requires the identity activation in the last layer, i.e. the network outputs raw scores instead of probabilities: a negative value means class A and a positive value means class B.
In Keras the loss function can be used as follows:
def lovasz_softmax(y_true, y_pred):
  # lovasz_hinge comes from the author's TensorFlow implementation (lovasz_losses_tf.py).
  return lovasz_hinge(labels=y_true, logits=y_pred)

model.compile(loss=lovasz_softmax, optimizer=optimizer, metrics=[pixel_iou])
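
A minimal model sketch (layer and input sizes are placeholders of mine, not from the post); the important detail is activation=None on the last layer, so the network outputs raw scores as lovasz_hinge expects:

import tensorflow as tf

# Hypothetical toy model with an identity (linear) output layer.
model = tf.keras.Sequential([
  tf.keras.layers.Conv2D(1, 1, activation=None, input_shape=(128, 128, 3))
])
model.compile(loss=lovasz_softmax, optimizer='adam')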

References

[1] S. Xie and Z. Tu. Holistically-Nested Edge Detection, 2015.
[2] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar. Focal Loss for Dense Object Detection, 2017.
[3] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015.
[4] F. Milletari, N. Navab, and S.-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation, 2016.
[5] S. S. M. Salehi, D. Erdogmus, and A. Gholipour. Tversky loss function for image segmentation using 3D fully convolutional deep networks, 2017.
[6] M. Berman, A. R. Triki, M. B. Blaschko. The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks, 2018.