1. Duration
Monday, December 5th, 2022 - Saturday, December 10th, 2022
2. Learning Record
2.1 Learned the Capsule Network
Read the paper [1] and studied its code. I refactored the code so that it accepts inputs from different datasets, and I also turned eager execution off; a sketch of that setup follows below.
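Below is a minimal sketch of what I mean, assuming a TensorFlow/Keras code base. The builder build_toy_capsnet and its layers are placeholders of my own, not the paper's actual capsule architecture; the two points illustrated are the parameterized input shape and a @tf.function-decorated training step, which runs in graph mode rather than eagerly (tf.compat.v1.disable_eager_execution() is another route).

```python
import tensorflow as tf

def build_toy_capsnet(input_shape, n_classes):
    """Placeholder stand-in for the refactored CapsNet; only the
    parameterized input handling is the point here, not the capsule
    layers themselves."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 9, activation="relu")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

@tf.function  # traces the step into a graph instead of running it eagerly
def train_step(model, optimizer, loss_fn, images, labels):
    with tf.GradientTape() as tape:
        preds = model(images, training=True)
        loss = loss_fn(labels, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# The same builder serves MNIST (28, 28, 1) or CIFAR-10 (32, 32, 3).
model = build_toy_capsnet((28, 28, 1), n_classes=10)
```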
2.2 Learned the Efficient Capsule Network
Read the paper [2] and studied its code. The network achieved impressive results on the Fashion-MNIST dataset in only five epochs, but the loss quickly became NaN on other datasets (a possible mitigation is sketched below). On the plus side, my laptop's computing power is sufficient for the training.
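I have not pinned down the cause of the NaNs, but a common suspect in capsule networks is the squash non-linearity dividing by a near-zero vector norm. Here is a sketch of an epsilon-guarded squash (the eps value is my choice, not from the paper); gradient clipping, e.g. tf.keras.optimizers.Adam(clipnorm=1.0), is another standard safeguard.

```python
import tensorflow as tf

def squash(s, axis=-1, eps=1e-7):
    """Capsule squash: (||s||^2 / (1 + ||s||^2)) * (s / ||s||).

    The eps keeps the norm (and its gradient) finite when a capsule
    vector is near zero, which is one common source of NaN losses."""
    squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    safe_norm = tf.sqrt(squared_norm + eps)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / safe_norm
```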
2.3 Learned the CBAM
Read the paper [3] and studied its code. Compared with the Vision Transformer, this code is much simpler; a compact sketch of the block is below.
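For reference, a compact TensorFlow/Keras sketch of the CBAM block as described in [3]: channel attention (a shared MLP over global average- and max-pooled descriptors) followed by spatial attention (a 7×7 convolution over channel-wise average and max maps). The reduction ratio 16 and kernel size 7 follow the paper's defaults; the class itself is my own minimal rewrite, not the code I studied.

```python
import tensorflow as tf
from tensorflow.keras import layers

class CBAM(layers.Layer):
    """Convolutional Block Attention Module: a channel-attention mask
    followed by a spatial-attention mask, both applied multiplicatively."""

    def __init__(self, channels, reduction=16, kernel_size=7, **kwargs):
        super().__init__(**kwargs)
        # Shared MLP used for both the avg- and max-pooled channel descriptors.
        self.mlp = tf.keras.Sequential([
            layers.Dense(channels // reduction, activation="relu"),
            layers.Dense(channels),
        ])
        # Conv that turns the stacked [avg; max] maps into a spatial mask.
        self.spatial_conv = layers.Conv2D(
            1, kernel_size, padding="same", activation="sigmoid")

    def call(self, x):
        # Channel attention: global avg/max pool -> shared MLP -> sigmoid.
        avg = tf.reduce_mean(x, axis=[1, 2])           # (B, C)
        mx = tf.reduce_max(x, axis=[1, 2])             # (B, C)
        ca = tf.sigmoid(self.mlp(avg) + self.mlp(mx))  # (B, C)
        x = x * ca[:, None, None, :]
        # Spatial attention: channel-wise avg/max -> 7x7 conv -> sigmoid.
        avg_sp = tf.reduce_mean(x, axis=-1, keepdims=True)  # (B, H, W, 1)
        max_sp = tf.reduce_max(x, axis=-1, keepdims=True)   # (B, H, W, 1)
        sa = self.spatial_conv(tf.concat([avg_sp, max_sp], axis=-1))
        return x * sa

# Usage: refine a feature map after a conv block, e.g.
# feat = CBAM(channels=64)(feature_map)
```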
3. Feeling
3.1 Anxious
I had some new ideas about micro-expression spotting, but it seems I don't have much time left.
4. References
[1] S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic Routing Between Capsules,” arXiv, 2017, doi: 10.48550/ARXIV.1710.09829.
[2] V. Mazzia, F. Salvetti, and M. Chiaberge, “Efficient-CapsNet: capsule network with self-attention routing,” Scientific Reports, vol. 11, no. 1, p. 14634, Jul. 2021, doi: 10.1038/s41598-021-93977-0.
[3] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: Convolutional Block Attention Module,” in Computer Vision – ECCV 2018, Cham, 2018, pp. 3–19.