Self-Supervised Learning: A Survey

Self-Supervised Representation Learning. Broadly speaking, all generative models can be considered self-supervised, but with different goals: generative models focus on creating diverse and realistic images, while self-supervised representation learning cares about producing good features that are generally helpful for many tasks. Image-based tasks include distortion (Exemplar-CNN; Dosovitskiy et al., 2015) and rotation of an entire image (Gidaris et al.
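The rotation pretext task mentioned above turns unlabeled images into a free four-way classification problem; a minimal NumPy sketch of the label generation (my own illustration, not the authors' implementation):

```python
import numpy as np

def make_rotation_examples(image):
    """Generate the four rotated views of an image and their pseudo-labels.

    In the rotation pretext task (Gidaris et al.), label k means the image
    was rotated by k * 90 degrees; a network trained to predict k learns
    useful features without any human annotation.
    """
    views = [np.rot90(image, k) for k in range(4)]  # 0, 90, 180, 270 degrees
    labels = list(range(4))
    return views, labels

# Toy 2x2 "image": each rotation is easy to verify by hand.
img = np.array([[1, 2],
                [3, 4]])
views, labels = make_rotation_examples(img)
```

In practice the rotated views and labels would be fed to a CNN with a standard cross-entropy loss.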

Loss Functions in Neural Networks

Kullback-Leibler Divergence. Information theory quantifies information according to three intuitions: likely events should have low information content; less likely events should have higher information content; and independent events should have additive information. For example, finding out that a tossed coin has come up heads twice should convey twice as much information as finding out that a tossed coin has come…
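These intuitions are captured by self-information, I(x) = -log₂ p(x); a quick check in plain Python (my own illustration) confirms the additivity for independent coin tosses:

```python
import math

def self_information(p):
    """Self-information in bits: I(x) = -log2 p(x).

    Rare events (small p) carry more bits; a certain event (p = 1) carries 0.
    """
    return -math.log2(p)

one_head = self_information(0.5)         # one fair-coin head: 1 bit
two_heads = self_information(0.5 * 0.5)  # two independent heads: p = 0.25, 2 bits
```

Because independent probabilities multiply and the logarithm turns products into sums, two heads convey exactly twice the information of one.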

The Art of Alchemy (炼丹心法): Neural-Network Tuning Notes

What is the difference between an autoencoder and an encoder-decoder? Parameters: in what order should a neural network's hyperparameters be tuned? Epoch, iteration, and batch size in neural networks. Neural network…

PyTorch Tips and Tricks

Tensor: Torch's broadcasting mechanism; contiguous vs. non-contiguous tensors. Optimizer: if using CUDA, call model.cuda() before constructing the optimizer, as the official docs advise; see "Effect of calling model.cuda() after constructing an optimizer". Counting the number of parameters…
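Broadcasting, mentioned above, aligns trailing dimensions and virtually repeats size-1 axes; PyTorch follows the same rules as NumPy, so the mechanism can be illustrated without torch or a GPU (a sketch of my own, not from the post):

```python
import numpy as np

# A column vector (3, 1) plus a row vector (1, 4): each size-1 axis is
# stretched so both operands behave as (3, 4), with no data copied.
col = np.arange(3).reshape(3, 1)   # [[0], [1], [2]]
row = np.arange(4).reshape(1, 4)   # [[0, 1, 2, 3]]
grid = col + row                   # result has shape (3, 4)
```

The same expression with `torch.arange(...).reshape(...)` produces the identical (3, 4) result, since both libraries align shapes from the trailing dimension.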

Go Hugo!

On the eve of the 2020 Spring Festival, I finally finished migrating my blog from Hexo to Hugo. I ran into quite a few pitfalls along the way, and several friends have asked me how to start writing a personal blog, so…

Deep Learning Material

Materials: CMU 11-785 videos; 《神经网络与深度学习》 (Neural Networks and Deep Learning); Hsuan-Tien Lin's 《机器学习基石》 (Machine Learning Foundations); Hung-yi Lee's 《1天搞懂深度学习》 (Understand Deep Learning in One Day); Hung-yi Lee's Generative Adversarial Network (GA…

Gradient Descent: Principles and Intuition

Cost function. To quantify how well our neural network fits the data, we define a cost function: $$C(w,b) = \frac {1}{2n}\sum\limits_{x}||y(x)-a||^2$$ The goal of training is to find the weights and biases that minimize this cost…
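Minimizing a quadratic cost like the one above by gradient descent can be sketched in a few lines; this toy one-parameter model a = w·x is my own example, not the post's code:

```python
def cost(w, xs, ys):
    """Quadratic cost C(w) = 1/(2n) * sum_x (y(x) - w*x)^2 for the model a = w*x."""
    n = len(xs)
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / (2 * n)

def grad(w, xs, ys):
    """Analytic derivative dC/dw = -1/n * sum_x x * (y(x) - w*x)."""
    n = len(xs)
    return -sum(x * (y - w * x) for x, y in zip(xs, ys)) / n

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # data generated by y = 2x
w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w, xs, ys)  # gradient-descent update: w <- w - eta * dC/dw
```

After a few hundred updates w converges to the minimizer w = 2, where the cost is zero; the learning rate eta = 0.1 is small enough here for the iteration to contract toward the minimum.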