The following contains my reading list and a set of recommended papers organized into tracks. Most (all?) of them are about machine learning and neural networks.
Reading List
I am aware of the following papers and want to read them ... when I have time:
- Doubly Convolutional Neural Networks
- Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks
- Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning
- Convolutional Neural Fabrics
- Evolving Neural Networks through Augmenting Topologies and A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks
- Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations: Making networks smaller (file size)
- All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation
- Large-Scale Evolution of Image Classifiers
Best of
The following is a list of papers, organized by the year I read (or wrote) them, not by when they were published.
2016
- Lipton, Z.C., 2016. The Mythos of Model Interpretability. IEEE Spectrum. (summary)
- Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O., 2016. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530. (summary)
- Deep Learning without Poor Local Minima and Matrix Completion has No Spurious Local Minimum
Tracks
Weight Initialization
- X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in AISTATS, vol. 9, 2010, pp. 249–256. (summary)
- A. M. Saxe, J. L. McClelland, and S. Ganguli, “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks,” arXiv preprint arXiv:1312.6120, Dec. 2013. (summary)
- K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision, Feb. 2015, pp. 1026–1034. (summary)
- D. Mishkin and J. Matas, “All you need is a good init,” arXiv preprint arXiv:1511.06422, Nov. 2015. (summary)
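The initialization schemes above largely reduce to simple scaling rules for the initial weight distribution. Below is a minimal NumPy sketch of the Glorot (uniform) and He (normal) rules for a plain fully connected layer; it ignores details such as gain factors and convolutional fan computation, so treat it as an illustration rather than a faithful reimplementation of the papers.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    # Glorot & Bengio (2010): sample from U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)) to keep activation and
    # gradient variance roughly constant across layers.
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out, rng=None):
    # He et al. (2015): for ReLU layers, sample from N(0, 2 / fan_in)
    # to compensate for the variance halved by the rectifier.
    rng = rng or np.random.default_rng()
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: initialize a 784 -> 256 layer both ways and compare spreads.
W_glorot = glorot_uniform(784, 256)
W_he = he_normal(784, 256)
print(W_glorot.std(), W_he.std())
```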
Ideas
- Establishing Human-Level scores for Benchmarks
- User Interfaces: What are good examples?
- Hierarchical Classification
- Pooling: Can it be replaced by convolutions?
- Ensembles: Train an ensemble, use it to get better (soft) labels than simple one-hot encoding, then train a new single network on those labels. (Possibly the same as Distilling the Knowledge in a Neural Network; see the sketch after this list.)
- OCR and semantic segmentation
- Negative images
- How much does feeding grayscale images to networks trained on color images decrease performance?
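The ensemble idea above is essentially what Hinton et al. describe as knowledge distillation: average the ensemble members' predicted probabilities and use them as soft targets for a single student model. The NumPy sketch below is a toy illustration with linear softmax classifiers on synthetic data; all names and the data are made up for the example, not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_linear(X, T, steps=500, lr=0.5):
    # Softmax regression trained by gradient descent; T may be one-hot
    # or soft targets (rows summing to 1).
    W = np.zeros((X.shape[1], T.shape[1]))
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - T) / len(X)
    return W

# Toy 2-class data (illustrative only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
onehot = np.eye(2)[y]

# "Ensemble": several members trained on bootstrap resamples.
members = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    members.append(train_linear(X[idx], onehot[idx]))

# Soft labels = average of the members' predicted probabilities.
soft_labels = np.mean([softmax(X @ W) for W in members], axis=0)

# Single "student" network trained on the soft labels instead of one-hot.
student = train_linear(X, soft_labels)
acc = (softmax(X @ student).argmax(1) == y).mean()
print(f"student accuracy on training data: {acc:.2f}")
```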