Paper List

Contents

  • Reading List
  • Best of
    • 2016
  • Tracks
    • Weight Initialization
  • Ideas

The following includes my reading list and a list of recommended papers, organized into tracks. Most (all?) of them are about machine learning and neural networks.

Reading List

I am aware of the following papers and want to read them ... when I have time:

  1. Doubly Convolutional Neural Networks
  2. Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks
  3. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning
  4. Convolutional Neural Fabrics
  5. Evolving Neural Networks through Augmenting Topologies and A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks
  6. Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations: Making networks smaller (file size)
  7. All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation
  8. Large-Scale Evolution of Image Classifiers

Best of

The following is a list of papers, organized by the year in which I read (or wrote) them, not by the year in which they were published.

2016

  1. Lipton, Z.C., 2016. The Mythos of Model Interpretability. arXiv preprint arXiv:1606.03490. (summary)
  2. Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O., 2016. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530. (summary)
  3. Deep Learning without Poor Local Minima and Matrix Completion has No Spurious Local Minimum

Tracks

Weight Initialization

  1. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in AISTATS, vol. 9, 2010, pp. 249–256. (summary)
  2. A. M. Saxe, J. L. McClelland, and S. Ganguli, “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks,” arXiv preprint arXiv:1312.6120, Dec. 2013. (summary)
  3. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision, Feb. 2015, pp. 1026–1034. (summary)
  4. D. Mishkin and J. Matas, “All you need is a good init,” arXiv preprint arXiv:1511.06422, Nov. 2015. (summary)
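
For reference, a minimal sketch of two of the initialization schemes covered by these papers, Glorot/Xavier uniform (Glorot & Bengio) and He normal (He et al.), assuming plain NumPy and a fully connected layer; the layer sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)


def glorot_uniform(fan_in, fan_out):
    """Glorot/Xavier uniform init: W ~ U[-a, a] with a = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))


def he_normal(fan_in, fan_out):
    """He init for ReLU layers: W ~ N(0, 2 / fan_in)."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))


# Hypothetical example: a 256 -> 128 fully connected layer.
W = he_normal(256, 128)
print(W.std())  # close to sqrt(2 / 256) ≈ 0.088
```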

Ideas

  • Establishing Human-Level scores for Benchmarks
    • User Interfaces: What are good examples?
    • Hierarchical Classification
  • Pooling: Can it be replaced by convolutions?
  • Ensembles: Train an ensemble, use it to get better labels than simple one-hot encoding, and train a new single network on those labels (a minimal sketch follows after this list). Possibly the same idea as Distilling the Knowledge in a Neural Network.
  • OCR and semantic segmentation
  • Negative images
  • How much does feeding grayscale images to a network trained on color images decrease performance?
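
Below is a minimal sketch of the ensemble/distillation idea above, in the spirit of Hinton et al.'s "Distilling the Knowledge in a Neural Network": the teacher's softened softmax outputs replace one-hot labels as training targets for a single student network. Plain NumPy is assumed; the logits and the temperature value are made up for illustration.

```python
import numpy as np


def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher temperature gives softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


# Hypothetical teacher (ensemble) logits for 4 samples and 3 classes.
teacher_logits = np.array([[5.0, 1.0, 0.5],
                           [0.2, 4.0, 0.1],
                           [2.0, 2.1, 0.3],
                           [0.1, 0.2, 6.0]])

# Soft targets encode class similarities that one-hot labels throw away.
soft_targets = softmax(teacher_logits, temperature=4.0)


def distillation_loss(student_logits, soft_targets, temperature=4.0):
    """Cross-entropy between the soft teacher targets and the student's softened output."""
    student_probs = softmax(student_logits, temperature)
    return -np.mean(np.sum(soft_targets * np.log(student_probs + 1e-12), axis=1))

# A single student network would then be trained to minimize this loss,
# optionally combined with the usual cross-entropy against the hard labels.
```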
