15 Representation Learning

15.1 Greedy Layer-Wise Unsupervised Pretraining

15.1.1 When and Why Does Unsupervised Pretraining Work?

15.2 Transfer Learning and Domain Adaptation

15.3 Semi-Supervised Disentangling of Causal Factors

15.4 Distributed Representation

15.5 Exponential Gains from Depth

15.6 Providing Clues to Discover Underlying Causes

What makes one representation better than another? One answer is that a good representation makes the underlying factors of variation that explain the data easier to discover. Many representation learning strategies provide clues about these factors; each clue takes the form of a generic prior that, when imposed on the learner, acts as a regularizer. A list of these generic regularization strategies:

  • Smoothness
  • Linearity
  • Multiple explanatory factors
  • Causal factors
  • Depth, or a hierarchical organization of explanatory factors
  • Shared factors across tasks
  • Manifolds
  • Natural clustering
  • Temporal and spatial coherence
  • Sparsity (illustrated in the sketch after this list)
  • Simplicity of factor dependencies
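
To make one of these priors concrete, here is a minimal sketch of the sparsity prior: an autoencoder whose loss adds an L1 penalty on the hidden code, driving most hidden units to zero for any given input. This is an illustrative NumPy sketch, not code from the chapter; the network shape, the encode/decode functions, and the sparsity_weight parameter are assumptions chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # A hypothetical tiny autoencoder: one hidden layer, random weights.
    n_in, n_hidden = 8, 4
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))

    def encode(x):
        # ReLU activations; many are exactly zero, which the L1 term rewards.
        return np.maximum(0.0, x @ W_enc)

    def decode(h):
        return h @ W_dec

    def sparse_autoencoder_loss(x, sparsity_weight=0.1):
        h = encode(x)
        x_hat = decode(h)
        reconstruction = np.mean((x - x_hat) ** 2)       # fit the data
        sparsity = sparsity_weight * np.mean(np.abs(h))  # prefer mostly-zero codes
        return reconstruction + sparsity

    x = rng.normal(size=(16, n_in))      # a toy batch of 16 examples
    print(sparse_autoencoder_loss(x))    # reconstruction error plus sparsity penalty

Increasing sparsity_weight trades reconstruction accuracy for sparser codes. The other priors on the list follow the same pattern: each can be expressed as an extra penalty term or an architectural constraint that biases learning toward representations with the desired property.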