Deep Learning Limits

_https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning_

According to skeptics like Marcus, deep learning is **greedy, brittle, opaque, and shallow**.

The systems are greedy because they demand huge sets of training data.

  • BrainBlocks doesn’t require much data
  • synthetic data can substitute for large collected datasets
  • recognition-focused rather than classification-focused
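As a toy illustration of the synthetic-data idea (the function name, class centers, and distributions below are illustrative, not from the source), labeled training points can be generated programmatically instead of collected:

```python
import random

def make_synthetic_samples(n, seed=0):
    """Generate labeled 2-D points from two Gaussian blobs,
    a toy stand-in for programmatically generated training data."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        label = rng.choice([0, 1])        # pick a class at random
        cx = 0.0 if label == 0 else 3.0   # class-dependent blob center
        samples.append(((rng.gauss(cx, 1.0), rng.gauss(cx, 1.0)), label))
    return samples

data = make_synthetic_samples(1000)
print(len(data))                     # 1000
print({label for _, label in data})  # {0, 1}
```

Because the generator controls the labels, arbitrarily large labeled sets come essentially for free, which is the usual motivation for synthetic data when real examples are scarce.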

They are brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks.

  • Open Set Recognition
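One common baseline for open set recognition is confidence thresholding: predict a known class only when the classifier is sufficiently sure, and otherwise reject the input as unknown. The sketch below (a hypothetical illustration with made-up logits and labels, not BrainBlocks code) applies that rule on top of a softmax:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def open_set_predict(logits, labels, threshold=0.9):
    """Return the predicted label, or 'unknown' when the top softmax
    probability falls below the threshold -- a minimal open-set rule."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "unknown"

labels = ["cat", "dog", "car"]
print(open_set_predict([8.0, 1.0, 0.5], labels))  # confident -> cat
print(open_set_predict([1.1, 1.0, 0.9], labels))  # ambiguous -> unknown
```

This addresses the transfer-test failure mode above only partially: instead of confidently misclassifying an out-of-distribution input, the system can at least flag it, though richer open-set methods go well beyond a single threshold.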

They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases.

  • Human Explainability

Finally, they are **shallow** because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.

  • Leverage Knowledge or Other Models

Gary Marcus

Marcus, Gary. “Deep learning: A critical appraisal.” arXiv preprint arXiv:1801.00631 (2018).

_https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1_

_Innateness, AlphaZero, and Artificial Intelligence_ https://arxiv.org/pdf/1801.05667

_Deep Learning: A Critical Appraisal_ https://arxiv.org/abs/1801.00631