A list or description of the different tools available in BrainBlocks. Not all of them are implemented or even possible, but many of them have their own modules or can be reconstructed by composing multiple modules.

This list was intended as a full description of BrainBlocks' capabilities, but it ended up being very wordy and raising more questions than it answered for the average layperson.

Data Encoders

  • Convert input data to a sparse binary representation while preserving semantic and temporal meaning

Types

  • scalar encoder
  • persistence encoder
  • cyclical encoder
  • change encoder
  • time encoder
  • beat encoder (external clock or natural tempo input)

Features

  • encoder type
    • specify input keys (may include time)
    • if there is no data or bad data, specify a behavior
      • usually a no-op
      • catch an error or exception, or check an input flag
  • emit the result and flag that an update is available
    • it is possible that no update is available, in which case downstream execution should not occur
    • persist the same output in case other encoders update
  • scalar encoder
    • take the input and encode it directly
  • persistence encoder
    • take a time input and run a timer while there is no change
    • always updates even if no input is received
  • scalar change encoder
    • specify difference threshold
    • only emit updates if data changes beyond threshold
    • reset baseline on update
  • scalar time encoder
    • fixed range timer
    • roll over option
    • clamp option
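
The scalar encoder bullets above can be sketched in a few lines. This is an illustrative sketch, not the BrainBlocks API; the function name and parameters are assumptions:

```python
def scalar_encode(value, min_val, max_val, num_bits=64, num_active=8):
    """Encode a scalar into a sparse binary vector (illustrative sketch).

    A window of num_active consecutive bits is set, positioned by where
    the value falls in [min_val, max_val]; nearby values share active
    bits, which preserves semantic similarity.
    """
    value = max(min_val, min(max_val, value))  # clamp option
    frac = (value - min_val) / (max_val - min_val)
    start = int(round(frac * (num_bits - num_active)))
    bits = [0] * num_bits
    for i in range(start, start + num_active):
        bits[i] = 1
    return bits
```

Replacing the clamp with a modulo over the bit range would give the roll-over option, and a cyclical encoder behaves similarly by wrapping the active window around the end of the bit array.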

Spatial Pooler

  • static
  • learning
  • boosting
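
A minimal sketch of the three bullets above (static vs. learning operation, plus boosting), assuming a simple overlap-and-k-winners scheme; the class and its parameters are illustrative, not the BrainBlocks implementation:

```python
import random

class SpatialPooler:
    """Illustrative spatial pooler sketch (not the BrainBlocks API).

    Each column has a random proximal receptive field with permanence
    values; the active columns are the k with the highest (boosted)
    overlap. With learn=True, permanences of winning columns are nudged
    toward the current input (Hebbian); learn=False gives static mode.
    """
    def __init__(self, num_inputs, num_columns=32, k=4, field=16, seed=0):
        rng = random.Random(seed)
        self.k = k
        self.fields = [rng.sample(range(num_inputs), field)
                       for _ in range(num_columns)]
        self.perms = [[rng.uniform(0.3, 0.7) for _ in range(field)]
                      for _ in range(num_columns)]
        self.duty = [0.0] * num_columns  # active duty cycles, for boosting

    def compute(self, input_bits, learn=True):
        overlaps = []
        for c, field in enumerate(self.fields):
            ov = sum(1 for j, i in enumerate(field)
                     if input_bits[i] and self.perms[c][j] >= 0.5)
            # boosting: starved columns get a multiplicative advantage
            boost = 1.0 + 10.0 * max(0.0, 0.02 - self.duty[c])
            overlaps.append(ov * boost)
        winners = sorted(range(len(overlaps)),
                         key=lambda c: overlaps[c], reverse=True)[:self.k]
        if learn:
            for c in range(len(self.fields)):
                self.duty[c] = 0.99 * self.duty[c] + (0.01 if c in winners else 0.0)
            for c in winners:
                for j, i in enumerate(self.fields[c]):
                    delta = 0.05 if input_bits[i] else -0.02
                    self.perms[c][j] = min(1.0, max(0.0, self.perms[c][j] + delta))
        return winners
```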

Sequence Learner

  • anomaly score
  • static
  • learning
  • greedy winner selection
  • event-driven execution (not time-based)
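
To make the anomaly score concrete, here is a heavily simplified first-order sketch (the real sequence learner works at the cell level with distal dendrites; the class and its names are assumptions, not the BrainBlocks implementation):

```python
class SequenceLearner:
    """First-order sketch of sequence anomaly scoring (illustrative).

    Remembers which column sets have followed which; the anomaly score
    is the fraction of currently active columns that were not predicted
    from the previous step. Execution is event-driven: compute() runs
    only when a new encoded input arrives, not on a fixed clock.
    """
    def __init__(self):
        self.transitions = {}  # previous pattern -> set of predicted columns
        self.prev = None

    def compute(self, active, learn=True):
        active = frozenset(active)
        predicted = self.transitions.get(self.prev, set())
        score = (len(active - predicted) / len(active)) if active else 0.0
        if learn and self.prev is not None:
            self.transitions.setdefault(self.prev, set()).update(active)
        self.prev = active
        return score
```

On a repeating sequence the score drops to 0.0 once the transitions have been seen; a novel transition scores close to 1.0.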

Similarity Determination from Distal Connections

  • Extract similarity matrix
  • Inhibit non-relevant features and noise columns
  • Output layer voting and inhibition
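
One way to extract a similarity matrix is from co-activation statistics. A sketch, assuming per-column activity histories are available (BrainBlocks would derive this from learned distal connections rather than explicit logs):

```python
def similarity_matrix(histories):
    """Co-activation similarity between columns (illustrative sketch).

    histories[c] is a list of 0/1 activations per timestep; similarity
    is the count of timesteps where both columns were active together,
    normalized by the smaller column's activity count.
    """
    n = len(histories)
    sim = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            both = sum(1 for x, y in zip(histories[a], histories[b]) if x and y)
            denom = min(sum(histories[a]), sum(histories[b])) or 1
            sim[a][b] = both / denom
    return sim
```

Columns whose rows sum to near zero correlate with nothing and are candidates for inhibition as noise or non-relevant features.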

Hierarchy Construction

  • inter-column relationships from higher-level processing
  • Manual connections for proportional relevance
  • How to eliminate non-relevant and noise features?
    • feedback?
      • As in ART (adaptive resonance theory), inhibit non-relevant columns with feedback
      • Learn feedback connections through Hebbian learning on co-activations of inputs and active states
    • distal inter-column biasing?

Connectivity and Biasing Strategies

  • apical feedback (1st choice)
  • basal/distal modulation/context (2nd choice)
    • binary predictive
    • scalar predictive
  • proximal feedforward
    • required for any activation
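
The roles of the three connection types can be summarized as a truth table (an assumption based on HTM-style semantics, not the BrainBlocks implementation):

```python
def cell_state(proximal, distal, apical_inhibit=False):
    """Illustrative truth table for the connection roles (assumption).

    Proximal feedforward input is required for any activation;
    distal/basal input alone only depolarizes a cell into a predictive
    state; apical feedback can veto (inhibit) a column entirely.
    """
    if apical_inhibit:
        return "inhibited"
    if proximal and distal:
        return "active-predicted"
    if proximal:
        return "active"
    if distal:
        return "predictive"
    return "inactive"
```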

Set Sample Classifier

  • intra-column distal connections
  • inter-column distal connections for voting
  • feedback to input layer
  • converges and requires reset

Attention Selector / Feature Selection

  • Feedback from set sample classifier
  • Associate relevant columns and inhibit non-relevant columns
  • Functions as static attention selector for multiple input columns
  • Requires reset, but task-specific static case is fine
  • Learning is achieved from Hebbian association between active output cells and active input cells
    • proximal dendrite connections are associated across all columns
    • disconnected from irrelevant columns
  • Full column inhibition for irrelevant features
      • whole-column apical feedback enables/disables input
  • Strategies:
    • select a fixed number of proximal input columns (keep input magnitude stable)
      • change column connectivity based on relevant associations
      • Given 16 input columns, select 4 columns as input.
      • Relevance is determined by similarity and correlation between input columns
      • Distal inter-column learning creates similarity matrix based on real-time correlation
      • Mini-column to mini-column dendritic learning is used to enhance the vote of the winning input columns
        • the K columns with the highest cumulative depolarization are selected as winner columns; all others are inhibited
      • Converges to static attention and may require reset or hard sensory input change
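
The winner-selection strategy above can be sketched as follows, assuming the distal similarity matrix has already been learned (the function name and shapes are illustrative, not the BrainBlocks API):

```python
def select_input_columns(sim, active, k=4):
    """Select winner columns by cumulative depolarization (sketch).

    Each column's vote is its summed similarity to the currently active
    columns; the K highest-voted columns are kept and all others are
    inhibited, keeping the input magnitude stable (e.g. 4 of 16).
    """
    votes = [sum(sim[c][a] for a in active) for c in range(len(sim))]
    winners = sorted(range(len(sim)), key=lambda c: votes[c], reverse=True)[:k]
    return sorted(winners)
```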

Interpreter module

  • linear classifier to infer a human-understandable representation
    • infer current state
    • predict next state from prediction states
    • independent parameter inference
    • linear regression (single value)
      • usable info
    • discretized bin classification (multiple values)
      • for understanding performance
  • state machine estimation
    • build human-readable FSM from SDR patterns
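
A sketch of the discretized-bin readout (an illustrative linear classifier over SDR bits, not the BrainBlocks interpreter module):

```python
class BinClassifier:
    """Discretized-bin readout over SDR bits (illustrative sketch).

    Each bin keeps one weight per bit; training bumps the weights of
    active bits for the observed bin, and inference returns the bin
    whose active-bit weight sum is largest.
    """
    def __init__(self, num_bits, num_bins):
        self.w = [[0.0] * num_bits for _ in range(num_bins)]

    def learn(self, bits, bin_index):
        for i, b in enumerate(bits):
            if b:
                self.w[bin_index][i] += 1.0

    def infer(self, bits):
        scores = [sum(w[i] for i, b in enumerate(bits) if b) for w in self.w]
        return max(range(len(scores)), key=scores.__getitem__)
```

Run on the current active state this infers the current value; run on the predictive state it reads out the predicted next value. A single-value linear regression readout would follow the same pattern with one weight vector instead of one per bin.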