
Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains

Breakthrough AI Model Architecture Unveiled: Continuous Thought Machines

By Netvora Tech News


Sakana, an artificial intelligence startup founded by former Google AI researchers, has introduced a new model architecture called Continuous Thought Machines (CTM). The design aims to let language models tackle a broader range of complex cognitive tasks, much as humans reason through unfamiliar problems. CTMs diverge from traditional Transformer models, which process inputs through fixed, parallel layers in a single pass. Instead, a CTM unfolds computation over a series of internal steps within each input/output unit, known as an artificial "neuron." Each neuron retains a short history of its previous activity and uses that memory to decide when to activate again.
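The neuron-level memory described above can be illustrated with a small toy. The sketch below is not Sakana's actual CTM implementation — the history length, weights, and update rule are all assumptions for illustration — but it shows the core idea: each unit's next activation depends on its own recent activity, not just the current input.

```python
import numpy as np

class HistoryNeuron:
    """Toy CTM-style neuron that conditions on its own recent activations."""

    def __init__(self, history_len=4, rng=None):
        rng = rng or np.random.default_rng(0)
        self.history = np.zeros(history_len)        # recent activations
        self.w_in = rng.normal()                    # weight on the input
        self.w_hist = rng.normal(size=history_len)  # weights on the history

    def step(self, x):
        # The next activation depends on the input AND the neuron's own
        # recent activity, unlike a stateless feed-forward unit.
        pre = self.w_in * x + self.w_hist @ self.history
        out = np.tanh(pre)
        # Slide the history window and record the new activation.
        self.history = np.roll(self.history, -1)
        self.history[-1] = out
        return out

neuron = HistoryNeuron()
outputs = [neuron.step(x) for x in [1.0, 0.5, -0.2]]
```

Feeding the same input twice would generally produce two different activations, because the neuron's internal history has changed in between — that statefulness is the contrast with a standard feed-forward layer.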

This neuron-level memory is what makes the architecture adaptive: rather than producing an answer in a single fixed pass, the model can keep refining its internal state, loosely mimicking the way the brain works through a difficult problem step by step.

How CTMs differ from Transformer-based LLMs

A key distinction between CTMs and traditional Transformer-based large language models (LLMs) is the use of a variable, input-dependent internal timeline. Rather than running a fixed number of layers, a CTM can adjust how many internal steps it takes — in effect tuning its internal clock to suit the specific task at hand.

  • Unlike the fixed, parallel processing of a standard Transformer, a CTM's adaptive architecture can devote more computation to harder inputs.
  • Each neuron's short memory of its own activity carries context across internal steps, informing when it activates again.
  • Variable internal timelines let a CTM tailor how long it "thinks" to the task at hand, making its processing more flexible.
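A variable internal timeline can be sketched as a loop that refines an internal state until a halting score crosses a threshold, in the spirit of adaptive computation time. Everything here — the state update, the halting score, and the thresholds — is an assumption for illustration, not Sakana's published mechanism.

```python
import numpy as np

def think(x, max_ticks=10, threshold=0.9, rng=None):
    """Run a variable number of internal 'ticks', halting once confident."""
    rng = rng or np.random.default_rng(1)
    state = np.zeros(8)
    w = rng.normal(scale=0.5, size=(8, 8))
    for tick in range(1, max_ticks + 1):
        state = np.tanh(w @ state + x)           # refine the internal state
        confidence = 1.0 - np.exp(-0.4 * tick)   # toy halting score
        if confidence >= threshold:
            break                                # stop early once confident
    return state, tick

# A looser halting threshold stands in for an "easy" problem,
# a stricter one for a "hard" problem needing more internal steps.
_, easy_ticks = think(np.ones(8), threshold=0.5)
_, hard_ticks = think(np.ones(8), threshold=0.99)
```

With these toy numbers the looser threshold halts after far fewer ticks than the stricter one, which caps out at `max_ticks` — the point being that compute spent per input is a variable the model controls, not a fixed property of the architecture.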
