The Illusion of Thinking
A close look at Apple's analysis of reasoning in LLMs through problem complexity reveals limitations in current benchmark design and model interpretability.
Quick takes on AI, technology, science, and whatever catches my attention. Written fast, published faster.
The status indicator reflects the current state of the work:

- Abandoned: Work that has been discontinued
- Notes: Initial collections of thoughts and references
- Draft: Early structured version with a central thesis
- In Progress: Well-developed work actively being refined
- Finished: Completed work with no planned major changes

This helps readers understand the maturity and completeness of the content.
The confidence tag expresses how well-supported the content is, or how likely its overall ideas are to be right. It uses a scale from "impossible" to "certain", based on the Kesselman List of Estimative Words:

1. "certain"
2. "highly likely"
3. "likely"
4. "possible"
5. "unlikely"
6. "highly unlikely"
7. "remote"
8. "impossible"

Even ideas that seem unlikely may be worth exploring if their potential impact is significant enough.
The importance rating distinguishes trivial topics from those that might change your life. Using a scale from 0-10, content is ranked by its potential impact on:

- the reader
- the intended audience
- the world at large

For example, topics about fundamental research or transformative technologies would rank 9-10, while personal reflections or minor experiments might rank 0-1.
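The scheme above can be sketched as post front matter. This is a hypothetical illustration, not the site's actual implementation: the field names (`status`, `confidence`, `importance`) and the sample values for this post are assumptions; only the allowed scales come from the legend above.

```python
# Hypothetical sketch of the metadata scheme described above.
# Field names and the example values are illustrative assumptions;
# the scales themselves are taken from the legend.

STATUSES = ["Abandoned", "Notes", "Draft", "In Progress", "Finished"]

# Kesselman List of Estimative Words, ordered most to least certain.
CONFIDENCES = [
    "certain", "highly likely", "likely", "possible",
    "unlikely", "highly unlikely", "remote", "impossible",
]


def validate(meta: dict) -> dict:
    """Check a post's metadata against the scheme, returning it unchanged."""
    if meta["status"] not in STATUSES:
        raise ValueError(f"unknown status: {meta['status']!r}")
    if meta["confidence"] not in CONFIDENCES:
        raise ValueError(f"unknown confidence: {meta['confidence']!r}")
    if not 0 <= meta["importance"] <= 10:
        raise ValueError("importance must be in 0-10")
    return meta


# Example front matter for a post (values are made up for illustration).
post = validate({
    "title": "The Illusion of Thinking",
    "status": "In Progress",
    "confidence": "likely",
    "importance": 6,
})
```

Keeping the scales as ordered lists means a reader (or a build script) can compare two posts by index, e.g. "likely" is more confident than "possible" because it appears earlier in `CONFIDENCES`.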
A timeline of major LLM releases, newest first:

- Claude 3 (Haiku, Sonnet, Opus): Anthropic launched the Claude 3 family, offering powerful capabilities across tasks.
- Gemini 1.5 Pro: Featured industry-leading long-context capabilities, enabling advanced reasoning over vast documents.
- Gemini 1.0: The first Gemini model from Google DeepMind, merging strengths from AlphaCode, Pathways, and large-scale training.
- GPT-4 Turbo: An optimized, cost-efficient variant of GPT-4 powering ChatGPT with custom GPTs, tools, and longer context.
- Claude 2: An improved Claude model with stronger reasoning, fewer hallucinations, and increased openness for public use.
- Claude 1: Anthropic's first Claude model, focused on safety through Constitutional AI, emphasizing harmlessness and transparency.
- GPT-4: A multimodal leap forward for OpenAI, capable of reasoning over images and text with more nuanced capabilities.
- GPT-3: The 175B-parameter model that revolutionized natural-language interfaces and powered the first wave of AI API tools.
- GPT-2: Demonstrated surprisingly coherent text generation, sparking debate over AI safety and open-sourcing.
- GPT-1: OpenAI's first generative pre-trained transformer model, laying the foundation for large language models.