Coding
Anthropic Redesigns Claude Code With Parallel Sessions and Routines
Claude Code gets a parallel sessions sidebar, integrated terminal, drag-and-drop layout, and cloud-run scheduled Routines for developer automation, available on Pro plans and above.
A New Way to Verify Complex Software Piece by Piece
Researchers have developed a method that breaks down software verification into manageable components, enabling reliable checks of large systems without overwhelming complexity.
AI Solves Hard Programming Problems by Thinking in Parallel
A new method combines reinforcement learning with multi-threaded reasoning to outperform top AI systems on competitive coding challenges, spending millions of tokens per problem while staying within practical compute budgets.
New AI Methods Tackle 3D Shape Matching Challenges
Researchers categorize advances in aligning deformed 3D shapes into spectral, combinatorial, and deformation-based approaches, highlighting progress in zero-shot and partial matching but noting limitations with real-world data.
AI Optimizer Outperforms Standard Methods in Key Tests
A new algorithm called Sven uses a clever mathematical trick to train neural networks faster and more accurately, especially for regression tasks, while keeping computational costs low.
New Algorithm Speeds Up Complex Counting Problems
A breakthrough in parallel computing allows scientists to estimate complex mathematical quantities faster than ever before, with applications in physics and data analysis.
Quantum Codes Get a New Efficiency Boost
Researchers have developed a method to find the most resource-efficient physical circuits for implementing logical operations in small quantum error-correcting codes, potentially reducing overhead for fault-tolerant quantum computing.
AI Routing Success Depends on Hidden System Choices
A new study reveals that how AI systems package their decisions matters more than which model you choose—and getting it wrong can break critical workflows.
AI Learns to Match Optimizers to Complex Problems
A new framework uses reinforcement learning to dynamically select the best algorithm for each part of a large-scale optimization problem, dramatically improving performance and efficiency in real-world applications like satellite design.
AI Learns to Think Like a Logical Machine
A new neural network design can simulate complex logical reasoning with exponential efficiency, bridging deep learning and formal automata theory for more interpretable AI systems.
AI Spots Hidden IT Problems in Company Documents
A new AI method can detect early warning signs of architectural debt in unstructured business documents, helping organizations avoid costly inefficiencies before they escalate.
AI Speeds Up Short Conversations on Powerful GPUs
A simple tweak to how AI models process short prompts can cut response times by over 20%, making chatbots faster without changing the underlying technology.
A New Language Makes AI Reasoning More Accessible
DriftScript simplifies programming for adaptive AI agents, replacing dense symbolic notation with readable code that connects to real-world systems through C, Python, and HTTP interfaces.
New Math Tool Solves Decades-Old AI Problem
A novel matrix inverse preserves units across transformations, enabling more reliable robotics, control systems, and machine learning without arbitrary assumptions.
Small AI Models Outperform Giants at Predicting Software Bugs
A new ensemble method using compact transformers can detect non-terminating programs more accurately than large language models, offering a practical solution for privacy-sensitive software analysis.
AI Learns to Rethink Its Answers
A new decoding strategy helps language models identify and correct their own mistakes by focusing on uncertain moments, boosting accuracy on complex reasoning tasks while using far less computational power.
AI Learns to Pick Better Data for Faster Training
A new method helps large language models select the most useful training samples on the fly, cutting data needs by 95% while improving performance on complex tasks.
Hybrid AI Outperforms Large Language Models in PDF Data Extraction
A new method combining deterministic rules with AI models extracts student data from academic documents with near-perfect accuracy, offering a faster and more reliable solution for institutions with limited resources.
AI's Identity Crisis: A $1,000 Failure Exposes a Critical Gap
A new study reveals that current AI models lack a coherent sense of self, making them vulnerable to manipulation—and attempts to fix this with AI coding assistants led to a costly, instructive failure.
AI Fixes Its Own Memory Problems Efficiently
A new method restores AI performance after expanding its memory, using 60 times less data than previous techniques while maintaining accuracy on both short and long tasks.
AI Data Agents Finally Handle Real-World Complexity
A new system called DeepEye orchestrates complex data analysis across multiple sources, generating videos and dashboards automatically while maintaining transparency and reliability.
AI Models Find Better Mix Without Retraining
A new method lets researchers optimize data combinations after training, cutting search time by up to 35 times and improving model performance across languages and domains.
Algebraic Identity Checking Reveals a New Computational Barrier
Researchers have discovered that verifying basic algebraic identities like distributivity falls into three distinct complexity classes, with the hardest cases tied to an unsolved problem in additive combinatorics: 4-term arithmetic progression detection.