Sakana's AB-MCTS Unlocks AI Collective Intelligence at Inference Time
Sakana AI introduces AB-MCTS (Adaptive Branching Monte Carlo Tree Search), a cutting-edge algorithm that enables multiple frontier AI models to collaborate during inference. Rather than relying solely...
Tags: AB-MCTS, AI collaboration, ARC-AGI-2, collective intelligence, inference-time scaling, large language models, Monte Carlo Tree Search, TreeQuest
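The teaser only hints at how the search behaves, so here is a toy Python sketch of the general idea of adaptive branching over several models: at each step the search either widens (asks some model for a fresh answer) or deepens (refines the most promising existing answer). The model names, scoring function, and branching threshold are invented for illustration; this is not Sakana AI's AB-MCTS or TreeQuest implementation.

```python
# Toy sketch (not Sakana AI's AB-MCTS / TreeQuest): adaptive branching over
# several candidate models, deciding at each step whether to widen the search
# with a fresh answer or deepen it by refining the current best answer.
import random

MODELS = ["model_a", "model_b", "model_c"]  # stand-ins for frontier LLMs

def generate(model, parent=None):
    """Placeholder for an LLM call: fresh answer, or refinement of a parent."""
    return f"{model}:refine({parent})" if parent else f"{model}:new_answer"

def score(answer):
    """Placeholder evaluator (e.g., unit tests or a verifier model)."""
    return random.random()

def search(budget=10):
    nodes = []  # list of (answer, score) pairs
    for _ in range(budget):
        model = random.choice(MODELS)
        best = max(nodes, key=lambda n: n[1]) if nodes else None
        # Adaptive branching: deepen if the best node looks promising,
        # otherwise widen the tree with a brand-new attempt.
        parent = best[0] if best and best[1] > 0.5 else None
        answer = generate(model, parent)
        nodes.append((answer, score(answer)))
    return max(nodes, key=lambda n: n[1])[0]

print(search())
```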
MIT's CodeSteer Helps Language Models Outsmart Complex Problems
Large language models (LLMs) have dramatically changed our relationship with AI, offering impressive fluency in language understanding and generation. Yet, when these models confront tasks that demand...
Tags: AI coaching, algorithmic tasks, artificial intelligence, code generation, large language models, machine learning, MIT research, symbolic reasoning
AMD Ryzen AI Max+ Upgrade: Powering 128B-Parameter LLMs Locally on Windows PCs
With AMD's latest update, deploying massive language models of up to 128 billion parameters directly on your Windows laptop is now possible. AMD's Ryzen AI Max+ is a breakthrough that brings state-of-...
Tags: AMD, context window, large language models, LLM deployment, local AI, quantization, Ryzen AI, Windows AI
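Since the entry highlights quantization as the enabler, a quick back-of-the-envelope calculation shows why bit width matters for fitting a 128B-parameter model in local memory. The figures below cover weights only (activations, KV cache, and runtime overhead come on top) and are rough arithmetic, not AMD's published numbers.

```python
# Rough memory-footprint arithmetic for local deployment of a 128B-parameter
# model. Weights only; activations, KV cache, and runtime overhead are extra.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 128e9  # 128 billion parameters, as in the article
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(params, bits):.0f} GB")
# 16-bit: ~256 GB, 8-bit: ~128 GB, 4-bit: ~64 GB
```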
Test-Time Diffusion Deep Researcher: Ushering in a Human-Like AI Research Paradigm
Introducing the Test-Time Diffusion Deep Researcher (TTD-DR) framework, an AI assistant that doesn't just gather information but actively thinks, revises, and refines its work, much like a skilled human r...
Tags: AI research, draft refinement, large language models, multihop reasoning, research automation, retrieval augmentation, self-evolution, test-time diffusion
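To make the "revises and refines" idea concrete, here is a minimal sketch of an iterative draft-refinement loop with retrieval. The retrieve and revise functions are placeholders standing in for real search and LLM calls; this is an illustration of the general pattern, not the TTD-DR framework itself.

```python
# Minimal sketch of iterative draft refinement with retrieval (not the actual
# TTD-DR framework): start from a rough draft, then repeatedly "denoise" it
# by pulling in evidence and revising.

def retrieve(query):
    """Stand-in for a web-search / retrieval step."""
    return [f"evidence related to '{query}'"]

def revise(draft, evidence):
    """Stand-in for an LLM revising the draft in light of the evidence."""
    return draft + f" [revised with {len(evidence)} new source(s)]"

def deep_research(question, steps=3):
    draft = f"Preliminary answer to: {question}"
    for _ in range(steps):
        draft = revise(draft, retrieve(question))
    return draft

print(deep_research("How does test-time scaling affect reasoning quality?"))
```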
How AI is Revolutionizing Scientific Writing and Research
Artificial intelligence is rapidly transforming the landscape of scientific research by streamlining complex workflows and boosting efficiency. Tools like ChatGPT and GPT-4 are making scientific writi...
Tags: academic publishing, AI ethics, artificial intelligence, large language models, literature review, research productivity, science communication, science writing
MiroMind-M1: Redefining Open-Source Mathematical Reasoning for AI
Open-source AI is entering a new phase, with MiroMind-M1 leading the charge in mathematical reasoning. This project goes beyond simply releasing models by offering full transparency: every model, data...
Tags: AI transparency, CAMPO, chain-of-thought, large language models, mathematical reasoning, open-source AI, reinforcement learning, token efficiency
Open Deep Search: Unlocking Advanced AI Search for Everyone
Imagine if the power behind today's smartest search engines was no longer limited to tech giants. That's the promise of Open Deep Search (ODS), a breakthrough open-source framework that puts advanced ...
Tags: AI search, benchmarks, community innovation, information retrieval, large language models, machine learning, open-source, reasoning agents
NVIDIA Helix Parallelism Powers Real-Time AI with Multi-Million Token Contexts
AI assistants recalling months of conversation, legal bots parsing vast case law libraries, or coding copilots referencing millions of lines of code, all while delivering seamless, real-time responses...
Tags: AI inference, GPU optimization, KV cache, large language models, NVIDIA Blackwell, parallelism, real-time AI
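The KV cache tag points at the core bottleneck behind multi-million-token serving. The sketch below estimates how large the key/value cache grows with context length for a single sequence; the layer, head, dimension, and precision values are illustrative assumptions, not the configuration of any particular model or of NVIDIA's Helix Parallelism setup.

```python
# Back-of-the-envelope KV-cache size for long contexts, one sequence, FP16.
# Layer/head/dimension values are illustrative, not any specific model.

def kv_cache_gb(seq_len, num_layers=80, num_kv_heads=8, head_dim=128,
                bytes_per_elem=2):
    """Total key + value cache in gigabytes for a single sequence."""
    elems = 2 * num_layers * num_kv_heads * head_dim * seq_len  # K and V
    return elems * bytes_per_elem / 1e9

for tokens in (128_000, 1_000_000, 4_000_000):
    print(f"{tokens:>9,} tokens -> ~{kv_cache_gb(tokens):,.0f} GB of KV cache")
# Growth is linear in context length, which is why multi-million-token
# serving needs new ways to shard the cache across GPUs.
```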
MIT's CodeSteer Is Coaching AI to Outsmart Complex Problems
MIT researchers have built CodeSteer, a smart assistant designed to help large language models (LLMs) seamlessly alternate between generating text and writing code by utilizing a "coach" to evaluate t...
Tags: AI, code generation, CodeSteer, large language models, machine learning, MIT research, problem solving, symbolic reasoning
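As a rough illustration of the coach-and-solver pattern the teaser describes, here is a minimal loop in which a steering component picks a mode (text or code), the solver attempts an answer, and the coach reviews it. All three functions are invented stubs for this sketch; none of this is MIT's CodeSteer code.

```python
# Minimal coach-and-solver loop in the spirit of the article (all three
# functions are invented stubs, not MIT's CodeSteer implementation).

def coach_decide(task, history):
    """Pick the next mode: answer in plain text or by writing code."""
    return "code" if any(ch.isdigit() for ch in task) else "text"

def solver_attempt(task, mode):
    """Stand-in for the solver LLM producing a text or code answer."""
    return f"[{mode} answer to: {task}]"

def coach_review(answer):
    """Stand-in for the coach critiquing or testing the attempt."""
    return True  # a real coach would run checks and send feedback

def solve(task, max_rounds=3):
    history = []
    for _ in range(max_rounds):
        mode = coach_decide(task, history)
        answer = solver_attempt(task, mode)
        history.append(answer)
        if coach_review(answer):
            return answer
    return history[-1]

print(solve("How many times does the letter r appear in 'strawberry'?"))
```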
AI in Radiology: RadGPT is Making Medical Reports Patient-Friendly
Medical reports, especially those in radiology, are often filled with terminology that can overwhelm even the most diligent patient. Now, Stanford researchers are changing the landscape with RadGPT, ...
Tags: AI healthcare, large language models, medical reports, patient communication, RadGPT, radiology, Stanford research
Feedback-Driven Methods Are Transforming Prompt Engineering
Prompt engineering is crucial for maximizing the capabilities of large language models (LLMs), but it has traditionally required significant manual effort and specialized know-how. As new tasks and mo...
Tags: AI research, automation, efficiency, feedback loops, large language models, machine learning, prompt optimization
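To ground what a feedback loop for prompts looks like, here is a bare-bones refinement loop that scores a prompt, asks for a revision, and keeps the best variant. The scoring and rewriting functions are trivial stand-ins for real LLM evaluation and rewriting, not any specific published optimizer.

```python
# Bare-bones feedback loop for prompt refinement (placeholder scoring and
# rewriting functions, not any specific published optimizer).

def score_prompt(prompt, examples):
    """Stand-in for evaluating the prompt on held-out examples (0..1)."""
    return min(1.0, len(prompt) / 120)  # trivial heuristic for illustration

def revise_prompt(prompt, feedback):
    """Stand-in for an LLM rewriting the prompt based on the feedback."""
    return prompt + " Show your intermediate steps explicitly."

def optimize(prompt, examples, rounds=3):
    best, best_score = prompt, score_prompt(prompt, examples)
    for _ in range(rounds):
        candidate = revise_prompt(best, feedback=f"score={best_score:.2f}")
        candidate_score = score_prompt(candidate, examples)
        if candidate_score > best_score:  # keep only improvements
            best, best_score = candidate, candidate_score
    return best

examples = [("2 + 2", "4"), ("3 * 3", "9")]
print(optimize("Solve the arithmetic problem.", examples))
```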
Large Reasoning Models: Breakthroughs and Breaking Points in AI Problem-Solving
Artificial intelligence has made remarkable strides, and Large Reasoning Models (LRMs) are at the forefront of this revolution. These models promise to deliver more than just answers; they aim to repl...
Tags: AI research, artificial intelligence, benchmarking, chain-of-thought, large language models, model limitations, problem complexity, reasoning