Code World Model: A 32B Agentic Coding LLM Grounded In Execution Traces
This article analyzes a Meta FAIR technical report introducing the Code World Model (CWM), a 32-billion-parameter decoder-only transformer trained to model program execution and agentic software engin...
Tags: agents, code generation, execution traces, LLM, reinforcement learning, software engineering
EpMAN Reweights Attention With Episodic Memory To Tackle 256k-Token Contexts
Long-context reasoning is still a weak spot for many large language models, even as context windows grow. The ACL 2025 paper EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts ...
Tags: ACL 2025, attention, episodic-memory, LLM, long-context, RAG
LLMigrate Turns Lazy LLMs Into Reliable C-to-Rust Translators
Rewriting performance-critical C code in Rust promises stronger memory safety with similar speed, but moving large systems is hard. A new preprint introduces LLMigrate, a toolchain that combines large...
Tags: C-to-Rust, Linux Kernel, LLM, Program Repair, Rust