General Scientific Optimizers: Exploiting Edited Large Language Models

Scientific optimization plays a crucial role across domains such as mathematics, physics, and chemistry. Large Language Models (LLMs) are increasingly used for mathematical optimization owing to their reasoning capabilities. However, existing prompt-based optimization approaches are sensitive to prompt structure and struggle to handle long sequences of observational feedback. The authors propose the General Scientific Optimizer (GSO), a bi-level optimization framework that integrates model editing techniques into LLMs to refine solutions iteratively.
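The bi-level structure described above can be illustrated with a toy sketch: an inner level proposes candidate solutions from the current model state, and an outer level "edits" that state based on observed feedback. All names here (`propose`, `edit_model`, `gso_loop`) and the toy objective are illustrative assumptions, not the paper's actual method or weight-editing mechanism.

```python
def propose(model_bias, n=5):
    """Inner level: generate candidate solutions around the model's current bias."""
    return [model_bias + i - n // 2 for i in range(n)]

def objective(x):
    """Toy scientific objective to minimize: (x - 7)^2."""
    return (x - 7) ** 2

def edit_model(model_bias, best):
    """Outer level: shift the model toward the best observed solution
    (a stand-in for weight-level model editing)."""
    return best

def gso_loop(model_bias=0, steps=10):
    """Alternate inner proposal and outer model editing for a fixed budget."""
    for _ in range(steps):
        candidates = propose(model_bias)
        best = min(candidates, key=objective)
        model_bias = edit_model(model_bias, best)
    return model_bias

print(gso_loop())  # converges to the optimum of the toy objective
```

The key design point this sketch mirrors is that feedback updates the model itself rather than accumulating in an ever-longer prompt, which is how the framework sidesteps long observational feedback sequences.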