LMs typically improve their factual accuracy either by updating their internal parameters or by using external retrieval systems that supply additional context at inference time [1][2]. Techniques like ROME and MEMIT edit the model's internal parameters directly to update stored knowledge [1]. Retrieval-based systems instead search large document collections, identify relevant passages, and condition the model's responses on them, so accuracy can improve as new data becomes available [3][4][5].
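The retrieval-augmented pattern described above can be sketched minimally: fetch the most relevant passage for a query and prepend it to the prompt so the model answers from supplied context rather than from parameters alone. The corpus, the `retrieve` helper, and the word-overlap scoring here are illustrative assumptions, not any particular system's implementation.

```python
# Minimal retrieval-augmented-generation sketch (illustrative only).
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "ROME edits a single MLP layer with a rank-one update.",
    "Phi-2 is a 2.7B-parameter language model.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    # Prepend the retrieved passage as context for the LM to condition on.
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("Where is the Eiffel Tower located?"))
```

A production system would replace the word-overlap scorer with dense embeddings and a vector index, but the prompt-construction step is the same.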
ROME and MEMIT are model-editing techniques that adjust the internal parameters of Language Models (LMs) to update or correct stored knowledge. ROME performs a rank-one modification of the MLP weights of a single layer to write a new memory directly into the model [5]. MEMIT builds on ROME to insert many memories at once by modifying the MLP weights across a range of critical layers [5]. Both techniques optimize the preservation-memorization objective: ROME enforces an equality constraint, while MEMIT uses a more flexible least-squares constraint that permits batched editing [2].
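The core of a ROME-style edit can be illustrated with a rank-one weight update. The sketch below is a simplification under stated assumptions: `k` stands in for the key activation encoding the subject, `v_star` for the value activation encoding the new fact, and the update uses an identity covariance (ROME itself weights the key by an estimated covariance matrix and derives `v_star` by optimization).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 8
W = rng.normal(size=(d_out, d_in))   # stand-in for one layer's MLP projection

k = rng.normal(size=d_in)            # key: activation representing the subject
v_star = rng.normal(size=d_out)      # value: activation representing the new fact

# Rank-one update: W_new = W + (v_star - W k) k^T / (k^T k).
# After the edit, W_new maps k exactly to v_star (the equality constraint),
# while leaving directions orthogonal to k untouched (preservation).
residual = v_star - W @ k
W_new = W + np.outer(residual, k) / (k @ k)

print(np.allclose(W_new @ k, v_star))  # the edited weight now satisfies W_new k = v_star
```

MEMIT generalizes this by spreading many such residuals across several layers and solving a least-squares problem instead of enforcing exact equality, which is what makes batched edits tractable.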
The advanced LMs used in this RAG study were LLaMA-2 and Phi-2. The researchers applied Causal Mediation Analysis, Attention Contributions, and Attention Knockouts to analyze how these models balance external context against internal parametric memory when answering factual queries.
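The attention-knockout idea can be sketched on a toy attention head: block one token from attending to a chosen position by setting that pre-softmax score to negative infinity, then measure how much the output shifts. This is an assumption-level illustration of the intervention, not the study's exact implementation; all tensors here are random stand-ins.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
seq, d = 5, 4
Q = rng.normal(size=(seq, d))
K = rng.normal(size=(seq, d))
V = rng.normal(size=(seq, d))

scores = Q @ K.T / np.sqrt(d)
baseline = softmax(scores) @ V        # ordinary attention output

knocked = scores.copy()
knocked[-1, 2] = -np.inf              # knockout: last token cannot read position 2
knocked_out = softmax(knocked) @ V

# The shift at the final position measures how much its output depended
# on information flowing from the knocked-out position.
effect = float(np.linalg.norm(baseline[-1] - knocked_out[-1]))
print(effect > 0)
```

In practice the same masking is applied inside a real model's attention layers, and the knockout's effect is read off as the change in the probability of the factual answer.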