AIResearch

AI Recommenders Adapt Without Forgetting Old Preferences

A new method updates only key parts of large language models to track changing user interests, saving computational costs and preventing performance drops for inactive users.

AI Research
March 27, 2026
3 min read

Large language models (LLMs) are transforming recommendation systems on e-commerce platforms, but they struggle to keep up as user preferences evolve over time. The core difficulty lies in the massive size of these models: retraining them on all accumulated data is too expensive, while fine-tuning only on recent interactions causes the system to forget the preferences of users who haven't been active lately. This leads to inaccurate recommendations for both groups, undermining the personalization that makes these systems valuable. A new framework called EvoRec addresses this by efficiently updating only the most relevant parts of the model, balancing adaptation with memory retention.

The researchers developed EvoRec, which uses a Locate-Forget-Update paradigm to manage preference shifts in LLM-based recommender systems. First, it locates sensitive layers in the model by comparing hidden states when processing old and new interaction sequences, identifying the parameters most affected by preference changes. Then, it forgets outdated interactions by filtering them out with a lightweight model such as SASRec, which scores past interactions by relevance to current preferences and removes the lowest-scoring ones. Finally, it updates only the located sensitive parameters with the filtered data, minimizing changes to the rest of the model. This approach modifies just 30% of the parameters typically adjusted in parameter-efficient fine-tuning methods like LoRA, reducing computational load while focusing updates where they matter most.
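The Locate and Forget steps can be sketched roughly as follows. This is an illustrative simplification, not the authors' implementation: the function names, the mean-pooled hidden-state drift measure, and the drop ratio are all assumptions made for clarity.

```python
import numpy as np

def locate_sensitive_layers(old_hidden, new_hidden, keep_ratio=0.3):
    """Rank layers by how much their hidden states drift between old and
    new interaction sequences, and keep the top `keep_ratio` fraction.

    old_hidden / new_hidden: one (seq_len, dim) array per layer.
    Returns the indices of the most drift-sensitive layers.
    """
    drifts = [np.linalg.norm(old.mean(axis=0) - new.mean(axis=0))
              for old, new in zip(old_hidden, new_hidden)]
    k = max(1, int(len(drifts) * keep_ratio))
    return sorted(np.argsort(drifts)[-k:].tolist())

def forget_outdated(interactions, relevance_scores, drop_ratio=0.2):
    """Drop the lowest-scoring past interactions, where scores come from
    a lightweight sequential model (e.g. SASRec) estimating relevance
    to the user's current preferences."""
    k = int(len(interactions) * drop_ratio)
    keep = np.argsort(relevance_scores)[k:]  # indices to retain
    return [interactions[i] for i in sorted(keep)]
```

The Update step would then apply gradient updates only to the parameters of the returned layer indices, leaving the rest of the model frozen.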

Extensive experiments on two real-world datasets, Amazon Beauty and Amazon Toys, demonstrate EvoRec's effectiveness. EvoRec was tested across multiple update cycles, with results showing it outperforms existing approaches such as re-training, fine-tuning, and LSAT. For example, on the Amazon Beauty dataset with TALLRec as the base model, EvoRec achieved an HR@3 of 0.4912 for active users (U_A), compared to 0.4789 for LSAT and 0.4653 for fine-tuning. Importantly, it maintained strong performance for inactive users (U_I), with an HR@3 of 0.4157 on the overall user set, avoiding the preference forgetting seen in other methods. The framework also proved efficient, with update times comparable to fine-tuning but with better accuracy, as shown in Figure 5 and Table 3.
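For readers unfamiliar with the metric, HR@3 (hit ratio at 3) is the fraction of test users whose ground-truth next item appears among the model's top three recommendations. A minimal, generic implementation (not tied to the paper's evaluation code):

```python
def hit_ratio_at_k(ranked_lists, ground_truth, k=3):
    """Fraction of users whose true next item appears in their top-k list.

    ranked_lists: one ranked list of recommended item IDs per user.
    ground_truth: the actual next item each user interacted with.
    """
    hits = sum(1 for ranks, truth in zip(ranked_lists, ground_truth)
               if truth in ranks[:k])
    return hits / len(ground_truth)
```

An HR@3 of 0.4912 therefore means the true next item was in the top three recommendations for roughly 49% of active users.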

The implications of this research are significant for real-world applications where user interests change dynamically, such as online shopping or content streaming. By enabling LLM-based recommenders to adapt quickly without costly retraining, EvoRec could make personalized systems more responsive and sustainable. It addresses a critical pain point in AI deployment: balancing model updates with resource constraints. For businesses, this means maintaining recommendation quality for all users, even those with sporadic activity, which can enhance customer satisfaction and retention. EvoRec's compatibility with various LLMs, such as Qwen2-7B and LLaMA-2-7B, as noted in the backbone analysis, suggests broad applicability across different platforms.

Despite its advantages, EvoRec has limitations. Its performance depends on the accuracy of the filtering model in identifying outdated interactions, and hyperparameters such as the sensitive-layer threshold (set at 30% in the study) require tuning for optimal performance. The paper notes that if too many parameters are updated, conflicts between old and new preferences can arise, reducing effectiveness. Additionally, while EvoRec reduces computational costs compared to re-training, it still incurs overhead from the filtering step, though this is minimal with lightweight models. Future work could explore integrating EvoRec with other advanced techniques to further improve robustness and scalability in diverse recommendation scenarios.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn