* Well-known open-source implementations
* https://github.com/mem0ai/mem0
* https://github.com/letta-ai/letta
* LangChain articles and implementations on Memory
* https://blog.langchain.dev/adding-long-term-memory-to-opengpts/ (2024/11)
* https://blog.langchain.dev/launching-long-term-memory-support-in-langgraph/ (2024/10/8)
* Memory for agents https://blog.langchain.dev/memory-for-agents/ (2024/10/19)
* LangMem
* https://langchain-ai.github.io/langmem/concepts/conceptual_guide/
* https://blog.langchain.dev/langmem-sdk-launch/
* Agent Memory courses
* https://www.deeplearning.ai/short-courses/llms-as-operating-systems-agent-memory/
* Uses the https://github.com/letta-ai/letta framework
* https://www.deeplearning.ai/short-courses/agent-memory-building-memory-aware-agents/
* Oracle AI Database、LangChain
* https://github.com/topoteretes/awesome-ai-memory
* Graph-based memory https://github.com/getzep/graphiti
* Building Brain-Like Memory for AI | LLM Agent Memory Systems video (2024/12/16)
* https://www.youtube.com/watch?v=VKPngyO0iKg
* https://github.com/ALucek/agentic-memory/tree/main
* https://www.psychologytoday.com/us/basics/memory/types-of-memory
* https://claude.ai/chat/c9fb3e5a-90bb-4b60-ad1f-0629d62718e7
* paper: Sleep-time compute: make your machines think while they sleep (2025/4/22)
* https://arxiv.org/abs/2504.13171
* https://x.com/charlespacker/status/1914380650993569817
* paper: Rethinking Memory in AI: Taxonomy, Operations, Topics, and Future Directions (2025/5)
* https://arxiv.org/abs/2505.00675
* paper: Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
* https://arxiv.org/abs/2504.19413
* Overview of developments: https://x.com/aparnadhinak/status/1920903237269680568 (2025/5/10)
* Self-Managed LLM Memory implementation reference (2025/5)
* cookbook https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/memory_cookbook.ipynb
* Comparison: https://x.com/RLanceMartin/status/1925633240427114808/photo/1 (2025/5/23)
* Stop Pretending Your Agent Memory Isn’t RAG (2025/6)
* https://medium.com/asymptotic-spaghetti-integration/stop-pretending-your-agent-memory-isnt-rag-c2daf995d820
* paper: LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory (2025/3)
* https://arxiv.org/abs/2410.10813
* talk: Using LongMemEval to Improve Agent Memory (2025/8)
* https://www.youtube.com/watch?v=FTokJt1ioeg&list=PL5q_lef6zVkb2j0SjbqFWLUdTTvkEnfaL&index=3&t=43s
* talk: Building AI to Remember: Engineering AI Memory (2025/6)
* https://luma.com/krkqwl5x
* New Computer https://www.youtube.com/watch?v=7AmhgMAJIT4
* The Man Behind LangChain Memory https://www.youtube.com/watch?v=OTyhD7oKPn0
* Cursor https://www.youtube.com/watch?v=MtwWHXYFqbQ
* All three talks are excellent!
* Integrating Long-Term Memory with Gemini 2.5 (2025/7/3)
* https://www.philschmid.de/gemini-with-memory
* A demo using mem0: each turn searches and adds memories to a vector store
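The per-turn pattern in the mem0 demo above (recall relevant memories, answer, then write the exchange back) can be sketched without any framework; here token-overlap similarity stands in for real embeddings and the LLM call is a stub, so all names are illustrative:

```python
# Toy per-turn memory loop: search relevant memories before answering,
# then add the new exchange to the store. Token-overlap similarity
# stands in for a real vector store; the LLM answer is a stub.
class ToyMemoryStore:
    def __init__(self):
        self.memories = []  # list of text snippets

    def search(self, query, k=3):
        q = set(query.lower().split())
        scored = [(len(q & set(m.lower().split())), m) for m in self.memories]
        scored = [(s, m) for s, m in scored if s > 0]
        scored.sort(key=lambda x: -x[0])
        return [m for _, m in scored[:k]]

    def add(self, text):
        self.memories.append(text)

def chat_turn(store, user_msg):
    relevant = store.search(user_msg)                    # 1. recall
    reply = f"(answer using {len(relevant)} memories)"   # 2. stub LLM answer
    store.add(f"user said: {user_msg}")                  # 3. write back
    return reply, relevant

store = ToyMemoryStore()
chat_turn(store, "I live in Taipei")
reply, relevant = chat_turn(store, "what city do I live in")
```

The point of the pattern is that recall and write happen on every turn, so memories accumulate and become retrievable without any explicit "save" action from the user.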
* Memory in Agents, Make LLMs remember (2025/8/4)
* https://www.philschmid.de/memory-in-agents
* A brief survey introduction
* Analysis of the memory features in the ChatGPT, Claude, and Gemini apps (2025 Q4)
* https://www.shloked.com/writing/chatgpt-memory-bitter-lesson
* https://www.shloked.com/writing/claude-memory
* https://www.shloked.com/writing/claude-memory-tool
* https://www.shloked.com/writing/gemini-memory
* Google Has Your Data. Gemini Barely Uses It. (2025/11/9)
* https://www.shloked.com/writing/gemini-memory
* Gemini's memory system
* Exploring Anthropic’s Memory Tool (2025/11/25)
* https://leoniemonigatti.com/blog/claude-memory-tool.html
* https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool
* The memory tool in the Claude API
* Claude Diary (2025/12/1)
* https://x.com/RLanceMartin/status/1997357794027336007
* https://rlancemartin.github.io/2025/12/01/claude_diary/
* A simple agent memory pattern: reflect on conversation logs and distill preferences/feedback from actual usage to update memories
* Making Sense of Memory in AI Agents (2025/12/5)
* https://www.leoniemonigatti.com/blog/memory-in-ai-agents.html
* A technical overview
* I Reverse Engineered ChatGPT's Memory System, and Here's What I Found! (2025/12/9)
* https://manthanguptaa.in/posts/chatgpt_memory/
* https://x.com/oran_ge/status/1998879291808112926
* https://x.com/arafatkatze/status/2000695091468591204
* ChatGPT's memory system
* Evaluating Context Compression for AI (Coding) Agents (2025/12/16)
* https://factory.ai/news/evaluating-compression
* Benchmarks context-compression methods
* OpenAI Cookbook: Context Engineering for Personalization - State Management with Long-Term Memory Notes using OpenAI Agents SDK
* https://developers.openai.com/cookbook/examples/agents_sdk/context_personalization (2026/1/5)
* State-based memory
* Very practical and implementable
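The "state-based memory" idea in the cookbook entry above (ephemeral session state plus long-term notes that survive across sessions) can be sketched generically, independent of the Agents SDK; class and method names here are illustrative:

```python
import json
import os
import tempfile

class AgentState:
    """Ephemeral session state plus long-term notes persisted as JSON.

    SDK-independent sketch: `session` vanishes with the object,
    `notes` is reloaded from disk on every new session.
    """
    def __init__(self, notes_path):
        self.notes_path = notes_path
        self.session = {}  # ephemeral, per-conversation
        if os.path.exists(notes_path):
            with open(notes_path) as f:
                self.notes = json.load(f)  # long-term, survives restarts
        else:
            self.notes = {}

    def remember(self, key, value):
        """Promote a fact from the session into long-term notes."""
        self.notes[key] = value
        with open(self.notes_path, "w") as f:
            json.dump(self.notes, f)

# Simulate two sessions sharing the same notes file.
path = os.path.join(tempfile.mkdtemp(), "notes.json")
s1 = AgentState(path)
s1.session["scratch"] = "only lives this session"
s1.remember("preferred_language", "zh-TW")

s2 = AgentState(path)  # new session: notes persist, scratch does not
```

The design choice worth noting: memory is an explicit, inspectable state object the agent reads and writes, rather than an opaque retrieval index.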
* paper: Memory in the Age of AI Agents (2026/1/13)
* https://arxiv.org/abs/2512.13564
* https://x.com/omarsar0/status/2004557075037245489
* How we built Agent Builder’s memory system (2026/1/15)
* https://www.langchain.com/conceptual-guides/how-we-built-agent-builders-memory
* LangChain shares their experience
* https://blog.aihao.tw/2026/02/17/agent-builder-memory/
* Managing agentic memory with Elasticsearch (2026/1/16)
* https://www.elastic.co/search-labs/blog/agentic-memory-management-elasticsearch
* Memory as Reasoning
* https://x.com/helloiamleonie/status/2017370424808509451 (2026/1/31)
* https://blog.plasticlabs.ai/blog/Memory-as-Reasoning
* Great take: https://chatgpt.com/c/69ee6ef8-e560-8320-a842-1d1f92880f82
* Memory shouldn't store only "facts"; it should store "traceable inferences"
* Writing memory shouldn't be mere extraction; it should be a reasoning step. Many memory implementations ask the model after a conversation, "What in this conversation is worth remembering?", which easily produces trivial memories. A better prompt asks the model to judge whether the information "would change answer quality in the future"
* Recalling memory also shouldn't be mere similarity search; it should be "selecting applicable inferences for the current task"
* Agents should have surprisal / contradiction handling
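A minimal sketch of the write-side ideas above: the write step applies a "would this change future answers?" gate (stubbed here as a heuristic standing in for an LLM judgment), and a new memory that contradicts a stored one on the same subject is reconciled instead of silently appended. All logic and field names are illustrative:

```python
# Memory write as a reasoning step, not bare extraction (illustrative).
TRIVIA = {"said hello", "asked a question"}

def worth_remembering(fact):
    # Stub for an LLM prompt such as: "Would knowing this change how
    # you answer this user in the future? Answer yes/no with a reason."
    return fact["claim"] not in TRIVIA

def write_memory(store, fact):
    if not worth_remembering(fact):
        return "skipped"                     # trivial: don't pollute memory
    for old in store:
        if old["subject"] == fact["subject"] and old["claim"] != fact["claim"]:
            old["superseded_by"] = fact["claim"]  # contradiction: reconcile,
            store.append(fact)                    # keeping the trace
            return "updated"
    store.append(fact)
    return "added"

store = []
write_memory(store, {"subject": "user.editor", "claim": "uses vim"})
status = write_memory(store, {"subject": "user.editor", "claim": "uses emacs"})
```

Keeping the superseded claim with a pointer to its replacement is one way to make the inference traceable rather than overwriting history.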
* Agent Memory: Why Your AI Has Amnesia and How to Fix It
* https://blogs.oracle.com/developers/agent-memory-why-your-ai-has-amnesia-and-how-to-fix-it (2026/2/17)
* 12 open-source projects, 4 technical approaches: a complete selection guide for Agent memory systems
* https://x.com/GoSailGlobal/status/2041912933228409054 (2026/4/9)
* Plain-file storage, vector database + RAG, knowledge graph, hybrid retrieval
* The real difficulty of AI memory: why vector store + embedding is far from enough
* https://x.com/jakevin7/status/2032342979890016561 (2026/3/13)
* How Mastra's Observational Memory Beat the Hardest Memory Benchmark
* https://x.com/bookercodes/status/2038981619587994114 (2026/3/31)
* https://x.com/aparnadhinak/status/2039087091716624616
* No vector database. No graph database. No compression. Just two agents running in the background, and one simple idea: observe what matters, and let the rest fade.
* Supermemory https://github.com/supermemoryai/supermemory
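The observe-and-fade idea described in the Mastra entry above can be sketched as a decay loop (this is not Mastra's actual implementation; the constants and names are illustrative): every observation's strength decays each turn, re-mention reinforces it, and anything below a floor is pruned.

```python
# Observe-and-fade sketch (illustrative, not Mastra's implementation):
# observations carry a strength that decays every turn, is reinforced
# on re-mention, and is pruned once it fades below a floor.
DECAY, BOOST, FLOOR = 0.8, 1.0, 0.3

def observe(observations, note):
    if note in observations:
        observations[note] += BOOST   # re-mention reinforces
    else:
        observations[note] = 1.0

def tick(observations):
    for note in list(observations):
        observations[note] *= DECAY   # everything fades a little
        if observations[note] < FLOOR:
            del observations[note]    # faded out entirely

obs = {}
observe(obs, "user is training for a marathon")
for _ in range(3):
    tick(obs)
    observe(obs, "user is training for a marathon")  # keeps coming up
observe(obs, "user mentioned the weather once")
for _ in range(6):
    tick(obs)
```

Recurring observations stay above the floor indefinitely, while one-off mentions disappear within a few turns; no index or compression step is needed.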
* The evolution of AI memory: how we gave AI the ability to remember
* https://x.com/nopinduoduo/status/2035018557978030429?s=12&t=m3zl53dVatGnKHGnEVcGmQ (2026/3/20)
* Claude Code's memory architecture
* https://x.com/himanshustwts/status/2038924027411222533 (2026/3/31)
* https://x.com/ellen_in_sf/status/2039098050837463504 (2026/4/1)
* A brief discussion of Agent Memory
* https://mp.weixin.qq.com/s?__biz=MzIzNjE2NTI3NQ==&mid=2247491772&idx=1&sn=bbbbfbe12bff30ae169d21b8d04eef65&chksm=e98618632ccdbf05e0099e66e568074c3567cc226754a19d0eb3252a8ca430f0460348284a1d&poc_token=HLys8GmjMVoJTMfgJgp_zMPROUqfqCgKIolvvhQN (2026/4/12)
* Built-in memory for Claude Managed Agents
* The memory feature released in the Claude API
* https://claude.com/blog/claude-managed-agents-memory (2026/4/23)
* https://platform.claude.com/docs/en/managed-agents/memory
* https://x.com/RLanceMartin/status/2047720067107033525 (2026/4/25)
* Purely file-system based, with no additional vector retrieval; it seems positioned as "small, distilled, structured knowledge"
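The file-system approach noted above can be sketched as a minimal memory-directory handler (illustrative only; the real Claude memory tool has its own command schema): the agent reads and writes small structured notes under a dedicated root, with no vector index involved.

```python
import os
import tempfile

class FileMemory:
    """Minimal file-backed memory (illustrative, not the Claude tool schema)."""
    def __init__(self, root):
        self.root = os.path.abspath(root)
        os.makedirs(self.root, exist_ok=True)

    def _path(self, name):
        p = os.path.abspath(os.path.join(self.root, name))
        if not p.startswith(self.root):
            raise ValueError("path escapes memory root")  # basic safety check
        return p

    def write(self, name, text):
        with open(self._path(name), "w") as f:
            f.write(text)

    def read(self, name):
        with open(self._path(name)) as f:
            return f.read()

    def list(self):
        return sorted(os.listdir(self.root))

mem = FileMemory(tempfile.mkdtemp())
mem.write("preferences.md", "- replies in zh-TW\n- prefers concise answers\n")
```

Because memories are plain files, the whole store stays human-inspectable and editable, which fits the "small, distilled, structured knowledge" positioning.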
* Agent Memory implementation: most memory features don't need vector retrieval
* https://blog.aihao.tw/2026/04/28/agent-memory-no-vector/ (2026/4/28)
## File-system
* AgentFS
* https://github.com/tursodatabase/
* https://x.com/penberg/status/1985769421353168904
* https://www.llamaindex.ai/blog/making-coding-agents-safe-using-llamaindex
---
> https://chatgpt.com/c/687d37c8-57d4-8008-bc7f-1a2a6e72df74