<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agent-Memory on Stack Research</title><link>https://stackresearch.org/tags/agent-memory/</link><description>Recent content in Agent-Memory on Stack Research</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 23 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stackresearch.org/tags/agent-memory/index.xml" rel="self" type="application/rss+xml"/><item><title>Why Agent Memory Needs a Control Plane</title><link>https://stackresearch.org/research/why-agent-memory-needs-a-control-plane/</link><pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/why-agent-memory-needs-a-control-plane/</guid><description>&lt;p&gt;In an end-to-end memory governance scenario, a migrated record was present in the store but denied by the default retrieval policy. The data existed, but policy correctly kept it out of the agent&amp;rsquo;s active context. That behavior sounds strict until a real system shows how quickly &amp;ldquo;just store it&amp;rdquo; turns into stale, unsafe memory that is hard to audit.&lt;/p&gt;
&lt;p&gt;That gap is why &lt;a href="https://github.com/stack-research/agentic-memory-fabric"&gt;Agentic Memory Fabric&lt;/a&gt; is a control plane for memory, not another retrieval wrapper. The point is simple: memory used by agents should be treated like governed infrastructure, with clear lineage and retrieval policy enforced at runtime.&lt;/p&gt;</description></item><item><title>Memory Should Decay</title><link>https://stackresearch.org/research/memory-should-decay/</link><pubDate>Sat, 14 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/memory-should-decay/</guid><description>&lt;p&gt;An agent memory run started with 50 stored facts. Each fact had a half-life of 10 ticks. After 30 ticks of a task loop, 8 memories remained.&lt;/p&gt;
&lt;p&gt;Those 8 were the ones the agent kept using. The other 42 expired automatically. No cleanup script. No manual pruning. No summarization pass pretending stale facts were still useful.&lt;/p&gt;
&lt;p&gt;The experiment is small, but the shape is important. Agent memory does not need to be an attic where every fact waits forever. It can behave more like working state: reinforced by use, weakened by neglect, and removed once its confidence falls below a threshold.&lt;/p&gt;</description></item><item><title>NHI and Agentic Risk: Secrets, Memory, and Persistence</title><link>https://stackresearch.org/research/nhi-asi-series-03-secrets-and-memory/</link><pubDate>Tue, 17 Feb 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/nhi-asi-series-03-secrets-and-memory/</guid><description>&lt;p&gt;A secret leak is not a single event. It is a copying process.&lt;/p&gt;
&lt;p&gt;A token appears in a CI log. The log is indexed for troubleshooting. An agent is asked to diagnose a failed deployment and retrieves the log. The agent summarizes the failure, stores the useful parts in memory, and later uses that memory while calling a tool. By then the token may have moved through several systems that were never designed to be secret stores.&lt;/p&gt;</description></item></channel></rss>