<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Lineage on Stack Research</title><link>https://stackresearch.org/tags/lineage/</link><description>Recent content in Lineage on Stack Research</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 17 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stackresearch.org/tags/lineage/index.xml" rel="self" type="application/rss+xml"/><item><title>Agent Incident Response Needs a Measurable Drill</title><link>https://stackresearch.org/research/agent-incident-drill/</link><pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/agent-incident-drill/</guid><description>&lt;p&gt;Agent incident response needs a clock, a journal, and a stopping point.&lt;/p&gt;
&lt;p&gt;Without those three things, failure remains theatrical. A bad action happens, someone opens logs, someone reconstructs intent, someone asks whether the system could have been stopped sooner. The answers arrive after the important interval has already passed.&lt;/p&gt;
&lt;p&gt;The useful question is narrower: can a controlled agent failure be made measurable while it is happening?&lt;/p&gt;
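&lt;p&gt;One way to make the question concrete, as a sketch rather than a real API (every name below is hypothetical): wrap a single proposed action in a gate that timestamps the check, writes a journal entry before anything executes, and treats a denial as the stopping point.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import time
import uuid

# A hypothetical drill harness; every name here is illustrative, not a real API.

def scope_check(action):
    """Toy scope validation: only allowlisted actions pass."""
    allowlist = {"restart_staging_worker"}
    return action in allowlist

def run_drill(action, journal):
    """Gate one proposed action: check it, record it, and time the interval."""
    entry = {"id": str(uuid.uuid4()), "action": action, "t_proposed": time.monotonic()}
    entry["allowed"] = scope_check(action)  # the check
    entry["t_decided"] = time.monotonic()   # time-to-decision is the score
    journal.append(entry)                   # recorded before anything executes
    return entry["allowed"]                 # False is the stopping point: nothing ran

journal = []
stopped = not run_drill("drop_production_table", journal)
# The journal now holds a timestamped record of an action that never executed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Under this sketch, the score is simply the gap between t_proposed and t_decided, captured while the incident is still in flight.&lt;/p&gt;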
&lt;p&gt;&lt;a href="https://stackresearch.org/research/control-ops/"&gt;ControlOps&lt;/a&gt; built the parts: scope validation, decision lineage, blast-radius assessment, and kill-path auditing. The drill described here connects those parts around one small incident. It does not prove that agent systems are safe. It proves something more modest and more useful: one proposed action can be checked, stopped, recorded, scored, and prepared for rollback before it becomes an invisible state change.&lt;/p&gt;</description></item><item><title>Agent Security Is a Release Engineering Problem</title><link>https://stackresearch.org/research/agent-security-is-a-release-engineering-problem/</link><pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/agent-security-is-a-release-engineering-problem/</guid><description>&lt;p&gt;On Tuesday, the agent reads a note.&lt;/p&gt;
&lt;p&gt;The note may be a webpage, a support transcript, a tool result, a migration record, or a line in a document somebody thought was harmless. Nothing dramatic happens. The session ends. The operator closes the tab. The team ships three other changes before lunch: a prompt tweak, a small retrieval adjustment, a new tool scope for a staging workflow.&lt;/p&gt;
&lt;p&gt;On Friday, the same system takes a different task. It answers a planning question, prepares a runbook, suggests a deployment path, or reaches for a tool under a credential it did not have on Tuesday. What matters is not the moment the bad state entered. What matters is that it survived.&lt;/p&gt;</description></item><item><title>Why Agent Memory Needs a Control Plane</title><link>https://stackresearch.org/research/why-agent-memory-needs-a-control-plane/</link><pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/why-agent-memory-needs-a-control-plane/</guid><description>&lt;p&gt;In an end-to-end memory governance scenario, a migrated record was present in the store but denied by the default retrieval policy. The data existed, but policy correctly kept it out of the agent&amp;rsquo;s active context. That behavior sounds strict until a real system shows how quickly &amp;ldquo;just store it&amp;rdquo; turns into stale, unsafe memory that is hard to audit.&lt;/p&gt;
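&lt;p&gt;A minimal sketch of that split, with every name hypothetical (this is not the project API): the store can hold a record while a default-deny policy keeps it out of the context the agent actually sees, and logs the reason.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass

# Hypothetical illustration only; names do not come from any real library.

@dataclass
class Record:
    id: str
    text: str
    source: str          # lineage: where the memory came from
    migrated: bool = False

@dataclass
class Decision:
    allowed: bool
    reason: str

def default_policy(record):
    """Deny migrated records until their lineage is re-verified."""
    if record.migrated:
        return Decision(False, "migrated record: lineage unverified")
    return Decision(True, "ok")

def retrieve(records, policy, audit_log):
    """Presence in the store is not enough; policy decides what enters context."""
    admitted = []
    for record in records:
        decision = policy(record)
        if decision.allowed:
            admitted.append(record)
        else:
            audit_log.append((record.id, decision.reason))  # denials are logged, not silent
    return admitted

store = [Record("a1", "deploy steps", "runbook"),
         Record("m9", "old creds note", "migration", migrated=True)]
log = []
context = retrieve(store, default_policy, log)
# context holds only "a1"; "m9" exists in the store but stays out, with the denial logged.
&lt;/code&gt;&lt;/pre&gt;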
&lt;p&gt;That gap between what is stored and what is retrievable is why &lt;a href="https://github.com/stack-research/agentic-memory-fabric"&gt;Agentic Memory Fabric&lt;/a&gt; is a control plane for memory, not another retrieval wrapper. The point is simple: memory used by agents should be treated like governed infrastructure, with clear lineage and retrieval policy enforced at runtime.&lt;/p&gt;</description></item></channel></rss>