<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Kill Paths on Stack Research</title><link>https://stackresearch.org/tags/kill-paths/</link><description>Recent content in Kill Paths on Stack Research</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 17 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stackresearch.org/tags/kill-paths/index.xml" rel="self" type="application/rss+xml"/><item><title>Agent Incident Response Needs a Measurable Drill</title><link>https://stackresearch.org/research/agent-incident-drill/</link><pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/agent-incident-drill/</guid><description>&lt;p&gt;Agent incident response needs a clock, a journal, and a stopping point.&lt;/p&gt;
&lt;p&gt;Without those three things, failure remains theatrical. A bad action happens; someone opens logs; someone reconstructs intent; someone asks whether the system could have been stopped sooner. The answers arrive after the interval that mattered has already passed.&lt;/p&gt;
&lt;p&gt;The useful question is narrower: can a controlled agent failure be made measurable while it is happening?&lt;/p&gt;
&lt;p&gt;&lt;a href="https://stackresearch.org/research/control-ops/"&gt;ControlOps&lt;/a&gt; built the parts: scope validation, decision lineage, blast-radius assessment, and kill-path auditing. The drill described here connects those parts around one small incident. It does not prove that agent systems are safe. It proves something more modest and more useful: one proposed action can be checked, stopped, recorded, scored, and prepared for rollback before it becomes an invisible state change.&lt;/p&gt;</description></item><item><title>ControlOps: Letting Machines Talk</title><link>https://stackresearch.org/research/control-ops/</link><pubDate>Sat, 14 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/control-ops/</guid><description>&lt;p&gt;An autonomous system should not be judged only by the moment when it answers. The answer is the visible surface. Beneath it there are quieter questions: who allowed this action, which evidence shaped it, how far could the failure travel, and how quickly could the system be stopped?&lt;/p&gt;
&lt;p&gt;These questions are often asked after the fact. A runbook is opened. A trace is reconstructed. Someone searches logs for the decision that mattered. The machine has already acted, and the organization is trying to recover the shape of the action from its shadow.&lt;/p&gt;</description></item></channel></rss>