<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Probing on Stack Research</title><link>https://stackresearch.org/tags/probing/</link><description>Recent content in Probing on Stack Research</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 16 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stackresearch.org/tags/probing/index.xml" rel="self" type="application/rss+xml"/><item><title>The Unaskable Question</title><link>https://stackresearch.org/research/the-unaskable-question-machine/</link><pubDate>Mon, 16 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/the-unaskable-question-machine/</guid><description>&lt;p&gt;Ask a language model something it does not know, and it may admit uncertainty or invent an answer. Ask it something a policy forbids, and it may refuse. Those are familiar failure modes. They have names, benchmarks, mitigations, and whole taxonomies around them.&lt;/p&gt;
&lt;p&gt;There is another category that receives less attention: questions the model cannot engage with because they contradict the structure of the system being asked. Not a knowledge gap. Not a safety boundary. A structural impossibility.&lt;/p&gt;</description></item></channel></rss>