<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Genetic Algorithms on Stack Research</title><link>https://stackresearch.org/tags/genetic-algorithms/</link><description>Recent content in Genetic Algorithms on Stack Research</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 15 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stackresearch.org/tags/genetic-algorithms/index.xml" rel="self" type="application/rss+xml"/><item><title>Evolving Better Prompts</title><link>https://stackresearch.org/research/genetic-prompt-programming/</link><pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate><guid>https://stackresearch.org/research/genetic-prompt-programming/</guid><description>&lt;p&gt;A four-generation prompt evolution run moved average fitness from 0.887 to 0.926. The best prompt reached 0.965. The run used a population of 8 prompts and completed in under 4 minutes on a MacBook Pro with &lt;code&gt;llama3.1:8b&lt;/code&gt; running locally through Ollama.&lt;/p&gt;
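&lt;p&gt;A minimal sketch of such a loop (an assumption, not the run&amp;rsquo;s actual code: it uses the &lt;code&gt;ollama&lt;/code&gt; Python client, and &lt;code&gt;score&lt;/code&gt; is a placeholder for whatever fitness metric the run used):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import random

import ollama  # local llama3.1:8b through Ollama, as in the run


def llm_rewrite(instruction):
    # Mutation and crossover are both just model calls.
    return ollama.generate(model=&amp;quot;llama3.1:8b&amp;quot;, prompt=instruction)[&amp;quot;response&amp;quot;]


def evolve(population, score, generations=4):
    for _ in range(generations):
        # Keep the fitter half as parents.
        parents = sorted(population, key=score, reverse=True)[:len(population) // 2]
        children = []
        while len(parents) + len(children) &amp;lt; len(population):
            a, b = random.sample(parents, 2)
            # Crossover: merge two parents, then mutate the result.
            child = llm_rewrite(f&amp;quot;Merge the strengths of these prompts:\n1. {a}\n2. {b}&amp;quot;)
            children.append(llm_rewrite(f&amp;quot;Rewrite this prompt to improve it:\n{child}&amp;quot;))
        population = parents + children
    return max(population, key=score)
&lt;/code&gt;&lt;/pre&gt;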
&lt;p&gt;The useful trick is not genetic programming in the old sense of random token edits. Here, mutation and crossover are language-model calls, so every variant is still a valid prompt. The model rewrites prompts in ways a human prompt engineer would recognize: tighter wording, added constraints, reordered instructions, more concrete examples, removed weak parts.&lt;/p&gt;</description></item></channel></rss>