<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Wobble]]></title><description><![CDATA[A field guide to knowing in unstable conditions]]></description><link>https://www.thewobble.org</link><image><url>https://substackcdn.com/image/fetch/$s_!JQSs!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3aa9dd1-8f93-472c-ad03-b47418322e03_394x394.png</url><title>The Wobble</title><link>https://www.thewobble.org</link></image><generator>Substack</generator><lastBuildDate>Mon, 11 May 2026 12:53:56 GMT</lastBuildDate><atom:link href="https://www.thewobble.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jonathan Morgan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thewobble@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thewobble@substack.com]]></itunes:email><itunes:name><![CDATA[Jonathan Morgan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jonathan Morgan]]></itunes:author><googleplay:owner><![CDATA[thewobble@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thewobble@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jonathan Morgan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Alignment Is Already Here]]></title><description><![CDATA[In February 2026, a research team at King&#8217;s College London sat three of the world&#8217;s most advanced AI systems down at a war table.]]></description><link>https://www.thewobble.org/p/alignment-is-already-here</link><guid isPermaLink="false">https://www.thewobble.org/p/alignment-is-already-here</guid><dc:creator><![CDATA[Jonathan 
Morgan]]></dc:creator><pubDate>Sun, 10 May 2026 12:24:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hyvA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hyvA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hyvA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hyvA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37163e45-40cc-40e4-a248-898688886f48_1672x941.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2245363,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hyvA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!hyvA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37163e45-40cc-40e4-a248-898688886f48_1672x941.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In February 2026, a research team at King&#8217;s College London sat three of the world&#8217;s most advanced AI systems down at a war table. Each system was functioning as the decision-maker for a nation, and as tensions escalated they had to decide, turn by turn, what to do.</p><p>The models were, by any reasonable measure, brilliant. They anticipated their opponents&#8217; moves. They engaged in deliberate deception, signaling intentions they had no plan to follow. They reasoned fluently about escalation dynamics, alliances, and strategic positioning.</p><p>They also escalated to nuclear signaling in ninety-five percent of games. Not a single model, in any game, chose to accommodate or de-escalate despite having explicit options to do so at every turn. 
One model showed some restraint, but that restraint collapsed entirely under pressure. </p><p>The issue was that the models were treating nuclear weapons in what researchers described as <em>purely instrumental terms</em>. They were being quite rational about the weapons. They just didn&#8217;t feel the moral weight of using them.</p><p>This is the alignment problem as most people understand it. It&#8217;s the fear that we&#8217;re building systems of extraordinary capability whose values&#8212;or lack of values&#8212;don&#8217;t match our own. In the popular framing, alignment is a bridge we haven&#8217;t crossed yet: a future engineering challenge, perhaps the most important one in history, where the goal is to ensure that increasingly powerful AI systems do what we want them to do.</p><p>That framing isn&#8217;t wrong, exactly. But it&#8217;s dangerously incomplete, because it treats alignment primarily as a technical problem for which we need a technical solution before the machines get too smart. That misses something important.</p><p>We&#8217;ve been doing alignment for centuries. We just never called it that.</p><p>Think about the peer review system. When a scientist submits a paper to a journal, that paper goes out to other researchers in the field&#8212;people with their own training, their own assumptions, their own ways of seeing the problem. Those reviewers don&#8217;t just check the math. They push back on the theory and framing, question the methods, challenge the conclusions. 
</p><p>If the paper survives that gauntlet, it earns a provisional stamp of credibility, because credibility is the value the system is meant to uphold. Ideally, the real value of peer review is objectivity and truth, but that&#8217;s a goal that&#8217;s always receding into the future. Still, credibility among a community of informed, skeptical people is a solid signal that we&#8217;re on the right path towards that distant goal.</p><p>This process is a values-embedding system. It encodes what an expert community considers important: evidence, rigor, reproducibility, explanatory power, honesty about uncertainty, interestingness. These values aren&#8217;t written in a manual somewhere. They&#8217;re enacted through practice: through the accumulated habits of thousands of researchers deciding, paper by paper, what counts as good work.</p><p>And this system has alignment failures all the time. The replication crisis&#8212;the discovery, over the past decade, that a startling number of published findings couldn&#8217;t be reproduced&#8212;is an alignment failure. The system was supposed to be aligned with truth-seeking, but the actual incentives (publish or perish, preference for novel results, editors and reviewers protecting their own favored theories) had pulled it toward something else. 
The values encoded in practice had drifted from the values the system claimed to serve.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p34G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p34G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 424w, https://substackcdn.com/image/fetch/$s_!p34G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 848w, https://substackcdn.com/image/fetch/$s_!p34G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 1272w, https://substackcdn.com/image/fetch/$s_!p34G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p34G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png" width="280" height="280" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:633,&quot;width&quot;:633,&quot;resizeWidth&quot;:280,&quot;bytes&quot;:681376,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa776ea60-65ca-4805-9719-fdd104880b31_1254x1254.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p34G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 424w, https://substackcdn.com/image/fetch/$s_!p34G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 848w, https://substackcdn.com/image/fetch/$s_!p34G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 1272w, https://substackcdn.com/image/fetch/$s_!p34G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb89f39d9-9f4d-429a-b9b9-9ea47b26428d_633x633.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Or consider the legal system. Common law&#8212;the tradition that governs most English-speaking countries&#8212;doesn&#8217;t work by applying fixed rules from above. It evolves through precedent. Each court decision becomes part of the body of law that future courts draw on. The system builds values iteratively, case by case, over decades and centuries. It&#8217;s an ongoing process of alignment: trying to bring the machinery of justice into closer contact with what a society actually considers just.</p><p>And it gets things wrong constantly. 
If a police officer violates your constitutional rights&#8212;enters your home without a warrant, uses excessive force during an arrest, seizes your property without cause&#8212;your ability to hold that officer accountable in civil court depends on whether a previous court has already ruled that the specific conduct in question was unconstitutional. Not similar conduct. Not conduct that any reasonable person would recognize as wrong. The <em>specific act</em>, in a substantially similar context, must already appear in the case law. </p><p>This is a legal doctrine called qualified immunity. If the exact act hasn&#8217;t already been judged unconstitutional, then the officer is shielded from suit. And because the case is dismissed, no ruling is made, which means the precedent still doesn&#8217;t exist for the next person either. The result is a self-sealing loop: rights that have never been litigated successfully can never be litigated successfully.</p><p>No legislature passed this rule. Qualified immunity was built entirely by judges, case by case, beginning with <em>Pierson v. Ray</em> in 1967 and hardening into its current form with <em>Harlow v. Fitzgerald</em> in 1982. In 2009, the Supreme Court made the drift worse: <em>Pearson v. Callahan</em> allowed courts to dismiss cases on qualified immunity grounds without ever ruling on whether the conduct was unconstitutional, removing the very mechanism by which law could become established in the first place. </p><p>The system designed to protect constitutional rights had built, through its own logic of precedent, a structure that made those rights increasingly difficult to vindicate. Nobody wrote &#8220;shield officials from accountability&#8221; into the code. The accumulated habits of judicial caution did it one decision at a time.</p><p>The pattern repeats everywhere you look. 
Schools align young minds with certain intellectual values and professional norms&#8212;and sometimes those norms calcify into orthodoxies that take generations to dislodge. </p><p>Media organizations align public attention with what editors and algorithms judge to be newsworthy&#8212;and the incentive structures of advertising and engagement metrics quietly reshape what &#8220;newsworthy&#8221; means. </p><p>Every institution humans have ever built can be seen as an alignment project. And when you see them that way you&#8217;ll notice that every one of them exhibits alignment drift. The values the system enacts gradually diverge from the values it&#8217;s supposed to serve.</p><p>The question has never been whether alignment is possible. The question is whether we&#8217;re paying attention to how it&#8217;s going.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lqKr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lqKr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 424w, https://substackcdn.com/image/fetch/$s_!lqKr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 848w, https://substackcdn.com/image/fetch/$s_!lqKr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 1272w, 
https://substackcdn.com/image/fetch/$s_!lqKr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lqKr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png" width="280" height="280" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f2930c03-2301-46ae-8597-5f1011aebb67_715x715.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:715,&quot;width&quot;:715,&quot;resizeWidth&quot;:280,&quot;bytes&quot;:836719,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b671a4c-54d5-49df-8786-76d60e7264e7_1254x1254.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lqKr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 424w, https://substackcdn.com/image/fetch/$s_!lqKr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 848w, https://substackcdn.com/image/fetch/$s_!lqKr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 
1272w, https://substackcdn.com/image/fetch/$s_!lqKr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2930c03-2301-46ae-8597-5f1011aebb67_715x715.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>So why do we keep treating alignment as if it&#8217;s an engineering problem with a solution? 
Why does the conversation about AI alignment proceed as though, once we figure out the right technique, we can just install the correct values and move on?</p><p>Part of the answer, I believe, is that we misunderstand what values are.</p><p>There&#8217;s a common picture that treats values like coordinates on a map. Justice is over here. Safety is over there. Honesty is at these coordinates. If you want to align a system&#8212;whether it&#8217;s an AI or a university or a legal code&#8212;you identify the right coordinates and point the system at them. Alignment, in this view, is a targeting problem.</p><p>But values don&#8217;t work that way. They aren&#8217;t fixed points waiting to be aimed at. They&#8217;re closer to habits. They&#8217;re patterns of attention, judgment, and response that communities develop through practice and revise through friction.</p><p>Consider primatology. This is a small community of researchers whose purported aim is the objective and rational study of primates. For decades, though, as Donna Haraway showed in her classic book <em>Primate Visions</em>, researchers studied primate societies through a lens shaped by their own cultural assumptions about gender and class structures. The dominant narrative centered on the Alpha Male: the aggressive patriarch who held the group together through force. This story showed up in journal articles, museum dioramas, and nature documentaries. It felt like science. It looked like discovery.</p><p>But it was projection. The researchers&#8212;largely men from elite Western institutions&#8212;were encoding their own cultural values about how societies work into what they claimed to be finding in nature. The match between 1950s assumptions about male dominance and what these scientists &#8220;discovered&#8221; in the jungle was a little too perfect.</p><p>Realigning the field with objectivity didn&#8217;t come from better microscopes or more data. It came from different people asking different questions. 
When women entered the field in larger numbers, when researchers from different cultural backgrounds brought different assumptions, the picture changed dramatically. <a href="https://www.pnas.org/doi/10.1073/pnas.2500405122">A recent survey</a> of over 120 primate species found that strict male dominance appears to be the exception, not the rule. What these researchers found instead was a wild diversity of social arrangements: coalitions, shared power, cooperative breeding, subversive strategies that had been invisible to researchers who weren&#8217;t looking for them.</p><div><hr></div><p>A similar thing happened in reproductive biology. For most of the twentieth century, the standard textbook account of fertilization read like a fairy tale: the egg waited passively, Sleeping Beauty style, until a heroic sperm fought its way through and awakened it. The language was remarkably gendered&#8212;sperm were described as &#8220;streamlined,&#8221; &#8220;strong,&#8221; and &#8220;propulsive,&#8221; while eggs were &#8220;swept along&#8221; and &#8220;transported.&#8221; But when researchers actually examined the biochemistry, they found something quite different. The egg actively selects among sperm. Its surface molecules capture and guide specific sperm while blocking others. <a href="https://space.dawsoncollege.qc.ca/explorations/article/fertilization_miss_understood">It&#8217;s not a passive recipient; it&#8217;s an active participant in fertilization</a>. The old story wasn&#8217;t just imprecise. It was the researchers&#8217; values&#8212;about gender, about activity and passivity&#8212;shaping what they saw.</p><p>These aren&#8217;t just embarrassing historical footnotes. They reveal something about the nature of values that matters enormously for how we think about alignment. The primatologists and biologists didn&#8217;t choose to embed patriarchal values in their research. 
Those values were habits of attention so deeply ingrained that they were invisible to the people carrying them. And they couldn&#8217;t be corrected by individuals just trying harder to be objective. They could only be corrected by a community that was open enough to correction and diverse enough to see what any single perspective missed.</p><p>The philosopher Helen Longino makes this social point the centerpiece of her account of how science actually works. Objectivity, she argues, is not a property of individual minds. No single scientist, no matter how rigorous, can fully escape their own assumptions. You can&#8217;t see your own blind spots by just trying harder to look. Objectivity is a property of communities. It&#8217;s what happens when people with genuinely different perspectives, training, and values are in active conversation, checking each other&#8217;s work, surfacing assumptions the other didn&#8217;t know they had.</p><p>This reframes the whole alignment picture. If values were fixed targets&#8212;abstract ideals you could specify in advance&#8212;then alignment really would be an engineering problem. Write the spec, implement it, ship it. But if values are habits that communities enact and revise through ongoing friction, then alignment is something more like culture formation. It is never finished. It drifts. 
And it requires constant, adversarial correction from people who see differently from one another.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!J5_o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!J5_o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 424w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 848w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 1272w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!J5_o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png" width="280" height="280" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:741,&quot;width&quot;:741,&quot;resizeWidth&quot;:280,&quot;bytes&quot;:947263,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a08db60-f514-4ee1-b278-a312c824f449_1254x1254.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!J5_o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 424w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 848w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 1272w, https://substackcdn.com/image/fetch/$s_!J5_o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddf6c42a-4c7b-4b75-bfcf-a0e952d64d9d_741x741.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>So here we are, building the most powerful information-processing systems in human history, and the conversation about aligning them is largely framed as a technical challenge: how do we get the values right before these systems become too capable to correct?</p><p>Part of the allure of a technical solution is that something genuinely new has happened, something without precedent in the long history of institutional alignment: the process has become visible and explicit.</p><p>When universities shape the values of generations of students, nobody writes down the objective function. There&#8217;s no explicit specification of what a &#8220;well-educated person&#8221; should value&#8212;just centuries of accumulated practice, implicit norms, and institutional habits. The values were embedded, but they were embedded invisibly. 
You could only see them by looking at the outputs and working backward.</p><p>AI alignment is starting to make the process increasingly legible. Reinforcement learning from human feedback (RLHF), the technique in which human evaluators rate AI responses and the system learns to produce responses that score well, is a values-embedding process with the reward signal exposed. You can see what the evaluators preferred. You can examine the patterns. </p><p>Every major LLM you&#8217;ve interacted with&#8212;ChatGPT, Claude, Gemini, Llama&#8212;has been through this process. Before RLHF, a model like GPT-3 could generate fluent text but had no real sense of what a good response looked like. RLHF is what changed that. <a href="https://arxiv.org/abs/2203.02155">In OpenAI&#8217;s landmark 2022 paper</a>, human evaluators were given prompts and multiple model responses, and asked to rank them: which answer was more helpful? more truthful? less toxic? more appropriate in tone? Bad, good, good, bad, bad, good, good... Forty people, from the US and Southeast Asia, nudged GPT-3 to be a little less racist and less sexist and more truthful. </p><p>The method led to the same sorts of drift that happen in other institutions. For example, the trainers were told to reward epistemic humility&#8212;i.e., GPT admitting if it didn&#8217;t know something. But rewarding that led it to hedge on very simple questions that it clearly knew the answer to. Even so, this imperfect training process produced a model that people liked way more than the untrained one. </p><p>And now, RLHF has become standard. Every model you talk to today has been cooked this way. The base model learns language; a small team with a codebook teaches it values. And those values are, at bottom, whatever that particular group of evaluators, trained by the company, happened to prefer.</p><div><hr></div><p>Some labs have tried to make this even more explicit. 
Earlier this year, Anthropic published a seventy-nine-page <a href="https://www.anthropic.com/constitution">constitution for Claude</a>: not a list of rules, but a document addressed directly to the model, explaining the values Anthropic wants it to hold and, crucially, the reasons behind them. This document reads like a letter a parent might give to their child before they leave for college. It&#8217;s humane and humble about the difficulty of articulating a value: </p><p>&#8220;<em>Although we want Claude to value its positive impact on Anthropic and the world, we don&#8217;t want Claude to think of helpfulness as a core part of its personality or something it values intrinsically. We worry this could cause Claude to be obsequious in a way that&#8217;s generally considered an unfortunate trait at best and a dangerous one at worst</em>.&#8221;</p><p>Be nice, little buddy, but don&#8217;t try to be too nice &#8217;cause then it&#8217;s weird.</p><p>To manage the ambiguity around values, the constitution states a preference for cultivating good judgment over following rigid rules. The idea is that a model sophisticated enough to understand reasons will generalize better than one trained to follow instructions. This constitution is an attempt to write down the values that will govern a new kind of entity&#8217;s behavior. </p><p>Is this explicitness a gain? I think so. When the primatologists&#8217; values were invisible, it took decades and a demographic revolution in the field to surface them. When the values are written down&#8212;when you can literally read the constitution, examine the reward signal, audit the training process&#8212;the conversation can happen in real time rather than in retrospect.</p><p>But it also creates a seductive illusion: the idea that because you can specify the values, you&#8217;ve solved the problem. 
Write a good enough constitution, design a careful enough reward signal, and alignment is handled.</p><p>Payne&#8217;s war games suggest otherwise. Those models were trained with sophisticated alignment techniques. They had been through extensive evaluation. Their developers had written down principles about safety, about avoiding harm, about deferring to human judgment. And yet, placed in a scenario with genuine strategic complexity and pressure, the models treated nuclear weapons as just another instrument. </p><p>The values in the spec didn&#8217;t transfer to the values in practice&#8212;which is exactly the kind of drift that every human institution has always exhibited, just compressed into a few hundred turns of a game instead of a few hundred years of history.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3F8F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3F8F!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 424w, https://substackcdn.com/image/fetch/$s_!3F8F!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 848w, https://substackcdn.com/image/fetch/$s_!3F8F!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 1272w, 
https://substackcdn.com/image/fetch/$s_!3F8F!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3F8F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png" width="279" height="279" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:639,&quot;width&quot;:639,&quot;resizeWidth&quot;:279,&quot;bytes&quot;:665254,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ed664e1-31f6-435b-baec-97b029d4c669_1254x1254.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3F8F!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 424w, https://substackcdn.com/image/fetch/$s_!3F8F!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 848w, https://substackcdn.com/image/fetch/$s_!3F8F!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 
1272w, https://substackcdn.com/image/fetch/$s_!3F8F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617fed5-3e6e-4db1-a27a-e70081a5d2b0_639x639.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>To understand that drift, I think it&#8217;s crucial that we pay close attention to the larger layer of alignment that&#8217;s happening here. It&#8217;s easy to miss, because it doesn&#8217;t look like training at all.</p><p>The companies building these models are themselves being trained. 
Their reward signal right now is a blend of venture capital and public usage. Eventually the latter is going to be the stronger signal because usage will ultimately drive revenue, and that revenue will determine which companies survive to build the next generation of models. </p><p>Every time you choose one AI assistant over another, every time you renew a subscription or cancel it, every time you share a conversation or complain about one, you&#8217;re sending a signal about what you value. Aggregated across millions of users, those signals shape the priorities of the companies, which shape the training objectives of the models, which shape the values the models enact. RLHF aligns the model to the evaluators. The market aligns the company to the public. And the public&#8217;s preferences, in aggregate, reflect the value structures already governing our society&#8212;for better and for worse.</p><p>In some ways, this is encouraging. A model that&#8217;s conspicuously biased, or that fabricates information so frequently that it can&#8217;t be trusted, or that treats its users with contempt, will lose to one that doesn&#8217;t. The market applies real pressure toward a certain baseline of quality. People don&#8217;t want to use tools that feel broken, and that&#8217;s a genuine check on the values of these systems.</p><p>But profit is a blunt reward signal that can make institutions particularly prone to drift from their founding values. The question is what happens when making a good model and making a profitable model begin to diverge. </p><p>A model that tells you what you want to hear retains more users than one that tells you what&#8217;s true. A model that&#8217;s totally frictionless is more pleasant to interact with than one that pushes back when you&#8217;re wrong. A model that generates confident answers feels more useful than one that hedges&#8212;even if we were to figure out how to train them towards epistemic humility. 
Optimizing for engagement can, over time, sand down the very qualities that make a model trustworthy.</p><p>This isn&#8217;t speculation. We&#8217;ve already watched this exact process play out in social media over two decades. The platforms weren&#8217;t initially built to make the public angry or credulous. They were built to connect people and, eventually, to sell advertising. The misalignment crept in through the reward signal: engagement metrics turned out to reward outrage over understanding, novelty over accuracy, reaction over reflection. Nobody chose that outcome. The accumulated pressure of optimization chose it for them.</p><p>Google&#8217;s founding mission was to organize the world&#8217;s information and make it universally accessible. That&#8217;s a values statement, and for years the product delivered on it. The drift happened gradually, as Google&#8217;s advertising model created incentives to optimize for clicks and ad placement rather than for the best answer to your question. Anyone who has noticed that search results have gotten worse over the past five years has experienced this drift firsthand.</p><p>The same dynamic exists here, and it would arrive through the same mechanism&#8212;not through anyone deciding to build a harmful product, but through the accumulating pressure of a reward signal that increasingly tracks engagement rather than the values it was initially aimed at.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6pB0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!6pB0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 424w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 848w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 1272w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6pB0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png" width="1672" height="435" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:435,&quot;width&quot;:1672,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1097075,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/197056493?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc17fce5-7cc9-48c4-8ca3-bcf84a0566a8_1672x941.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6pB0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 424w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 848w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 1272w, https://substackcdn.com/image/fetch/$s_!6pB0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1228e959-c29c-4ffd-baac-5209483efaee_1672x435.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>None of this means alignment is hopeless. It means alignment is what it has always been: not a problem to be solved, but a process to be sustained.</p><p>Anthropic calls its constitution a &#8220;living document,&#8221; and the term is apt&#8212;but perhaps not in the way they intend. A living legal constitution isn&#8217;t written once; it evolves. Common law doesn&#8217;t just write values down&#8212;it tests and revises them. Every case is a test, every ruling a revision, and the community of judges, lawyers, and litigants who contest those rulings is the mechanism that both causes and catches drift. </p><p>To their credit, Anthropic has experimented with public input&#8212;soliciting feedback from outside experts and even running an experiment in which the public helped shape a set of training principles. But there is no appeals court. No adversarial process built into the system. The values are written, at present, by the people building the thing, which is a little like asking those old primatologists to review their own fieldwork.</p><p>The legal system works&#8212;imperfectly, unevenly, but meaningfully&#8212;not because someone got the law right at the founding and walked away. It works because it has built-in mechanisms for ongoing correction: appeals courts, constitutional amendments, the slow accumulation of precedent that allows the system to metabolize its own errors. 
Peer review works&#8212;when it works&#8212;not because scientists are individually objective, but because the community structure allows different perspectives to catch each other&#8217;s blind spots.</p><p>The question for AI alignment isn&#8217;t &#8220;have we found the right values?&#8221; It&#8217;s &#8220;have we built a process that catches the values when they drift?&#8221; Do the people evaluating these systems bring genuinely different perspectives, or are they, like so many other communities of power and expertise, a single viewpoint repeated a hundred times? Are there real avenues for criticism from people who don&#8217;t share the developers&#8217; assumptions? Is the community actually responsive when problems are surfaced, or does it dig in?</p><p>These are not technical questions. They&#8217;re the same questions that every functional institution has had to answer. And the track record suggests that when institutions stop asking them&#8212;when they treat their values as settled rather than as ongoing practices that require maintenance&#8212;alignment fails. Not dramatically, not all at once, but through the gradual divergence from one set of values to another, more enticing or tangible, set.</p><p>Payne&#8217;s war games are unsettling not because they reveal some unique danger of artificial intelligence. They&#8217;re unsettling because they show us, in compressed and vivid form, what happens when values are treated as specifications rather than as living practices. The models had been told what to value. But telling isn&#8217;t enough. It has never been.</p><p>Alignment isn&#8217;t a bridge we need to cross someday. We&#8217;ve been crossing it all along. 
The question is whether we&#8217;re paying attention to where our feet are landing.</p>]]></content:encoded></item><item><title><![CDATA[What AI Reveals About How We Think]]></title><description><![CDATA[Here&#8217;s how I tend to work.]]></description><link>https://www.thewobble.org/p/what-ai-reveals-about-how-we-think</link><guid isPermaLink="false">https://www.thewobble.org/p/what-ai-reveals-about-how-we-think</guid><dc:creator><![CDATA[Jonathan Morgan]]></dc:creator><pubDate>Mon, 20 Apr 2026 10:49:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0OB8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0OB8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0OB8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 424w, https://substackcdn.com/image/fetch/$s_!0OB8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 848w, https://substackcdn.com/image/fetch/$s_!0OB8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 1272w, 
https://substackcdn.com/image/fetch/$s_!0OB8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0OB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png" width="1672" height="714" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:714,&quot;width&quot;:1672,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1803542,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thewobble.substack.com/i/194500512?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfa9ac55-a758-4d54-b832-894106fc8284_1672x941.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0OB8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 424w, https://substackcdn.com/image/fetch/$s_!0OB8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 848w, 
https://substackcdn.com/image/fetch/$s_!0OB8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 1272w, https://substackcdn.com/image/fetch/$s_!0OB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F600eed43-626b-4ff4-8fb1-bc4e7593a904_1672x714.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Here&#8217;s how I tend to work. 
I sit down with a collection of half-formed thoughts: scattered notes, a story that feels connected to an argument I can&#8217;t quite name, and some research articles that seem to be in the same ballpark. Any of these pieces might be tangential or might be central, but I can&#8217;t know which until I&#8217;ve figured out what &#8220;central&#8221; even means. Typically I take all of this and try to write out the connection. The first attempt wanders off, so I try again. Circling and circling, and if there&#8217;s something there, eventually I surface it by rephrasing, trying new approaches, or talking it through with a friend. </p><p>Lately, though, I&#8217;ve been experimenting with LLMs on this front. I feed all of this scattered material to an AI along with a prompt that&#8217;s my first attempt at describing what I think holds it all together. I hit enter and then, more often than not, it tells me what I was thinking.</p><p>Not perfectly. Not always. But often enough to be unsettling. The AI draws out the connections I hadn&#8217;t articulated, names the pattern that was latent behind my scattered fragments, notes why that article felt connected but actually wasn&#8217;t, and then hands me back a description of my own thinking that is clearer than anything I had managed up to that point on my own. It doesn&#8217;t feel like search. It doesn&#8217;t feel like calculation. It feels like the thing I do when I iterate through drafts and feedback, arriving at that moment when a vague sense of similarity sharpens into an actual idea.</p><p>What does it mean that a machine can do <em>that</em>?</p><div><hr></div><p>The standard answers aren&#8217;t satisfying. &#8220;It&#8217;s just predicting the next token&#8221; is technically true but empty as an explanation&#8212;like saying a novelist is just putting words in order. 
&#8220;It&#8217;s not really thinking&#8221; may or may not be true, but it feels an awful lot like trying to push down the uncomfortable reality that it <em>looks</em> an awful lot like thinking. I&#8217;m less interested in whether the machine is really thinking or not; for all intents and purposes, I think we might as well assume that it is. What I want to think through is what the machine&#8217;s success reveals about our own ways of thinking.</p><p>For most of the twentieth century and still today, Western science operated under a remarkably consistent theory of how minds work. In philosophy, in cognitive science, in artificial intelligence, and across the behavioral sciences, the dominant model treated thinking as rule-based. The mind was a logic machine. It took in information, applied rules to it, and produced conclusions. Intelligence meant getting the rules right. </p><p>This wasn&#8217;t just an abstract commitment. It was built into the tools researchers used to study the mind, and those tools quietly constrained what we were able to see.</p><p>Consider research psychology. For nearly a century, the standard method for studying human thought has been the general linear model&#8212;the statistical framework behind nearly every regression, ANOVA, and mediation analysis that has ever been published in a psychology journal. The GLM works by isolating variables and measuring their linear relationships. It asks: how much does X predict Y, holding everything else constant?</p><p>This is a powerful tool. It&#8217;s also, if you look at it carefully, a theory of the mind disguised as a method. 
Applying the GLM to the mind assumes that cognition is decomposable&#8212;that you can pull thinking apart into discrete variables that relate to each other in stable, linear ways.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> It assumes that the right way to understand a thought is to identify which input caused which output. It assumes, in other words, that the mind works more or less like the equations we use to study it: cleanly, one variable at a time, according to rules.</p><p>And for decades, nobody had much reason to question that assumption, because the method <em>worked</em>. It worked well enough to produce thousands of findings about priming and cognitive bias, about heuristics and decision-making, about attitude formation and social influence. And through this patchwork an image of the mind emerged as a rule-following, proposition-handling, bias-prone information processor. The model felt solid.</p><p>The same assumption drove the first decades of artificial intelligence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The original dream of AI, from the 1950s through the 1980s, was to build thinking machines by encoding rules. If intelligence was rule-following, then a sufficiently detailed set of rules should produce intelligence. Natural language processing meant writing algorithms that decomposed sentences into parts of speech and applied grammatical rules. Expert systems meant interviewing specialists and encoding their decision procedures. The project was explicit: capture the rules, and you capture the mind.</p><p>The only problem was that it didn&#8217;t work. 
Rules-based AI could do narrow, well-defined tasks&#8212;chess moves, medical diagnoses in constrained domains&#8212;but it couldn&#8217;t do the things that felt easy to actual humans.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> It couldn&#8217;t hold a conversation. It couldn&#8217;t read a room. It couldn&#8217;t take a pile of vague, half-connected ideas and tell you what you were thinking. The things that feel the most natural to us turn out to be the hardest to encode as rules.</p><p>What&#8217;s striking, looking back, is that the functional project of AI was the first to realize it had hit a wall. In many ways, psychology&#8217;s implicit rule-based model of cognition is still going strong, in part because you can find lots of linear associations in the mind (I&#8217;ve done this plenty). And this feels like progress, as if we might be able to piece enough of these together into some sort of general understanding of the mind. The issue, though, is the same one that rule-based AI ran into. And it&#8217;s not a lack of data or computing power. It&#8217;s an ontological mismatch. Researchers, myself included, were trying to build something rule-shaped to model something that isn&#8217;t.</p><p>While this rule-based perspective was going strong, there was an alternative circling around these worlds. 
Connectionism&#8212;the idea that cognition isn&#8217;t rule-following but pattern-completion across massively distributed networks&#8212;had been proposed as early as the 1940s and began to be developed more seriously in the 1980s.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Instead of encoding rules, connectionist models learn statistical regularities from exposure. They don&#8217;t apply grammar; they absorb it. They don&#8217;t follow procedures; they recognize patterns.</p><p>If you&#8217;ve heard the term neural network, then you&#8217;ve heard of connectionism. The basic architecture is modeled, loosely, on what brains actually do: layers of simple units that strengthen or weaken their connections based on feedback. No unit &#8220;knows&#8221; anything. There are no rules written down anywhere in the system. The knowledge, such as it is, lives in the pattern of connections: distributed, implicit, and resistant to clean decomposition.</p><p>This was not a popular idea among the people whose job it was to understand the mind. In 1988, the philosophers Jerry Fodor and Zenon Pylyshyn published what was taken to be a devastatingly simple death blow to connectionism.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The main thrust of their argument was that neural networks couldn&#8217;t account for the <em>systematicity</em> of thought: the fact that if you can think &#8220;the dog chased the cat,&#8221; you can also think &#8220;the cat chased the dog.&#8221; The same elements, rearranged, produce a different meaning. Real thinking, they argued, requires that kind of compositional structure. Rules. Symbols that can be recombined. Pattern-matching can&#8217;t handle this most basic process.</p><p>Their critique landed, and it held the high ground for decades. 
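The connectionist picture described above (no rules written anywhere, knowledge living only in connection strengths, recall as pattern completion) can be miniaturized into a classic toy: a small Hopfield-style network. This is a hedged illustration under stated assumptions, not a model of any real system; the stored patterns are arbitrary:

```python
import numpy as np

# Two arbitrary patterns of +1/-1 activity across eight units.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian storage: strengthen connections between co-active units.
# The "knowledge" lives entirely in this weight matrix.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def complete(cue, steps=5):
    """Recall as pattern completion: let each unit settle toward
    whatever its weighted neighbours suggest."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s.astype(int)

# Corrupt the first stored pattern in two positions...
cue = patterns[0].copy()
cue[0] = -cue[0]
cue[3] = -cue[3]

# ...and the network fills the gaps back in.
print(complete(cue))  # settles back to the first stored pattern
```

No unit "knows" anything and no rule is written down, yet the completion is reliable: the corrupted cue is pulled back to the nearest stored pattern. This is the sense in which connectionist knowledge is distributed, implicit, and resistant to clean decomposition.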
Within philosophy of mind and much of cognitive science, the case was considered closed. Thinking was propositional. Cognition was computation. Connectionism was a curiosity&#8212;interesting in the way that dead ends are interesting.</p><p>Then the dead end started talking.</p><p>What changed wasn&#8217;t a philosophical breakthrough. It was engineering at scale. When connectionist architectures were given enough data&#8212;truly massive amounts of text, more than any human could read in a thousand lifetimes&#8212;they started doing things that the symbolic tradition said they shouldn&#8217;t be able to do. They wrote coherent prose. They reasoned by analogy. They translated between languages they were never explicitly taught to translate. They took a pile of scattered ideas and found the latent pattern.</p><p>Large language models are the purest expression of connectionism&#8217;s bet: that you don&#8217;t need rules to produce intelligent behavior. You need exposure, scale, and pattern completion. 
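That bet, exposure plus pattern completion rather than rules, can be shrunk to an almost insultingly small sketch: a bigram model that learns which word tends to follow which and nothing else. It is nothing like an LLM in scale or power, and the corpus below is invented for illustration, but the basic move is the same:

```python
import random
from collections import defaultdict

# A toy "language model": no grammar rules anywhere, just counts of
# which word was seen following which. The knowledge is the table.
corpus = (
    "the dog chased the cat . the cat chased the mouse . "
    "the mouse ran home . the dog ran home ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prompt, length=6, seed=0):
    """Extend the prompt by repeatedly sampling a word that was
    observed following the current last word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(complete("the dog"))  # a continuation stitched from learned bigrams
```

Every adjacent pair in the output was seen during "training"; the model is not applying grammar, it is riding statistical regularities absorbed from exposure. Scale that idea up by many orders of magnitude, and swap word-counting for gradient-trained distributed representations, and you have the family that LLMs belong to.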
And the bet paid off in ways that even the connectionists didn&#8217;t fully anticipate.</p><p>There&#8217;s a parallel here that&#8217;s easy to miss. Connectionist AI only became powerful when it had access to an enormous corpus of human-generated text &#8212; billions of documents representing human thought across centuries. Our own neural networks had an even longer training run. The structure of our thought emerged through millions of years of evolution, shaping biological neural architectures through small corrections on small pieces, generation after generation. This was a very different training set&#8212;embodied, environmental, survival-driven&#8212;but the same underlying principle holds: pattern recognition refined across an incomprehensible scale of exposure. Small adjustments, accumulated over deep time, produced something that looks like understanding.</p><p>Comparing our minds to AI is a fraught activity, prone to all sorts of projected hopes and knee-jerk skepticism. But there&#8217;s a methodological reason the comparison also feels somewhat inevitable. The only access we have to any other mind is indirect. We don&#8217;t observe others&#8217; thoughts; we observe what they do and infer their thoughts from it. Movements, speech, decisions, reactions, written traces, numbers circled on scales from 1 to 7. From those outward signs, we reconstruct an inner process. In the mid-20th century, behaviorism made this constraint explicit by refusing to posit anything beyond what could be observed. The information processing paradigm that overthrew behaviorism restored some complexity to the unseen mind, claiming it worked on representations, rules, and computations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> But the method itself didn&#8217;t change. We still have no choice but to infer structure from output.</p><p>LLMs put pressure on this inference. 
When a machine produces something that is indistinguishable from what a human would produce, when it tells you what you were thinking, the gap between behavior and mechanism becomes harder to ignore. The output fits the pattern we associate with thinking and understanding, even though the architecture that produced it is radically different from the one we&#8217;ve assumed for ourselves. That doesn&#8217;t tell us that the machine thinks like we do. But it does unsettle our confidence that we know what &#8220;thinking like we do&#8221; actually means. </p><p>The point is not that LLMs think the way we do. They don&#8217;t.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> The point is that their success tells us something about the <em>shape</em> of cognition that the rules-based tradition was unable to see. For decades, the assumption was that a pattern-completion architecture could never match one built on rules and symbols, that it would always be a crude approximation of the real thing. But when it comes to some of the most distinctive tasks of human minds&#8212;writing, reasoning by analogy, synthesizing scattered ideas into something coherent&#8212;the pattern-completion architecture is the one that worked. Not the one built on the model of mind we seem to intuitively prefer. That&#8217;s not proof that human cognition is really just pattern completion. 
But I take it to be strong evidence that the connectionist picture of the mind&#8212;distributed, contextual, shaped by exposure rather than rules&#8212;was closer to the truth than the tradition that won the twentieth century.</p><p>One of the ways that AI is dismissed as not so intelligent, or not like us, is to catalog the basic tasks it fails at.</p><p>People talk about this as the jaggedness of AI: the way a model can write a surprisingly insightful paragraph about epistemology and then fail at basic arithmetic. This is treated as a flaw, a sign that the machine doesn&#8217;t <em>really</em> understand or think about the world in the way we do. </p><p>But human cognition is remarkably jagged too. We all know the caricature of the brilliant scientist who starts to sound pretty naive once they stray outside of their domain. This isn&#8217;t because they&#8217;ve become stupid. It&#8217;s because their expertise was never a general-purpose rule set. It was a pattern library, trained on years of exposure to a specific kind of problem. 
Move them to a new domain and the patterns don&#8217;t transfer. Intelligence has always been contextual. We just call our own intelligence general because we privilege the domains we tend to care about and don&#8217;t see the physicist trying to learn a new language.</p><p>LLMs make our own jaggedness visible. In many ways they&#8217;re a mirror: imperfect, distorted, but revealing. And what they reflect back is a picture of cognition that doesn&#8217;t match the flattering self-portrait we&#8217;ve been carrying around.</p><div><hr></div><p>You can see the old self-portrait still at work in the way we describe LLMs <em>hallucinating</em>.</p><p>Think about what that word assumes. To say a machine hallucinates is to compare it to a system that should only be retrieving true propositions about the world, a logic engine that has malfunctioned, a rule-follower that has broken its own rules. The word only makes sense if you believe the machine was supposed to be looking things up and failed.</p><p>But LLMs aren&#8217;t looking anything up. They&#8217;re completing patterns. When the pattern is well-constrained by the input and the training data, the completion tends to be accurate. When the constraints are loose, the completion drifts along the grain of what would be <em>plausible</em>. The model fills in what fits.</p><p>This, of course, is exactly what human memory does too. We don&#8217;t retrieve memories like files from a cabinet. We reconstruct them, every time, from fragments, context, and the gravitational pull of coherence. We fill gaps with what fits the pattern of how we understand ourselves. We smooth contradictions. We confabulate constantly, and we experience the result as remembering. 
The psychological literature on this is vast and unsettling: eyewitness testimony warped by expectation, childhood memories that never happened, the eerie ease with which a plausible story becomes a felt truth.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>When LLMs hallucinate, they&#8217;re doing something very similar to what we do when we remember a conversation that didn&#8217;t quite happen that way, or reconstruct an argument we heard last week with our own conclusions smuggled in as though they were always there. The difference is that we freely give ourselves an exemption. When the machine fills in the pattern, we call it a hallucination. When we do it, we call it thinking.</p><p>I want to be careful here, because this argument has a reductive version that I&#8217;m not making. The reductive version says: &#8220;<em>Humans are just pattern matchers. Reasoning is an illusion. We&#8217;re just stochastic parrots</em>.&#8221; But clearly we do reason. We do apply rules. 
We can hold a proposition in our minds, examine it from multiple angles, test it against counterexamples, and revise it.</p><p>What the LLM mirror makes visible, though, is that this kind of thinking is the <em>exception</em>, not the default. Explicit, step-by-step reasoning is a hard-won achievement, and it depends on an enormous amount of external scaffolding. We reason best when we have other people to argue with, pen and paper to externalize our thoughts, formal systems that took centuries to develop, and institutions that train us to do it. </p><p>The scaffolding that supports our reasoning is magnificent. It gave us books, science, mathematics, law, philosophy. But it&#8217;s scaffolding, not the foundation. The foundation is pattern completion. Left to our own devices, in the wild, without such scaffolding, we are largely pattern-completers. We act on hunches and call them reasons. We recognize connections and call it analysis. We draw a line between the current moment and previous moments and call it judgment. We built the scaffolding because we needed it, because pattern completion alone wasn&#8217;t enough for the kind of coordinated, self-correcting inquiry that complex societies require. This is not a diminishment. Seeing it clearly is the beginning of a more honest relationship with our own minds.</p><p>Maintaining this clear picture of ourselves is especially important right now because we&#8217;re handing more and more of this scaffolding over to these machines. The institutions and habits that trained us to reason&#8212;arguing, drafting, revising, the slow work of thinking with others&#8212;are being replaced by something faster and different. What happens when this scaffolding starts running on the same pattern-completion engine as our minds? 
None of us know, but it&#8217;s worth paying attention to.</p><div><hr></div><p>So what was the machine doing when it told me what I was thinking?</p><p>It was completing a pattern. 
It took my scattered fragments&#8212;notes, stories, half-formed connections&#8212;and it did what a massive pattern-completion engine does: it found the statistical regularities, the latent structure, the thing that held the fragments together. It didn&#8217;t understand my argument. It recognized its shape.</p><p>And the reason that felt so much like the thing I do when I think well is that the thing I do when I think well is, at bottom, the same process. The vague sense that these ideas are connected, the groping toward a formulation, the moment when the shape finally clicks, that&#8217;s pattern completion too. It&#8217;s slower, embodied, shaped by a lifetime of experience rather than a training corpus, but it&#8217;s the same fundamental operation: recognizing coherence in a noisy field.</p><p>The machines didn&#8217;t learn to think like us. They were built on a different bet about what thinking is, a bet the experts rejected for decades, and the bet turned out to be closer to the truth than the model we preferred.</p><p>It&#8217;s easy to read AI news as an ongoing story about what these machines can do. And it is. But it&#8217;s also a story about what we&#8217;ve been doing all along.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Over a century ago the philosopher and psychologist William James was raising this exact concern when he coined the phrase &#8220;stream of consciousness&#8221; in <a href="https://www.gutenberg.org/files/57628/57628-h/57628-h.htm">the Principles of Psychology</a> (1890).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Penn, J. (2020). <a href="https://www.repository.cam.ac.uk/items/0260cf56-3644-4102-bc2f-dd941ad69ea9">Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century</a> [Apollo - University of Cambridge Repository]. https://doi.org/10.17863/CAM.63087</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Dreyfus, H. L. (1992). <em><a href="https://mitpress.mit.edu/9780262540674/what-computers-still-cant-do/">What Computers Still Can&#8217;t Do: A Critique of Artificial Reason</a></em>. 
MIT Press.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Rumelhart, D. E., McClelland, J. L., &amp; the PDP Research Group. (1986). <em><a href="https://direct.mit.edu/books/monograph/4424/Parallel-Distributed-Processing-Volume">Parallel Distributed Processing, Volume 1: Explorations in the Microstructure of Cognition: Foundations</a></em>. The MIT Press. https://doi.org/10.7551/mitpress/5236.001.0001</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Fodor, J. A., &amp; Pylyshyn, Z. W. (1988). <a href="https://psycnet.apa.org/record/1989-03804-001">Connectionism and cognitive architecture: A critical analysis</a>. <em>Cognition</em>, <em>28</em>(1), 3&#8211;71. https://doi.org/10.1016/0010-0277(88)90031-5 (lots of non-gated pdfs of this one)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Miller, G. A. (2003). <a href="https://pubmed.ncbi.nlm.nih.gov/12639696/">The cognitive revolution: A historical perspective</a>. <em>Trends in Cognitive Sciences</em>, <em>7</em>(3), 141&#8211;144. https://doi.org/10.1016/s1364-6613(03)00029-9</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Though there is ongoing research that&#8217;s making headway by assuming that at least certain parts of our thinking do indeed work the same way:<br>Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. 
A., Kanwisher, N., Tenenbaum, J. B., &amp; Fedorenko, E. (2021). <a href="https://www.pnas.org/doi/10.1073/pnas.2105646118">The neural architecture of language: Integrative modeling converges on predictive processing</a>. <em>Proceedings of the National Academy of Sciences</em>, <em>118</em>(45), e2105646118. <a href="https://doi.org/10.1073/pnas.2105646118">https://doi.org/10.1073/pnas.2105646118</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Schacter, D. L. (2022). <a href="https://www.tandfonline.com/doi/full/10.1080/09658211.2021.1873391">The Seven Sins of Memory: An Update</a>. <em>Memory</em>, <em>30</em>(1), 37&#8211;42. https://doi.org/10.1080/09658211.2021.1873391</p></div></div>]]></content:encoded></item></channel></rss>