<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Brewster Press: SciTech]]></title><description><![CDATA[Stories about the latest technology trends]]></description><link>https://www.brewsterpress.com/s/tech</link><image><url>https://substackcdn.com/image/fetch/$s_!1N83!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c1dfaf-10ad-4926-a1c0-fad2cec854c8_1024x1024.png</url><title>Brewster Press: SciTech</title><link>https://www.brewsterpress.com/s/tech</link></image><generator>Substack</generator><lastBuildDate>Sat, 09 May 2026 11:42:20 GMT</lastBuildDate><atom:link href="https://www.brewsterpress.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Brewster Press]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[info@brewsterpress.com]]></webMaster><itunes:owner><itunes:email><![CDATA[info@brewsterpress.com]]></itunes:email><itunes:name><![CDATA[Brewster Press]]></itunes:name></itunes:owner><itunes:author><![CDATA[Brewster Press]]></itunes:author><googleplay:owner><![CDATA[info@brewsterpress.com]]></googleplay:owner><googleplay:email><![CDATA[info@brewsterpress.com]]></googleplay:email><googleplay:author><![CDATA[Brewster Press]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[DNA Fire Sale: Why 15 Million Genomes Are Up for Grabs]]></title><description><![CDATA[The bankruptcy court that approved the sale of fifteen million Americans&#8217; DNA last summer wasn&#8217;t violating the country&#8217;s signature genetic privacy law, because that law was never written to apply to it.]]></description><link>https://www.brewsterpress.com/p/dna-fire-sale-why-15-million-genomes</link><guid isPermaLink="false">https://www.brewsterpress.com/p/dna-fire-sale-why-15-million-genomes</guid><dc:creator><![CDATA[Henrik J Klijn]]></dc:creator><pubDate>Fri, 01 May 2026 13:27:51 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1578496479914-7ef3b0193be3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZG5hfGVufDB8fHx8MTc3NzM4MjY5Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1578496479914-7ef3b0193be3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8ZG5hfGVufDB8fHx8MTc3NzM4MjY5Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="man looking at microscope">
viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@nci">National Cancer Institute</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p></p><p></p><p>In the summer of 2025, a bankruptcy court in the Eastern District of Missouri <a href="https://news.bloomberglaw.com/privacy-and-data-security/23andmes-genetic-data-sale-shifts-privacy-scrutiny-to-buyer">approved the sale of genetic data on roughly fifteen million Americans</a>. The buyer was a nonprofit research institute founded and controlled by the same person who founded the company being sold. The Genetic Information Nondiscrimination Act of 2008 was not violated. It was not even implicated.</p><p>The law that most Americans believe protects their DNA was never written to apply to a sale of this kind. Six months later, on April 27, 2026, a content recycle of a 2024 Lund University paper traveled widely under the headline &#8220;Researchers Solve 50-Year-Old Blood Group Mystery.&#8221; What both the original coverage and the recycle missed was the structural point: the biology American law was drafted to govern in 2008 is no longer the biology Americans have.</p><p><strong>GINA was already broken when it was signed</strong></p><p>George W. Bush signed the <a href="https://www.ashg.org/advocacy/gina/">Genetic Information Nondiscrimination Act on May 21, 2008</a>. Then-NIH director Francis Collins called it &#8220;a great gift to all Americans.&#8221; The law had been thirteen years in the making, first introduced in 1995, when the Human Genome Project was the dominant scientific frame and &#8220;epigenetics&#8221; was a research-paper word.</p><p>GINA <a href="https://www.genome.gov/about-genomics/policy-issues/Genetic-Discrimination">defines genetic information</a> as DNA sequence variants and family medical history. It prohibits health insurers and employers with fifteen or more workers from using that information for discrimination. It does not apply to life insurance, long-term care insurance, or disability insurance. It does not regulate the collection, use, or transfer of genetic information generally.</p><p>The Lund team&#8217;s discovery, <a href="https://www.sciencedaily.com/releases/2023/09/230929170945.htm">published in Nature Communications in 2023</a> and extended in Transfusion the following year, is one example among many of what GINA never covered in the first place. Variation in blood antigen expression turns out to be governed by epigenetic regulatory elements outside the DNA sequence itself. The same is true of methylation patterns, expression-state biomarkers, polygenic risk scores derived from common variants no individual instance of which would trigger the statute, and AI risk scores trained on biological data that aren&#8217;t legally &#8220;genetic information&#8221; as defined.</p><p>The mismatch was structural at the moment of signing. 
<p>The mismatch was structural at the moment of signing. The law was written for a biology we don&#8217;t have anymore.</p><p><strong>The bankruptcy that proved the point</strong></p><p>23andMe <a href="https://news.harvard.edu/gazette/story/2025/03/what-happens-to-your-genetic-data-if-23andme-collapses/">filed for Chapter 11 protection on March 23, 2025</a>. The company had been valued at six billion dollars after its 2021 public listing. Its database held genetic data from roughly fifteen million customers. It collapsed after a 2023 data breach, declining test-kit sales, and the costly failure of its drug-development arm.</p><p>On July 14, 2025, after thirty-plus state attorneys general had objected, the bankruptcy court approved a <a href="https://www.citizen.org/article/house-must-update-bankruptcy-code-in-wake-of-23andme-dna-data-sale/">$305 million sale</a> of the genetic database to a newly created nonprofit called TTAM Research Institute, founded and controlled by 23andMe&#8217;s own founder, Anne Wojcicki. The for-profit company shed its debts, rebranded as a nonprofit, and reacquired its most valuable asset.</p><p>About 1.9 million customers managed to delete their data during the proceedings. The remaining roughly thirteen million did not.</p><p>The legal architecture worked as designed. It permitted this. As University of Illinois bankruptcy law professor Robert Lawless told Bloomberg Law: &#8220;If outside of bankruptcy court, 23andMe just sold equity to somebody else, none of this would have applied.&#8221;</p><p>The privacy laws don&#8217;t cover changes in ownership structure. GINA <a href="https://www.governmentcontractslaw.com/2025/04/follow-the-breadcrumbs-where-does-consumer-data-go-as-23andme-goes-bankrupt/">doesn&#8217;t cover sales of data</a>. There is no federal genetic-privacy statute that does.</p><p>The conventional defense of GINA is that documented cases of genetic discrimination have been rare. That&#8217;s true, and it&#8217;s misleading. Discrimination doesn&#8217;t generate court cases when the algorithms doing the work don&#8217;t take legally defined &#8220;genetic information&#8221; as inputs, when the products doing the work are explicitly outside GINA&#8217;s reach, and when self-selection out of testing makes the harm <a href="https://www.nejm.org/doi/full/10.1056/NEJMp2415835">invisible by design</a>. The 23andMe sale generated no GINA litigation because there was nothing to litigate.</p><p><strong>The script the country has run before</strong></p><p>Sickle cell trait, 1970s. New York State <a href="https://blog.primr.org/medical-mistrust-and-the-historic-role-of-sickle-cell-testing-in-the-african-american-community/">required sickle cell trait testing for marriage licenses</a> for &#8220;non-Caucasian&#8221; applicants, and several states screened &#8220;urban&#8221; schoolchildren. Insurers denied coverage to carriers, almost all of them Black Americans. Employers refused jobs.</p><p>The Air Force Academy barred applicants with the trait until 1981. The genetic marker that arrived as a public-health tool became a discrimination infrastructure inside a decade. The corrective came not from a comprehensive genetic-privacy law, because there wasn&#8217;t one, but from <a href="https://repository.uclawsf.edu/hastings_race_poverty_law_journal/vol9/iss2/2/">civil-rights litigation</a>, federal executive action, and slow institutional reform.
People got hurt in between.</p><p>The mechanism then was overt: state laws targeting specific populations for testing, denials of insurance and employment that named the underlying trait, <a href="https://med.stanford.edu/news/all-news/2016/08/study-challenges-view-of-sickle-cell-traits-dangers.html">medical-liability defenses dressed as concern for the carrier</a>. The mechanism now is algorithmic: risk scores trained on inputs that aren&#8217;t legally defined as genetic information, generating outputs that don&#8217;t legally count as discrimination. The architectures look different, but the downstream pattern is the same: a marker travels faster than its governance, and the people most exposed to the harm are the ones least equipped to refuse the testing.</p><p>GINA&#8217;s drafters knew this history. Senator Ted Kennedy cited fear of genetic discrimination as the bill&#8217;s animating concern. The drafters were also constrained by what was politically achievable: an insurance industry that successfully carved out <a href="https://www.ama-assn.org/public-health/population-health/genetic-discrimination">life, long-term care, and disability products</a>, an employer lobby that secured a fifteen-employee threshold, and a definitional fight over what counted as &#8220;genetic information&#8221; that the bill won by drawing the category narrowly.</p><p>The narrow drawing was the cost of passage. The cost of the cost is the regulatory vacuum the country now occupies.</p><p><strong>The coalition that could fix this no longer exists</strong></p><p>The 2008 GINA coalition was specific to its moment. Disease advocacy groups in the breast and colon cancer communities provided human stories, civil-rights organizations provided historical memory, biotechnology companies provided technical credibility, and research scientists provided the future-of-medicine argument. The coalition held for thirteen years and dissolved at the moment of victory.</p><p>The coalition that would expand GINA today doesn&#8217;t exist in any operational sense. Disease advocacy is fragmented across hundreds of conditions with competing priorities. Civil-rights organizations have post-2020 priorities that compete with bioethics for attention and money. Biotechnology has consolidated into a small number of companies whose interests align with insurers as often as with patients.</p><p>Research scientists are mostly silent on regulatory questions outside their immediate funding concerns. The thirteen-year coalition that produced GINA took a generation of advocacy and a uniquely bipartisan cancer-genetics moment to assemble. The political conditions that would assemble its successor are not present and aren&#8217;t coming.</p><p>That&#8217;s the thing about regulatory vacuums. They aren&#8217;t always temporary, and the one around epigenetic data, AI underwriting, and direct-to-consumer genetic information has all the markers of a stable equilibrium: too few interested parties to produce reform and too many entrenched interests to permit it. Expecting Congress to close it is a category error.</p><p>The Lund team&#8217;s findings will help blood banks. They will also be available, for whatever purposes, to insurance underwriters, to employers in states without genetic-privacy laws, and to the buyer of the next bankrupt consumer-genomics company.</p><p>The 1970s told the country what happens when biological markers travel faster than the regulatory architecture meant to govern them. 
The country knew.</p><p>It chose, in 2008, to draw the architecture narrowly. It chose, in 2025, not to widen it. The next discovery will arrive in a country that has already decided how to use it.</p>]]></content:encoded></item><item><title><![CDATA[“I Program in English Now”: The AI ‘Psychosis’ That's Ending Coding as We Know It]]></title><description><![CDATA[Former OpenAI co-founder Andrej Karpathy programs exclusively in English via AI agents&#8212;a shift that's leaving millions of developers facing an uncertain future.]]></description><link>https://www.brewsterpress.com/p/i-program-in-english-now-the-ai-psychosis</link><guid isPermaLink="false">https://www.brewsterpress.com/p/i-program-in-english-now-the-ai-psychosis</guid><dc:creator><![CDATA[William Southerland]]></dc:creator><pubDate>Thu, 23 Apr 2026 12:43:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nSxq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Former OpenAI cofounder Andrej Karpathy hasn&#8217;t written a line of code since December 2025. He&#8217;s not alone. A tidal wave of &#8220;agentic coding&#8221; has silently swept through Silicon Valley&#8217;s most advanced labs, from OpenAI to Anthropic to xAI, rendering traditional software engineering obsolete almost overnight. The revolution isn&#8217;t coming&#8212;it&#8217;s already here, and it&#8217;s rewriting the very definition of what it means to be a programmer. But as the industry celebrates its productivity boom, a deeper question emerges: What happens to the millions of mid-level workers who were told coding was their &#8220;ticket to the middle class&#8221;?</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!nSxq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png" alt=""></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:474,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nSxq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!nSxq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!nSxq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!nSxq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe662bed5-971a-4f06-bdc2-6828d244596c_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>The Smoking Gun</h3><p>Karpathy&#8217;s bombshell admission dropped March 21 on the No Priors podcast: &#8220;I don&#8217;t think I&#8217;ve typed like a line of code probably since December.&#8221; The OpenAI cofounder described a &#8220;state of psychosis&#8221;&#8212;an obsessive, sleepless push to discover what&#8217;s possible when you delegate everything to AI agents. In the span of just three months, his workflow inverted from 80% manual coding to 100% agent-driven. 
&#8220;It&#8217;s so dramatic that a normal person doesn&#8217;t even realize it happened,&#8221; he said. But Karpathy isn&#8217;t an outlier; he&#8217;s the canary in the coal mine.</p><p>Business Insider surfaced a January 26 X post where Karpathy first documented the shift: &#8220;I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write.&#8221; Meanwhile, Anthropic&#8217;s Boris Cherny confirmed his team writes &#8220;pretty much 100% of our code&#8221; with Claude Code&#8212;and for Cherny personally, it&#8217;s been 100% for two months with zero manual edits. At Uber, the CTO revealed 1,800 agent-authored commits per week; at Google, a senior director said AI agents write the &#8220;substantial majority&#8221; of code. The transformation isn&#8217;t theoretical&#8212;it&#8217;s quantified and accelerating.</p><h3>Swarm Intelligence</h3><p>The most mind-bending development? Engineers now run fleets of 10-20 agents in parallel, like floor managers rotating between assembly lines. Karpathy himself delegates &#8220;macro-actions&#8221;&#8212;entire features, research projects, architectural plans&#8212;to separate agents that each take ~20 minutes. This &#8220;command center&#8221; model flips everything: the bottleneck isn&#8217;t compute power anymore, it&#8217;s human token throughput. &#8220;I feel nervous when I have subscription left over,&#8221; Karpathy admitted. &#8220;That means I haven&#8217;t maximized my token throughput.&#8221; The race isn&#8217;t about who has the best GPU cluster; it&#8217;s about who can best orchestrate their agent swarm.</p>
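<p>Mechanically, the command-center pattern is just a fan-out over long-running tasks. Here is a minimal Python sketch; <code>run_agent</code> is a hypothetical stand-in for whatever agent API is in use, not Karpathy&#8217;s actual tooling:</p><pre><code class="language-python">import asyncio

# Hypothetical stand-in for one "macro-action" handed to a coding agent.
# A real setup would call an agent API here; this just simulates the run.
async def run_agent(task: str) -> str:
    await asyncio.sleep(1)  # stands in for a ~20-minute agent run
    return f"[done] {task}"

# Fan out: every macro-action runs concurrently, like one engineer
# rotating between 10-20 live agents. The human reviews results as
# they land instead of writing the code.
async def command_center(tasks: list[str]) -> list[str]:
    return await asyncio.gather(*(run_agent(t) for t in tasks))

results = asyncio.run(command_center([
    "implement the feature-flag service",
    "write the user-table migration",
    "draft an architecture plan for billing",
]))
for line in results:
    print(line)
</code></pre>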
<h3>The Democratic Deficit</h3><p>This is where the revolution darkens. OpenAI is doubling its workforce to 8,000&#8212;but these aren&#8217;t coders. They&#8217;re &#8220;technical ambassadors,&#8221; managers of agent swarms, prompt engineers, and verification specialists. The infrastructure that powers our digital world will soon be controlled by a tiny elite of Swarm Directors, while the 1.8 million software developers in the U.S. alone (Bureau of Labor Statistics) face obsolescence. This isn&#8217;t just another industrial transition; it&#8217;s a concentration of technical control unprecedented in human history. The very people who built the internet&#8217;s foundations are being priced out by their own creation. When a global payment system, a hospital database, or a power grid&#8217;s control software is written and maintained by 8,000 highly paid specialists in California and New York, what happens to the Midwest developer in Omaha or the coder in Bangalore? The &#8220;democratic deficit&#8221; in our technical infrastructure is about to become a crisis.</p><h3>Asymmetry and Existential Irony</h3><p>There&#8217;s a cruel twist: the same systems automating coding are automating the automation. Karpathy unveiled the &#8220;auto research&#8221; framework&#8212;an autonomous loop where agents propose, test, and iterate on code improvements overnight. In one experiment, an agent found 20 hyperparameter tweaks humans missed. The verification asymmetry makes this terrifyingly scalable: generating candidate commits requires massive compute, but verifying whether they work is cheap. This opens the door to untrusted global swarms potentially &#8220;running circles around Frontier Labs.&#8221; The irony? OpenAI&#8217;s researchers are actively building systems that will render their own jobs obsolete&#8212;and they know it. &#8220;Highly paid researchers are building the exact automated systems that will render their daily workflows obsolete,&#8221; noted a LinkedIn analysis. &#8220;That&#8217;s the existential irony.&#8221;</p>
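<p>The asymmetry is easy to see in sketch form. In the minimal loop below, <code>propose_patch</code> and <code>run_tests</code> are hypothetical stand-ins for an LLM call and a test suite, not OpenAI&#8217;s actual &#8220;auto research&#8221; framework; the point is that the cheap verification step is what lets the expensive generation step scale:</p><pre><code class="language-python">import random

# Hypothetical stand-ins: proposing a candidate change is the expensive,
# model-driven step; verifying it is a cheap, mechanical test run.
def propose_patch(seed: int) -> str:
    return f"candidate-patch-{seed}"   # imagine an LLM call here

def run_tests(patch: str) -> float:
    return random.random()             # imagine a test-suite score here

def auto_research_loop(n_candidates: int = 100) -> str:
    # Generate many candidates overnight and keep whichever verifies best.
    # Generation cost scales with n_candidates; verification stays cheap,
    # which is what makes the loop worth running at scale.
    best_patch, best_score = "", float("-inf")
    for seed in range(n_candidates):
        patch = propose_patch(seed)
        score = run_tests(patch)
        if score > best_score:
            best_patch, best_score = patch, score
    return best_patch

print(auto_research_loop())
</code></pre>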
<h3>What Comes Next</h3><p>The human cost is already visible. Karpathy confesses his manual coding skills are &#8220;slowly starting to atrophy.&#8221; The &#8220;hurt the ego&#8221; realization that you&#8217;re no longer needed to write code is &#8220;too powerful to ignore.&#8221; The industry is scrambling&#8212;but not to save jobs. Instead, it&#8217;s redefining success: fluency in English (or whatever your native language is) is now the primary skill. Jevons paradox dictates that software demand will explode now that it&#8217;s cheaper to produce. And Karpathy&#8217;s three-phase prediction: first digital overhang (rewriting all the bits), then sensors/actuators (the physical interface), finally atoms (robotics). We&#8217;re in phase one, moving at &#8220;speed of light.&#8221;</p><h3>The Brewster Take</h3><p>The &#8220;psychosis&#8221; Karpathy describes isn&#8217;t mental illness&#8212;it&#8217;s the psychological shock of witnessing your life&#8217;s work become automated in real time. The takeaway is twofold. First: coding as a skill is dead, not by government decree or corporate layoffs, but by technological obsolescence. The engineers who survive won&#8217;t be those who write the best Python; they&#8217;ll be those who craft the best English prompts, who design the most elegant verification metrics, who can spot the 20 improvements a swarm of agents missed overnight. Second: we&#8217;re creating a technical aristocracy. When the infrastructure that runs society is built and maintained by a few thousand &#8220;Swarm Directors&#8221; in coastal tech hubs, while millions of former coders watch from the sidelines, we haven&#8217;t just automated a job&#8212;we&#8217;ve automated our way into a new Gilded Age. As Karpathy chillingly noted, &#8220;The verb changed. You&#8217;re not coding; you&#8217;re expressing intent to agents.&#8221; The human&#8217;s new job is to be the &#8220;director of the token generating swarm.&#8221; The question is: who gets to be the director, and who gets left behind?</p>]]></content:encoded></item><item><title><![CDATA[What "Battlestar Galactica" Teaches Us About AI in Education]]></title><description><![CDATA[We built an AI backdoor into classrooms with 30 years of ed-tech hype and handed every student the keys.]]></description><link>https://www.brewsterpress.com/p/what-battlestar-galactica-teaches</link><guid isPermaLink="false">https://www.brewsterpress.com/p/what-battlestar-galactica-teaches</guid><dc:creator><![CDATA[William Southerland]]></dc:creator><pubDate>Mon, 13 Apr 2026 13:31:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OK0U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!OK0U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png" alt=""></figure></div>
srcset="https://substackcdn.com/image/fetch/$s_!OK0U!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!OK0U!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!OK0U!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!OK0U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37fac925-25b7-470d-b85e-f5c4ce77dabb_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A German instructor at Cornell University recently demanded students use <a href="https://apnews.com/article/typewriter-ai-cheating-chatgpt-cornell-ce10e1ca0f10c96f79b7d988bb56448b">manual typewriters in class</a>. The students &#8212; with smartphones in their pockets and ChatGPT at home &#8212; struggled with pinky strength and the absence of a delete key. But nevertheless, the students completed their exam completely without assistance.</p><p>The solution to AI cheating, it turned out, was 1950s office equipment.</p><p>And here&#8217;s the thing: it&#8217;s working. Students reported being &#8220;forced to actually think about the problem on my own&#8221; &#8212; as if this were a revelation. They collaborated more. They slowed down. Without the dopamine drip of notifications, they noticed each other.</p><p>This is where we are now: professors buying typewriters from thrift stores while administrators insist AI will &#8220;personalize learning.&#8221; We spent two decades shoving technology into classrooms &#8212; and the solution to the problems that technology created is to remove it entirely.</p><p>The analog comeback isn&#8217;t limited to higher education either. 
In December 2025, McPherson Middle School in Kansas <a href="https://www.nytimes.com/2026/03/29/technology/chromebook-remorse-kansas-school-laptops.html">collected all 480 student Chromebooks</a>. Principal Inge Esping had already banned cellphones, but students still found ways to be distracted &#8212; YouTube, video games, bullying through school Gmail. &#8220;This technology can be a tool,&#8221; she told the New York Times. &#8220;It is not the answer to education.&#8221;</p><p>The pattern is national. Cornell&#8217;s biomedical engineering program requires &#8220;oral defenses&#8221; &#8212; students must explain their work face-to-face. UPenn pairs oral exams with written papers. NYU uses AI voice agents to conduct remote oral exams, &#8220;fighting fire with fire.&#8221; <a href="https://apnews.com/article/college-oral-exam-ai-chatgpt-77954a19f5304bfc6e76dc92d4bef3ad">Faculty no longer trust written assignments</a> to demonstrate actual thinking. We built educational systems that made thinking optional &#8212; and now we&#8217;re surprised when students opt out.</p><p>Twenty-five years ago, a teacher applying for jobs met interviewers disappointed she didn&#8217;t tout her PowerPoint skills. The business world was &#8220;light years ahead.&#8221; Schools needed to &#8220;catch up.&#8221;</p><p>Teachers understandably pushed back against this obvious false equivalence. Classrooms aren&#8217;t workplaces &#8212; at least, not exclusively. They asked whether every classroom needed tablets, whether every assignment needed to be digital, whether &#8220;blended learning&#8221; was anything more than a buzzword. They were called &#8220;afraid of change,&#8221; &#8220;out of touch,&#8221; &#8220;resisting the future.&#8221;</p><p>A generation of students later, we have our answer: the skeptics were right. The technology that was supposed to enhance education instead gave students permission to skip the hard work of thinking. The &#8220;personalized learning&#8221; pitch became a two-tier system where &#8220;haves&#8221; get human teachers and &#8220;have-nots&#8221; get AI proxies.</p><p>In <em>Battlestar Galactica</em>, humanity created the Cylons &#8212; intelligent machines that rebelled. When they finally attacked, they exploited a backdoor in the Command Navigation Program, a software update installed across the entire Colonial Fleet. Because every ship was on the network, every modern fighter and battlestar went dark instantly. <a href="https://www.battlestarwiki.org/Computers_in_the_Re-imagined_Series">Only ships with primitive avionics survived</a>. Humanity decided to prevent this from ever happening again&#8212;networked computers were banned entirely.</p><p>We spent 20 years &#8220;networking&#8221; education. Every student got a Chromebook. Every assignment went through Canvas or Google Classroom. Every lesson plan was supposed to be &#8220;enhanced&#8221; by technology. When AI arrived, it didn&#8217;t need a backdoor. We built the backdoor ourselves &#8212; and handed every student the keys.</p><p>Faculty no longer trust written assignments because they can&#8217;t tell whether students did the thinking. But more fundamentally: students are losing the experience of thinking itself &#8212; the struggle, the false starts, the revision process that builds actual understanding.
A student at Cornell noted being &#8220;forced to actually think about the problem on my own&#8221; &#8212; a revelation that should worry anyone who teaches.</p><p>But there&#8217;s another loss: physical stamina. Professors at Northwestern University have reverted to handwritten blue book exams, but students who never write by hand don&#8217;t have the endurance to do it well. We outsourced thinking and atrophied the muscles &#8212; literal and cognitive &#8212; required to perform it.</p><p>The irony is thick: we built an educational system that made thinking optional, then expressed shock when students struggled to do it on command. The typewriter exercise didn&#8217;t teach German; it taught students what their own minds felt like without a machine doing the work.</p><p>AI was supposed to &#8220;personalize&#8221; learning. Every student would get individualized instruction, adaptive curricula, infinite patience. The sales pitch wrote itself: technology would democratize the one-on-one tutor that only wealthy families could afford.</p><p>But Allison Pugh&#8217;s research shows that &#8220;connective labor&#8221; &#8212; the human element of teaching &#8212; degrades when mediated by technology. The result isn&#8217;t personalized learning; it&#8217;s a two-tier system. Students with resources get human teachers who know them. Students without get AI proxies that process them.</p><p>And the &#8220;personalization&#8221; pitch ignores the environmental costs: water consumption, energy use, emissions, e-waste. Teaching students to use AI &#8220;ethically&#8221; asks them to ignore that the technology itself has ethical costs built into its infrastructure.</p><p>ChatGPT will write your lesson plans, grade your papers, give feedback to students. The promise was that AI would free teachers from drudgery and let them focus on &#8220;what matters.&#8221;</p><p>A METR study found developers using AI tools took 19% <em>more</em> time to complete their work, not less. Learning Management Systems &#8212; sold as time-savers &#8212; added layers to tasks instead of reducing them. <a href="https://rethinkingschools.org/articles/resisting-ai-mania-in-schools/">The technology that was supposed to streamline education</a> instead created more administrative overhead.</p><p>But there&#8217;s a deeper cost. Teachers who skip the mental struggle of lesson planning don&#8217;t develop the skills they need to adapt, improvise, respond to students in real time. The time saved isn&#8217;t saved &#8212; it&#8217;s borrowed from the development of actual expertise.</p><p>The argument against analog measures is predictable: students are already using AI. It&#8217;s here to stay. We have to teach them to use it ethically. The only way forward is adaptation.</p><p>We heard the same argument about cellphones: it was our job to teach &#8220;responsible phone use.&#8221; A decade later, cellphone bans are sweeping the nation &#8212; because &#8220;teaching responsible use&#8221; didn&#8217;t work. Sometimes the solution is not to adapt to the problem. You collect the Chromebooks and bring out the typewriters.</p><p>Sometimes you look students in the eye and ask them to explain what they wrote. As one Cornell student put it: &#8220;It&#8217;s a lot harder to look people in the eyes and say out loud, &#8216;I don&#8217;t know this.&#8217;&#8221;</p><p>Catherine Mong, a freshman who struggled with the typewriter because of a broken wrist, didn&#8217;t complain.
She told the AP she&#8217;d probably hang one on her wall. She told all her friends about the experience. &#8220;I did a German test on a typewriter!&#8221; &#8212; as if this were a discovery worth sharing.</p><p>When students say they were &#8220;forced to actually think,&#8221; they&#8217;re revealing what the technology had cost them &#8212; not just knowledge, but the experience of thinking itself. The slow, difficult, unglamorous work of building an idea from scratch. The thing we used to call education.</p><p>We spent 20 years insisting technology was the answer. Now professors are buying typewriters from thrift shops, middle schools are collecting Chromebooks, and students are discovering what &#8220;return&#8221; means. Sometimes the machines that can&#8217;t think are the ones that let humans remember how.</p>]]></content:encoded></item><item><title><![CDATA[So... OpenAI Bought Its Own PR Machine]]></title><description><![CDATA[How the AI Giant Is Following the Fossil Fuel Playbook. At Internet Speed]]></description><link>https://www.brewsterpress.com/p/so-openai-bought-its-own-pr-machine</link><guid isPermaLink="false">https://www.brewsterpress.com/p/so-openai-bought-its-own-pr-machine</guid><dc:creator><![CDATA[Henrik J Klijn]]></dc:creator><pubDate>Tue, 07 Apr 2026 12:32:02 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="Protesters hold signs at a demonstration">
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3072,&quot;width&quot;:4608,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Protesters hold signs at a demonstration&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Protesters hold signs at a demonstration" title="Protesters hold signs at a demonstration" srcset="https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1772695653502-7c93f923a732?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNHx8b3BlbmFpfGVufDB8fHx8MTc3NTE0Mjg1OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@kucz">Nathan Kuczmarski</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>On April 2, <a href="https://techcrunch.com/2026/04/02/openai-acquires-tbpn-the-buzzy-founder-led-business-talk-show/">OpenAI 
announced it had acquired TBPN</a>, a three-hour daily tech-and-business streaming show that has become a staple of Silicon Valley&#8217;s information diet. The company framed the purchase as an effort to &#8220;create a space for a real, constructive conversation about the changes A.I. creates,&#8221; according to <a href="https://www.hollywoodreporter.com/business/digital/openai-buys-tbpn-streaming-business-show-1236554634/">an internal memo from Fidji Simo</a>, OpenAI&#8217;s CEO of Applications. </p><p>The deal includes an unusual covenant: TBPN&#8217;s hosts will retain editorial independence, choose their own guests, and continue criticizing the industry when warranted. <a href="https://www.cnbc.com/2026/04/02/openai-acquires-tech-podcast-tbpn.html">Sam Altman himself posted</a> that he doesn&#8217;t expect the show&#8217;s hosts &#8220;to go any easier on us.&#8221;</p><p>The structure of this deal is new. The pattern it represents is not.</p><h2>The Infrastructure of Interpretation</h2><p>Apparently the price <a href="https://www.tradingview.com/news/reuters.com,2026:newsml_L4N40L1QT:0-openai-acquires-popular-tech-talk-show-tbpn-for-low-hundreds-of-millions-ft/">was in the low hundreds of millions</a>. For that, OpenAI didn&#8217;t <em>just</em> get a streaming show. <a href="https://www.cnbc.com/2026/04/02/openai-acquires-tech-podcast-tbpn.html">TBPN averages roughly 70,000 viewers</a> per episode across YouTube, X, and LinkedIn. It generated around $5 million in advertising revenue last year and was on track for more than $30 million in 2026. </p><p>More significantly, it had become a trusted venue for the executives, investors, and policymakers who shape AI&#8217;s future&#8212;Altman and Zuckerberg and Nadella and their counterparts at Anthropic, Google DeepMind, and beyond. The show had credibility precisely because it was independent.</p><p>The official story is that OpenAI wants better communication. Sure, &#8220;The standard communications playbook just doesn&#8217;t apply to us,&#8221; Simo wrote, and there&#8217;s truth in this. A company building systems that could reshape civilization does face genuine challenges explaining itself to a skeptical public. Noted.</p><p>But the timing of this acquisition reveals a different calculus. <a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">The European Union&#8217;s AI Act</a> reaches full enforcement in August. Congressional hearings on AI liability have intensified. Multiple state attorneys general have opened investigations. The regulatory window for the most consequential technology in a generation is narrowing&#8212;and OpenAI chose this moment to purchase the room where AI gets debated.</p><h2>What Fossil Fuels Taught Tech</h2><p>The closest parallel to OpenAI&#8217;s move is not another technology acquisition. It is the fossil fuel industry&#8217;s decades-long campaign to seed doubt about climate science.</p><p>In the 1990s and 2000s, oil companies didn&#8217;t merely lobby. They funded think tanks, sponsored research, bought advertising, and created entire media ecosystems designed to reframe scientific consensus as &#8220;debate.&#8221; The goal was not to win arguments. It was to prevent arguments from being settled. The strategy was to own the venues where questions got asked.</p><p>OpenAI is running the same playbook. The difference is speed. What the fossil fuel industry accomplished over decades, OpenAI has compressed into years. 
Rather than creating new institutions, it purchased an existing one, complete with audience, credibility, and hosts who had already built trust through independence.</p><p><a href="https://variety.com/2026/digital/news/openai-buys-tbpn-talk-show-1236705671/">TBPN&#8217;s editorial independence covenant</a> may well hold. But the structure matters. When the owner of a platform is also its most consequential subject, the concept of independence becomes a negotiation rather than a given. The hosts may maintain autonomy in day-to-day decisions. What changes is the background against which those decisions get made. A show owned by OpenAI will inevitably cover OpenAI differently than a show owned by no one&#8212;different questions, different guests, different framings. Some of this will be visible. Most of it will not.</p><h2>The Competitive Moat You Cannot See</h2><p>For OpenAI&#8217;s competitors, the acquisition sends an unmistakable signal. Anthropic, Google DeepMind, Mistral, and the Chinese labs now face a landscape where one player controls not just the technology but the conversation about the technology. If you&#8217;re an AI company without a media arm, you&#8217;re competing on two fronts: model performance and narrative framing. Narrative infrastructure has become a competitive moat.</p><p>This is the story beneath the story. The companies racing to build artificial general intelligence have concluded that public perception is not a neutral arena. It&#8217;s contested territory. They are investing in the cultural infrastructure that will determine whether citizens view AI as inevitable progress or corporate overreach&#8212;as a tool that augments human capability or one that displaces it entirely.</p><p>The consequences extend beyond competition. Regulators depend on an informed public. The comment periods, congressional hearings, and public debates that shape AI policy all assume citizens can access independent analysis. If that analysis increasingly comes from platforms owned by the companies being regulated, the entire regulatory apparatus tilts toward industry. Not because anyone conspired. Because the information ecosystem itself has been reorganized.</p><h2>The Long-Term Cost</h2><p>The second-order effects are cumulative and quiet. A weakened epistemic commons doesn&#8217;t announce itself. It just becomes harder to find scrutiny outside corporate walls.</p><p>Independent outlets covering AI will face competitive pressure as captured platforms produce polished content with unlimited budgets. Why subscribe to a magazine for AI analysis when a free streaming show features the same executives, with better production values, funded by the companies being covered? Journalists covering AI will increasingly face a choice between working for captured platforms with resources and audiences, or independent outlets with neither. The talent drain will compound the quality gap.</p><p>We have seen this pattern before. The fossil fuel industry&#8217;s influence campaign didn&#8217;t prevent climate science from eventually prevailing. But it delayed policy responses by years&#8212;years that mattered enormously. OpenAI&#8217;s acquisition compresses that timeline. 
The regulatory window for AI is measured in months, not decades.</p><h2>What&#8217;s Next?</h2><p>If this model spreads&#8212;if Anthropic and Google DeepMind follow OpenAI into media ownership&#8212;the public sphere will increasingly resemble the early twentieth-century model of company towns and company newspapers, except scaled to the entire digital information environment. Every major technology company could own its own venue for explaining itself to the public.</p><p><a href="https://deadline.com/2026/04/openai-acquires-streaming-series-tbpn-1236772434/">TBPN&#8217;s hosts, John Coogan and Jordi Hays</a>, built something valuable. Silicon Valley trusted them precisely because they were not part of Silicon Valley&#8217;s corporate apparatus. That trust was the asset OpenAI purchased. Whether that trust survives the purchase is the question the deal itself raises.</p><p>The companies building the future are now also buying the right to tell us what that future means. They&#8217;ll do it through &#8220;constructive conversations&#8221; hosted by platforms they own, featuring guests they approve, with independence covenants they designed. The fossil fuel industry taught Silicon Valley that narrative control is strategic infrastructure. OpenAI learned the lesson. The question is whether democratic institutions will recognize it before the window closes.</p>]]></content:encoded></item><item><title><![CDATA[Robot in the Classroom: Melania Trump and the Aesthetics of Alienation]]></title><description><![CDATA[Why the State would rather outsource learning to machines than invest in human relationships.]]></description><link>https://www.brewsterpress.com/p/the-robot-in-the-classroom-melania</link><guid isPermaLink="false">https://www.brewsterpress.com/p/the-robot-in-the-classroom-melania</guid><dc:creator><![CDATA[Henrik J Klijn]]></dc:creator><pubDate>Fri, 27 Mar 2026 20:00:45 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="man in black and gray suit action figure">
https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3712" height="5197" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:5197,&quot;width&quot;:3712,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;man in black and gray suit action figure&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="man in black and gray suit action figure" title="man in black and gray suit action figure" srcset="https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1593376893114-1aed528d80cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1MHx8cm9ib3R8ZW58MHx8fHwxNzc0NTMyMDIwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 
lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@maximalfocus">Maximalfocus</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>On March 25, 2026, Melania Trump walked into the East Room accompanied by Figure 03, a humanoid robot developed by startup Figure AI, as part of the &#8220;Fostering the Future Together&#8221; global coalition summit, held in Washington, D.C.  </p><p>She introduced  Figure 03 as her &#8220;first American-made humanoid guest,&#8221; who then demonstrated its capabilities by addressing the crowd in 11 different languages and participating in a scripted segment where it acted as a &#8220;personalized educator.&#8221; The First Lady smiled, <a href="https://www.c-span.org/clip/white-house-event/figure-03-robot-accompanies-first-lady-to-childrens-educational-technology-summit/5197746">a visual perfectly calibrated for cameras</a>. Next she pitched the idea of an AI-powered educator (which she nicknamed &#8220;Plato&#8221;) that could adapt to a student&#8217;s learning pace and even their emotional state.</p><p>The same day, Quinnipiac University <a href="https://poll.qu.edu/images/polling/us/us03252026_ueek31.pdf">released a poll showing that</a> 62% of Americans cited healthcare costs as their top financial worry, a deeply human need rooted in touch, conversation, and the fear of facing sickness alone. Also that day, <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r7284">Australia&#8217;s social media ban</a> for children under 16&#8212;passed in November 2024 and enforced since December 10, 2025&#8212;served as a global counterpoint: one government was pulling harmful technology <em>out</em> of childhood, while another was installing it at the core of education. The juxtaposition is stark: the state would rather outsource learning to a machine than invest in the human relationships that actually nurture development.</p><p>The robot in the classroom is barely a solution to educational challenges. It is however the mascot of a governing aesthetic: those of alienation. The belief that a polished, controllable, human-adjacent future is preferable to the messy, unpredictable, but irreplaceably human present.</p><h2>Defining the Vision</h2><p>The aesthetics of alienation is a political style that values technological spectacle over human substance, efficiency over empathy, and predictability over relationship. It manifests as a preference for systems that can be programmed, scaled, and controlled. Systems that do not unionize, are strangers to bad days, never question authority, and refrain from forming emotional bonds with the vulnerable.</p><p>The robot is its perfect symbol. <a href="https://blog.robozaps.com/b/figure-03-review">With its sleek surface, precise movements</a>, and a voice synthesized to a soothing neutrality, it represents a world without friction, dissent, or the inconvenient demands of human connection. When a state chooses to display this symbol in the context of education, the most fundamentally relational of human endeavors, it is floating that the cultivation of human beings is too important to be left to humans. 
The machine, in its flawless performance, promises an end to the messiness of individual interpretation and thought.</p><p>This aesthetic is far from unique to this administration; it&#8217;s the logical endpoint of a decades-long drift toward technocratic governance. What makes it notable is its timing: it arrives at the precise moment when the social fabric is fraying, when loneliness is epidemic, when children&#8217;s mental health is in crisis. The state&#8217;s answer to human fragility is not more human support. It&#8217;s <em>more</em> machine.</p><h2>What Actually Happened</h2><p>The March 25 summit convened first spouses from around the world&#8212;including Brigitte Macron&#8212;to discuss empowering children through innovation. The staged &#8220;classroom of the future&#8221; featured Figure 03 engaging in a scripted dialogue with a student actor. It delivered a personalized math lesson and ended with a pre-programmed motivational phrase.</p><p>Melania Trump&#8217;s remarks emphasized that &#8220;our children deserve the best tools, the most advanced technology, to compete in the 21st century.&#8221; She did not mention teachers. She did not speak of mentorship, inspiration, or the quiet moments when an adult notices a child is struggling. She did not quote <a href="https://www.youtube.com/watch?v=IYzlVDlE72w">Whitney Houston&#8217;s <em>Greatest Love Of All</em></a>. The visual was clear: the future belongs to the machine.</p><p>The event was light on education and heavy on political theater. The robot was a visually compelling prop that communicated &#8220;innovation&#8221; and &#8220;leadership&#8221; to viewers who may not understand the actual research on learning. It was a metaphor made flesh: the state&#8217;s brain, not its heart, is in charge of children&#8217;s development.</p><h2>Contradiction with Actual Human Needs</h2><p>The timing of the robot unveiling, the same day as two other stark reminders of human need, was not lost on observers.</p><p>The Quinnipiac poll (March 25, 2026) found that 62% of Americans cited healthcare costs as their top financial worry, ahead of inflation, housing, and job security. Healthcare is the ultimate human service: it requires touch, conversation, empathy, and time. It cannot be automated without losing its essence. The fact that voters prioritize this need above all else signals a widespread anxiety about the erosion of human care in a system that increasingly treats medicine as an industrial process.</p><p>Australia&#8217;s social media ban, enforced since December 2025, represents a government acknowledging that technology, left unchecked, harms human development. The legislation was driven by overwhelming evidence that social media is corroding adolescent mental health, that algorithmic feeds are addictive and divisive, that children need protected spaces to develop without commercial exploitation. The Australian government is effectively saying: <em>some spaces must remain human-only.</em></p><p>The United States, meanwhile, is pushing technology <em>into</em> the classroom. The contradiction is profound: one government is banning tech from childhood; another is installing it at the core of education. The difference is not technological capacity&#8212;it is philosophical orientation. 
Australia sees technology as a threat to be regulated; the United States sees it as a substitute for humans.</p><h2>The Teacher Replacement Pipeline</h2><p>The optics of the robot event mask a brutal arithmetic. According to the Learning Policy Institute, U.S. <a href="https://learningpolicyinstitute.org/product/state-of-teacher-workforce-interactive">schools faced 176,000 teacher vacancies</a> in the 2024-2025 school year, with particularly acute shortages in math, science, special education, and rural schools. At the same time, per-pupil spending on educational technology has risen 42% since 2020, reaching an estimated $14.2 billion annually. The EdTech Trade Association <a href="https://www.futuremarketinsights.com/reports/edutech-market">predicts that AI and robotic classroom assistants</a> will become a $3.8 billion market by 2028.</p><p>The economic logic is clear: robots do not require salaries, benefits, pensions, or professional development. They do not strike, unionize, or express political opinions. They are capital costs that can be written off, not labor costs that persist. The teacher shortage, rather than triggering an emergency to recruit and retain human educators, is being treated as a market opportunity for technology vendors.</p><p>And the human cost is documented. A 2025 RAND Corporation study of schools implementing AI tutors found that <a href="https://www.rand.org/pubs/research_reports/RRA4742-1.html">while test scores in basic skills improved marginally</a> (0.15 standard deviations), measures of student engagement, sense of belonging, and social-emotional growth declined significantly. The study&#8217;s lead author noted: &#8220;What we observed was a trade-off: efficiency in knowledge transmission at the expense of the relational environment that makes learning stick and children feel valued.&#8221;</p><h2>The Traditionalist-Tech Fusion</h2><p>The administration&#8217;s embrace of classroom robotics might seem at odds with its cultural conservatism, which emphasizes &#8220;traditional values&#8221; and &#8220;patriotic education.&#8221; But the fusion is logical: both strands share a desire for control.</p><p>Traditionalist education seeks to control the narrative, to ensure that children receive a vetted, ideologically sound curriculum. Robots deliver exactly that: no improvisation, no personal anecdotes, no unapproved commentary. The robot recites the approved script. It cannot introduce a book from home that challenges the official story. It cannot share a lived experience that complicates the narrative. It is, in effect, the perfect traditionalist teacher&#8212;one with no soul, no bias (other than its programming), and no capacity for disobedience.</p><p><a href="https://www.ed.gov/about/news/press-release/us-department-of-education-releases-secretary-mcmahons-patriotic-education-supplemental-priority#:~:text=Today%2C%20in%20commemoration%20of%20the,accurate%2C%20honest%2C%20and%20inspiring.">The administration&#8217;s &#8220;Patriotic Education&#8221; initiatives</a>, which emphasize American exceptionalism and downplay historical controversies, find their ultimate expression in a machine that can deliver that content without the risk of a human teacher&#8217;s &#8220;slip.&#8221; The robot is not just efficient; it is ideologically pure. It eliminates the variable of human conscience.</p><h2>Efficiency Gains, Human Losses</h2><p>The debate over AI in classrooms has reached a data tipping point in 2026. 
The evidence reveals a stark trade-off between technical efficiency and human development.</p><p>The RAND American Youth Panel (March 17, 2026) <a href="https://www.rand.org/education-employment-infrastructure/survey-panels/ayp.html">found that while 62% of students</a> now use AI for schoolwork, a concerning 67% believe it is actively harming their critical thinking. Parallel RAND research from late 2025 indicated that 50% of students feel less connected to their teachers when using AI in class. The technology, designed to personalize learning, is instead creating a barrier between student and educator&#8212;a digital filter where human mentorship once flowed.</p><p>The counter-narrative from EdTech developers argues that AI can improve social-emotional learning by providing a &#8220;judgment-free&#8221; zone. A <a href="https://academic.oup.com/cdpers/advance-article-abstract/doi/10.1093/cdpers/aadaf009/8414012?redirectedFrom=fulltext">recent study found that</a> many younger users prefer sharing mental health concerns with chatbots because &#8220;it cannot be disappointed in me.&#8221; Apps like Wysa and Replika show higher disclosure rates for sensitive topics. Meanwhile, <a href="https://annualreport.khanacademy.org/">Khan Academy&#8217;s Khanmigo </a>and a December <a href="https://www.gatesfoundation.org/ideas/media-center/press-releases/2025/12/education-systems-partnership">2025 Gates Foundation report</a> claim AI tutors free teachers from administrative burdens, giving them 5&#8211;10 hours per week for one-on-one mentorship.</p><p>But this &#8220;force multiplier&#8221; argument assumes schools will use the reclaimed time for human connection. The data suggests otherwise: 75% of students feel more motivated by AI speed, yet only 19% report teachers have taught them how to use it ethically. The result is a &#8220;wild west&#8221; where students interact with algorithms in isolation, exactly the alienation Brewster warns of.</p><p>The &#8220;Learning Paradox&#8221; emerges from multiple 2025&#8211;2026 studies. While AI-integrated environments show 48% higher practice accuracy and 70% better course completion rates, they also produce 17% lower scores on independent tests when AI is removed. Stanford SCALE Initiative (March 2026) <a href="https://scale.stanford.edu/research-in-action/understanding-evidence-base-ai-k12-education">calls AI a &#8220;cognitive crutch.&#8221;</a> Students graduate at higher rates but struggle when they must perform without digital assistance.</p><p>The social-emotional divergence is measurable. The Brookings Global Task Force (January 2026) <a href="https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/">concluded that risks to children&#8217;s social development </a>currently outweigh the benefits of generative AI. Education Week (October 2025) found that <a href="https://www.edweek.org/technology/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students/2025/10">AI use correlates with decreased peer-to-peer connections</a>, and 70% of teachers believe AI over-reliance is weakening critical thinking. A March 2026 study in <em>Psychology &amp; Marketing</em> identified the Technology-Wellbeing Paradox: <a href="https://journals.sagepub.com/doi/10.1089/cyber.2025.0034">students using emotional-support bots </a>report immediate mood boosts but show declines in real-world social connectedness. 
The bot becomes a &#8220;digital sanctuary&#8221; that makes human interaction feel more daunting.</p><p>The data is clear: AI excels at moving students through a pipeline but fails at cultivating the durable, independent human competence that education should foster. The &#8220;efficiency vs. relationship&#8221; trade-off is not a bug. It is the feature.</p><h2>The Cost of a Perfect Surface</h2><p>The robot in the classroom is less about education than about the state&#8217;s vision of what a citizen should be: a data point, a score, a productive unit, a controllable subject. It is the physical manifestation of a society that would outsource its soul to a vendor.</p><p>The same day the robot smiled for the cameras, Australia was enforcing a law to protect its children from the harms of unfettered technology. The same day, Americans were telling pollsters that what they fear most is the absence of human care when they are sick. These are not contradictions; they are symptoms of a global anxiety about the erosion of the human.</p><p>The administration&#8217;s answer is more machine. The Australian answer is to restrict machines. History will show which was the better choice.</p><p></p><div><hr></div><p></p><h2><strong>Dinner Party Talking Points</strong><br><em>Brewster Responses to Annoying Questions</em></h2><blockquote><p><strong>Q: &#8220;This is just augmenting teachers, not replacing them. You&#8217;re being alarmist.&#8221;</strong></p><p>A: The rhetoric at the March 25 event was not about augmentation; it was about replacement. &#8220;Classroom of the future&#8221; means the old model is obsolete. Budgets tell the real story: districts cut teaching positions first and buy technology second. The robot is the Trojan horse&#8212;presented as a helper, it becomes the centerpiece, and humans become support staff. We&#8217;ve seen this with ATMs and self-checkout. The &#8220;augmentation&#8221; phase lasts about five years before replacement begins.</p><p><strong>Q: &#8220;Robots can personalize learning pathways. Humans can&#8217;t scale that.&#8221;</strong></p><p>A: Personalization without relationship is manipulation. An algorithm adjusts math problem difficulty but cannot notice a child is withdrawn because their parent lost a job. It cannot provide encouragement born of genuine care. The most powerful personalization is emotional, not cognitive&#8212;and that remains firmly human. What we call &#8220;personalization&#8221; is really adaptive testing in a more palatable package.</p><p><strong>Q: &#8220;We have a teacher shortage. Robots fill gaps. What&#8217;s the alternative?&#8221;</strong></p><p>A: The alternative is to treat teaching as a profession worth investing in: raise salaries, reduce class sizes, provide planning time and support staff, restore professional autonomy. The teacher shortage is a policy choice, not an act of God. We chose to underfund education for decades while enriching tech vendors. The robot is the final step: rather than solve the political problem of teacher respect, we outsource the work to a machine that requires no respect. It&#8217;s not a solution; it&#8217;s surrender.</p><p><strong>Q: &#8220;Kids today are digital natives. They&#8217;re comfortable with tech. This is natural.&#8221;</strong></p><p>A: Fluency with smartphones does not mean children learn better from robots. Generation Z reports the highest levels of loneliness, anxiety, and hopelessness in recorded history. They are desperate for human connection, not more screens. 
The idea that because they use devices socially they should be taught by them is like arguing that because children eat candy they should be fed exclusively by vending machines. Comfort with technology is not an educational philosophy; it&#8217;s a symptom of a society that has outsourced relationship to devices.</p><p><strong>Q: &#8220;The data shows higher graduation rates and course completion with AI. Isn&#8217;t that proof it works?&#8221;</strong></p><p>A: Those metrics measure throughput, not transformation. A system that graduates more students but produces graduates who cannot think independently, cannot collaborate, and lack social-emotional resilience is a failure. The Stanford Learning Paradox proves it: students perform better <em>with</em> AI but collapse <em>without</em> it. We are credentialing a generation that cannot function without digital life support. That is not educational success; it is institutionalized dependency.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[The Pentagon Is Wrong About How AI Works, and It's Putting Us All In Danger]]></title><description><![CDATA[As the US Defense Department accuses Anthropic of being able to flip a kill-switch on its AI during wartime, experts say this betrays a deep misunderstanding of how large language models actually work]]></description><link>https://www.brewsterpress.com/p/the-pentagon-is-wrong-about-how-ai</link><guid isPermaLink="false">https://www.brewsterpress.com/p/the-pentagon-is-wrong-about-how-ai</guid><dc:creator><![CDATA[William Southerland]]></dc:creator><pubDate>Tue, 24 Mar 2026 21:54:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zAHK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zAHK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zAHK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zAHK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zAHK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!zAHK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb647960-b71e-4638-b626-dd0d5f26cd8f_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The <a href="https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/">Department of Justice filed a response in federal court</a> on March 17, 2026, that reads like the premise of a techno-thriller. Anthropic, the San Francisco-based AI company behind the Claude chatbot, might sabotage American military operations by covertly manipulating its own models. 
The department alleged the company&#8217;s engineers could, at the flip of a digital switch, disable or distort AI systems deployed on Pentagon infrastructure, potentially compromising classified operations and endangering warfighters in active combat zones.</p><p>This is not fiction. This is the United States government&#8217;s official legal position. But <em>this is literally impossible</em>.</p><p>In the filing, which responds to Anthropic&#8217;s lawsuit against the Department of Defense, Justice Department attorneys assert that defense secretary Pete Hegseth &#8220;reasonably&#8221; determined that &#8220;Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.&#8221; The government&#8217;s narrative is clear. Anthropic&#8217;s Claude models are not merely software products but potential weapons of corporate sabotage, vessels through which a recalcitrant vendor could wreak havoc on American military operations should the company decide its ethical &#8220;red lines&#8221; have been crossed.</p><p><a href="https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/">Anthropic&#8217;s response</a>, delivered in court filings and public statements, is equally unequivocal. The company has no back door into Department of Defense systems, cannot log into government infrastructure, and lacks any mechanism to alter, disable, or influence models once they have been deployed onto secured military networks. As Anthropic chief executive Dario Amodei wrote in a company blog post published February 27, 2026, the company &#8220;cannot in good conscience accede to their request&#8221; <a href="https://www.bbc.com/news/articles/cvg3vlzzkqeo">to remove safeguards against mass surveillance and fully autonomous weapons</a>. But the company also cannot remotely sabotage its own technology, because the very architecture of large language models makes such cinematic scenarios impossible.</p><h3>Misunderstanding AI is the Real Risk to National Security</h3><p>The collision between these two narratives reveals something far more consequential than a contract dispute between a tech company and its largest potential government customer. It exposes a fundamental fracture in how American institutions understand a technology that is rapidly becoming central to national security, economic competitiveness, and democratic governance. The Pentagon is treating Claude like traditional software: remotely patchable, centrally controlled, governed by explicit logic that human engineers can modify at will. The reality is radically different. Modern large language models are static mathematical formulas, billions of numbers frozen in time, whose behavior emerges from complex statistical patterns rather than editable code. You cannot flip a kill switch on a matrix. You cannot alter a weight file remotely any more than you can edit a photograph after it has been printed and mailed.</p><p>This misunderstanding is not a mere technical quibble or academic distinction. It will shape defense procurement decisions, regulatory frameworks, and the capacity of democratic governments to govern technologies they do not comprehend. If policymakers continue to treat AI systems as remotely controllable executables when they are, in fact, immutable mathematical filters, the resulting policies will target imaginary vulnerabilities while ignoring genuine risks. The stakes could not be higher. 
As AI systems proliferate through military operations, electoral campaigns, and critical infrastructure, the public&#8217;s ability to understand what these systems can and cannot do will determine whether democracy can survive the technological transition that is already underway.</p><h3>What LLMs Actually Are</h3><p>To understand why the Pentagon&#8217;s allegations miss the mark, one must first grasp what a large language model actually is. Equally important is understanding what it is not.</p><p>At their foundation, modern large language models are neural networks trained on vast corpora of text to predict the next token in a sequence. This deceptively simple objective (given the words &#8220;The cat sat on the,&#8221; predict what comes next) masks extraordinary complexity. Through exposure to hundreds of billions of words drawn from books, articles, code repositories, and web pages, these networks learn intricate statistical patterns: not merely which words tend to follow which, but syntactic structures, semantic relationships, factual associations, and even reasoning patterns that emerge from the co-occurrence of concepts in training data.</p><p>The crucial point, and the one that appears to have escaped the Defense Department&#8217;s analysts, is how this learning is implemented. Traditional software consists of explicit instructions: if-then statements, loops, functions that human programmers write and can modify. When Microsoft issues a security patch for Windows, engineers are editing source code, recompiling, and distributing new executable files. The program remains fundamentally a set of instructions that the computer follows.</p><p>Large language models are different. They are implemented not as code but as mathematical matrices: enormous grids of numerical values (&#8220;weights&#8221;) that transform input vectors into output probabilities through successive layers of mathematical operations. The &#8220;knowledge&#8221; of an LLM exists not as explicit rules but as patterns embedded in these weights, patterns so distributed and interconnected that no human can point to a specific parameter and say, &#8220;This controls the model&#8217;s opinion on tax policy,&#8221; or &#8220;This determines whether it will answer a harmful request.&#8221;</p><p>When training completes, these weights are <strong>frozen</strong>. They become static: a multi-gigabyte file containing billions of floating-point numbers. That&#8217;s it. Running the model is not a matter of executing instructions but of performing linear algebra. Input tokens are converted to vectors, multiplied by weight matrices, passed through activation functions, and transformed through successive layers until probabilities emerge. The model operates without cognition or decision-making in any meaningful sense. It filters inputs through a fixed mathematical structure and produces outputs that reflect patterns learned during training.</p><p>The model is a pasta strainer, not a pot of boiling water. It doesn&#8217;t &#8220;do&#8221; anything; it just filters the input and produces output, usually as text.</p>
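<p>To make that concrete, here is a deliberately tiny sketch of what &#8220;running&#8221; a model amounts to once training is finished. This is an illustration, not Anthropic&#8217;s code: the matrices below are random, toy-sized stand-ins for a multi-gigabyte weight file, but the operations are the same in kind.</p><pre><code>import numpy as np

# A trained "model" is just arrays of frozen numbers. Small random matrices
# stand in here for a multi-gigabyte weight file (illustration only).
rng = np.random.default_rng(0)
VOCAB, DIM = 50, 8
W_embed = rng.standard_normal((VOCAB, DIM))   # token embedding matrix
W_out = rng.standard_normal((DIM, VOCAB))     # output projection matrix

def next_token_probs(token_ids):
    # Inference is arithmetic on fixed values: no branches on who is asking,
    # no network calls, nothing that could "phone home" or await a kill signal.
    x = W_embed[token_ids].mean(axis=0)       # crude context vector
    logits = x @ W_out                        # one matrix multiply
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()                  # softmax over the vocabulary

probs = next_token_probs([17, 42, 5])         # token ids standing in for a prompt
print(int(probs.argmax()))                    # most likely next token id</code></pre><p>Change the behavior and you have changed the numbers; keep the numbers and the behavior cannot change. There is no third option hiding in the arithmetic.</p>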
<p>This distinction matters profoundly for the Pentagon&#8217;s allegations. Anthropic&#8217;s engineers cannot &#8220;log into&#8221; a deployed model and alter its behavior any more than a photographer can log into a printed photograph and change the image. Once the weights are transferred to Department of Defense infrastructure, Anthropic has no technical access to them. The model file is simply a collection of numbers; running it requires only computational resources and the appropriate inference software, which is itself open-source and widely available. There is no phone-home mechanism, no remote administration console, no kill switch embedded in the weights themselves.</p><p>As researchers at Stanford&#8217;s Human-Centered Artificial Intelligence Institute noted <a href="https://hai.stanford.edu/news/how-do-we-fix-and-update-large-language-models">in a 2024 article</a>, the real strategic asset is not the thin serving code but the model weights and underlying training data, which are extremely costly to reproduce. The weights represent billions of dollars in computational resources and the accumulated statistical extraction of humanity&#8217;s written output. Once transferred, they are simply files, albeit files of extraordinary sophistication and value.</p><h3>Inside the Black Box: A Brief Tour of LLM Architecture</h3><p>The transformer architecture, <a href="https://arxiv.org/abs/1706.03762">introduced by Google researchers in 2017</a> and now ubiquitous in large language models, provides the structural foundation for understanding why remote manipulation is impossible.</p><p>A transformer model consists of stacked layers, each containing two primary components: a multi-head self-attention mechanism and a position-wise feed-forward network. The attention mechanism allows the model to weigh the relevance of different input positions when producing each output position; the feed-forward networks apply learned transformations to these attended representations. Between each layer sit normalization operations and residual connections that enable the training of deep networks.</p><p>Critically, none of these components contains executable logic in the traditional sense. There are no conditional branches, no flags that can be toggled, no wartime mode that Anthropic could activate. The attention mechanism is purely mathematical: queries, keys, and values are derived from input embeddings through learned weight matrices, and attention scores are computed via scaled dot-product operations followed by <a href="https://en.wikipedia.org/wiki/Softmax_function">softmax</a> normalization. The feed-forward networks are simple linear transformations with nonlinear activations.</p><p>When researchers speak of &#8220;running&#8221; a model, they mean performing matrix calculations. The weights in the model are constants and never change between inference calls. A model&#8217;s response to &#8220;What is the capital of France?&#8221; is determined entirely by the fixed values in its weight matrices, values that were established during training, have not changed since, and will not change in the future. The model consults no database, checks no policy server, and evaluates no dynamic rules. It simply performs billions of arithmetic operations and produces a probability distribution over possible next tokens.</p>
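<p>The attention operation at the heart of all this is compact enough to write out in full. The sketch below is the standard scaled dot-product attention from the 2017 paper, in NumPy with toy dimensions (the weight matrices here are random stand-ins for learned parameters). Note what it contains: arithmetic, and nothing else.</p><pre><code>import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, W_q, W_k, W_v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    # The W_* matrices are learned during training and frozen afterward.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # relevance of each position to the others
    return softmax(scores) @ V                # weighted mix of value vectors

rng = np.random.default_rng(1)
d = 4
X = rng.standard_normal((3, d))               # three input positions
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
print(attention(X, W_q, W_k, W_v).shape)      # (3, 4)</code></pre><p>There is no conditional in sight: no check of a clock, a location, or an owner&#8217;s instruction. The same weights and the same inputs produce the same outputs, in a lab or in a war room.</p>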
<p>If the Defense Department fears that Anthropic might alter model behavior during wartime, it must believe one of two things: that Anthropic can remotely modify weight files, a technical impossibility once those files are deployed on air-gapped military networks; or that the models themselves contain some form of remote code execution capability that would allow Anthropic to override their behavior. Neither scenario aligns with how transformer models actually function.</p><p>The government&#8217;s allegations seem to imagine AI systems as remotely administered services, something like a cloud-hosted database where administrators can modify queries or revoke access in real time. But deployed LLMs are not services; they are artifacts. Once Anthropic transfers the weight file to the Pentagon and the Pentagon loads it onto classified systems, Anthropic has no more ability to influence that model than a book publisher has to alter text in a volume already sitting on a reader&#8217;s shelf.</p><h3>Can You Secretly &#8220;Edit&#8221; an LLM After Training?</h3><p>The question of whether large language models can be modified after training is not merely theoretical. The emerging field of model editing has produced techniques like MEND, SERAC, ConCoRD, ROME, and MEMIT that aim to correct individual facts or adjust narrow behaviors without retraining models from scratch. These methods represent genuine advances, but their limitations illuminate why the government&#8217;s fears of remote sabotage are misplaced.</p><p>As a <a href="https://arxiv.org/abs/2403.14236">2024 paper posted to ArXiv</a> explains, current editing methods are localized and constrained. They can sometimes update specific facts, changing which person holds a particular office, for instance, but they struggle with the downstream implications of such changes. Updating who the UK prime minister is without correctly updating related facts about their family or cabinet demonstrates the brittleness of these interventions.</p><p>More fundamentally, model editing requires direct access to the weight matrices themselves. Techniques like ROME (Rank-One Model Editing) and MEMIT (Mass Editing Memory in a Transformer) operate by computing targeted modifications to specific layers, modifications that must be applied directly to the stored parameters. These are not remote operations; they require possession of and computational access to the model weights. Once a model is deployed on Department of Defense infrastructure, Anthropic has no such access.</p><p>The research also reveals a critical limitation that undermines the Pentagon&#8217;s narrative of covert manipulation: edited models often exhibit side effects. Changes made to one behavior can unpredictably affect others, and edited models may become unstable or degraded in ways that would be immediately apparent to users. A &#8220;sabotaged&#8221; model would be unlikely to behave subtly differently in wartime. It would more likely behave erratically or nonsensically, betraying the tampering through its degraded outputs.</p>
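<p>It is worth seeing what an &#8220;edit&#8221; physically involves. The sketch below is a toy illustration in the spirit of a rank-one update, not the published ROME algorithm: it nudges one weight matrix so that a chosen input direction maps to a chosen output. Every line presupposes that the array is sitting in your own memory.</p><pre><code>import numpy as np

rng = np.random.default_rng(2)
d = 6
W = rng.standard_normal((d, d))        # one layer's weights, held locally

# A toy rank-one edit: nudge W so that a chosen "key" direction k
# maps to a desired "value" vector v_target. Direct surgery on the numbers.
k = rng.standard_normal(d)
v_target = rng.standard_normal(d)
delta = np.outer(v_target - W @ k, k) / (k @ k)
W_edited = W + delta                   # the file on disk must be rewritten to persist this

print(np.allclose(W_edited @ k, v_target))   # True: the stored mapping has changed</code></pre><p>There is no way to apply such an update to a matrix you cannot read, on a network you cannot reach. Editing is possession.</p>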
<h3>Why the Pentagon Is (Once Again) Off-base</h3><p>The Justice Department&#8217;s court filing articulates a theory of AI systems that bears little resemblance to how these technologies actually function. Disabling a deployed LLM would require either physical access to the servers hosting the model or some form of remote kill switch embedded in the weights themselves. No such kill switch exists; transformer architectures include no mechanisms for remote deactivation. The weights are simply numbers, inert until multiplied with input vectors. They contain no logic for checking authorization, no network code for receiving commands, no conditional branches that could be triggered by external signals.</p><p>The allegation that Anthropic might &#8220;preemptively alter the behavior&#8221; of models is equally disconnected from technical reality. Altering model behavior requires modifying weights, which requires computational access to the deployed model files. Once those files reside on classified Pentagon systems, Anthropic has no such access. The company cannot send updates, patch vulnerabilities, or introduce bugs without going through the same procurement and deployment processes that govern any software update.</p><p>The Pentagon&#8217;s stance appears to conflate two distinct scenarios: API access, where models run on vendor-controlled infrastructure and can be modified or revoked by the vendor, and on-premise deployment, where models run on customer-controlled systems and the vendor has no ongoing access. The Justice Department&#8217;s filing discusses Anthropic&#8217;s ability to &#8220;disable its technology&#8221; as if the company were operating a cloud service where flipping a switch could cut off access. But the disputed deployment involves models transferred to DoD infrastructure, the equivalent of shipping a product rather than providing a subscription service.</p><p>This confusion has real consequences. If the Defense Department genuinely believes that Anthropic retains the ability to sabotage deployed models, the department is operating under a threat model that does not correspond to actual technical capabilities. Resources devoted to monitoring for remote manipulation or kill switch activation are resources not devoted to genuine security concerns: poisoned training data, compromised fine-tuning pipelines, maliciously modified weights before delivery, or unsafe optimization choices that erode safety constraints.</p><h3>Real Risks vs. Imaginary &#8220;Kill Switches&#8221;</h3><p>The Pentagon&#8217;s focus on vendor sabotage, while technically unfounded, distracts from genuine risks that do threaten AI systems deployed in national security contexts. Understanding the difference between realistic vulnerabilities and cinematic scenarios is essential for developing effective security protocols.</p><p>Realistic concerns begin with the supply chain of AI systems themselves. As research on backdoor attacks in deep neural networks has demonstrated, malicious actors can implant triggers during training that cause models to behave normally under most conditions but produce targeted outputs when specific patterns appear. These backdoors are implanted during the training phase, not activated remotely after deployment; they exist as patterns in the weight matrices themselves, waiting for their trigger conditions.</p><p>The threat here is not that Anthropic might remotely sabotage its own models, but that models might contain vulnerabilities introduced during training, either accidentally or deliberately, that could be exploited by adversaries who discover the trigger patterns. A model trained on poisoned data might refuse legitimate military commands under specific conditions, hallucinate critical information, or produce subtly wrong outputs that could influence operational decisions. These risks are serious, but they are risks of training-time contamination, not runtime manipulation.</p><p>Similarly, fine-tuning and optimization choices can impact safety margins and model behavior, but these effects arise during training or retraining, not through invisible runtime levers. Research has shown that fine-tuning aligned language models can compromise safety even when users do not intend to do so; benign fine-tuning datasets can inadvertently degrade the safety guardrails established during initial alignment training. 
Again, these vulnerabilities emerge from the training process, not from remote manipulation of deployed systems.</p><p>The distinction matters for defense procurement. Rather than monitoring for vendor sabotage, a threat that does not exist in the form the Pentagon imagines, security protocols should focus on validating training data integrity, auditing model weights for anomalous patterns, and red-teaming deployed systems against adversarial inputs. Independent model audits, checksum verification of weight files, and continuous monitoring for unexpected behaviors are practical security measures that address real vulnerabilities.</p><p>Focusing on an implausible sabotage vector also carries opportunity costs. The Defense Department&#8217;s dispute with Anthropic has already disrupted operations: the government is working to replace Claude with alternatives from Google, OpenAI, and xAI, a transition that the Justice Department&#8217;s filing acknowledges cannot happen immediately because &#8220;the Pentagon cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use on the department&#8217;s classified systems.&#8221; This disruption was caused by a contract dispute over use restrictions, not by any demonstrated technical vulnerability, but the government&#8217;s response has treated the situation as a security threat requiring immediate mitigation.</p><p>The irony is that the Pentagon&#8217;s actions may push defense AI systems toward less secure arrangements. If vendors fear that contractual disputes will lead to supply-chain-risk designations and potential bans, they may be less willing to accept government contracts with stringent use restrictions. The result could be a shift toward either brittle in-house models developed without adequate resources or opaque arrangements with vendors who refuse to accept any use limitations, arrangements that are harder to audit and less likely to prioritize safety.</p><h3>Why Governments Keep Getting AI Wrong</h3><p>The Pentagon&#8217;s mischaracterization of Anthropic&#8217;s capabilities is not an isolated incident but part of a broader pattern in which high-level officials frame AI systems using analogies from traditional software, cyber backdoors, or even physical weapons, leading to misaligned regulation and procurement rules that address imaginary threats while neglecting genuine ones.</p><p>This pattern reflects the fundamental challenge of technological governance: policymakers must make decisions about systems they do not fully understand, using conceptual frameworks inherited from earlier technologies. Software was once new and poorly understood; now it is taken for granted that policymakers comprehend the difference between local executables and cloud services, between source code and compiled binaries, between vulnerabilities and backdoors. AI systems have not yet achieved that level of conceptual familiarity, and the result is policy frameworks that misfire.</p><p>The mismatch encourages demands for impossible guarantees. The Pentagon&#8217;s filing suggests that Anthropic should be able to prove it cannot sabotage its own technology. In the absence of perfect knowledge, the government is demanding assurances that vendors cannot provide, not because they are hiding capabilities but because the requested capabilities do not exist in the form imagined.</p>
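<p>What a feasible control looks like is not mysterious. Verifying that a delivered weight file is bit-for-bit identical to what the vendor shipped, for instance, is a routine integrity check. The sketch below uses a stand-in file, but the procedure is the standard one:</p><pre><code>import hashlib

def file_sha256(path, chunk_size=2 ** 20):
    # Hash the file in chunks; any single altered byte changes the digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a delivered weight file (real ones run to many gigabytes).
with open("weights.bin", "wb") as f:
    f.write(b"\x00" * 1024)

digest_at_delivery = file_sha256("weights.bin")
# ...later, before loading the model on a secured system:
print(file_sha256("weights.bin") == digest_at_delivery)   # True unless the file was altered</code></pre><p>That is the kind of guarantee the mathematics actually offers, and it requires no trust in the vendor at all.</p>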
<p>Meanwhile, such feasible controls go neglected. Rigorous testing of deployed models, transparent update protocols that document what changes in new weight files, clear lines of liability for training-time and deployment-time failures: these are achievable security measures that do not require violating the laws of mathematics. But they require accepting that AI systems are probabilistic, emergent, and imperfectly interpretable, an acceptance that runs counter to the traditional software paradigm of deterministic behavior and explicit logic.</p><p>The Anthropic case illustrates the risks of this conceptual confusion. By treating a contract dispute over ethical use restrictions as a supply-chain security threat, the Pentagon has escalated a disagreement about values into a legal confrontation with significant operational consequences. The company that developed one of the few AI systems cleared for classified Pentagon use is now being pushed out of defense procurement because the government mischaracterized how that company&#8217;s technology functions.</p><p>This approach risks chilling collaboration between AI vendors and government agencies. If ethical restrictions on military use can trigger supply-chain-risk designations, vendors may conclude that accepting government contracts requires abandoning all principled limitations. The result would be a race to the bottom in AI safety, with defense contracts going to whoever promises the most permissive use terms rather than whoever offers the most secure and reliable systems.</p><h3>Citizens Need an LLM 101 <em>NOW</em></h3><p>The implications of the Pentagon-Anthropic dispute extend far beyond defense procurement. As AI systems proliferate through political campaigns, electoral infrastructure, media production, and public discourse, the public&#8217;s understanding of what these systems can and cannot do will determine whether democratic societies can navigate the coming turbulence.</p><p>The 2024 election cycle offered a preview of what is to come. As the Brennan Center for Justice documented in their analysis &#8220;Gauging the AI Threat to Free and Fair Elections,&#8221; AI-generated deepfakes targeting candidates proliferated across social media platforms. Russian operatives created synthetic videos of Vice President Kamala Harris; a former Palm Beach County deputy sheriff, operating from Russia, collaborated on fabricated videos falsely accusing vice-presidential nominee Tim Walz of assault; AI-generated robocalls featuring synthetic voices of President Biden urged New Hampshire primary voters not to cast ballots.</p><p>These incidents demonstrate not just the capabilities of generative AI but the vulnerabilities of a public that lacks basic literacy about these technologies. Voters who cannot distinguish between authentic and synthetic media are voters who can be manipulated by actors wielding cheap fabrication tools. Citizens who believe AI systems are &#8220;intelligent&#8221; in any meaningful sense, capable of judgment, intention, or moral reasoning, will misinterpret the outputs of probabilistic text engines as evidence, authority, or wisdom.</p><p>AI literacy must include understanding that large language models are not oracles, agents, or remote-controlled ideologues. They are statistical pattern-matching systems trained on human text, with fixed training cutoffs and no real-time access to information unless specifically engineered to retrieve it. They hallucinate; they confabulate; they reproduce biases present in their training data. 
They do not &#8220;know&#8221; things in any meaningful sense; they predict which sequences of tokens are statistically likely given their training.</p><p>This baseline understanding provides immunity to certain forms of manipulation. A citizen who knows that LLMs lack real-time knowledge cannot be fooled by synthetic news reports generated by systems whose training data ends months before the reported events. A citizen who understands that these systems are probabilistic rather than intentional will not attribute malice or conspiracy to model outputs that reflect training data biases. A citizen who recognizes AI-generated content as statistically probable rather than factually grounded will approach synthetic media with appropriate skepticism.</p><p>The political calendar makes this literacy urgent. Campaigns and governments are integrating AI into messaging, decision support, and cyber operations. Deepfakes will proliferate; synthetic text floods will drown authentic discourse; bad-faith political claims about &#8220;rogue AIs&#8221; or &#8220;sabotaged models&#8221; will exploit public ignorance to delegitimize opposition or justify repressive measures. Without baseline understanding, voters will be vulnerable to both AI-driven disinformation and to political manipulation that mischaracterizes the underlying technology.</p><h3>What AI Literacy Looks Like in Practice</h3><p>The gap between AI&#8217;s actual capabilities and public understanding is wide, but it is bridgeable. A modest investment in conceptual education can provide citizens with the mental models needed to navigate an AI-saturated political environment.</p><p>Core concepts that every citizen should grasp begin with the nature of model weights. A large language model is not a program in the traditional sense but a file containing billions of numbers, the &#8220;weights&#8221; that encode statistical patterns learned from training data. These weights are static; once created, they do not change unless deliberately retrained or edited through computationally intensive processes. Running the model means performing mathematical operations on these weights, not executing instructions written by programmers.</p><p>Understanding training data matters more than understanding clever code. An LLM&#8217;s outputs reflect the patterns in its training corpus; it knows what it has seen, biased by how often and in what contexts it has seen it. Training data quality and sourcing are thus more important than architectural details in determining what a model &#8220;knows&#8221; and how it behaves. Models trained on toxic or biased data will produce toxic or biased outputs regardless of safety filters added afterward.</p><p>Recognizing AI-generated content requires attention to telltale signs: perfect grammatical correctness combined with factual errors; confident assertions about events after training cutoffs; generic or hedged language on specific topics; characteristic phrasing patterns that differ from human idiosyncrasy. None of these markers is foolproof, but collectively they provide signals that content may be synthetic rather than authentic.</p><p>Civil society organizations and experts have begun calling for AI-focused media literacy programs, whistleblower protections, and transparent communication about model capabilities and limitations. These demands recognize that technological literacy is not merely an individual responsibility but a collective necessity for democratic functioning. 
Platforms that host AI-generated content should be required to label it; AI vendors should be required to document training data sources and model limitations; educational institutions should incorporate AI literacy into civic education curricula.</p><p>Concrete steps for individual citizens include following reputable AI reporting from outlets that prioritize accuracy over sensationalism; using simple tests to probe what chatbots know and don&#8217;t know, establishing their training cutoffs and limitations; and treating political claims about AI with the same skepticism applied to traditional campaign spin. When a politician claims a rival is using &#8220;rogue AI&#8221; or warns of &#8220;sabotaged models,&#8221; the appropriate response is not alarm but inquiry: what specifically is being alleged, and does it align with what is technically possible?</p><h3>Don&#8217;t Let the Metaphor Win</h3><p>The Pentagon&#8217;s clash with Anthropic reveals the power of metaphor in shaping policy. By conceiving of large language models as remotely controllable software rather than static mathematical artifacts, the Defense Department constructed a threat model that led to legal escalation, operational disruption, and the potential exclusion of one of the most safety-conscious AI developers from defense procurement.</p><p>But the metaphor is wrong, and the policies that flow from it will fail. Large language models are not magic; they are matrix filters shaped by their training history, performing linear algebra on input vectors to produce probability distributions over output tokens. They contain no hidden kill switches, no remote administration capabilities, no wartime modes that vendors can activate at will. Once deployed, they are simply files, extraordinarily sophisticated files, but files nonetheless.</p><p>If leaders cling to the wrong mental model, they will regulate ghosts and ignore real vulnerabilities. They will demand impossible guarantees of real-time control while neglecting achievable measures like training data audits and weight file verification. They will chase vendor sabotage scenarios that cannot happen while overlooking backdoor vulnerabilities that could. They will make policy based on science fiction rather than computer science.</p><p>The alternative is not resignation or technological determinism but informed governance. An educated public can pressure institutions to grapple with how these systems truly function, demanding policies that address genuine risks rather than imagined ones. This requires humility from policymakers, acceptance that AI systems are not merely complex software but genuinely different artifacts requiring new conceptual frameworks, and investment from citizens in understanding the technologies that will shape their lives.</p><p>As AI seeps into national security and electoral politics, the biggest risk may not be the models themselves but our refusal to understand how they work before we hand them the keys. The Pentagon has already demonstrated this danger: by mischaracterizing Anthropic&#8217;s technology, it has disrupted its own operations and potentially degraded its AI capabilities. 
Similar mistakes in electoral contexts could undermine democratic legitimacy; in security contexts, they could create vulnerabilities that adversaries exploit.</p><p>The path forward requires clear thinking about what large language models actually are: not agents with intentions, not software with remote administration, but static mathematical structures that transform inputs into outputs through patterns learned from training data. This understanding is not merely academic; it is the foundation upon which sound policy must be built. Until policymakers and citizens alike grasp that LLMs are matrix, not magic, the gap between technological reality and institutional response will continue to widen, with consequences that none of us can afford.</p><h3>The Brewster Take</h3><p>The Pentagon&#8217;s fight with Anthropic is not really about sabotage. It is about power and misunderstanding; the government&#8217;s inability to grasp that some technologies cannot be centrally controlled, combined with a tech company&#8217;s refusal to let its creations be used for mass surveillance and autonomous killing. Both sides are wrong in their own ways. </p><p>The Defense Department clings to outdated metaphors of command and control, treating neural networks like they are traditional software with backdoors and kill switches. </p><p>Anthropic, for all its technical sophistication, seems naive about how the world of power actually works, surprised that refusing to build weapons for the state might have consequences. </p><p>The rest of us are caught in the middle, watching two institutions stumble toward a future neither fully comprehends. What emerges from this collision will shape not just defense procurement but the boundaries of corporate ethics, government oversight, and whether democratic societies can govern technologies that outpace their understanding. The matrix does not care about our political dramas. It simply multiplies weights and produces probabilities. </p><p>The sooner we stop treating it like magic, the sooner we can start building policies that might actually work.</p><div><hr></div><h3>Sources</h3><p>Dave, Paresh. &#8220;Justice Department Says Anthropic Can&#8217;t Be Trusted With Warfighting Systems.&#8221; WIRED, March 17, 2026. <a href="https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/">https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/</a></p><p>Dave, Paresh. &#8220;Anthropic Denies It Could Sabotage AI Tools During War.&#8221; WIRED, March 20, 2026. <a href="https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/">https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/</a></p><p>Hays, Kali. &#8220;Anthropic boss rejects Pentagon demand to drop AI safeguards.&#8221; BBC News, February 27, 2026. <a href="https://www.bbc.com/news/articles/cvg3vlzzkqeo">https://www.bbc.com/news/articles/cvg3vlzzkqeo</a></p><p>&#8220;How Do We Fix and Update Large Language Models?&#8221; Stanford Human-Centered Artificial Intelligence Institute, September 30, 2024. <a href="https://hai.stanford.edu/news/how-do-we-fix-and-update-large-language-models">https://hai.stanford.edu/news/how-do-we-fix-and-update-large-language-models</a></p><p>Wu, Haibin, et al. &#8220;Can LLM Safety Be Preserved During Fine-Tuning? A Framework for Evaluating Changes in Alignment and Performance.&#8221; ArXiv:2403.14236v1, March 21, 2024. 
<a href="https://arxiv.org/abs/2403.14236">https://arxiv.org/abs/2403.14236</a></p><p>Qi, Xiangyu, et al. &#8220;Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!&#8221; ArXiv:2310.03693, October 5, 2023. <a href="https://arxiv.org/abs/2310.03693">https://arxiv.org/abs/2310.03693</a></p><p>Vaswani, Ashish, et al. &#8220;Attention Is All You Need.&#8221; ArXiv:1706.03762, June 12, 2017. <a href="https://arxiv.org/abs/1706.03762">https://arxiv.org/abs/1706.03762</a></p><p>Alammar, Jay. &#8220;The Illustrated Transformer.&#8221; jalammar.github.io, 2018. <a href="https://jalammar.github.io/illustrated-transformer/">https://jalammar.github.io/illustrated-transformer/</a></p><p>&#8220;Gauging the AI Threat to Free and Fair Elections.&#8221; Brennan Center for Justice, 2024. <a href="https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections">https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections</a></p><p>&#8220;Why citizens and campaigns need to improve AI literacy in this very political year.&#8221; SC World, 2024. <a href="https://www.scworld.com/perspective/why-citizens-and-campaigns-need-to-improve-ai-literacy-in-this-very-political-year">https://www.scworld.com/perspective/why-citizens-and-campaigns-need-to-improve-ai-literacy-in-this-very-political-year</a></p>]]></content:encoded></item><item><title><![CDATA[The $80B Death of the Metaverse]]></title><description><![CDATA[Who pays the price when companies spend billions trying to remake society in their own image?]]></description><link>https://www.brewsterpress.com/p/the-80b-death-of-the-metaverse</link><guid isPermaLink="false">https://www.brewsterpress.com/p/the-80b-death-of-the-metaverse</guid><dc:creator><![CDATA[William Southerland]]></dc:creator><pubDate>Tue, 24 Mar 2026 21:54:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GJPK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GJPK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GJPK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!GJPK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!GJPK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!GJPK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!GJPK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd6a13a3-dbe2-46c8-8e22-01611e1a99a0_1024x608.png" width="724" height="429.875" class="sizing-normal" alt=""></picture></div></a></figure></div><p style="text-align: justify;">In the autumn of 2021, Mark Zuckerberg stood before the world and proclaimed that the metaverse would be &#8220;the successor to the mobile internet,&#8221;
a declaration delivered with the earnestness of a man who had already committed his company&#8217;s entire strategic architecture to the idea that humanity would conduct its social existence through legless, low-res avatars. The image was striking in its audacity; here was a single individual, commanding a platform that shapes the attention of nearly three billion people, declaring that he would reimagine the fundamental architecture of human interaction itself, and he would do so with an initial investment of ten billion dollars that would, within four years, balloon to over eighty billion dollars before anyone had meaningfully consented to the experiment.</p><p style="text-align: justify;">The reversal has arrived with the bureaucratic silence of a corporate filing; <a href="https://www.engadget.com/ar-vr/meta-will-shut-down-vr-horizon-worlds-access-in-june-222028919.html">Horizon Worlds, the flagship metaverse platform that was meant to host ten million users by the close of 2022, is being phased out</a>, with VR access ending June 15, 2026. Its virtual real estate is now just a digital ghost town whose only visitors are journalists documenting the abandonment. That $80 billion has evaporated into the accounting ledgers of a company that can apparently afford to treat such sums as tuition in the education of its chief executive. </p><p style="text-align: justify;">Yet this was never merely a product failure, a miscalculation about market timing or consumer preferences that could be remedied through iteration and refinement. This was <strong>a failed social experiment conducted on a global scale</strong> without consent, a vast exercise in behavior modification to see if humanity would relocate to a platform owned by a single corporate entity whose motivations have never aligned with those of the communities it purports to serve.</p><p style="text-align: justify;">And here is the pivot that demands our attention, the moment when the narrative reveals its deeper pattern: Meta has already moved on, announcing that it will now pursue &#8220;superintelligence&#8221; with even greater capital commitments. It has <a href="https://www.nytimes.com/2026/01/28/technology/meta-earnings-ai-spending.html">quietly raised its projected AI infrastructure spending to over $115 billion</a> while the metaverse losses accumulate and the promises evaporate. What does it mean, then, when the decisions of a single individual can consume billions of dollars in a failed attempt to reimagine human interaction? Who is to stop them from simply pivoting to the next grand obsession, armed with even more capital and even less accountability, while the rest of us are left to live with the consequences of enthusiasms we never shared?</p><h2>THE PROMISE</h2><p style="text-align: justify;">There was, in retrospect, something almost touching about the evangelism of it all. Not in the sincerity, which was always suspect, but in the sheer institutional optimism required to believe that millions of Americans would voluntarily strap screens to their faces in order to conduct virtual business meetings.
Zuckerberg, in that peculiar moment of corporate mysticism that was late 2021, declared with straight-faced earnestness that &#8220;<a href="https://www.nytimes.com/2026/03/19/technology/mark-zuckerbergs-metaverse-vr-horizon-worlds.html">Teleporting around the metaverse is going to be like clicking a link</a>,&#8221; a statement that managed to conflate the fundamental physics of human presence with the casual friction of browser navigation. The metaverse, as presented by its primary apostle, was not merely a platform but a dissolution of the distinction between being present and being somewhere else; it was, in the language of venture capital, a spatial internet where presence itself would become as liquid as data.</p><p style="text-align: justify;">The corporate world, which has never been particularly adept at resisting the siren song of consultants bearing PowerPoints, responded to this vision with the kind of synchronized enthusiasm that only collective delusion can generate. <a href="https://www.theverge.com/2022/2/15/22935445/disney-metaverse-strategy-plans-mike-white-memo">Disney, whose century of storytelling had been built upon physical experience and tangible wonder, appointed a &#8220;chief metaverse officer&#8221;</a> with the apparent conviction that animated mice would find new life in blockchain-powered avatars. Crate &amp; Barrel, a retailer whose entire value proposition rested upon the tactile appreciation of home goods, joined the parade, installing executives whose job description seemed to involve imagining how customers might virtually browse couches they could not sit upon. These appointments were not responses to consumer demand; they were anticipatory bets on a future that had not arrived, placed by executives who understood that being perceived as forward-looking often matters more than actually understanding what one is looking forward to.</p><p style="text-align: justify;">It was McKinsey, the management consultancy that has spent decades perfecting the art of charging corporations for ideas they might have arrived at through common sense, which provided the bean-counters the made-up numbers. <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/value-creation-in-the-metaverse">In a 2022 report</a>&#8212;issued with the confident authority only a firm billing by the hour can summon&#8212;they predicted that the metaverse would generate $5 trillion in economic value by 2030, a figure so round and so large that it invited skepticism without quite demanding it. The same consultants, whose methodological contributions to corporate history include the popularization of systematic layoffs as a management strategy, further prophesied that corporations would derive substantial revenue from metaverse activities by 2027, a timeline so compressed that it required the complete transformation of consumer behavior within a timeframe barely sufficient to rebrand a fast-food chain. These were not projections based on observed patterns of adoption; they were declarations of faith dressed in the statistical garments of empirical analysis.</p><p style="text-align: justify;">The pitch, when stripped of its technological mysticism, relied upon a particular strain of environmental reasoning that has long proven effective with corporate boards concerned about their public image. 
The metaverse, we were told, would reduce carbon emissions by eliminating the need for physical travel; instead of executives burning jet fuel to attend conferences, they would consume electricity to render their avatars in hotel ballrooms that existed only in code. <a href="https://www.weforum.org/stories/2022/06/metaverse-climate-change-sustainability/">The argument was elegant in its circularity</a>: technology would solve the problems that technology had created, and the solution would require the purchase of more technology. &#8220;Less time stuck in traffic&#8221; became a shorthand for progress, as though the frustration of commuting were a sufficient motivation to abandon the physical world entirely.</p><p style="text-align: justify;">The <em>synergy</em>, a term consultants use when they want to describe self-reinforcing delusion, operated not through market validation but through mutual reinforcement among institutions that had ceased to trust their own judgment. Corporate America believed in the metaverse not because consumers were demanding it, but because consultants had constructed models showing that other corporations believed it. The same firms that appointed metaverse officers cited the existence of other metaverse officers as evidence that the trend was real. It was, in the final analysis, a peculiar form of corporate fortune telling in which the fortune tellers were paid regardless of whether the future arrived. And the corporations, who should have known better, gambled <em>billions</em> upon predictions that bore the empirical rigor of a horoscope.</p><h2>THE REALITY</h2><p style="text-align: justify;">The fundamental architecture of immersive technology, which demands that users strap high-resolution displays and motion-tracking sensors to their faces for extended periods, confronts an inconvenient truth about human physiology and social preference: people don&#8217;t want to interact through diving masks. Humans prefer the eye contact, spontaneous gestures, and the subtle interplay of environmental cues that a head-mounted apparatus obscures. The hardware problem, which has plagued virtual reality since its commercial emergence, persists not because engineers lack ingenuity but because the form factor itself constitutes a barrier that most consumers find unacceptable for daily use.</p><p style="text-align: justify;">Apple, whose entry into any market typically signals mainstream validation, provided the most instructive case study in the economic limitations of premium immersive hardware. In 2024, it launched <a href="https://www.theguardian.com/technology/2026/jan/01/apple-reportedly-cuts-production-vision-pro-headset-poor-sales">the Vision Pro at a price point of $3,499</a>, a figure that positioned the device as a luxury good accessible only to devoted enthusiasts and institutional purchasers. The sales figures, which analysts estimated at fewer than 100,000 units per quarter, revealed a market appetite far below the projections that had accompanied the product&#8217;s unveiling, and by early 2026, reports indicated that Apple had suspended production of the higher-end configuration, a tacit acknowledgment that the economics of ultra-premium virtual reality headsets did not cohere with sustainable manufacturing scales. 
The device itself often caused neck fatigue after sessions exceeding thirty minutes, and the limited battery life required frequent recharging that interrupted the immersive experiences the technology promised to deliver.</p><p style="text-align: justify;">These physical limitations echoed <a href="https://www.technologyreview.com/2014/11/26/169918/google-glass-is-dead-long-live-smart-glasses/">the earlier failure of Google Glass, launched in 2013</a>, a device that similarly promised to overlay digital information onto physical reality but encountered resistance rooted in concerns about privacy invasion, technology addiction, and the social awkwardness of face-mounted cameras that recorded encounters without the explicit consent of all participants. The Vision Pro, despite its advanced passthrough capabilities and spatial computing interface, resurrected many of these same anxieties; wearers found themselves isolated within their own computational bubbles, unable to maintain the reciprocal awareness that face-to-face interaction requires, while bystanders confronted the disquieting experience of speaking to someone whose eyes remained hidden behind reflective screens.</p><p style="text-align: justify;">The engagement metrics for Meta&#8217;s Horizon Worlds, which the company positioned as its flagship metaverse platform, confirmed that the hardware limitations translated directly into usage patterns. <a href="https://www.cnbc.com/2022/10/15/meta-horizon-worlds-metaverse-losing-users-falling-short-of-goals.html">The platform had attracted only a few hundred thousand monthly active users as of February 2022</a>, a figure that pales in comparison to the billions who inhabit conventional social networks and that rendered the billions in development expenditure increasingly difficult to justify to shareholders who measure success in adoption curves and advertising impressions. <a href="https://omdia.tech.informa.com/pr/2024/dec/reality-check-for-vr-omdia-forecasts-decline-as-apples-entry-fails-to-galvanize-market">The market rejection extended beyond Meta&#8217;s offering to encompass the entire category; global shipments of virtual reality headsets declined by 10 percent in 2024</a> despite Apple&#8217;s high-profile entry, suggesting that the presence of a premium competitor had failed to expand the market and may have instead fragmented an already limited consumer base.</p><p style="text-align: justify;">Within Meta itself, the internal reassessment became explicit when Samantha Ryan, a vice president overseeing the company&#8217;s virtual reality initiatives, addressed the strategic pivot with a candor rare in corporate communications, <a href="https://www.nytimes.com/2026/03/19/technology/mark-zuckerbergs-metaverse-vr-horizon-worlds.html">stating that the company sometimes knocks initiatives out of the park and that other times it gets things wrong</a>, a formulation that conveyed both the scale of the miscalculation and the organizational humility required to acknowledge it publicly.
The admission, which arrived after years of promotional videos depicting idyllic virtual offices and seamless social gatherings in digital space, marked the end of an era in which the metaverse could be discussed as an inevitability rather than a hypothesis that the data had progressively falsified.</p><h2>THE KILLER APP THAT NEVER CAME</h2><p style="text-align: justify;">Every technological platform that has achieved mass adoption, from the mainframe to the smartphone, has been propelled forward by a singular application that solved a problem so fundamental that adoption became not merely desirable but inevitable: VisiCalc transformed the Apple II from a hobbyist&#8217;s curiosity into an essential business instrument; Lotus 1-2-3 later cemented the spreadsheet as the lingua franca of corporate accounting on the IBM PC; and the smartphone succeeded not because it was novel but because it consolidated the functions of the telephone, the camera, the calendar, and the portable music player into a single device that reduced the cognitive burden of daily existence. The metaverse, by contrast, arrived with a surfeit of spectacle and a poverty of purpose; it offered meetings conducted through cartoon avatars in virtual conference rooms that simulated the very spaces that videoconferencing had already rendered obsolete, a lateral displacement that substituted animation for functionality and novelty for necessity.</p><p style="text-align: justify;">The parallel with three-dimensional cinema, which enjoyed a brief vogue in the 1950s before collapsing under the weight of cumbersome glasses, dim projection, and the ineluctable fact that most narratives did not require spatial depth to achieve emotional resonance, suggests itself with almost tedious inevitability; the technology languished for half a century until digital projection and refined optics permitted its revival, yet even now, <a href="https://thetechylife.com/why-did-they-stop-making-3d-movies/">audiences routinely select two-dimensional presentations when given the choice</a>, preferring clarity and convenience to the marginal gains of stereoscopic immersion. Virtual reality has traced this same arc of premature promise and protracted disappointment; the headsets that were to inaugurate a new era of presence and connection have instead delivered motion sickness, social isolation, and the persistent suspicion that one is participating in an elaborate technological jest at one&#8217;s own expense.</p><p style="text-align: justify;">The metaverse&#8217;s proponents insisted that virtual presence would supplant the flat efficiency of Zoom and Teams, yet the avatar-based alternative solved no problem that the existing platforms had failed to address. Instead, it introduced new frictions in the form of hardware costs, the awkwardness of digital embodiment, and the cognitive dissonance of watching colleagues navigate virtual space like weird, voxel-y puppets. The technology was &#8220;cool,&#8221; but the utility was conspicuously absent.
In the calculus of adoption, where consumers weigh the marginal benefit of switching against the inertia of the familiar and the limits of household budgets, coolness without utility is a novelty, not a necessity.</p><h2>CORPORATE SOCIAL ENGINEERING WITHOUT OVERSIGHT</h2><p style="text-align: justify;">The fundamental question that the collapse of the metaverse raises, and which few commentators seem willing to articulate with the requisite precision, is whether anyone was actually asked whether they wanted their social interactions rearchitected around virtual reality headsets and legless avatars. The obvious answer is, <em>of course we were not! </em>Yet, capital flows proceeded regardless, as though market momentum alone constituted justification for upending human connection. This is the democratic deficit at the heart of the platform economy&#8212;decisions that affect billions of people&#8217;s daily lives are made by CEOs whose fiduciary obligations to shareholders render public welfare, at best, a secondary consideration and, more often, an impediment to quarterly returns.</p><p style="text-align: justify;">In the absence of meaningful government regulation, or indeed of any cohesive grassroots advocacy capable of mounting an effective counterpressure against Silicon Valley, corporations have been left to direct the trajectory of society&#8217;s development without oversight, without accountability, and without any systematic mechanism for receiving feedback. The absence of democratic input is not, in this context, an unfortunate oversight. This is a structural feature of a system in which product managers and venture capitalists are empowered to make policy decisions under the protective cover of innovation rhetoric, their failures subsidized by retail investors and pension funds while their successes accrue to private equity.</p><p style="text-align: justify;">One might ask, <em>who wanted this particular future</em>? The answer is that this future was wanted by consultants who stood to profit from its invention, by executives whose compensation was tied to stock price rather than to user satisfaction, and by technologists who confused their own preferences with those of the general public. The tragedy of the metaverse is not that it failed, but that its failure demonstrates a pattern that has repeated itself across the digital economy. The choices that corporate decision-makers present are not, in fact, the &#8220;natural consequence&#8221; of technological progress.</p><p style="text-align: justify;">Capitalism is not inherently evil&#8212;a tool is a tool, and capitalism appears remarkably good at motivating the flow of goods and services. But capitalism does not always allocate capital efficiently. In the case of the metaverse, what we have witnessed is a massive misallocation of resources toward a consultant&#8217;s pipe dream while affordable housing, public transportation, and renewable energy infrastructure remain classified as unprofitable and therefore unworthy of comparable investment. The same eighty billion dollars that financed virtual conference rooms with worse latency than a telephone might have built several hundred thousand units of housing.
Yet, the former attracted capital because it promised proprietary platforms and recurring subscription revenue, while the latter was dismissed as a social problem rather than a market opportunity.</p><p style="text-align: justify;">The pattern that emerges from this analysis is one that should concern anyone observing the unceasing advance of artificial intelligence: the same unaccountable corporate power that directed resources toward the metaverse, without evidence of public demand and without mechanisms for feedback, is now directing an even larger volume of capital toward AI systems whose social implications are even more profound, and whose deployment is occurring with even less scrutiny. <a href="https://www.forbes.com/sites/petercohan/2026/01/29/meta-beat-expectations-now-it-must-prove-its-massive-ai-spending-isnt-another-metaverse/">The institutions that failed to predict the metaverse&#8217;s collapse are now positioned to shape the next phase of technological transformation</a>, and there is little reason to believe that they have developed, in the interim, the epistemic humility or the accountability structures that might prevent a repetition of the same errors at a potentially greater scale.</p><h2>THE BREWSTER TAKE</h2><p style="text-align: justify;">The metaverse did not collapse because the technology was immature, or because the headsets were heavy, or because the latency made conversation feel like shouting across a canyon (all of which are true). It died because it mistook corporate will for consumer demand. When you spend eighty billion dollars constructing a solution for which no problem existed, the only thing standing between you and catastrophic failure is the depth of your own conviction that you know what people want better than they do.</p><p style="text-align: justify;">But a deeper truth lies beneath the bankruptcy filings and the abandoned campuses and the digital ghost towns where legless avatars still hover in empty rooms. When corporations can marshal capital at the scale of nation-states to conduct social experiments without democratic oversight, without any accountability to consumers, without the procedural checks that constrain even the most ambitious public projects, they become de facto governments: unelected, unaccountable, and prone to enthusiasms so expensive that their failures reshape economies.</p><p style="text-align: justify;">Consider what it means that a single company could lose more on virtual real estate than the annual GDP of most of the world&#8217;s actual nations. Consider what it means that the architects of this catastrophe will face no electoral consequence. No parliamentary inquiry. No process by which the public might say, <em>&#8220;this was not what we wanted, not what we needed, and not what we authorized.&#8221;</em></p><p style="text-align: justify;">And now the corporations pivot, motivated once again by the mistaken belief that there must be some technological salvation that will justify the accumulation of capital and influence that the last failed prophecy could not justify.
Seen through this logic, the artificial intelligence boom is <a href="https://www.sciencefocus.com/future-technology/hidden-forces-ai-bubble">not a correction but a continuation</a>, a doubling-down on the presumption that the future is something to be built in boardrooms and announced in keynotes rather than something to be negotiated among citizens who might have different priorities, different values, different visions of what human flourishing actually requires.</p><p style="text-align: justify;">Who decided that metaverses or AI superintelligence or billionaires jet-setting to the moon was the future <strong>you </strong>wanted? Who appointed venture capitalists and product managers and efficiency consultants to determine the trajectory of human civilization? When did we become so accustomed to grand announcements about our collective destiny that we stopped asking whether anyone had bothered to ask?</p><p style="text-align: justify;">We should all stop for a minute, and ask.</p>]]></content:encoded></item><item><title><![CDATA[De-googling My Life]]></title><description><![CDATA[After a private conversation turned into ads, I set out de-googling my life&#8212;email, cloud, phone, and browser&#8212;without going off-grid.]]></description><link>https://www.brewsterpress.com/p/de-googling-my-life</link><guid isPermaLink="false">https://www.brewsterpress.com/p/de-googling-my-life</guid><dc:creator><![CDATA[William Southerland]]></dc:creator><pubDate>Tue, 17 Mar 2026 19:25:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vtA0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F802ddb00-9570-43d4-886f-f0ffd2434097_1024x477.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For years, my husband and I lived comfortably inside Google&#8217;s ecosystem, until a moment forced us to confront digital privacy awareness head-on. What followed was a deliberate process of de-googling my life and reclaiming online privacy in a world shaped by surveillance capitalism. This post documents how I moved toward private data ownership using self hosted services, secure email privacy, and encrypted email providers, replaced Google tools with self hosted cloud storage, and adopted a privacy focused web browser and degoogled Android phone. 
It&#8217;s also about reality&#8212;balancing privacy and practicality on a modern web whose dependency on Google still shapes website search visibility and SEO indexing.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vtA0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F802ddb00-9570-43d4-886f-f0ffd2434097_1024x477.png" width="569" height="265.0517578125" class="sizing-normal" alt=""></figure></div><h2>They really were spying on me, I swear!</h2><p>During lockdown, we were having an intense, emotional conversation with our best friends on Zoom about their cat, who had just died and been cremated. This was not a search. Not a text. Not an email. A spoken conversation with our friends.</p><p>About ten minutes later, I started getting ads for cremation services.</p><p>I was horrified. This felt invasive. They were literally listening to me inside my own house and using my friends&#8217; grief to sell me things.</p><p>This experience made one thing abundantly clear: if I wanted privacy, I was going to have to actively take it back. So, I decided to remove Google from as many parts of my life as feasible&#8212;email, calendar, contacts, documents, browsing, and even my phone OS.</p><h2>Email: de-googling my life with Proton</h2><p>The first thing to go was Gmail. I <a href="https://proton.me/support/account/migrate">moved my email to Proton</a>, which offers end-to-end encryption and a business model that doesn&#8217;t depend on surveillance &#8212; you actually pay them. Money. What a concept!</p><p>Moving email providers also gave me an opportunity to finally put my custom domain to good use &#8212; why use &#8220;@gmail.com&#8221; when I can use &#8220;@williamsoutherland.com&#8221;? It&#8217;s actually pretty easy to do this, but doing it correctly means touching DNS, which scares a lot of people unnecessarily. Here&#8217;s the basic shape of what I had to do.</p><ul><li><p>Create a Proton Mail account and add your custom domain.</p></li><li><p>Verify domain ownership via a TXT record.</p></li><li><p>Add SPF, DKIM, and DMARC records for deliverability.</p></li></ul><p>Example DNS records (values will differ depending on your setup):</p><pre><code>MX   @   10 mail.protonmail.ch
MX   @   20 mailsec.protonmail.ch
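; MX: routes inbound mail to Proton (the lower preference number wins)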

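; SPF: authorizes Proton's servers to send mail for your domain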
TXT  @   "v=spf1 include:_spf.protonmail.ch ~all"

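; DKIM: public key receivers use to verify Proton's signatures (value truncated)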
TXT  protonmail._domainkey   "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

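; DMARC: asks receivers to quarantine mail that fails SPF/DKIM checks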
TXT  _dmarc   "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com"</code></pre><p>Once DNS propagated, email just worked, and Proton confirmed it instantly. Even better, Proton&#8217;s migration service imported all of my previous mail from my Gmail account. And importantly, it worked without ads, behavioral profiling, or creepy coincidences.</p><h3>Nextcloud as Google Drive</h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!s8eD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01471ef1-e43f-44c9-9398-ed15adf231e1_1024x576.png" width="433" height="243.5625" class="sizing-normal" alt="de-googling my life" title="Nextcloud Icons"></figure></div><p>Email was only the start. Next, I started de-googling my life in cloud spaces&#8212;documents, files, calendar, and contacts. For these, I self-host a Nextcloud instance, which gives me:</p><ul><li><p>Cloud storage via web UI and WebDAV</p></li><li><p>Calendar via CalDAV</p></li><li><p>Contacts via CardDAV</p></li><li><p>Easy sharing of files and documents, internally and via public links</p></li></ul><p>One unexpected benefit: interoperability actually improved. My husband&#8217;s new iPhone supports CalDAV and CardDAV natively, so his calendar and contacts sync directly&#8212;no Google account required. Even better, our Home Assistant instance can pull CalDAV data straight from Nextcloud to populate our wall-mounted tablets with shared calendar information.</p><h3>de-googling my Browser and Search</h3><p>Next up was browsing. I had long ago switched search to DuckDuckGo. Now, I doubled down and moved away from stock Chrome to a de-Googled Chromium build, Firefox, and DuckDuckGo. One great tip about data privacy I picked up along the way &#8212; decentralize. Don&#8217;t put all your data in one place; that way, it&#8217;s harder to triangulate your behavior.</p><h3>Phone: LineageOS (With One Uncomfortable Compromise)</h3><p>The most complicated step was the phone. I replaced the stock Android OS with LineageOS, which requires unlocking the bootloader and some adb/fastboot commands. Thankfully I didn&#8217;t brick my phone, and ultimately this succeeded in stripping out most of Google&#8217;s background services and gave me far more control over what talks to the network and when.</p><h3>What I Still Can&#8217;t Replace (Yet)</h3><p>There&#8217;s one place where Google remains unavoidable: search dominance. If you run websites&#8212;especially professional or business sites&#8212;Google SEO is still the only game in town. My sites have to be searchable. My content has to be indexed.
I still have to appease Google&#8217;s crawlers, metrics, and webmaster tools so people can actually find my work.</p><p>I don&#8217;t like it. But I accept it as a pragmatic boundary rather than total surrender.</p><p>De-Googling my life isn&#8217;t about disappearing from the internet or achieving some mythical technological purity. It&#8217;s about drawing lines&#8212;deciding what conveniences are worth the cost, and where they absolutely are not.</p><p>For me, the line was crossed the moment a private conversation turned into targeted ads. Everything that followed was just implementation.</p><p><em>This story originally appeared on <a href="https://www.williamsoutherland.com/tech/de-googling-my-life/">https://www.williamsoutherland.com/tech/de-googling-my-life/</a></em></p>]]></content:encoded></item><item><title><![CDATA[The 13-Minute Focus: How Agentic AI Created the World’s Densest Workday]]></title><description><![CDATA[For three years, the corporate narrative was simple: AI handles &#8220;grunt work,&#8221; humans do &#8220;strategy,&#8221; and everyone goes home early.]]></description><link>https://www.brewsterpress.com/p/the-13-minute-focus-how-agentic-ai</link><guid isPermaLink="false">https://www.brewsterpress.com/p/the-13-minute-focus-how-agentic-ai</guid><dc:creator><![CDATA[Henrik J Klijn]]></dc:creator><pubDate>Tue, 17 Mar 2026 16:57:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YJkF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c1bba1-d2ab-4834-9c1b-16378aea90fd_5137x1989.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For three years, the corporate narrative was simple: AI handles &#8220;grunt work,&#8221; humans do &#8220;strategy,&#8221; and everyone goes home early. But as of March 2026, the &#8220;shorter workday&#8221; has become a haunting irony. While the average workday has technically shrunk by 11 minutes (dropping to 8h 44m), the density of that time has reached a breaking point. 
We have entered the efficiency paradox: the more time-saving tools we adopt, the more shadow work we create to manage them.</p><h3><strong>Shadow Work Surge: Managing the Machine</strong></h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!YJkF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c1bba1-d2ab-4834-9c1b-16378aea90fd_5137x1989.jpeg" width="1456" height="564" class="sizing-normal" alt=""></figure></div><p>The <a href="https://www.activtrak.com/resources/state-of-the-workplace/">ActivTrak 2026 State of the Workplace report</a>, which analyzed 443 million hours of digital activity, has effectively killed the &#8220;automation as liberation&#8221; myth. The data shows that after adopting AI, employees don&#8217;t do less; they do <em>everything</em> more.</p><p>Rather than replacing workflows, AI is accelerating them into a chaotic feedback loop. Since 2024:</p><ul><li><p>Email activity is up 104%.</p></li><li><p>Chat and messaging (Slack/Teams) have surged 145%.</p></li><li><p>Collaboration time has jumped 34%.</p></li></ul><p>This, dear fellow muggles, is Shadow Work: the invisible labor of verifying AI drafts, fact-checking endless hallucinations, and coordinating the divergent output of multiple agents. We aren&#8217;t workers anymore; we&#8217;re all editors now.</p><h3><strong>The 13-Minute Focus and the Rise of &#8220;AI Brain Fry&#8221;</strong></h3><p>The most alarming metric of 2026 is the erosion of human focus. The average focused work session has declined 9% in just two years, now lasting a mere 13 minutes and 7 seconds.</p><p>This context-switching tax has birthed a new clinical syndrome documented by researchers from BCG and UC Riverside in <a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry">the March 5 issue</a> of <em>Harvard Business Review</em>: &#8220;AI Brain Fry.&#8221;</p><p>Unlike traditional burnout, which is a slow emotional exhaustion, &#8220;Brain Fry&#8221; is an acute cognitive overload. 14% of US workers now report symptoms of &#8220;Brain Fry,&#8221; described as a buzzing feeling, mental fog, and slower decision-making.</p><p>The Trigger: Heightened oversight.
The study found that supervising multiple AI agents requires 14% more mental effort than doing the task manually.</p><p>The Cost: Those fried by AI make 39% more major errors and are 34% more likely to quit.</p><h3><strong>The Jevons Paradox: Why the Void Never Stays Empty</strong></h3><p>Why hasn&#8217;t AI given us a Friday off? The answer lies in the Jevons Paradox, an 1860s economic observation back in vogue in 2026.</p><p>Jevons observed that as steam engines became more fuel-efficient, coal consumption <em>increased</em>, because the engine&#8217;s power became cheaper and therefore more widely used. Today, cognition is the new coal. Because AI has made &#8220;writing a report&#8221; or &#8220;quickly coding a feature&#8221; 10x cheaper, the market (or your boss) has responded by demanding 10x more reports and features.</p><p>Efficiency didn&#8217;t reduce the workload; it simply<a href="https://thehrbpstory.com/2026/02/05/jevons-paradox-why-is-it-suddenly-popular-again/"> expanded the frontier of what is expected</a>. The void left by automation was immediately filled by administrative abundance.</p>
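<p>To make the arithmetic concrete, here is a toy worked example of our own; the demand curve and the elasticity figure are illustrative assumptions, not numbers from any report. Suppose demand for reports follows a constant-elasticity curve $Q \propto p^{-\varepsilon}$, where $p$ is the effort-cost of producing one report.</p><blockquote><p>With $\varepsilon = 1.2$, a 10x drop in cost multiplies the quantity demanded by $10^{1.2} \approx 16$, so the total effort spent, $p \cdot Q \propto p^{1-\varepsilon}$, <em>rises</em> by a factor of $10^{\varepsilon - 1} \approx 1.6$. Whenever demand is elastic ($\varepsilon > 1$), making a task cheaper increases the total draw on the underlying resource. In 2026, that resource is your attention.</p></blockquote>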
<h3><strong>The Resistance: &#8220;Friction-Maxxing&#8221;</strong></h3><p>In a world of frictionless automation, humans are starting to crave the grain. A counter-culture movement called &#8220;Friction-Maxxing&#8221; (coined by Kathryn Jezer-Morton in <em>The Cut</em>, <a href="https://www.thecut.com/article/brooding-friction-maxxing-new-years-2026-resolution.html">January 2026</a>) is taking over high-performance circles.</p><p>Friction-maxxing isn&#8217;t deliberate Luddism; it&#8217;s a strategic survival tactic. It involves:</p><ul><li><p>Intentional Inconvenience: Opting for hand-written notes in meetings to force active listening.</p></li><li><p>Tool Capping: Hard-limiting personal AI stacks to three tools maximum (research shows productivity <em>declines</em> after the third tool).</p></li><li><p>Deep-Work Sanctity: Refusing AI-generated summaries in favor of reading original documents to maintain &#8220;judgment stamina.&#8221;</p></li></ul><h3><strong>The 2026 Verdict</strong></h3><p>The Efficiency Paradox of 2026 has taught us a lesson: You cannot optimize your way to leisure if your tools are designed to increase throughput.</p><p>As we move into the second half of the year, the winning organizations won&#8217;t be those with the most AI agents. They will be the ones who recognize that attention is the only truly scarce resource left. The goal is no longer to be the most efficient; it&#8217;s to be the most effective, which requires the one thing AI can&#8217;t provide: the space to think.</p><h3><strong>The Shadow Work Equation</strong></h3><p>To find your true productivity, subtract your Supervision Tax from your Automation Gain; the worksheet below also expresses the two as a weekly Shadow Work Ratio.</p><h4><strong>Part 1: The Supervision Tax (Weekly Hours)</strong></h4><p>Estimate the time spent on the following tasks:</p><ul><li><p><strong>Fact-Checking &amp; Hallucination Hunting:</strong> <br>Time spent verifying AI-generated data, citations, or logic. _______ hrs</p></li><li><p><strong>Prompt Engineering &amp; Iteration:</strong> <br>Time spent &#8220;chatting&#8221; with agents to get a usable output (vs. the time it would take to draft it yourself). _______ hrs</p></li><li><p><strong>Administrative Coordination:</strong> <br>Time spent managing AI-driven surges in email, Slack, or project tickets. _______ hrs</p></li><li><p><strong>Correction &amp; Refinement:</strong> <br>Time spent fixing &#8220;AI-voice&#8221; or stylistic errors to meet human standards. _______ hrs</p></li><li><p><strong>Total Supervision Tax (A):</strong> _______ hrs</p></li></ul><h4><strong>Part 2: The Automation Gain (Weekly Hours)</strong></h4><ul><li><p><strong>Manual Task Replacement:</strong> <br>Time saved by using AI to automate repetitive, data-heavy, or routine tasks. _______ hrs</p></li><li><p><strong>Total Automation Gain (B):</strong> _______ hrs</p></li></ul><h4><strong>Part 3: The Verdict</strong></h4><blockquote><p><strong>Calculate Your Ratio:</strong> $(A / B) \times 100$ (the short script after the tactics list below automates this)</p></blockquote><ul><li><p><strong>0% &#8211; 25% (The Optimizer):</strong> You are successfully using AI as a force multiplier. Your AI stack is tuned for efficacy.</p></li><li><p><strong>26% &#8211; 50% (The Treadmill):</strong> You are at the break-even point. You are likely experiencing <strong>High-Velocity Stagnation</strong>.</p></li><li><p><strong>51% &#8211; 100%+ (The AI Burnout Zone):</strong> You are suffering from <strong>AI Brain Fry</strong>. Your tools are creating more work than they are solving.</p></li></ul><h3><strong>Strategic Adjustments for &#8220;Friction-Maxxers&#8221;</strong></h3><p>If your score is above 50%, implement these 2026 <strong>Strategic Slowness</strong> tactics immediately:</p><ol><li><p><strong>The &#8220;Three-Tool&#8221; Cap:</strong> As identified in the BCG/UC Riverside study, focus efficiency drops 18% for every tool added beyond your third. Audit your stack and kill the redundant agents.</p></li><li><p><strong>Shadow Work Blocks:</strong> Do not check AI outputs in real time. Batch your &#8220;Supervision Tax&#8221; into a single 90-minute block at the end of the day.</p></li><li><p><strong>Human-First Drafts:</strong> For high-stakes strategy, write the first 20% of the document manually. This provides a &#8220;human anchor&#8221; that reduces the need for extensive AI refinement later.</p></li></ol>
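<p>If you would rather score the worksheet in code, here is a minimal sketch in Python. The function names and the example hours are our own illustration; only the formula and the tier cutoffs come from Part 3 above.</p><pre><code># Shadow Work worksheet, in code: weekly Supervision Tax (A) vs.
# weekly Automation Gain (B), with the Part 3 tier cutoffs.

def shadow_work_ratio(supervision_tax_hrs, automation_gain_hrs):
    """Return the Shadow Work Ratio, (A / B) * 100, as a percentage."""
    if automation_gain_hrs == 0:
        raise ValueError("Automation gain must be nonzero to form a ratio.")
    return (supervision_tax_hrs / automation_gain_hrs) * 100

def verdict(ratio_pct):
    """Map a ratio onto the worksheet's three zones."""
    if ratio_pct > 50:
        return "The AI Burnout Zone: tools are creating more work than they solve."
    if ratio_pct > 25:
        return "The Treadmill: break-even, i.e. high-velocity stagnation."
    return "The Optimizer: AI is acting as a force multiplier."

# Example week: 6 hrs of supervision tax against 8 hrs of automation gain.
tax, gain = 6.0, 8.0
net_hours = gain - tax                # true productivity, in hours
ratio = shadow_work_ratio(tax, gain)  # 75.0
print(f"Net gain: {net_hours:+.1f} hrs/week")
print(f"Shadow Work Ratio: {ratio:.0f}% -- {verdict(ratio)}")</code></pre><p>Anything above 50% means the supervision tax has swallowed most of the gain, which is exactly when the batching tactic above pays off. (By the same report&#8217;s numbers, a five-tool stack would run at roughly 67% focus efficiency, if the 18% drops compound.)</p>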
<div class="instagram-embed-wrap" data-attrs="{&quot;instagram_id&quot;:&quot;DVq8sRpgCNC&quot;,&quot;title&quot;:&quot;Harsh Songra on Instagram: \&quot;Everyone&#8217;s talking about AI product&#8230;&quot;,&quot;author_name&quot;:&quot;@harshsongra&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/__ss-rehost__IG-meta-DVq8sRpgCNC.jpg&quot;,&quot;like_count&quot;:null,&quot;comment_count&quot;:null,&quot;profile_pic_url&quot;:null,&quot;follower_count&quot;:null,&quot;timestamp&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="InstagramToDOM"></div>]]></content:encoded></item><item><title><![CDATA[The Death of the Social Playbook: Why Users Are Fleeing AI Perfection]]></title><description><![CDATA[Like a thief in the night, the &#8220;synthetic feed&#8221; arrived quietly, then cannibalized everything.]]></description><link>https://www.brewsterpress.com/p/the-death-of-the-social-playbook</link><guid isPermaLink="false">https://www.brewsterpress.com/p/the-death-of-the-social-playbook</guid><dc:creator><![CDATA[Henrik J Klijn]]></dc:creator><pubDate>Tue, 17 Mar 2026 16:46:01 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1683721003111-070bcc053d8b?fm=jpg&amp;q=60&amp;w=3000&amp;auto=format&amp;fit=crop&amp;ixlib=rb-4.1.0&amp;ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Like a thief in the night, the &#8220;synthetic feed&#8221; arrived quietly, then cannibalized everything. By the start of 2026, AI tools had democratized high-fidelity content creation so thoroughly that social platforms were effectively terraformed. Feeds became flooded with generated material: uncannily smooth videos, captions engineered by LLMs for maximum algorithmic retention, and images so polished they possessed a permanent, digital sheen.</p><p>For a brief window, the conventional industry narrative celebrated this as the ultimate liberation of creativity. It was efficiency at scale: brands could produce more, faster, and most importantly, cheaper than ever before. But by March 2026, that story is more than just fraying. We are witnessing a &#8220;haptic rejection&#8221; of digital perfection.
Audiences aren&#8217;t just scrolling past the synthetic; they are beginning to recoil from it.</p><div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1683721003111-070bcc053d8b?fm=jpg&amp;q=60&amp;w=3000&amp;auto=format&amp;fit=crop&amp;ixlib=rb-4.1.0&amp;ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" width="3000" height="1582" alt="a group of different social media logos" title="a group of different social media logos"></figure></div><h3><strong>The Rise of the Inattention Economy</strong></h3><p>The catalyst for this shift was formalized in the <a href="https://www.ogilvy.com/ideas/social-trends-2026-social-substance-return-real">Ogilvy Social.Lab 2026 Social Trends Report</a>, titled &#8220;Social with Substance &amp; the Return to Real.&#8221; Released in late January and dominating industry discourse through March, the report diagnoses a digital world drowning in AI-generated noise. The core finding is stark: while content volume is exploding, genuine connection is shrinking.</p><p>We have entered what Ogilvy calls the &#8220;inattention economy.&#8221; After years of algorithmic overstimulation, users feel more than a little alienated by the shallow nature of the infinite scroll, and that alienation has triggered what can only be described as a cultural detox. According to the report,<a href="https://newsletter.modash.io/p/social-trends-2026"> 20% of consumers</a> have deleted a social media app in the past year, and<a href="https://newsletter.modash.io/p/social-trends-2026"> 50% have turned off notifications</a> entirely to escape the ambient chaos of the feed. The 2025 social playbook (chase virality, polish relentlessly, scale via AI) is officially dead.</p><h3><strong>The Trust Gap: Gen Z and the AI Backlash</strong></h3><p>The rejection of the synthetic is most visible in the widening perception gap between advertisers and consumers. New research from<a href="https://agilebrandguide.com/iab-and-sonata-bridging-the-ai-ad-perception-gap-strategic-imperatives-for-enterprise-leaders/"> IAB and Sonata Insights</a>, released in January 2026, reveals a stark disconnect: while<a href="https://ppc.land/iab-introduces-disclosure-framework-as-gen-z-trust-in-ai-ads-plummets-19-points/"> 82% of ad executives</a> believe consumers look favorably on AI-generated ads, only 45% of Gen Z and Millennial consumers actually do.</p><p>This 37-point gap has widened significantly since 2024. For Gen Z, the skepticism runs even deeper: they are<a href="https://www.prnewswire.com/news-releases/iab-releases-industrys-first-ai-transparency-and-disclosure-framework-to-guide-responsible-advertising-in-a-generative-ai-landscape-302661683.html"> nearly twice as likely</a> as Millennials to describe brands using AI as &#8220;manipulative&#8221; (20%) or &#8220;unethical&#8221; (16%).
The overwhelming tone at<a href="https://advertisingweek.com/ces-2026-confirmed-it-gen-z-doesnt-care-how-smart-your-ai-is/"> CES 2026</a> was clear: smart AI doesn&#8217;t impress us much. Show us you&#8217;re human, and be helpful.</p><p>So is this just a Luddite rejection of technology? Absolutely not. It&#8217;s a demand for transparency. The IAB report notes that<a href="https://ppc.land/iab-introduces-disclosure-framework-as-gen-z-trust-in-ai-ads-plummets-19-points/"> 73% of Gen Z and Millennials</a> say clear disclosure (a &#8220;Human-Made&#8221; label or an AI disclaimer) would actually increase their likelihood to purchase. They don&#8217;t hate the tool; they resent the deception.</p><p>It makes sense. In a world where we already reject fake sweetener, artificial vanilla, and plastic anything, why would we suddenly swoon over sanitized ramblings, anodyne tunes, or artistic renderings of things a processor-as-heart could never grasp?</p><h3><strong>The &#8220;Return to Real&#8221;: Three New Rules of Engagement</strong></h3><p>As the synthetic feed flounders, three counter-trends have emerged to fill the vacuum.</p><p><strong>1. Intention Seeking (Saves over Scrolls).</strong> Users are moving from mindless social surfing to intentional filtering. Content is now judged by its utility or its emotional resonance rather than by its ability to stop a thumb for half a second. Platforms are responding: the<a href="https://www.hootsuite.com/research/social-trends"> Hootsuite 2026 Trends Report</a> notes that metrics like &#8220;saves&#8221; and &#8220;shares&#8221; have replaced &#8220;likes&#8221; as the primary signal of value. Users want content that adds to their lives (cozy aesthetics, slow-living vlogs, educational deep dives) rather than content that merely distracts.</p><p><strong>2. Proof of Craft (The Beauty of the Seam).</strong> In a world where AI can generate a flawless image in seconds, flawlessness has become a commodity with zero value. The new premium is proof of craft. This trend rewards visible human effort: process videos, &#8220;get ready with me&#8221; (GRWM) segments that include the mistakes, and lo-fi ads that look like they were shot on a cracked iPhone.<a href="https://www.ogilvy.com/ideas/social-trends-2026-social-substance-return-real"> Ogilvy&#8217;s report</a> highlights &#8220;Process, Patina &amp; Proof of Craft&#8221; as a rule of realness: if users can see the patina of human work (a small blooper in a voiceover, uneven lighting, the visible texture of a physical product), they are 8.7x more likely to engage.</p><p><strong>3. Internet Intimacy (Going Small to Go Big).</strong> Tired of the hostility and sameness of public feeds, users are migrating to what the industry calls &#8220;lots of little&#8221;: small, interest-driven micro-communities. This is the era of the &#8220;Human Algorithm.&#8221;<a href="https://newsletter.modash.io/p/social-trends-2026"> Modash research</a> suggests micro- and nano-creators are now more effective than mega-influencers, not because they are cheaper, but because they possess &#8220;taste trust.&#8221; In 2026, the winning strategy is no longer to broadcast to millions but to build narrative arcs and lore within tight-knit circles.</p><h3><strong>The Institutional Shift: From Attention to Meaning</strong></h3><p>The death of the old social playbook is forcing a structural change in how American institutions and brands operate.
The shift from an attention economy to an intention economy means that meaning is now the most durable competitive edge.</p><p>Who loses in this new environment? Efficiency chasers. Agencies and platforms still optimizing for volume, churning out thousands of AI-optimized posts per week, are finding themselves blocked by<a href="https://lookfamed.de/en/news/social-media-trends-2026/"> user-tuned filters</a>.<a href="https://lookfamed.de/en/news/social-media-trends-2026/"> Lookfamed</a> reports that 42% of users have already switched on content filters to scrub their feeds of filler.</p><p>Who wins? Merchant Entertainers: brands that function more like writers&#8217; rooms than marketing departments, producing episodic, by-appointment content that features recurring characters and signature visual codes. These brands use AI as a background layer for insights and data, but they keep the human anchor front and center.</p><h3><strong>The Future of the Feed</strong></h3><p>As we move toward the mid-point of 2026, the consequences of the synthetic feed&#8217;s failure are becoming permanent. We&#8217;re seeing a meme reset, with users abandoning brain-rot humor in favor of intentional, creative formats. We&#8217;re also seeing the decline of faceless corporate communication and the rise of founder-led storytelling.</p><p>The synthetic feed promised us endless creativity. Instead, it delivered a structural fatigue that made us crave the simple, the flawed, and the real. We still use social media, even if we don&#8217;t exactly love it anymore, but the pressure is on: make social media matter again.</p><p>In 2026, the winning strategy is not to flood the feed; it is to make what remains feel worth staying for. Brands that thrive will be those that realize realness is no longer an aesthetic. It is the design principle for survival.</p><div><hr></div><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/onzzg/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fdf2cd92-ba18-49f7-9a61-c0ae1548a9c9_1220x1140.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fb52e0da-3260-4194-a57f-5fccffedfc0a_1220x1210.png&quot;,&quot;height&quot;:703,&quot;title&quot;:&quot;At a glance: Old Playbook vs. 2026 Realness&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/onzzg/1/" width="730" height="703" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><div><hr></div><h3><strong>Strategic Insight: The &#8220;Human Anchor&#8221;</strong></h3><p>As noted in the<a href="https://www.ogilvy.com/ideas/social-trends-2026-social-substance-return-real"> Ogilvy Social.Lab 2026 Social Trends Report</a>, the transition from an &#8220;attention economy&#8221; to an &#8220;intention economy&#8221; is not a rejection of technology, but a rebalancing.
Brands that thrive in this new landscape will be those that treat<a href="https://newsletter.modash.io/p/social-trends-2026"> human imperfection</a> not as a flaw to be edited out, but as a &#8220;realness&#8221; signal that earns a user&#8217;s limited time and trust.</p>]]></content:encoded></item></channel></rss>