<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Frontier]]></title><description><![CDATA[How AI is changing everything]]></description><link>https://www.thefrontier.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!Q0O7!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F0469273e-a3fd-4d74-a01a-c90599837049_1024x1024.png</url><title>The Frontier</title><link>https://www.thefrontier.ai</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 09:35:13 GMT</lastBuildDate><atom:link href="https://www.thefrontier.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Tim Finnigan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thefrontierai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thefrontierai@substack.com]]></itunes:email><itunes:name><![CDATA[Tim Finnigan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Tim Finnigan]]></itunes:author><googleplay:owner><![CDATA[thefrontierai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thefrontierai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Tim Finnigan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How to Stay Irreplaceable When AI Can Do Your Job]]></title><description><![CDATA[Here&#8217;s something that should make you uncomfortable: last week, I looked at five documents from a colleague&#8217;s team.]]></description><link>https://www.thefrontier.ai/p/how-to-stay-irreplaceable-when-ai</link><guid isPermaLink="false">https://www.thefrontier.ai/p/how-to-stay-irreplaceable-when-ai</guid><dc:creator><![CDATA[Tim Finnigan]]></dc:creator><pubDate>Wed, 11 Feb 2026 19:00:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Q0O7!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F0469273e-a3fd-4d74-a01a-c90599837049_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here&#8217;s something that should make you uncomfortable: last week, I looked at five documents from a colleague&#8217;s team. Strategy decks, reports, product specs. All solid. All polished. I had no idea which human had written which one.</p><p>That&#8217;s because none of them really did. AI wrote all of it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>For the first time in modern history, being competent isn&#8217;t rare anymore. The skills that used to take years to build can now be approximated in minutes. So the question isn&#8217;t &#8220;Will AI take my job?&#8221; anymore. It goes deeper than that.</p><p>If machines can do what I do, what actually makes me valuable?</p><h2>The Trap Most People Fall Into</h2><p>The instinctive response is to run faster. Learn more tools. Ship more output. Stay ahead.</p><p>That instinct will burn you out and leave you behind anyway.</p><p>The real opportunity isn&#8217;t to be faster than AI. It&#8217;s to become something AI can&#8217;t easily replace. To become, for lack of a better term, non-fungible.</p><p>Think about what that means for a person: if you disappeared tomorrow, there&#8217;s no clean replacement. Not because you&#8217;re irreplaceable on an org chart, but because your value comes from a specific, hard-to-replicate combination of judgment, taste, experience, and perspective.</p><p>Most people are accidentally training themselves to be the opposite. Optimizing for speed. Chasing best practices. Copying high-performing formats. Specializing in tasks AI is rapidly absorbing.</p><p>If your value can be fully captured in a prompt, you should be worried.</p><p>AI loves that kind of work. That&#8217;s the trap.</p><h2>Cogs vs. Linchpins</h2><p>Seth Godin drew a crucial distinction years ago that has become even more relevant now: there are cogs, and there are linchpins.</p><p>Cogs follow instructions. They do what&#8217;s asked, stay in their lane, and optimize for not making mistakes. They&#8217;re reliable, predictable, and interchangeable.</p><p>Linchpins are different. They&#8217;re connectors. They take responsibility without asking permission. They do the emotional labor of creating trust and holding teams together. They make decisions when the instructions are incomplete or wrong.</p><p>Here&#8217;s the thing about AI: it&#8217;s phenomenal at following instructions.</p><p>It cannot decide which instructions matter.</p><p>That&#8217;s linchpin work. And AI doesn&#8217;t eliminate linchpins. It makes their absence painfully obvious.</p><p>When everyone has access to the same AI tools, the people who stand out are the ones who know what matters to build, not just how to build it. The ones who can read a room, sense when a project is going sideways, and course-correct before anyone asks them to.</p><h2>The Ultimate Intern Problem</h2><p>Think of AI as the most brilliant intern you&#8217;ve ever met. Tireless at research. Impressive at analysis. Capable of producing work at superhuman speed.</p><p>But like any intern, it lacks the wisdom that comes from lived experience. The intuition born from thousands of human interactions. The cultural fluency you develop over years of navigating messy, complicated situations.</p><p>AI excels at pattern recognition. It can analyze thousands of legal documents, generate marketing copy, write code that actually works. But it struggles with the spaces between the patterns. The subtle cultural nuances. The unspoken client concerns. 
The innovative leaps that connect ideas nobody thought to connect.</p><p>Look at how radiologists are adapting. The successful ones aren&#8217;t being replaced. They spend less time spotting obvious tumors (AI handles that brilliantly) and more time on complex cases requiring judgment, patient communication, and coordination with other specialists. They&#8217;ve become more human, not less relevant.</p><h2>What Actually Makes Humans Valuable Now</h2><p>Here&#8217;s the counterintuitive truth almost nobody is pricing in: we actually need AI.</p><p>Population growth is slowing across much of the world. Without productivity gains, economies stagnate. AI arrives at exactly the moment when labor is scarcer and complexity is higher.</p><p>Which means the remaining human workers, especially those who can span domains and make real judgment calls, aren&#8217;t becoming cheaper. They&#8217;re becoming rarer.</p><p>And rarity drives value.</p><h2>Jobs Don&#8217;t Disappear. Tasks Do.</h2><p>Everyone wants to talk about job loss. The real story is task loss.</p><p>Secretaries didn&#8217;t disappear when email arrived. Executives didn&#8217;t disappear when they had to type their own memos. The tasks moved. The roles adapted.</p><p>AI accelerates this unbundling dramatically. The people who win aren&#8217;t the ones clinging to a narrow set of tasks. They&#8217;re the ones who can absorb more scope. When tasks get automated, they adapt rather than panic.</p><h2>The Skill-Stacking Math That Changes Everything</h2><p>Here&#8217;s a piece of career math that has aged absurdly well in the AI era.</p><p>If you&#8217;re in the top 25% of two different skills, you&#8217;re in the top 6% of people with that combination. Top 25% of three skills? Top 1.5%. The intersections empty out fast.</p>
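<p>If you want to check that math, it&#8217;s one line of arithmetic. Here&#8217;s a quick sketch; note the big assumption baked in, which is that the skills are independent (correlated skills thin out the advantage):</p><pre><code># Back-of-the-envelope: what share of people sit in the top 25%
# of k different skills at once? Assumes the skills are
# uncorrelated, which real skills never perfectly are.
def intersection_share(cut=0.25, k=2):
    return cut ** k

for k in (1, 2, 3):
    print(f"top 25% in {k} skill(s): top {intersection_share(0.25, k):.2%}")

# top 25% in 1 skill(s): top 25.00%
# top 25% in 2 skill(s): top 6.25%
# top 25% in 3 skill(s): top 1.56%
</code></pre><p>The 6% and 1.5% above are just 0.25&#178; and 0.25&#179;, rounded.</p>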
<p>You don&#8217;t have to be the best at any single thing. You have to be unusually good at a specific combination.</p><p>The designer who understands constraints. The engineer who can write. The project manager who knows engineering tradeoffs.</p><p>Being good at two things is more than twice as valuable. Being good at three? You stop competing in crowded markets and start occupying empty intersections.</p><p>AI doesn&#8217;t invalidate this logic. It supercharges it. What used to take decades to learn across domains can now be compressed into years, or months, if you know how to use AI as a tutor instead of just a tool.</p><h2>The Four Things AI Still Can&#8217;t Do</h2><p><strong>Reading between the lines.</strong> While AI processes information, humans process meaning. Understanding not just what&#8217;s said, but what isn&#8217;t said. The cultural subtext. The political undercurrents. The emotional undertones.</p><p>A marketing executive I know uses AI to generate campaign ideas, but human insight determines which concepts will actually resonate across different cultures. AI might suggest a brilliant tagline, but only a human knows whether it&#8217;ll offend certain audiences or ride an emerging cultural wave.</p><p><strong>Building real trust.</strong> AI can generate connections. It cannot generate trust. The most irreplaceable people are bridges: between departments, between cultures, between ideas and execution. They&#8217;re the ones others call when they need honest advice or someone who actually gets the human side of the problem. This is linchpin territory. The emotional labor of showing up, being present, and making people feel heard. AI can simulate empathy. It can&#8217;t actually care.</p><p><strong>Making unexpected connections.</strong> AI recombines existing ideas impressively. But breakthrough innovation often comes from unexpected connections that emerge from diverse human experience. An architect I know uses AI for initial design iterations, but her real value lies in understanding how spaces make people feel. Her designs reflect something AI can&#8217;t access: how people actually want to live.</p><p><strong>Navigating ethical gray areas.</strong> As AI gets more powerful, the need for human judgment about how to use it gets more critical. The most valuable people are those who can make values-based decisions under uncertainty and help organizations use AI responsibly.</p><h2>The 3-2-1 Framework</h2><p>Here&#8217;s a simple rule for becoming non-fungible:</p><p><strong>3 years</strong> building real depth in one domain. The kind of expertise that lets you spot what AI gets wrong.</p><p><strong>2 adjacent skills</strong> where you&#8217;re operationally competent. Not world-class. Functional. Enough to collaborate without translation, prototype without permission, and see connections others miss.</p><p><strong>1 clear point of view</strong> that makes your combination legible. Not a job title. A perspective. Something that lets people understand what you uniquely bring.</p><p>Don&#8217;t be a role. Be a combination.</p><h2>Agency Is the Real Multiplier</h2><p>The trait that compounds most aggressively with AI isn&#8217;t intelligence or credentials.</p><p>It&#8217;s agency.</p><p>The mindset of: I&#8217;ll figure it out. I&#8217;ll own the outcome. I don&#8217;t need permission to start.</p><p>This is the heart of what makes a linchpin. Not waiting for instructions. Not asking for approval. Seeing what needs to happen and making it happen.</p><p>Give AI to someone without agency and you get more output, more noise, more dependency.</p><p>Give AI to someone with high agency and you get faster learning, wider scope, asymmetric impact.</p><p>Same tool. Completely different result.</p><h2>Stop Waiting to Be Picked</h2><p>Linchpins don&#8217;t wait to be picked. They pick themselves.</p><p>The tools are available to everyone now. AI included. The barrier isn&#8217;t access anymore. It&#8217;s initiative.</p><p>The people thriving right now aren&#8217;t waiting for permission. They&#8217;re using AI to learn faster, build more, and ship things that a decade ago would have taken entire teams. They&#8217;re treating AI as leverage, not as a replacement for their own judgment.</p><h2>The Paradox of This Moment</h2><p>The more powerful AI becomes, the more valuable human distinctiveness gets.</p><p>In a world of infinite content, clarity beats volume. Depth beats speed. Trust beats novelty.</p><p>The future doesn&#8217;t belong to the most optimized humans. It belongs to the most specific ones.</p><p>Use AI to speed up execution and expand your reach. But never outsource your judgment, your curiosity, your values, or your voice. If AI speaks for you, you disappear. If AI works with you, you compound.</p><p>The question isn&#8217;t whether AI will change your industry. It&#8217;s whether you&#8217;ll become the kind of human that AI makes more powerful, rather than obsolete.</p><p>Become someone who cannot be cleanly replaced. Not because you&#8217;re indispensable.
Because you&#8217;re distinct.</p><p>Become a linchpin.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Metaphors and Why They Matter]]></title><description><![CDATA[When the EU drafted its AI Act, lawmakers kept reaching for the word &#8220;tool.&#8221; When OpenAI pitched investors, they talked about &#8220;intelligence.&#8221; When researchers published safety papers, they warned about &#8220;agents.&#8221; When product managers shipped features, they introduced &#8220;assistants.&#8221;]]></description><link>https://www.thefrontier.ai/p/ai-metaphors-and-why-they-matter</link><guid isPermaLink="false">https://www.thefrontier.ai/p/ai-metaphors-and-why-they-matter</guid><dc:creator><![CDATA[Tim Finnigan]]></dc:creator><pubDate>Wed, 04 Feb 2026 19:01:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Q0O7!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F0469273e-a3fd-4d74-a01a-c90599837049_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When the EU drafted its AI Act, lawmakers kept reaching for the word &#8220;tool.&#8221; When OpenAI pitched investors, they talked about &#8220;intelligence.&#8221; When researchers published safety papers, they warned about &#8220;agents.&#8221; When product managers shipped features, they introduced &#8220;assistants.&#8221;</p><p>Same technology. Different metaphors. Different futures quietly written into law, capital allocation, and user expectations.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>If you follow AI discourse for more than five minutes, you&#8217;ll hear the same phrases on rotation:</p><p><em>It&#8217;s just autocomplete on steroids.</em></p><p><em>A calculator for words.</em></p><p><em>A stochastic parrot.</em></p><p><em>An intern who works really fast.</em></p><p><em>A country of geniuses in a data center.</em></p><p><em>The steam engine for the mind.</em></p><p>These aren&#8217;t clever descriptions. They&#8217;re compressed worldviews. Each one encodes assumptions about trust, fear, regulation, and whether we&#8217;re building a tool, a collaborator, or something closer to an institution.</p><p>The metaphors are shaping AI culture as much as the models themselves. Maybe more, because the models keep changing while the metaphors stick.</p><div><hr></div><h2>Why We Reach for Metaphors at All</h2><p>AI systems are strange in a way that resists analogy.</p><p>They&#8217;re intangible, statistical, non-human yet fluent in human language, capable of surprising outputs without anything resembling intention. There is no everyday object that maps cleanly onto &#8220;a probabilistic model trained on a large fraction of the internet.&#8221;</p><p>So we reach for metaphors to make the unfamiliar legible. This is what humans do. We did it with electricity (&#8220;current,&#8221; &#8220;flow&#8221;), with computers (&#8220;memory,&#8221; &#8220;virus&#8221;), with the internet (&#8220;surfing,&#8221; &#8220;the cloud&#8221;).</p><p>But with AI, especially large language models, metaphors aren&#8217;t just explanatory. They&#8217;re behavioral. They determine how much authority we grant these systems, how comfortable we feel outsourcing cognition to them, and who bears responsibility when they fail.</p><p>The metaphor comes first. The behavior follows.</p><div><hr></div><h2>The Deflationary Metaphors: Keeping AI in Its Place</h2><p>Some metaphors exist primarily to push back against hype.</p><p><strong>&#8220;Stochastic Parrot&#8221;</strong></p><p>Coined by Emily Bender and colleagues, this frames language models as systems that remix linguistic patterns without understanding meaning. The goal isn&#8217;t technical precision. It&#8217;s epistemic humility. The metaphor warns us not to confuse fluency with comprehension, or coherence with truth.</p><p>It&#8217;s powerful because it counters anthropomorphism. It reminds us that the model isn&#8217;t thinking, believing, or intending. It&#8217;s predicting.</p><p>But here&#8217;s the tension: parrots don&#8217;t surprise you with emergent capabilities. They don&#8217;t generalize to tasks they weren&#8217;t trained on. They don&#8217;t get better at reasoning when you make them larger. If we take the metaphor seriously <em>and</em> acknowledge that something unexpected is happening at scale, we&#8217;re left with a parrot that doesn&#8217;t behave like a parrot. That might be precisely the point, or it might be a sign the metaphor has hit its limit.</p><p>The stochastic parrot has also become tribal. Using it now signals a position in a culture war as much as it describes a technology.
That&#8217;s what happens to metaphors that work: they get captured.</p><p><strong>&#8220;Calculator for Words&#8221;</strong></p><p>Popularized by Simon Willison, this is beloved by engineers. It frames LLMs as performing mechanical operations over symbols, just not numerical ones.</p><p>It&#8217;s one of the cleanest metaphors available. It explains why models can be precise in some contexts and wildly wrong in others. Calculators don&#8217;t understand math; they implement rules. Garbage in, garbage out.</p><p>The limitation: calculators don&#8217;t improvise essays, refactor code, or tutor students through Socratic dialogue. The metaphor grounds expectations, but only for people who already know what calculators can&#8217;t do. For everyone else, it might undersell the technology&#8217;s range while overselling its reliability.</p><div><hr></div><h2>The Scaling Metaphors: When Size Changes the Story</h2><p>Other metaphors emphasize scale over mechanism.</p><p><strong>&#8220;Autocomplete on Steroids&#8221;</strong></p><p>This became popular early in the GPT era, and it&#8217;s technically accurate: LLMs are next-token predictors. The architecture is autocomplete, scaled up.</p><p>The problem is rhetorical, not factual.</p><p>Autocomplete doesn&#8217;t plan. It doesn&#8217;t reason across paragraphs. It doesn&#8217;t simulate personas, maintain context over thousands of words, or argue with itself. Calling modern LLMs &#8220;autocomplete&#8221; often functions as dismissal rather than explanation, a way of saying <em>nothing to see here</em> while the thing keeps getting more capable.</p><p>It&#8217;s a metaphor that explains mechanism while obscuring what happens when mechanism meets scale. And in AI, scale has a way of producing qualitative shifts that the original frame can&#8217;t accommodate.</p>
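<p>If the mechanism feels abstract, here&#8217;s a toy version of it: a bigram model that scores candidate next words by how often they followed the previous one. This is a sketch, not how any production model is built. Real LLMs swap the counting for a neural network over subword tokens and a vast corpus, but the basic move, score every candidate next token and favor the likely ones, is the same:</p><pre><code># A toy "autocomplete": predict the next word from bigram counts.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word . "
          "the model predicts the next token . "
          "the parrot repeats the next word .").split()

# Count how often each word follows each other word
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Rank candidate next words by observed frequency."""
    counts = bigrams[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

print(predict("the"))   # [('next', 0.5), ('model', 0.33...), ('parrot', 0.16...)]
print(predict("next"))  # [('word', 0.66...), ('token', 0.33...)]
</code></pre><p>Everything the &#8220;just autocomplete&#8221; crowd says is true of this sketch. What the sketch doesn&#8217;t tell you is what happens when the counter becomes a trillion-parameter network: the mechanism stays the same, and the behavior doesn&#8217;t.</p>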
<p><strong>&#8220;Country of Geniuses in a Data Center&#8221;</strong></p><p>This phrase, used by Dario Amodei, swings the pendulum hard in the other direction.</p><p>It&#8217;s not saying the model is conscious. It&#8217;s pointing at <em>power</em>. Millions of expert-level competencies, available instantly, housed in centralized infrastructure controlled by a handful of organizations.</p><p>A country of geniuses changes geopolitics even if no single citizen is omniscient. The metaphor doesn&#8217;t explain how AI works. It explains why it matters, and why we might want to think carefully about who owns the data center.</p><p>This is a metaphor designed for boardrooms and Senate hearings, not technical understanding. That&#8217;s not a flaw. Different contexts need different frames.</p><div><hr></div><h2>The Industrial Metaphors: AI as Infrastructure</h2><p>Some metaphors skip intelligence entirely and focus on productivity.</p><p><strong>&#8220;Steam Engine for the Mind&#8221;</strong></p><p>This frames AI as a general-purpose amplifier of cognitive labor, analogous to how the steam engine amplified physical labor during industrialization.</p><p>What it captures:</p><ul><li><p>AI isn&#8217;t just another app. It&#8217;s a platform shift</p></li><li><p>It reshapes workflows, not just individual tasks</p></li><li><p>Gains compound across the economy</p></li></ul><p>What it hides:</p><ul><li><p>Externalities (environmental, labor displacement, concentration of power)</p></li><li><p>The fact that steam engines didn&#8217;t hallucinate</p></li><li><p>Uneven access and who captures the surplus</p></li></ul><p>Like all industrial metaphors, it&#8217;s optimistic by default. Historically, that optimism tends to peak right before the regulation debates get serious.</p><p>A close cousin is Steve Jobs&#8217;s &#8220;bicycle for the mind,&#8221; a metaphor that emphasizes augmentation over replacement, human agency over automation. AI discourse increasingly borrows this frame, sometimes consciously, sometimes not. It&#8217;s a warmer, more democratic vision. It&#8217;s also, perhaps, wishful thinking dressed up as product philosophy.</p><div><hr></div><h2>The Social Metaphors: Training Ourselves to Trust</h2><p>The most influential metaphors today might not be technical at all.</p><p><strong>&#8220;Intern,&#8221; &#8220;Assistant,&#8221; &#8220;Coworker&#8221;</strong></p><p>These exploded after ChatGPT went mainstream, especially in product, UX, and management circles.</p><p>They&#8217;re useful because they encourage supervision, normalize fallibility, and suggest collaboration rather than blind authority. Nobody expects an intern to be right every time. You check their work.</p><p>But these metaphors are also doing something more subtle, and more dangerous.</p><p>Interns have intentions. Coworkers have accountability. Assistants understand context in ways that models don&#8217;t. When we frame AI as a junior colleague, we start responding to fluency as if it implied comprehension. We trust outputs <em>socially</em>, even when we know intellectually that the system has no understanding.</p><p>This is the ELIZA effect at scale: the tendency to attribute human qualities to systems that produce human-like outputs.</p><p>There&#8217;s another problem. When a company tells you to &#8220;treat it like an intern,&#8221; they&#8217;re performing a quiet transfer of liability. The product maintains its authority (it&#8217;s in your workflow, making suggestions, drafting your emails) while responsibility for errors shifts to you, the supervisor who should have checked. The intern metaphor isn&#8217;t just anthropomorphizing. It&#8217;s liability laundering.</p><div><hr></div><h2>The Metaphors We Don&#8217;t Have</h2><p>It&#8217;s worth asking: what&#8217;s <em>missing</em> from our metaphorical vocabulary?</p><p>We don&#8217;t have a widely adopted metaphor for AI as <em>mirror</em>, a system that reflects the biases and patterns of its training data back at us, revealing what we&#8217;ve written and thought and valued, whether we like the reflection or not.</p><p>We don&#8217;t have a metaphor for AI as <em>fossil fuel</em>, something powerful, transformative, and extractive, with costs that are real but deferred, unevenly distributed, and easy to ignore until they&#8217;re not.</p><p>We don&#8217;t have a metaphor for AI as <em>dialect</em>, a new form of language production that emerges from human language but isn&#8217;t reducible to it, something genuinely novel that we don&#8217;t yet have the vocabulary to describe.</p><p>The absence of these frames isn&#8217;t neutral. It shapes what we notice and what we ignore. Metaphors that don&#8217;t exist can&#8217;t do work in policy debates, product decisions, or public understanding.</p><div><hr></div><h2>The Core Problem: No Metaphor Is Stable</h2><p>Here&#8217;s the uncomfortable truth running through all of this:</p><p>There is no single good metaphor for AI.</p><p>Each one clarifies a dimension, obscures another, and encourages specific behaviors.
If a metaphor feels complete, it&#8217;s probably doing more persuasion than explanation.</p><p>AI systems are simultaneously statistical and generative, tool-like and unpredictable, narrow in mechanism but broad in impact. They produce outputs that feel creative without anything we&#8217;d recognize as creativity. They fail in ways that tools don&#8217;t fail and succeed in ways that tools don&#8217;t succeed.</p><p>Metaphors struggle because AI crosses categories we&#8217;re used to keeping separate.</p><div><hr></div><h2>Using Metaphors Situationally</h2><p>Instead of asking &#8220;What is AI?&#8221; and reaching for a universal answer, try a better question: &#8220;What metaphor is appropriate for this context?&#8221;</p><p>Debugging code? &#8594; Calculator. Expect precision in syntax, not judgment about architecture.</p><p>Brainstorming? &#8594; Collaborator. Treat outputs as starting points, not conclusions.</p><p>Policy and governance? &#8594; Amplifier with externalities. Focus on concentration, access, and second-order effects.</p><p>Education? &#8594; Tutor with hallucinations. Useful for explanation, dangerous for facts.</p><p>Geopolitics? &#8594; Cognitive infrastructure. Think about who controls it and what that control enables.</p><p>Metaphors should be situational, not tribal. The person who uses &#8220;stochastic parrot&#8221; in a safety discussion and &#8220;creative collaborator&#8221; in a brainstorming session isn&#8217;t being inconsistent. They&#8217;re being precise.</p><div><hr></div><h2>Four Questions Before Using a Metaphor</h2><p>Before reaching for an AI metaphor in conversation, in writing, in product copy, in policy testimony, ask:</p><ol><li><p><strong>What behavior does this metaphor encourage?</strong></p></li><li><p><strong>What does it hide or downplay?</strong></p></li><li><p><strong>Who benefits from this framing?</strong></p></li><li><p><strong>What failure mode does it make harder to see?</strong></p></li></ol><p>If you can&#8217;t answer these, the metaphor is doing more work than you realize. And possibly not the work you intend.</p><div><hr></div><h2>Metaphors Are Inputs Too</h2><p>We spend enormous energy learning how to prompt AI systems. We tune our inputs, refine our instructions, iterate on our queries.</p><p>We spend far less time thinking about how we&#8217;re prompting ourselves.</p><p>Metaphors are cognitive inputs. They shape trust, fear, delegation, and responsibility. They influence policy debates before any policy is written, product decisions before any feature is shipped, user behavior before any documentation is read.</p><p>We don&#8217;t need to eliminate AI metaphors. We need to treat them as provisional tools rather than settled truths, useful in context, dangerous when they harden into the only way we can see.</p><p>Because once a metaphor stops being useful and we keep using it anyway, it stops explaining the system.</p><p>It starts quietly mistraining us instead.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why Your Taste Matters More Than Ever in the Age of AI]]></title><description><![CDATA[I asked ChatGPT to write me a novel.]]></description><link>https://www.thefrontier.ai/p/why-your-taste-matters-more-than</link><guid isPermaLink="false">https://www.thefrontier.ai/p/why-your-taste-matters-more-than</guid><dc:creator><![CDATA[Tim Finnigan]]></dc:creator><pubDate>Wed, 28 Jan 2026 18:34:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Q0O7!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F0469273e-a3fd-4d74-a01a-c90599837049_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I asked ChatGPT to write me a novel. In thirty seconds, it delivered a complete 80,000-word science fiction epic with aliens, romance, and plot twists. The grammar was perfect. The structure followed every rule in the screenwriter&#8217;s handbook. And it was absolutely, mind-numbingly boring.</p><p>This little experiment crystallized something I&#8217;ve been thinking about as AI tools become increasingly sophisticated: we&#8217;re entering an era where <strong>technical competence is becoming commoditized</strong>, but taste remains irreplaceably human. When anyone can generate professional-looking content with a few prompts, the ability to recognize what&#8217;s actually good becomes the ultimate differentiator.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Key Takeaways</strong></h2><ul><li><p><strong>Taste is the new scarcity</strong>: As AI democratizes content creation, aesthetic judgment becomes the primary competitive advantage</p></li><li><p><strong>Human-AI collaboration works best</strong>: The most compelling AI-assisted work combines machine capability with human curatorial vision</p></li><li><p><strong>New creative roles are emerging</strong>: AI art directors, prompt artists, and content curators represent entirely new professional categories</p></li><li><p><strong>Cultural implications run deep</strong>: How we develop and apply taste in the AI era will shape the future of human creativity and cultural expression</p></li></ul><h2><strong>The Great Creative Leveling</strong></h2><p>We&#8217;re witnessing the most dramatic democratization of creative tools in human history. 
A decade ago, creating professional-looking graphics required years of Photoshop training. Writing compelling copy demanded deep understanding of language and persuasion. Music production was locked behind expensive software and hardware.</p><p>Today, <strong>Midjourney can generate stunning visuals from simple text descriptions</strong>. Claude can write compelling marketing copy in any style you specify. Suno can compose and perform original songs in minutes. The technical barriers to creation have essentially collapsed.</p><p>But something interesting happened when everyone gained access to these superpowers: the results started looking remarkably similar. Browse AI-generated art on any platform and you&#8217;ll notice recurring aesthetic patterns. The same ethereal lighting. The same digital painting style. The same compositional choices.</p><p>This is the AI aesthetic trap. <strong>When artificial intelligence learns from existing human work, it gravitates toward statistical averages</strong>. It produces what&#8217;s most commonly associated with &#8220;good&#8221; rather than what might actually be exceptional or innovative.</p>
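<p>You can see that pull toward the average in miniature. Here&#8217;s a sketch of how generative sampling behaves; the style counts are invented purely for illustration. Lowering the sampling &#8220;temperature,&#8221; the knob that trades diversity for polish, hands nearly all the probability to whatever was most common in the training data:</p><pre><code># Why sampling drifts toward the statistical average. The style
# counts below are made up purely for illustration.
import math

style_counts = {"ethereal lighting": 60, "gritty realism": 25,
                "collage": 10, "brutalist": 5}

def softmax(scores, temperature=1.0):
    """Turn scores into probabilities; low temperature sharpens them."""
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Log-counts as scores, so temperature 1.0 mirrors the data exactly
scores = {k: math.log(c) for k, c in style_counts.items()}

for t in (1.0, 0.5):
    dist = softmax(scores, t)
    print(f"T={t}: " + ", ".join(f"{k} {p:.0%}" for k, p in dist.items()))

# T=1.0: ethereal lighting 60%, gritty realism 25%, collage 10%, brutalist 5%
# T=0.5: ethereal lighting 83%, gritty realism 14%, collage 2%, brutalist 1%
</code></pre><p>Turn the temperature down far enough and every sample is ethereal lighting. The math rewards the mode, and the mode is the average.</p>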
<h2><strong>AI as Creative Catalyst</strong></h2><p>Think of great art as an invitation. Literature doesn&#8217;t give you instructions about how to live, but it expands your sense of what&#8217;s possible. A film doesn&#8217;t solve your problems, but it might help you see them differently. Architecture doesn&#8217;t tell you where to go, but it shapes how you move through space.</p><p>AI works the same way in creative practice. It doesn&#8217;t replace judgment, but it <strong>multiplies the space of possibility</strong>. By making iteration cheap and exploration fast, AI functions like exposure to great art: it creates motion, opens doors, and lets you move more quickly through ideas.</p><p>The difference between this and traditional creative inspiration is speed and scope. Where you might spend weeks developing a single concept manually, AI lets you prototype dozens of variations in hours. This isn&#8217;t about automating creativity. It&#8217;s about <strong>accelerating the feedback loop between imagination and execution</strong>.</p><p>Consider how filmmaker <strong>Denis Villeneuve</strong> approaches pre-visualization for his science fiction films. He doesn&#8217;t just storyboard key scenes; he explores dozens of visual possibilities to find the ones that best serve the story&#8217;s emotional core. AI tools now make this kind of rapid visual exploration accessible to creators who previously couldn&#8217;t afford teams of concept artists.</p><h2><strong>Where Humans Still Reign Supreme</strong></h2><p>The catch? Both great art and powerful AI reward people who already know what &#8220;good&#8221; looks like.</p><p>Art doesn&#8217;t teach taste directly. You need context, experience, and aesthetic literacy to recognize what a piece is inviting you toward. AI delegation works similarly: the scarce resource isn&#8217;t execution, it&#8217;s <strong>judgment</strong>. Knowing what to ask for, how to evaluate results, and when something feels off requires the modern equivalent of aesthetic literacy.</p><p>Consider how <strong>Refik Anadol</strong> approaches his AI art installations. He doesn&#8217;t just prompt an AI system and display whatever emerges. Instead, he curates massive datasets, guides the training process, and then selects from thousands of generated possibilities to create cohesive artistic statements. His taste shapes every step of the process, from data selection to final presentation.</p><p>Or look at how <strong>Holly Herndon</strong> trained an AI system on her own voice. The technology handled the complex audio processing, but her aesthetic vision determined how to integrate these AI vocals into emotionally resonant compositions that feel distinctly human despite their artificial elements.</p><h3><strong>The Three Pillars of AI-Era Taste</strong></h3><p><strong>Curation</strong>: When AI can generate hundreds of variations, knowing which ones deserve attention becomes crucial. This isn&#8217;t just about picking the &#8220;prettiest&#8221; option. It&#8217;s about understanding context, audience, and purpose.</p><p><strong>Direction</strong>: The best AI-assisted creators don&#8217;t just prompt and accept. They engage in iterative conversations with AI systems, refining and redirecting based on their aesthetic intuitions.</p><p><strong>Integration</strong>: Perhaps most importantly, taste helps determine how to blend AI capabilities with human insight, knowing when to lean on the machine and when to assert creative control.</p><h2><strong>The New Creative Professionals</strong></h2><p>As AI reshapes the creative landscape, entirely new professional roles are emerging around the intersection of technology and taste.</p><p><strong>AI Art Directors</strong> understand both the capabilities of generative systems and the principles of visual design. They can coax specific aesthetic qualities from AI tools while maintaining consistent creative vision across projects.</p><p><strong>Prompt Artists</strong> have developed sophisticated techniques for communicating with AI systems. But the best ones aren&#8217;t just technically proficient: they bring aesthetic sensibilities that guide their prompting strategies toward more compelling outcomes.</p><p><strong>AI Content Curators</strong> can quickly evaluate large volumes of AI-generated material and identify the pieces worth developing further. They function like talent scouts in a world where everyone has access to the recording studio.</p><p>These roles didn&#8217;t exist five years ago. They represent a new category of creative professional who combines technological fluency with aesthetic judgment.</p><h2><strong>The Homogenization Problem</strong></h2><p>There&#8217;s a darker side to this AI creative revolution. When everyone uses the same tools trained on similar data, we risk aesthetic convergence: a flattening of creative expression toward whatever the algorithms consider optimal.</p><p><strong>We&#8217;re already seeing this happen</strong>. Instagram influencers increasingly look identical thanks to AI-powered beauty filters. AI-generated art often shares recognizable stylistic signatures. Even AI writing tends toward similar sentence structures and vocabulary choices.</p><p>This is where human taste becomes not just valuable, but essential for cultural diversity. Good taste often involves <strong>deliberately moving away from what&#8217;s popular or expected</strong>. It requires understanding current trends well enough to meaningfully subvert them.</p><p>The most interesting AI-assisted creators are those who fight against the algorithms&#8217; tendency toward sameness, using their aesthetic judgment to push AI systems in unexpected directions.</p><h2><strong>The Curation Economy</strong></h2><p>We&#8217;re shifting from an economy of creation scarcity to one of attention scarcity.
When anyone can generate professional-quality content, <strong>the bottleneck becomes figuring out what&#8217;s worth paying attention to</strong>.</p><p>This transforms taste from a luxury into an economic necessity. In a world flooded with AI-generated content, the people who can consistently identify and develop the most compelling material will capture disproportionate value.</p><p>Think about how this plays out in different industries:</p><p><strong>Marketing agencies</strong> now use AI to generate dozens of campaign concepts, but creative directors apply their taste to select and refine the ideas worth presenting to clients.</p><p><strong>Publishers</strong> might use AI to generate hundreds of book cover designs, but rely on human aesthetic judgment to choose covers that will actually drive sales.</p><p><strong>Music producers</strong> can create unlimited instrumental variations, but taste determines which combinations will resonate with human audiences.</p><p>In each case, <strong>taste acts as the crucial filter</strong> between AI&#8217;s raw generative power and outcomes that actually matter to people.</p><h2><strong>The Paradox of Infinite Choice</strong></h2><p>Counterintuitively, having unlimited creative options through AI often leads to more conservative choices. When you can generate infinite variations of anything, decision paralysis sets in. People gravitate toward safer, more conventional options because they seem like solid ground in an ocean of possibilities.</p><p>This is where developed taste becomes a practical skill. <strong>Good aesthetic judgment includes knowing when to stop iterating</strong>. It involves developing confidence in your creative instincts even when you could theoretically explore endless alternatives.</p><p>The best AI-assisted creators often deliberately constrain their options, using their taste to define productive boundaries rather than exploring every possible direction the technology could take them.</p><h2><strong>Developing Taste in the AI Era</strong></h2><p>If taste is becoming the crucial differentiator, how do we develop it? Traditional advice about consuming lots of great art and literature still applies, but the AI era adds new dimensions.</p><p><strong>Study AI capabilities deeply</strong>. Understanding what these systems can and can&#8217;t do helps you apply them more thoughtfully. Know where the algorithms tend toward clich&#233; so you can consciously push in other directions.</p><p><strong>Practice rapid evaluation</strong>. When AI can generate content quickly, you need to develop equally quick aesthetic judgment. This comes through repeatedly making choices about what works and what doesn&#8217;t.</p><p><strong>Maintain cultural awareness</strong>. AI systems often lag behind current cultural moments. Staying attuned to emerging trends and social shifts helps you guide AI in more relevant directions.</p><p><strong>Collaborate with the machines</strong>. The future belongs to humans who can work fluidly with AI systems, using technology as a creative partner rather than a replacement for human judgment.</p><h2><strong>The Invitation Economy</strong></h2><p>Both art and AI create invitations rather than instructions. Great literature invites you to see the world differently. Powerful cinema invites emotional engagement. AI tools invite creative exploration.</p><p>But invitations require someone capable of accepting them. 
The advantage belongs to people who have been trained to recognize quality, articulate intent, and steer work toward something meaningful. This training often happens accidentally through exposure to great work across multiple disciplines.</p><p><strong>The invitation is everywhere. The difference is who knows how to accept it.</strong></p><p>In creative fields, this means developing what we might call &#8220;aesthetic management skills.&#8221; Just as traditional managers coordinate human teams toward shared goals, creative professionals now need to coordinate human insight with AI capabilities. The best results emerge when human taste guides machine execution toward outcomes that neither could achieve alone.</p><h2><strong>Looking Forward</strong></h2><p>We&#8217;re still in the early stages of this transformation. Current AI systems are impressive but limited in their aesthetic judgment. They can mimic styles but struggle with genuine innovation or cultural commentary.</p><p>But that may not matter. <strong>The most exciting creative work emerging from this AI revolution isn&#8217;t trying to replace human taste: it&#8217;s amplifying it</strong>. We&#8217;re seeing new forms of human-AI collaboration that would have been impossible just a few years ago.</p><p>The future likely belongs to creators who can seamlessly blend their aesthetic judgment with AI capabilities, using technology to explore creative territories that neither humans nor machines could reach alone.</p><p>Your taste, your ability to recognize what&#8217;s compelling, meaningful, and worth sharing, isn&#8217;t becoming less important in the age of AI. It&#8217;s becoming the most valuable creative skill you can develop.</p><p>While everyone else is learning to prompt, you should be learning to choose. In a world where anyone can create, only humans can decide what should exist.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.thefrontier.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Frontier! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>