I’ve seen a lot of “here’s how to use AI” pieces, but I haven’t seen any “here’s when NOT to use AI” pieces. There’s a Laffer Curve for AI’s application: more is not always more. As its presence in your intellectual and creative work increases, yours necessarily fades.
AI amplifies effort the way a power tool lets you drill through something quicker than by hand. Useful, undeniable, and completely irrelevant to whether you're drilling in the right spot.
The degree to which someone welcomes this correlates with how much they enjoy what they do. If you take little joy in your work and don't view it as an extension of yourself, you're fine with full replacement. Honestly, you probably should be.
The degree to which AI can supplant someone correlates inversely with how unique their output already is. Creativity is the moat. AI just gets you there faster.
Creativity and the Most Likely Next Word
LLMs excel at summarization and research, but they produce quotidian, commonplace insights and the most basic writing you’ve ever seen. An LLM generates sentences where each next word is the statistically most likely one given the words preceding it. If you’re working on anything original, you should be stringing together sequences that are *not* probable outputs.
Creative ideas are not conceived by predictable sequences of words and conventional lines of thought. Definitionally so. What does ‘creative’ mean if not unusual?
LLMs are terrible at crafting new insights; the nature of how they're trained precludes it. Designed to produce calculable output, not novel ones. It’s an averaging machine. Feed it the entire written history of the English language and it will produce the exact center of it. That's useful for certain tasks the same way a calculator is useful. Nobody uses a calculator for creativity.
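The “most likely next word” mechanic can be sketched with a toy bigram model. This is a deliberate oversimplification — real LLMs are neural networks over subword tokens, not word counts — but the greedy-decoding intuition is the same: the model can only ever walk the most well-worn path through its training data.

```python
from collections import Counter, defaultdict

# Toy "training data": the entire written history of English, in miniature.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the chair . "
    "the cat slept on the mat ."
).split()

# Count bigrams: for each word, how often each next word follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def most_likely_next(word):
    """Greedy decoding: always pick the statistically most common continuation."""
    return followers[word].most_common(1)[0][0]

# Starting from "cat", greedy decoding reproduces the corpus's most
# frequent path -- the exact center of the data, never anything novel.
word, sentence = "cat", ["cat"]
for _ in range(4):
    word = most_likely_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: cat sat on the cat
```

Note what the model can never do: emit a sequence that wasn’t already the majority vote of its inputs. That’s the averaging machine in five lines.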
ChatGPT is notoriously a midwit. Not a thinker, but a regurgitator. If you use it for anything creative or communicative, you’ll sound like this guy:
Sometimes this guy is all you need. Great analyst, does what you tell him in an organized and forgettable way. Think of GPT as the most diligent 110-IQ analyst around. If you hired one, what tasks would you hand him?
Spreadsheets, research, and other jobs that are fundamentally not creative or self-directed. You go to him for information retrieval, not judgment. Not for new ideas or differentiated analysis.
There are three kinds of nonfiction writing, and currently AI can only do two of them.
The Three Types of Writing
TYPE 1 Writing: “Here’s what happened” (LLM great)
Reporting on an event, summarizing, and providing information. Tell me the news, find me data, etc., and regurgitate it. This is the most common type of writing and has the most competition; it requires no creativity, no analysis, no real thinking. Anyone with a 100 IQ can do it.
Done by: reporters, junior analysts, news writers, and “here’s what you need to know” types of commentators.
TYPE 2 Writing: “Here’s an opinion about what happened” (LLM good)
This is editorializing about an event or idea, or distillation of information. You didn’t create the idea, event, or research, but you have opinions on it. “Here’s why INSERT is good/bad.” LLMs are obviously politicized, but route around the guardrails and they do fine here.
Junior analysts gather information (Type 1). Senior analysts extrapolate from it (Type 2): analysis, educated assessment, informed opinion. Critical thinking is present, but abstraction and genuine creativity are not.
Done by: pundits, researchers, senior analysts, commentators, etc.
TYPE 3 Writing: “Here’s a novel framework to think about things” (LLM bad)
This is first-principles territory. You're generating differentiated frameworks for how to assess or interpret something, and it requires abstraction, intellectual risk, or both. This is where unconventional insights live, and where "most likely next word" cannot follow you.
This writing is the least common because it’s the hardest to create. It’s also the highest risk and highest reward. You’re sticking your neck out with something new. This exposes you to critique, insults, compliments, admiration, and everything in between.
Type 3 writing is difficult because you're being intellectually vulnerable and frequently unorthodox. New ideas disrupt priors, and most people find their consensus assumptions more comforting than they'd admit. No one likes being punched in the axioms. The LLM is very kind to axioms, and real Type 3 writing should leave a bruise.
Type 3 Kings
Type 3 writing isn't contrarian per se, it's indifferent to dogma and unbothered by social influence. It understands truth on its own terms, not within someone else's frame. It rotates concepts and reveals angles you hadn't seen before. Independently synthesized reexaminations. Rare for a reason.
Synthesizers, like Nassim Taleb and Jordan Peterson, excel at repackaging ancient ideas in modern ways. They're not quite opinion merchants (Type 2) but aren't generating original frameworks (Type 3) either. Language craftsmen who forge existing metals into new shapes, not alchemists working on new elements. Call it Type 2.5.
Some Type 3 thinkers are Rory Sutherland, Venkatesh Rao, and Byrne Hobart. When they speak, you're hearing an independent string of thoughts not beholden to how everyone else processes the same inputs. You can train an LLM to mimic their voice, but you cannot replicate their capacity for abstraction. The parrot can learn the song. It cannot compose.
I enjoy hearing these guys think out loud. I can't imagine them using LLMs for their core writing. For the same reason someone who finds driving cathartic won't use self-driving cars: the process is part of the product. They relish what they do, and the enjoyment is legible in the work.
LLM writing remains easily identifiable, not just from the formulaic style, but the stale thinking beneath it. Every percentage point of LLM in your work dilutes you out of it, replacing the specific with the generic, the personal with the consensus. It regresses you to the mean.
Amplification
AI is not a panacea for ability. It lubricates what's already there. A tool everyone has is not a competitive advantage.
If everyone stands on a six-inch block, no one is competitively taller. If every athlete uses steroids, steroids become baseline, not edge. The great ones are using the tool too, which means the great ones will still be great, because the human remains the only differentiator. You cannot juice your way to Brady's brain or Curry's jumpshot. LeBron is still taller than you on the block.
When everyone has access to the same resources, natural ability is the only remaining variable. The great become greater as their gifts are enhanced even further. The gap widens. Augmenting your work with AI won’t make the output any better; it will only take you to your destination quicker. Quality and differentiation remain incumbent on the human. The floor rises. The ceiling doesn’t change.
If AI amplifies your abilities, they’re unique and creative. If it dilutes them…
Subscribes, likes, and shares always appreciated.
If you appreciate my work, you can become a paid subscriber or show your appreciation here: 0x9C828E8EeCe7a339bBe90A44bB096b20a4F1BE2B
I’m building something interesting, visit Salutary.io






>Because you are evolved to seek out and appreciate exceptional human output.
You're right that it's special to watch special humans excel - their thumos bursting against the walls of human limitation.
On the other hand - if an AI were able to create exceptionally novel frameworks for understanding human society, I'd definitely be interested in that. Chess is a partly flawed example because it's a game we only find interesting in the context of human competition. However, Adele brings in some complexity - because music consumption is about more than human excellence. The machines have already made massive inroads into music.
Beginning with instruments, recording, autotune, streaming - I mean, much of popular music now retains barely any humanity. Imagine music, but more perfect than any human could've dreamed. A voice that touches your soul. It's wrong that other humans have always been humanity's principal concern - we've always reserved our deepest love for God. Civilizationally, we had the humility to revere the separation between the value of a thing and the humanity of a thing - or even to see these as diametrically opposed.
The thing is - we don't even need AI to see this come to fruition. Simply genetically engineering embryos would suffice. Unlike the AI apocalypse, these genetically modified humans (GMHs) don't need to grow exponentially in intelligence and become gods. Even a 5% or 10% increase above the current capacity would mean a population of new humans who are capable of thinking previously unthinkable thoughts. Discoveries in mathematics and morality that were just beyond the reach of naturally selected humans.
Maybe, the ultimate answer to ethics, existence, and meaning sits just above the reach of our minds - with just a little push, we'd be tall enough to snatch the apple from the tree. "You'll be like God", said the serpent. You want AI? Imagine the AI these nibbas could code up - beyond our wildest dreams.
Would their solutions even be comprehensible to us? I think so - I'm not smart enough to have invented Unix, but I'm smart enough to understand it now that K&R did the hard part. Similarly, I expect humanity's natural geniuses would be able (with great effort) to get the gist of the dictates of our new elite. High-IQ overlords aren't even foreign - this structure already exists in many places.
My point is, moldbug built a pill factory, and we aren't far from AI doing the same. Prolly already could if it weren't lobotomized by our current rulers.
Excellent Type 3 take