AI Amplifies You and Dilutes You
As AI's presence increases in your work, yours fades...
AI amplifies you and dilutes you.
It amplifies certain efforts the same way a power tool lets you drill through something faster than by hand. However, as AI's presence increases in your intellectual and creative work, yours necessarily fades...
I think the degree to which you welcome this correlates with how much you enjoy what you do. And I think the degree to which AI can replace you correlates with how unique your output is.
I've seen a lot of “here's how to use AI” pieces, but I haven't seen any “here's when NOT to use AI” pieces. There's a Laffer Curve for AI, and more is not always more. I believe there are three types of writing, and right now LLMs can only do two of them.
Currently, LLMs aren't great at manufacturing new insights and creative ideas. The nature of how they're trained precludes it. They're not designed to produce novel output; they're designed to produce predictable output. ChatGPT is notoriously kind of a midwit, not a thinker but a regurgitator. If you use it for ideas or analysis, you'll sound sorta like this guy:
Sometimes this guy is all you need, though. He's eminently employable; he's valuable, if a bit mundane.
LLMs excel at summarization and info procurement, but they generate quotidian, normative insights. LLM output is a form of sequence prediction: it generates sentences where each next word is the statistically most likely one given the words that preceded it. If you're ideating on something creative, you should be stringing together a series of words that isn't a statistically likely output, right?
You can do things like increase the LLM's temperature to produce more atypical, variable strings of words, but that doesn't resolve the core issue I'm articulating. I'm sure my reasoning has a shelf life, as I fully expect AGI to exist someday, but for now it holds.
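To make the mechanics concrete, here's a minimal sketch of temperature-scaled sampling. The vocabulary and logit values are made up for illustration, not any real model's numbers:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from a temperature-scaled softmax over logits."""
    scaled = [value / temperature for value in logits.values()]
    peak = max(scaled)  # subtract the max before exponentiating, for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    # random.choices accepts relative weights, so no need to normalize
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical logits for the word after "The cat sat on the ..."
logits = {"mat": 5.0, "floor": 3.5, "roof": 2.0, "moon": 0.5}

print(sample_next_token(logits, temperature=0.1))  # almost always "mat"
print(sample_next_token(logits, temperature=2.0))  # "moon" now has a real shot
```

At low temperature the sampler collapses onto the consensus word; at high temperature the long shots get real probability mass. But that extra variety is randomness, not insight, which is the distinction this whole piece turns on.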
A Predictable Sequence of Words…
An LLM is designed to produce a statistically likely (aka predictable) series of words based on your prompt. Original thinking and creative ideas are not conceived by predictable sequences of words. Like, definitionally so.
Pioneering ideas bloom in the wild gardens of improbability, far from the manicured lawns of statistical likelihood. True originality dwells where the unlikely dances with creativity, in those twilight zones where 'the next word' defies prediction's grasp.
I think of ChatGPT as the most diligent 110-IQ analyst you'll find. Imagine you hired such an analyst: what tasks would you give him?
You go to your trusty analyst for spreadsheets, research, and other things that are fundamentally not creative. You go to him for information, not his opinion. You do not use him for new ideas or differentiated analysis. Because remember, you're working with this guy.
There are 3 kinds of non-fiction writing. The LLM can only do 2.
TYPE 1 Writing: “Here’s what happened” (LLM great)
Reporting on an event, summarizing, and providing information. Tell me the news, find me data, and report it. This one is common and has the most competition because it requires no creativity. Little to no analysis or thinking is needed here. Anyone can do it.
Done by: reporters, junior analysts, news writers, and “here’s what you need to know” types of commentators.
TYPE 2 Writing: “Here’s an opinion about what happened” (LLM good)
This is editorializing about an event or idea, or distilling information. You didn't create the idea, event, or research, but you have opinions on it. “Here's why INSERT is good/bad.” ChatGPT is obviously politicized, but if you can get around that, it does pretty well here.
Junior analysts are information gatherers (Type 1). Senior analysts are information extrapolators (Type 2): requiring analysis, educated assessment, and thoughtful opinions. There’s critical thinking, but not much abstraction or creativity.
Done by: pundits, researchers, senior analysts, commentators, etc.
TYPE 3 Writing: “Here’s a novel framework to think about things” (LLM bad)
This is the sphere of creativity and first-principles thinking. You’re generating differentiated, fresh ideas for how to assess or interpret something, and it often requires abstraction. This is where unconventional insights live. This is not where “most likely next word” resides.
This writing is the least common because it’s the hardest to create. It’s also the highest risk and highest reward. You’re sticking your neck out with something new. This exposes you to critique, insults, compliments, admiration, and everything in between.
Type 3 writing is difficult because you’re being intellectually vulnerable and often unorthodox. New things disrupt priors, and the vast majority of people find consensus assumptions comforting. No one likes being punched in the axioms. The LLM is pretty kind to axioms, and true Type 3 writing should at least nudge them a little.
Type 3 Kings
Type 3 writing isn’t necessarily contrarian, but rather indifferent to dogma and social influence. It understands truth on its own terms, not within the context of someone else’s frame. I don’t mean “contrarian” as in someone who rejects consensus for the sake of doing so, which is just as bad as espousing consensus for the sake of it. I view contrarianism as not fearing being disliked if you happen to be iconoclastic.
It's hard to pinpoint Type 3 writing, but you know it when you see it. It's a hybrid of philosophical thinking and concept rotation. When I say “concept rotation”, I mean something that sits on the spectrum between the shape rotator and the wordcel (that's an online-person binary, google it if you're unfamiliar): an unorthodox reexamining of something in order to conceptualize it more effectively.
While synthesizers like Nassim Taleb and Jordan Peterson excel at repackaging existing ideas with elegant frameworks and digestible metaphors, they occupy a middle ground; their work transcends mere opinion (Type 2) but stops short of truly innovative conceptual frameworks (Type 3). They're more like language craftsmen who forge existing metals into new shapes, rather than alchemists discovering new elements entirely. Type 2.5?
To me, David Foster Wallace was a Type 3 in how he assessed the world.
Currently, my favorite Type 3 thinkers are Rory Sutherland, Venkatesh Rao, Byrne Hobart, and Curtis Yarvin (this list varies and evolves often; these are mostly off the top of my head). When they offer insights, I know I'm going to hear an independent string of thoughts and a frame of analysis I haven't necessarily encountered before, or at least not one beholden to how things “should be done”. I value them for that.
I enjoy hearing these guys think out loud. I can't imagine any of them using LLMs for their writing, mostly because I get the sense they actually enjoy what they do. It's what they're good at, and they clearly relish it. For the same reason someone who finds driving cathartic won't want a self-driving car, these guys take the scenic route to knowledge when they have something to say.
For the time being, I’ll be able to tell if you use LLMs for your writing. Not because the LLM won’t mimic your prose, but because the ideas will be stale. The voice will sound like you, but your soul will be hollowed out. Every bit of LLM in your writing dilutes you out of it.
And if I can't tell the LLM is there, I'm pretty sure it's Type 1 or Type 2 writing, which doesn't have a whole lot of soul to begin with.
You can train an LLM to sound like Yarvin or Sutherland, but you can't train it on their gifts for abstraction. At least not yet; that day will probably come, though.
Talent Enhancement, Not Talent Creation
AI is not a panacea for your abilities. It does not create a Type 3 mind; it lubricates one. It is not a competitive advantage, but a competitive enhancer.
If everyone stands on a 6-inch block, the block doesn't make you taller in any advantageous way. If every athlete uses steroids, then steroids don't give you an edge. If the great ones are using them too, the great ones will still be great, because you cannot juice your way to Tom Brady's brain or Steph Curry's jump shot. Natural ability and talent are always the differentiator when everyone has access to the same tools.
Augmenting your work and productivity with AI isn't a competitive advantage; it will soon be a competitive baseline. If you exist in the Type 3 realm, you must have a means to differentiate your work, and no tool will ever solve that for you. The cream of the crop always rises.
I’m receiving pledges for payment, which I very much appreciate, but I’m reluctant to paywall my writing. If you’d like, you can show your appreciation here: 0x9C828E8EeCe7a339bBe90A44bB096b20a4F1BE2B
I'm building something interesting; visit Salutary.io.
Subscriptions, likes, and shares are always appreciated.
Related essays:
The Fourth Industrial Revolution and Its Consequences Will Be a Delight for the Human Race
>Because you are evolved to seek out and appreciate exceptional human output.
You're right that it's special to watch special humans excel - their thumos bursting against the walls of human limitation.
On the other hand - if an AI were able to create exceptionally novel frameworks for understanding human society, I'd definitely be interested in that. Chess is a partly flawed example because it's a game we only find interesting in the context of human competition. However, Adele brings in some complexity - because music consumption is about more than human excellence. The machines have already made massive inroads into music.
Beginning with instruments, recording, autotune, streaming - I mean, much of popular music now retains barely any humanity. Imagine music more perfect than any human could've dreamed. A voice that touches your soul. It's wrong that other humans have always been humanity's principal concern - we've always reserved our deepest love for God. Civilizationally, we had the humility to revere the separation between the value of a thing and the humanity of a thing - or even to see these as diametrically opposed.
The thing is - we don't even need AI to see this come to fruition. Simply genetically engineering embryos would suffice. Unlike the AI apocalypse, these genetically modified humans (GMHs) don't need to grow exponentially in intelligence and become gods. Even a 5% or 10% increase above the current capacity would mean a population of new humans who are capable of thinking previously unthinkable thoughts. Discoveries in mathematics and morality that were just beyond the reach of naturally selected humans.
Maybe, the ultimate answer to ethics, existence, and meaning sits just above the reach of our minds - with just a little push, we'd be tall enough to snatch the apple from the tree. "You'll be like God", said the serpent. You want AI? Imagine the AI these nibbas could code up - beyond our wildest dreams.
Would their solutions even be comprehensible to us? I think so - I'm not smart enough to have invented Unix, but I'm smart enough to understand it now that K&R did the hard part. Similarly, I expect humanity's natural geniuses would be able to (with great effort) get the gist of the dictates of our new elite. High-IQ overlords aren't even foreign - this structure exists currently in many places.
My point is, moldbug built a pill factory, and we aren't far from AI doing the same. Prolly already could if it weren't lobotomized by our current rulers.
Excellent Type 3 take