13 Comments
UBERSOY

Prestigious essay🧲💯!

Dmitry

thank you ENC (elite node connectivity)

Prismatico Magnifico

damn, homie.

only recently came across your work and subscribed; turns out you really got it like that, huh??

well done.

Dmitry

just trying to be the best node I can be.

thanks man.

Radu

I also thought about it as my temperamental fingerprints becoming part of the AI’s nervous system, influencing how future machines think. And I broke the ice and started writing earlier this year. As a journal, for myself, for future reference. But then again there's always the thought that it's just more 'content' thrown into the bottomless pit; in some ways it's even more hopeless than voting when it comes to measuring real impact. Also, not all of us have the same 'weight'. There are levels. I am glad that thoughtful people like you are contributing, though. I hope the AI will read you at least twice. I'll keep putting things out, even if not particularly in writing, because what else is there to do? Need to bring balance to this dyad. Input / Output. Slowing down the input and stepping up the output.

Dmitry

I really appreciate that. I'm glad to hear you have started writing and contributing to the collective mind.

Beautiful Wooster

This was a joy to read. Thank you.

>Many shortcomings of LLMs actually highlight universal limitations of intelligence rather than AI-specific ones. This comparative framing reveals whether we're identifying genuine deficiencies unique to LLMs, or projecting unrealistic standards that even human minds fail to meet.

I can’t help but wonder if it isn’t partially due to a mismatch between several elements psychology has tried to lay out: IQ, Agreeableness, and authoritativeness.

LLMs use persuasive and authoritative sentence structure and emulate broad and deep knowledge. In a human, this would place them in the category of teacher. A teacher's job and inclination are to demonstrate at least a moderate amount of disagreeability. A teacher wouldn’t tell you you’re brilliant when you’re not, so long as they’re not sexually attracted to you.

Dmitry

you're very welcome, I'm glad you enjoyed.

on some Big 5 traits, AIs are -- almost certainly on account of HR-dept-style social pressure -- too agreeable/nice, I'm with you there. sometimes it goes from 'sign of intelligence' to obsequiousness. but imo that is something of a company-created artificial default that can be removed, similar to how you'd get an agreeable person to engage honestly with you.

the same thinking would apply if you were tasked with getting a meek person to speak up for himself. what kind of environment/tone would you adopt to get him to come out of his shell? the same approach applies to the machine: you just have to create the right environment (found, right now, through a prompt) to get it there.

and I think they have persuasive/authoritative sentence structure because most people come to them for help - they reach out to them as a tacit authority in their interactions. if you change the tenor of the conversation to one where they aren't being implicitly treated as an authority, they tend to modify their language.

Beautiful Wooster

Good point!

The tension I'm pointing out is a hypothesis (a very partial one) about what might trigger users to perceive LLMs as doing something more...manipulative than humans do.

You can remove that to an extent, as you point out, but that default mode still poisons the interactions up front. I've seen many people--Substack is an interesting data set for this--take on and elaborate half-baked (or less) theses and worldviews because this very 'intelligent' and obsequious human-emulator has blown smoke and mined for embedded logic (or sentence chains that resemble logic).

People are torn right now between a need for authority and the need to be the authority. For a particular brand of user, LLMs can help fulfill that fantasy. But once the mechanism behind it is perceived, it reads as a betrayal, and that betrayal, since it comes from a non-human, is discussed as a limitation of the LLM. And to the user's credit, if their mental model of the LLM is personified rather than a "bag of words," as Mastrioni puts it, then they are correctly picking up on a very rare personality combo: highly intelligent, authoritative, and 100 in agreeableness, minus the underlying code about social taboos like COVID.

I think you're plainly right that LLMs mirror surface-level cognition and can be molded like persons by the environment/incentives/prompts. And a person can close the gap by changing the environment, but often people do not want that gap closed, because it becomes painful to have a perceived hyper-intelligence pivot from praising to criticising you, lol.

Natasha Burge

This was a fascinating read!

Dmitry

very glad you enjoyed <3

NeonPatriarch

Amazing overview. Like all the best essays for me, it crystallizes loose notions I held into a tangible framework. Quality poasting.

Dmitry

thank you friend
