RE: LLMs cannot be generative: a counterexample exists where a recently published, peer-reviewed study's core idea came from an LLM. The result concerns quantum nonlinearity and comes from adapting a result from quantum field theory to quantum mechanics: https://x.com/hsu_steve/status/1996034522308026435?s=20
Note that the idea was proposed spontaneously by the LLM in the course of a conversation with Stephen Hsu; Hsu contributed the rest, namely drafting and submitting the paper and, of course, the key step of recognizing that it was paper-worthy in the first place.
This sort of thing is possible because the structures encoded in the human corpus admit unrealized permutations; in other words, much that makes sense has not yet been said. LLMs seem able to produce novel, sensible sentences that express altogether new ideas, and they are capable of evaluating those ideas through chain-of-thought.
If a Yudkowskian framing of AI is doomer eschatology, I feel the Skolnickian framing underplays the capacity of LLMs to emulate the process of thinking, with all that implies. The unformed nonverbal thought is real; I know what you mean. But so is the process of figuring things out by talking: traversing the configuration space of meaning and grammar to surface latent possibilities and hidden errors (including, perhaps, noticing that the unformed thought was bunk). If LLMs can do this part, they are more than mere chatbots.
A thoughtful comment, thanks for contributing. I agree wholeheartedly with practically everything you say up until the very last sentence.
The process of figuring things out by talking is real—so I'd argue they don't need to be more than mere chatbots to be useful as thought aids.
There is a place for communal outpourings of grief and reckonings with death. It is quite sad to hear an account of a group of people who believe death is imminent yet come from a culture that leaves them unprepared for death, expecting to live forever... Rather than catharsis and bonding, the leaders gather them to pour poison into their ears.
*Yudkowsky
I'd also like to note that this year's organizer for the BASS was an outlier even among the rationalists for being particularly doomy and depressing, so much so that certain community luminaries didn't attend on that basis (it is, after all, supposed to be a pleasant social gathering). It feels a bit misleading to extrapolate broader conclusions from that. Anyway, FdB would appreciate this post in the next round of Subscriber Writing.
I think it was clear enough at the event that not everyone shared his viewpoints (he said this explicitly a lot, which we appreciated). But afterwards, we talked to many people who did. I feel like Stephen's broader conclusions spring more from the fact that "people who believe this exist" than from a claim about their relative dominance, though.
"The Cave" is a great stand-alone essay, worth republishing as such.
Banger of an essay. Will be sharing this one. Makes me wonder if the individual cells in a multicellular organism are living in a kind of Plato's cave of external abstract hormones and nerve impulses that constrain and abstractify their wild amoeboid hearts.