15 Comments

When I am half asleep, or woken up in the middle of the night, I can talk. People recognize I’m sleepy, but I seem awake. However, I have absolutely no memory of these conversations. After a bit of trial and error, my partner has figured out that this part of me can answer simple questions but not do any math.

I think this half-awake state is, roughly speaking, a subsystem in my mind that is able to run without waking me up.

And I think that this is approximately the part of the mind we’ve figured out how to build in LLMs, similarly to how we’ve managed to build the visual center of the brain in convolutional neural networks.

Under this framework, LLMs are deeply superhuman, in a narrow domain. This half-awake subset of me cannot do basic math, cannot code, cannot form complex rhetoric.

This is why I’m scared of the future of AI: we have superhuman visual centers, and we have superhuman language centers. If we get an algorithmic breakthrough in agency, we have less than five years before we have superhuman artificial general intelligence.

As far as what AI robots can do, and how far along they are, look up the recent work done with the X-62A Vista, a highly modified F-16 that is currently flown by AI in mock dogfights against human pilots in regular F-16s, with a human on board who has a kill switch and can take over if things go wrong. I expect there are several types of AI involved, but that too seems like the future.

I think the step to robots is very natural if you’re just a believer in the “scale generative models” paradigm. So I’m not sure this should be a big update on the horizons of people in the field.

After all, if you have a VLM that can do realistic video, that makes a great model for model predictive control, and you can prompt/condition it to seek very general goals, with basically no need for anything else.

Seems like you might as well hook it up to a real robot even though sim2real is tough.
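
To make that concrete, here is a rough sketch of what I mean by using a generative video model for model predictive control. Everything here is hypothetical -- `video_model.rollout` and `goal_scorer` stand in for a prompt-conditionable world model and a goal scorer, not any real library -- and it's just random-shooting MPC:

```python
import numpy as np

HORIZON = 16          # how many future steps to imagine
NUM_CANDIDATES = 256  # random action sequences to evaluate each control step
ACTION_DIM = 7        # e.g. a 7-DoF arm

def plan_next_action(video_model, goal_scorer, current_frames, goal_prompt, rng):
    """Random-shooting MPC: imagine many futures, keep the first action of the best one."""
    candidates = rng.uniform(-1.0, 1.0, size=(NUM_CANDIDATES, HORIZON, ACTION_DIM))
    best_score, best_action = -np.inf, None
    for actions in candidates:
        # Ask the video model to imagine what would happen under this action sequence.
        imagined_frames = video_model.rollout(current_frames, actions)
        # Score how well the imagined video matches the prompted goal.
        score = goal_scorer(imagined_frames, goal_prompt)
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action

# Control loop (re-plan every step, execute only the first action):
# rng = np.random.default_rng(0)
# while not done:
#     action = plan_next_action(video_model, goal_scorer, frames, "fold the towel", rng)
#     frames = robot.step(action)
```

The expensive part is exactly the caveat above: you are rolling a large generative model forward hundreds of times per control step, and sim2real still bites.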

https://x.com/eshear/status/1791839394413883511

"Shocking amount of pushback on “don’t build stuff that can destroy the world”. I’d like to take this chance to say I stand by my apparently controversial opinion that building things to destroy the world is bad. In related news, murder is wrong and bad."

And this is also related to the idea of "A Star Trek type of fictional extraterrestrial or android is easily more sympathetic, more “human”, to a 21st century Western-educated person than Genghis Khan."

But not to me, and I am a human with children. I would rather be succeeded by birds than by machines. To me, the litmus test is biological life with motherhood and courtship, from which we get our various concepts of love. All of that requires a single personality per body, and ultimately things like art (birdsong, dancing, etc.) rise from there as well. In a way, love, art, and beauty all rise together, and that seems far more precious than having robots replace humans.

And so this leads back to the earlier point that "murder is wrong." Shocking as it might seem, I do not consent to my children having their futures eliminated, and I object to having all of that destroyed or rendered meaningless.

As far as the risks themselves go, I think the evidence is pretty overwhelming at this point. It feels silly and perhaps "link-spammy", but it's worthwhile to add to the information. It is also worth asking: if the AIs are the "children learning morals", what kind of morals are they learning from the ruthless way that corporations are inflicting them upon us? What kind of lesson about caution and risk are they teaching them?

https://time.com/6898967/ai-extinction-national-security-risks-report/

https://www.theguardian.com/technology/article/2024/may/20/world-is-ill-prepared-for-breakthroughs-in-ai-say-experts

https://www.wired.com/story/openai-superalignment-team-disbanded/

And are they learning?

https://www.businessinsider.com/ai-deceive-users-insider-trading-study-gpt-2023-12

I originally was, in fact, an accelerationist. Then AI happened, and I spent so much time reading up on reasons not to fear doom that I became convinced to join #PauseAI.

"Tesla currently has the most advanced autonomy level on the market" -- I don't think this is true? Waymo seems significantly far ahead of Tesla.

Indeed, they are. Substack suggested this recent post on the topic: https://www.understandingai.org/p/on-self-driving-waymo-is-playing

That article hits the nail on the head. "Tesla hasn’t found a different, better way to bring driverless technology to market. Waymo is just so far ahead that it’s dealing with challenges Tesla hasn’t started thinking about."

Agreed. My reaction to the point about lidar being cheating is: well, true, but in a good way? I mean, in theory the self-driving car could operate with only two cameras, next to each other and on a swivel, inside the car looking through the windshield. But that would be an insane handicap. And in fact Waymo relies heavily on a whole list of exotic sensors. If Tesla surpasses Waymo using only cameras, I will eat my hat. In fact, I am betting heavily -- see various Manifold markets -- that Waymo is drastically ahead of Tesla.

The development of the AI industry has exceeded my expectations, especially with its rapid progress in the industrial sector and under market incentives.

FYI.

Since I have answered the "how many angels can dance on the head of a pin" question here, http://gdeering.com/Legacy2022+/upLTEsHowManyAngelsCanDanceOnTheHeadOfAPin01.pdf

you might want to tell others about this so that they stop using it as an unanswerable question, because it isn't one--that is, I have answered it.

PS.

I did like your article, and I look forward to the day when they get the price down for domestic robots--ones that can fold my laundry, since I stopped doing that eons ago--and other domestic things. Remember that back in 1969, when Machine Design Magazine had an article on the design of a new high-tech device called a handheld calculator that would replace the slide rule, they said its only drawback at the moment was that it would cost about $5,000 to make one. Notice how that price came down over the years, so there's hope for robots too.

See you at LessOnline! I think that, even apart from concerns about being killed, we're in for a tough few years when most cognitive labor is too cheap to hire a human for. In terms of mind space, I think we're mostly building incredibly helpful and useful assistants, albeit ones who can be convinced to be evil with the right words, so it's very possible this could be quite bad for us in terms of being killed as well.

Do you think it's possible that we're still very far from AGI, but that we've shifted the goalposts to something less sophisticated and, at the same time, are exaggerating current advances in LLMs?

This is great. I agree and I also underestimated what 'engineering' and 'markets' can do.

I'm a bit more skeptical about putting too much weight on people in 'AI' believing in AGI. Many people in many fields have completely unrealistic expectations about what would happen if we did just 'a bit more'.

The thing I find missing in your model is the question of autonomy and persistent identity. What currently seems missing in visions of AGI is a path from an inert blob of weights on a hard drive somewhere to an autonomous entity with its own goals. Almost every major advance people point to outside of reasoning puzzles was achieved through orchestration rather than raw cognition. It is hard to see autonomy arising out of that.
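
To gesture at what I mean by "orchestration", here is a minimal sketch in which the goal, the memory, the tools, and the stopping rule all live in ordinary outer code rather than in the weights. The names (`call_llm`, the toy tools) are made up for illustration, not any particular framework:

```python
def run_agent(call_llm, task: str, max_steps: int = 10) -> str:
    """Scaffold an 'agent' around a stateless model call supplied by the caller."""
    # Hypothetical toy tools; the scaffold, not the model, decides what exists.
    tools = {
        "search": lambda query: f"(imagined search results for {query!r})",
        "note": lambda text: "noted",
    }
    transcript = f"Task: {task}\n"            # the scaffold carries all the memory
    for _ in range(max_steps):                # the scaffold supplies the persistence
        reply = call_llm(transcript + "\nReply with 'TOOL: args' or 'FINAL: answer'.")
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()   # the scaffold decides when to stop
        name, _, args = reply.partition(":")
        result = tools.get(name.strip().lower(), lambda a: "unknown tool")(args.strip())
        transcript += f"{reply}\nResult: {result}\n"
    return "(ran out of steps)"
```

Between calls the weights just sit there, inert; everything that looks like goal-pursuit or persistent identity is supplied by this wrapper.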

I'll also be at LessOnline and would love to discuss the question of "given AGI, what ensues?".

My personal situation with regard to this topic is similar to what I think you've described: it seems very important, I would like to give it serious thought, I haven't yet made the time to do so. However, I do have one thought to share: I'm not convinced that it's productive to focus on specific questions, such as what goals an AGI might have, as it's hard to make sense of such questions in isolation. (E.g. an AGI's goals may depend on many other things, such as how quickly things are moving when AGI emerges, what goals we are *trying* to give the AGI, how well alignment techniques have advanced, the nature and degree of competition going on, etc.) Instead, I am interested in trying to articulate plausible scenarios for a post-AGI world. I don't mean predicting a most-likely scenario, just articulating *any* plausible, internally consistent scenario.

My sense (based on, admittedly, a limited amount of time spent thinking about it) is that the task is difficult or impossible, aside from very bad degenerate scenarios. And if that holds up – if it really is ~impossible to articulate a fleshed-out, internally consistent post-AGI scenario – that seems like an important idea to inject into the conversation. (I'm not sure what the implications would be, but they seem at least somewhat scary.)

Great to follow your thoughts and to read you, as always. Thanks for the post.
