Discussion about this post

Steve Byrnes:

I'm gonna echo a couple other commenters to say that when you say "Why I am not an AI doomer", I would say "Why I don't expect imminent LLM-centric doom, and (relatedly) why I oppose the pause".

(I ALSO don't expect imminent LLM-centric doom, and I ALSO oppose the pause, for reasons described here — https://twitter.com/steve47285/status/1641124965931003906 . But I still describe myself as an AI doomer.)

(I might be literally the only full-time AI alignment researcher who puts >50% probability, heck maybe even the only one with >10% probability, that we will all get killed by an AGI that has no deep neural nets in it. (The human brain has a "neural net", but it's not "deep", and it's kinda different from DNNs in various other ways.))

Like you, I don't expect x-risk in the 2020s, and I also agree with “maybe not the 2030s”. That said, I don’t COMPLETELY rule out the 2020s, because (1) People have built infrastructure and expertise to scale up almost arbitrary algorithms very quickly (e.g. JAX is not particularly tied to deep learning), (2) AI is a very big field, including lots of lines of research that are not in the news but making steady progress (e.g. probabilistic programming), (3) December 31 2029 is still far enough away for some line of research that you haven't ever heard of (or indeed that doesn't yet exist at all) to become the center of attention and get massively developed and refined. (A similar amount of time in the past gets us to Jan 2017, before the transformer existed.)

For example, do you think future AGI algorithms will involve representing the world as a giant gazillion-node causal graph, and running causal inference on it? If so, there are brilliant researchers working on that vision as we speak, even if they're not in the news. And they’re using frameworks like JAX to hardware-accelerate / parallelize / scale-up their algorithms, removing a lot of time-consuming barriers that were around until recently.
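
(To make that concrete with a purely hypothetical sketch: below is what it looks like to use JAX to JIT-compile and batch a plain causal-model likelihood computation, with no deep neural net anywhere in it. The three-node graph, the parameter values, and every name in it are made up for illustration, not anyone's actual research code.)

```python
# Hypothetical sketch: using JAX to hardware-accelerate a non-deep-learning
# workload -- scoring parameter samples for a toy linear causal graph X -> Y -> Z.
import jax
import jax.numpy as jnp

def log_likelihood(params, x, y, z):
    """Gaussian log-likelihood of observed (x, y, z) under the toy graph X -> Y -> Z."""
    a, b = params[0], params[1]   # edge coefficients for X->Y and Y->Z
    resid_y = y - a * x
    resid_z = z - b * y
    # Unit-variance Gaussian noise; additive constants dropped.
    return -0.5 * jnp.sum(resid_y**2) - 0.5 * jnp.sum(resid_z**2)

# Simulate a small dataset from a "true" model with a=2.0, b=-1.0.
key = jax.random.PRNGKey(0)
kx, ky, kz, kp = jax.random.split(key, 4)
x = jax.random.normal(kx, (1000,))
y = 2.0 * x + 0.1 * jax.random.normal(ky, (1000,))
z = -1.0 * y + 0.1 * jax.random.normal(kz, (1000,))

# Score 100,000 candidate parameter vectors in parallel on CPU/GPU/TPU by
# composing vmap (batching) with jit (XLA compilation).
candidates = jax.random.normal(kp, (100_000, 2)) * 3.0
batched_score = jax.jit(jax.vmap(log_likelihood, in_axes=(0, None, None, None)))
scores = batched_score(candidates, x, y, z)
print("best candidate:", candidates[jnp.argmax(scores)])
```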

> persuade a handful of individuals that they should maybe not work too hard to get the world to take notice of their theoretical ideas.

I do have a short list in my head of AI researchers doing somewhat off-the-beaten-track research that I think is pointing towards important AGI-relevant insights. (I won't say who!) And I do try to do "targeted outreach" to those people. It's not so easy. Several of them have invested their identities and lives in the idea that AGI is going to be awesome and that worrying about x-risk is dumb; they've published this opinion in the popular press, they say it at every opportunity, and meanwhile they're pushing forward their research agendas as fast as they can and going around the world giving talks to spread their ideas as widely as possible. I gently engage with these people to try to bring them around, I try to make inroads with their colleagues, and various other things, but I don't see much sign that I'm making any meaningful difference.

Eliezer Yudkowsky:

Couple of things that strike me as missing on a quick read:

- Whether grinding a loss function over a sufficiently intricate environmental function like "predict the next word of text produced by all the phenomena that are projected onto the Internet" will naturally produce cross-domain reasoning. I'd argue we've already seen some pretty large sparks and actual fire on this. (A minimal sketch of what that next-word objective looks like follows this list.)

- Whether an AGI that is, say, "at least as good at self-reflection and reflective strategicness as Eliezer Yudkowsky" can fill in its own gaps, even if some mental ability doesn't come "naturally" to it.
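
(For readers unfamiliar with what "grinding a loss function" over next-word prediction concretely means in the first point above: here is a minimal, made-up sketch of the standard next-token cross-entropy objective written with JAX. The tiny vocabulary, the random "logits", and all names are illustrative assumptions, not the setup of any actual system.)

```python
# Hypothetical sketch of the next-word-prediction objective: given a model's
# logits over a tiny vocabulary, the loss is the average cross-entropy of the
# true next token at every position in the sequence.
import jax
import jax.numpy as jnp

def next_token_loss(logits, targets):
    """logits: (seq_len, vocab); targets: (seq_len,) int ids of the true next tokens."""
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    # Pick out the log-probability the model assigned to each true next token.
    true_token_log_probs = jnp.take_along_axis(
        log_probs, targets[:, None], axis=-1
    ).squeeze(-1)
    return -jnp.mean(true_token_log_probs)

# Toy example: the sequence "the cat sat on", predicting each following token.
vocab = ["the", "cat", "sat", "on", "mat"]
tokens = jnp.array([0, 1, 2, 3])   # "the cat sat on"
targets = tokens[1:]               # predict "cat", "sat", "on"
fake_logits = jax.random.normal(jax.random.PRNGKey(0), (3, len(vocab)))
print(next_token_loss(fake_logits, targets))
# "Grinding" the loss = repeatedly nudging model parameters along
# -grad(loss) (e.g. via jax.grad) over an enormous text corpus.
```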
