Hey Sarah, brilliant post as always.

I do want to bring this to your attention: https://www.theverge.com/2023/4/14/23683084/openai-gpt-5-rumors-training-sam-altman. According to that article, GPT-5 was not in the works as of April 14, 2023.

I am also curious about how model architectures will adapt to available compute. For example, while GPU performance may only scale so far for technical or commercial reasons, we have much further to go with non-GPU accelerator chips such as Trainium: https://towardsdatascience.com/a-first-look-at-aws-trainium-1e0605071970. If you have thoughts on this, I would love to hear them!


Hi Sarah, great post. Just wondering where exactly reference [15] makes the 2026 and 2034 predictions for high- and low-quality data, respectively? I tried Ctrl+F-ing phrases related to this and nothing came up.


Very interesting post, Sarah. Good use of napkin math!

I linked you here:


Cheers 💚 🥃

Apr 15, 2023·edited Apr 15, 2023

> Or can it ramp up to grow even faster as demand for AI GPUs increases?


> Epoch AI finds a steady exponential growth trend in GPU FLOP/s from 1848 models of GPU between 2006 and 2021. The doubling rate is about 2x every 2.31 years, or slightly slower than Moore’s Law.

Note that the doubling time for AI accelerators in particular seems to be shorter, about 2 years. Also note that they measure price-performance, not raw performance. I do think you're right, though, about seeing 2-3 OOMs more model compute by 2030. (But then there are also algorithmic improvements, which you allude to in the intro, that roughly speaking make each FLOP count more: https://epochai.org/trends#algorithmic-trends-section)
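For what it's worth, here is the back-of-the-envelope arithmetic behind those doubling times (my own illustrative sketch, not from the post; it assumes a 2023 start and a 7-year horizon to 2030):

```python
import math

def ooms_gained(years: float, doubling_time_years: float) -> float:
    """Orders of magnitude (base 10) gained from steady exponential doubling."""
    doublings = years / doubling_time_years
    return doublings * math.log10(2)

# Hardware improvement alone, 2023 -> 2030 (~7 years), at the two quoted rates:
print(round(ooms_gained(7, 2.31), 2))  # GPU trend (2x / 2.31 yr): ~0.91 OOMs
print(round(ooms_gained(7, 2.0), 2))   # AI-accelerator trend (2x / 2 yr): ~1.05 OOMs
```

So hardware alone buys roughly one OOM by 2030; getting to 2-3 OOMs of model compute also requires growth in spending and cluster size, plus the algorithmic improvements mentioned above.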


Thanks for writing this Sonia!! It’s the best short explanation of scaling laws I’ve ever read. I’ll be referencing and sharing it :)
