I have a tiny technical nitpick -- you make a subtle error when you say "by the Law of Large Numbers, since the X_n are independent, identically distributed random variables, this sum converges almost surely to n times the expected value". This is actually not true! The proper statement of the strong LLN is "this sum divided by n converges almost surely to the expected value". These two statements seem like they should be equivalent, but because of some subtleties around limits and convergence, they actually are not. It's fun to think about why!

One thing that might help show why your original statement must be false: by the Central Limit Theorem, the distribution of the sum of the X_n approaches a normal distribution centered at n * E[X] with standard deviation growing like sqrt(n), which means it doesn't converge in probability (or almost surely) to any single value.
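A quick simulation makes the distinction concrete (my own toy setup, using exponentially distributed X_n with E[X] = 1):

```python
import numpy as np

# Hypothetical illustration: X_n ~ Exponential(1), so E[X] = 1.
rng = np.random.default_rng(0)
n = 100_000
trials = 1000
samples = rng.exponential(1.0, size=(trials, n))
sums = samples.sum(axis=1)

# The average S_n / n concentrates tightly around E[X] = 1 ...
print(np.std(sums / n))        # tiny; shrinks like 1/sqrt(n)

# ... but the sum S_n itself does not concentrate around n * E[X]:
print(np.std(sums - n * 1.0))  # grows like sqrt(n); here about 316
```

The normalized average converges; the raw sum's spread keeps growing, so it can't converge to n * E[X] in any useful sense.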

So one issue with the "well, it would be better to have $2 trillion than $2 billion..." argument is that once the numbers get that high, there isn't necessarily $2 trillion to have.

The Gross Planetary Product is roughly 80 trillion dollars. That is literally every bit of economic activity there is. If you gained control of 2 trillion dollars and directed it all towards AI safety or whatever, you've removed 2.5% of planetary production for the year. As we all know, if the economy contracts by 2.5% YoY in your country, that's a severe recession felt by countless people. Poor people lose their jobs or struggle to make rent, rich people can't invest in productive enterprises, companies go bankrupt, tax revenue falls, services are cut back, things suck for everyone. Donating $2 trillion to charity is actually bad!

> But these zero-probability scenarios are still factored into expected value calculations, which allows expected value to be maximized by the all-in betting strategy.

If the set of these as a whole has probability zero, then it won't contribute to expected value. So that can't be what's going on; it has to be something else.

I'm not sure what to draw from this. I'm not fully a utilitarian but when I imagine myself as a 100% utilitarian, here are my thoughts:

When you say "you should take bets that are virtually certain to lead to ruin"

I reply with "Okay"

And when you say 'Conversely, if you declare that you’ll round down to “probability zero” or “impossible” all events more unlikely than some lower bound, you can freely “maximize expected value” without running into the above absurdities.'

I would say "Yes, but then I'm not maximizing utility. I care about utility, not absurdities. There is no principled reason to round down. Rounding down AI risk or X-risk from 40%/30%/10%/2%/1%/0.1% to zero is silly in my view, so I can't see a principled reason to treat 0.000000001% any differently, and so forth. There is no principled place to draw the line." Maybe you could draw it at measure zero, but see my example below [1].

And when you say infinite utility isn't a thing in the vNM framework, I say I reject the framework because it doesn't let me maximize utility by having infinite utility.

So, I see what points you're making, but if I'm 100% utilitarian, I don't know if I'm persuaded that it's a bad thing to do.

[1] If someone rejects the infinite SPP, would they accept, say, 100 trillion rounds? I don't really think so. (Maybe?) But that isn't infinite payoff with measure 0.

Note: the Kelly criterion is only optimal in the sense you mean if you expect to be offered an infinite sequence of positive-expected-value bets on which you can bet any percentage (up to 100%) of your current bankroll. If you expect only a fixed, finite number of such opportunities, maximizing EV directly may well be the way to go.
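A toy calculation shows the tension (numbers are my own assumptions: p = 0.6 on even-money bets, so the Kelly fraction is f* = p - q/b = 0.2). Over any finite horizon, the EV-maximizing strategy is all-in every time, even though it almost always ends in ruin:

```python
# Assumed setup: win probability p = 0.6, even-money bets (b = 1),
# n = 20 betting opportunities, starting bankroll of 1.
p, b, n = 0.6, 1.0, 20

def expected_wealth(f, p=p, b=b, n=n, w0=1.0):
    # Per-bet expected growth factor; since bets are independent,
    # EV is multiplicative, so after n bets it's this factor to the n-th power.
    return w0 * (p * (1 + f * b) + (1 - p) * (1 - f)) ** n

print(expected_wealth(1.0))  # all-in: the highest expected value
print(expected_wealth(0.2))  # Kelly: much lower EV, but never ruined
print(p ** n)                # chance the all-in bettor survives all 20 bets
```

The all-in EV is dominated by the vanishingly unlikely all-wins branch, which is exactly the phenomenon the post is worried about, but for a fixed finite n the EV comparison itself is unambiguous.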

The "most of your utility is concentrated on some event with vanishingly small probability" objection is also somewhat mitigated if you are part of a large EA movement and are making bets that are independent of those made by other people in the movement.

I think section two is playing fast and loose with words, and I'm not convinced that the derivation is correct.

> You have repeated opportunities to make a bet which is a binary random variable; with probability p you get a return of b times the amount you bet, and with probability (1-p) you lose and get zero.

> Each bet is an independent random event.

> You have to stop betting when your bankroll runs out

> You care about your long-run total money, after “many” repeated bets.

What is "many" here? I tend to take "many" to mean some large $n$. The post jumps between this being some large value and it being infinity.

In particular, it derives that the value is

> W_0 (e^(E[log(X)])(1 + o(1)))^n

and says

> As n becomes large, the small-o terms disappear, and you just get W_n ~ W_0 (e^(E[log(X)])^n)

No you don't. I'm not sure that turning the o(n) term into an o(1) term that doesn't depend on $n$ was valid to begin with (I'm not sure that using big-o itself was a good idea), but it's in the exponent, and you can't just drop it. If you want to see what this converges to as n approaches infinity, you have to take a limit.
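To see concretely why a (1 + o(1)) factor inside an n-th power can't just be dropped, here's a toy example (the c/n term is my own stand-in for the o(1) correction):

```python
import math

# A term that is o(1) individually can still matter in the exponent:
# (1 + c/n)^n converges to e^c, not to 1, as n grows.
c = 3.0
for n in (10, 1_000, 100_000):
    print(n, (1 + c / n) ** n)  # approaches e^3 ~ 20.09, not 1

print(math.exp(c))
```

So whether the correction vanishes depends entirely on how fast the o(1) term decays relative to n, which is exactly why you have to take the limit properly rather than discard it.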

The reality is that this process, like everything else in the real world, does not go to infinity. There is no event of measure zero to discuss. The expected value of betting everything is p^n * b^n (per dollar of starting bankroll) for however many betting opportunities actually show up. And it's easy to show that any deviation from that strategy lowers your expected value. It's also easy to see that p^n is going to be very small if p < 0.9 and n > 100, so you will probably (not certainly!) lose all your money.

Really, this could be rolled into the St. Petersburg paradox discussion – I don't think this is adding anything except for an unnecessary discussion of infinities (and thereby almost sure convergence and measure zero).

The resolution to the St. Petersburg Paradox is, in banking parlance, risk management, or more specifically, credit risk management or counterparty credit risk. Can your counterparty afford to pay out $1 million? If so, you might be willing to pay up to a maximum of around $20 to play. How about $1 billion? Now the value of the game might be as high as $30 if you're confident the wager is legally enforceable. $1 trillion? 40 bucks. But saying the EV is infinite is just wrong, because it ignores the reality that no entity is good for the amounts that would be required to keep that series infinite.
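Those figures are easy to sanity-check. Assuming the standard version of the game where the k-th outcome pays $2^k with probability 2^-k, capping the payout at what the counterparty can actually pay gives:

```python
import math

# Value of the St. Petersburg game when payouts are capped at `cap`,
# i.e. the counterparty can't pay more than that.
def capped_value(cap):
    k_max = int(math.log2(cap))      # last round the counterparty fully covers
    # Rounds 1..k_max each contribute 2^-k * 2^k = $1; all remaining
    # probability mass pays out only the cap.
    return k_max + cap * 2.0 ** -k_max

for cap in (1e6, 1e9, 1e12):
    print(f"counterparty good for ${cap:.0e}: game worth ~${capped_value(cap):.0f}")
```

Each 1000x increase in counterparty capacity adds only about $10 of value, matching the $20 / $30 / $40 ladder above.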

Log utility can also be applied, but is generally unnecessary unless a large bank or casino is offering the wager. If your friend is selling you a game and you don't trust them to pay out huge sums or that your wager is legally enforceable, you might cap the value of the wager at something closer to $5 or $10 without log utility coming into play at all.

In context, the argument about starting FTX with a low probability of success was that opportunity costs of not focusing on Alameda were high. Launching a startup with low odds of success while your existing startup is doing well is a tricky bet and does depend on risk tolerance.

And the argument about taking a 10,000x bet that pays off 10% of the time is explicitly framed as a one-shot, so the Kelly derivation doesn't actually apply. And even there, he's saying he'd bet half his bankroll, which is over Kelly but is far from risk neutral (where you'd bet 100%).
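For reference, the Kelly fraction for that bet (assuming "10,000x" means gross payout, i.e. net odds b = 9,999) works out to about 10% of the bankroll:

```python
# Kelly fraction for a bet that pays 10,000x gross with probability 0.1.
# f* = p - q/b, with net odds b = 9,999 (an assumed reading of "10,000x").
p, q, b = 0.1, 0.9, 9_999
f_kelly = p - q / b
print(f_kelly)  # ~0.0999, i.e. Kelly itself says bet about 10%
```

So betting half the bankroll is roughly 5x the Kelly stake: aggressive, but far short of the 100% a truly risk-neutral EV maximizer would stake.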
