Continuing from: Day 24 (20) / Morning Lecture
If you're willing to assume your audience can already freely wield the machinery of derivatives and antiderivatives and understand on an intuitive level everything that they mean, then sure, you can solve the problem more quickly using overpowered tools like that.
Integrating x^N gives you x^(N+1)/(N+1), which evaluated from 0 to 1 will give you 1/(N+1).
Given a mix of N LEFTs and M RIGHTs, however, and the need to renormalize the posterior over that, they might find themselves in a bit more trouble when it comes to deriving the integral...
Integral from 0 to 1: ∫ (1 - p)^M p^N dp = M! * N! / (M + N + 1)!
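That factorial formula can be checked numerically without any calculus machinery at all, by brute-force summing over a fine grid of p values. This is a sketch for checking purposes only (the function name and step count are illustrative choices, not anything from the lecture); note the N = 3, M = 0 case reproduces the simpler 1/(N+1) result from integrating x^N.

```python
from math import factorial

def beta_integral(n, m, steps=100_000):
    """Midpoint Riemann-sum approximation of the integral of
    p^n * (1 - p)^m over [0, 1]."""
    dp = 1.0 / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp  # midpoint of the i-th slice
        total += (p ** n) * ((1 - p) ** m) * dp
    return total

for n, m in [(3, 0), (2, 5)]:
    exact = factorial(n) * factorial(m) / factorial(n + m + 1)
    approx = beta_integral(n, m)
    print(f"N={n}, M={m}: exact {exact:.6f}, numeric {approx:.6f}")
```

The (3, 0) row comes out near 1/4, matching 1/(N+1), and the (2, 5) row matches 2!·5!/8!.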
Keltham doesn't actually remember the exact proof, it's been a while, but he's pretty sure the introduction he used was the one his teachers used on him. So it's possibly the right introduction for getting the more complicated proof in the least painful way, once he starts trying to remember that?
And in any case, Keltham hasn't yet been told he's allowed to assume that everyone in this class has a full intuition for derivatives and integrals. Keltham hasn't taught them a calculus class himself, and who knows what weird malfunctions exist in the Golarion way of teaching it.
Why, it wouldn't shock him, at this point, if people are just being told to memorize formulas and not really taught to intuit how anything works!
So he's just talking directly about the infinity of possible hypotheses between 0 and 1, and how to sum up priors-times-likelihoods and posterior-predictions from those, rather than using calculus to abstract over that. Abstracting over that only works to teach mathematical intuition if people already know what's being abstracted away.
Keltham is in favor of people understanding things using calculus and continuous distributions, to be clear. So long as they can also understand the ideas in terms of a countably infinite collection of individual hypotheses each with probability 1/INF. You don't want to start identifying either representation or methodology of analysis with the underlying idea!
Which idea is just: a metahypothesis where any fraction from 0 to 1 seems equally plausible on priors. Those get updated on observations of LEFT and RIGHT, that have different likelihoods for different propensity-fractions; the allocation of posterior probability over fractions changes and gets normalized back to summing to 1; that changes the new predictions for LEFT and RIGHT on successive rounds, having updated on all of the previous rounds.
Keltham will now go up to the wall and spend a bit of time figuring out how to derive the rest of the Rule of Succession, which there's no simple or obvious way to prove using calculus known to anyone in this room anyways.
Thankfully, they know what all the correct answers have to be! Using much simpler combinatoric arguments, about new balls ending up randomly ordered anywhere between the left boundary, all the LEFT balls, the 0 ball, all the RIGHT balls, and the right boundary.
Eventually Keltham does succeed in deriving (again) (but this time proving it using dubious infinitary arguments, instead of clear and simple combinatorics, so they can see what's happening with priors and posteriors and likelihoods behind the scenes) that indeed:
If you start out thinking any fraction of LEFT and RIGHT between 0 and 1 is equally plausible on priors, and you see experimental results going LEFT on N occasions and going RIGHT on M occasions, the prediction for the next round is (N+1)/(N+M+2) for LEFT and (M+1)/(N+M+2) for RIGHT.
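The "infinity of possible hypotheses" version of that argument can be sketched directly, with a large but finite grid of propensity hypotheses standing in for the continuum (the function name and grid size here are illustrative assumptions, not anything from the lecture): give every hypothesis equal prior weight, multiply by likelihoods for the observed LEFTs and RIGHTs, renormalize, and sum up the posterior-weighted predictions for the next round.

```python
def next_left_probability(n_left, n_right, hypotheses=10_000):
    """Posterior-predictive probability of LEFT on the next round,
    starting from a uniform prior over a fine grid of propensities."""
    # Equally plausible propensity hypotheses between 0 and 1.
    ps = [(i + 0.5) / hypotheses for i in range(hypotheses)]
    # Likelihood of the observations under each hypothesis.
    weights = [p ** n_left * (1 - p) ** n_right for p in ps]
    # Renormalize so the posterior sums to 1 again.
    total = sum(weights)
    posterior = [w / total for w in weights]
    # Each hypothesis votes for LEFT with its own propensity,
    # weighted by its posterior probability.
    return sum(q * p for q, p in zip(posterior, ps))

n, m = 4, 2
print(next_left_probability(n, m))   # close to (N+1)/(N+M+2) = 5/8
print((n + 1) / (n + m + 2))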