NHacker Next
Simulating and Visualising the Central Limit Theorem (blog.foletta.net)
87 points by gjf 6 hours ago | 33 comments
ForceBru 3 minutes ago [-]
Speaking of CLTs, is there a good book or reference paper that discusses various CLTs (not just the basic IID one) in a somewhat introductory manner?
firesteelrain 8 minutes ago [-]
“You’re also likely not going to have the resources to take twenty-thousand different samples.”

There are methods to estimate how many samples you need. It's not in the 20k range unless your population is extremely large.
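For example, under the usual normal approximation, the sample size needed to estimate a mean to within a margin of error E at confidence level z is n = (z·σ/E)². A rough Python sketch (the numbers are just illustrative, not from the article):

```python
import math

def required_n(sigma, margin, z=1.96):
    # Normal-approximation sample size for estimating a mean to
    # within +/- margin at ~95% confidence: n = (z * sigma / margin)^2
    return math.ceil((z * sigma / margin) ** 2)

print(required_n(sigma=4, margin=1))    # 62
print(required_n(sigma=4, margin=0.5))  # 246
```

Note that n depends on the variability and the margin you want, not on the population size, which is why it stays far below 20k in most practical cases.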

jpcompartir 1 hour ago [-]
Edit: OP confirms there's no AI-generated code, so do ignore me.

The code style, and in particular the *comments, indicates that most of the code was written by AI. My apologies if you're not trying to hide this fact, but it seems like common decency to label that you're heavily using AI?

*Comments like this: "# Anonymous function"

gtsnexp 1 hour ago [-]
https://gptzero.me/ says that large portions of it are 100% human.
robluxus 1 hour ago [-]
Interesting comment. Why is it common decency to call out how much ai was used for generating an artifact?

Is there a threshold? I assume spell checkers, linters and formatters are fair game. The other extreme is full-on ai slop. Where should we as a society start to feel the need to police this (better)?

Sharlin 56 minutes ago [-]
The threshold should be exactly the same as when using another human's original text (or code) in your article. AI cannot have copyright, but for full disclosure one should act as if they did. Anything that's merely something that a human editor (or code reviewer) would do is fair game IMO.
robluxus 31 minutes ago [-]
Maybe OP just used an ai editor to add their silly comments, so that would be fair game I guess? Or some humans just add silly comments. The article didn't stand out to me as embarrassingly ai-written. Not an em dash in sight :)

Edit: just found this disclaimer in the article:

> I’ll show the generating R code, with a liberal sprinkling of comments so it’s hopefully not too inscrutable.

It doesn't come out of the gate and say who wrote the comments, but ostensibly OP is a new grad / junior, and the commenting style is on-brand.

gjf 18 minutes ago [-]
OP here: there's no AI-generated code. I'm wondering what gives the impression that there is?

I use Rmarkdown, so the code that's presented is also the same code that 'generates' the data/tables/graphs (source: https://github.com/gregfoletta/articles.foletta.org/blob/pro...).

jpcompartir 14 minutes ago [-]
If you say there's no AI-generated code then I retract the original comment, nice work.
jpcompartir 16 minutes ago [-]
That is not a disclaimer for generated code, it's referring to the code that generated the simulations/plots.

I had read that line before I commented, it was partly what sparked me to comment as it was a clear place for a disclaimer.

jpcompartir 35 minutes ago [-]
Agree here - in a nutshell it strikes me as intellectually dishonest to intentionally pass off some other entity's work as one's own.
niemandhier 5 hours ago [-]
Highly entertaining. Here's a little fun fact: there exists a generalisation of the central limit theorem for distributions without find out variance.

For some reason this is much less known, though the implications are vast. Via the detour of stable distributions and limiting distributions, this generalised central limit theorem plays an important role in the rise of power laws in physics.
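A rough Python sketch of the failure mode (my own example): Cauchy draws have infinite variance, and the mean of n of them is again standard Cauchy, so averaging never produces the normal tightening the classical CLT promises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Means of n standard Cauchy draws are again standard Cauchy: the
# stable (not normal) law is the limit here, and the heavy tails
# survive averaging no matter how large n gets.
n, reps = 500, 5000
means = rng.standard_cauchy((reps, n)).mean(axis=1)

# Theoretical Cauchy tail: P(|X| > 10) = 1 - (2/pi)*arctan(10), ~0.063.
# A normal limit would put essentially zero mass out here.
print(round(float(np.mean(np.abs(means) > 10)), 3))
```

Swap in any finite-variance distribution and the same tail probability collapses toward zero as n grows, which is the classical CLT at work.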

Tachyooon 4 hours ago [-]
3blue1brown has a great series of videos on the central limit theorem, and it makes me wish there were something covering the generalised form in a similar format. I have a textbook on my reading list that covers it, but unfortunately I can't seem to find it or the title right now. (Edit: it's "The Fundamentals of Heavy Tails" by Nair, Wierman, and Zwart from 2022.)

Do you have any good sources for the physics angle?

nextos 13 minutes ago [-]
Yes, came here to say the same thing. It's important to tell people that the CLT makes strong assumptions.

Otherwise, they might end up underestimating rare events, with potentially catastrophic consequences. There are also CLTs for product and max operators, aside from the sum.

The Fundamentals of Heavy Tails: Properties, Emergence, and Estimation discusses these topics in a rigorous way, but without excessive mathematics. See: https://adamwierman.com/book

usgroup 5 hours ago [-]
https://en.wikipedia.org/wiki/Central_limit_theorem#The_gene...
hodgehog11 4 hours ago [-]
I thought the rise of power laws in physics is predominantly attributed to Kesten's law concerning multiplicative processes, e.g. https://arxiv.org/pdf/cond-mat/9708231
kgwgk 4 hours ago [-]
> find out

Finite?

jethkl 28 minutes ago [-]
The Fisher–Tippett–Gnedenko theorem is the extreme-values analogue of the CLT: if the properly normalized maximum of an i.i.d. sample converges, it must be Gumbel, Fréchet, or Weibull—unified as the Generalized Extreme Value distribution. It’s extremely general and underpins methods like wavelet thresholding and signal denoising—easy to demo with a quick simulation.
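For instance (a rough sketch of such a simulation; the log n centring is specific to the exponential case):

```python
import numpy as np

rng = np.random.default_rng(1)

# Maxima of n Exponential(1) draws, centred by log(n), converge to the
# standard Gumbel law with CDF exp(-exp(-x)): one of the three
# Fisher-Tippett-Gnedenko limit families.
n, reps = 1000, 10000
maxima = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

# Gumbel checks: mean ~0.577 (Euler-Mascheroni), P(X <= 0) = 1/e ~0.368
print(round(float(maxima.mean()), 2))
print(round(float(np.mean(maxima <= 0)), 2))
```

Heavier-tailed parents (e.g. Pareto) land in the Fréchet family instead, with a different normalization.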
globalnode 2 hours ago [-]
The definition under "A Brief Recap" seems incorrect. The sample size doesn't approach infinity, the number of samples does. I'm in a similar situation to the author, I skipped stats, so I could be wrong. Overall great article though.
k2enemy 16 minutes ago [-]
It is correct in the article. As the sample size approaches infinity, the distribution of the sample means approaches normal.

https://en.wikipedia.org/wiki/Central_limit_theorem

jaccola 1 hour ago [-]
Yes indeed, if the sample size approached infinity (and not the number of samples), you would essentially just be calculating the mean of the original distribution.
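A quick Python sketch of the distinction (my own, not from the article): growing the sample size n shrinks the spread of the sample means like 1/sqrt(n), collapsing them onto the population mean, while drawing more samples only sharpens the histogram.

```python
import numpy as np

def sd_of_sample_means(n, reps=5000, seed=2):
    # sd of the means of `reps` Exponential(1) samples of size n;
    # the CLT says this behaves like 1/sqrt(n) (population sd is 1)
    rng = np.random.default_rng(seed)
    return float(rng.exponential(size=(reps, n)).mean(axis=1).std())

for n in (10, 100, 1000):
    print(n, round(sd_of_sample_means(n), 3))  # shrinks toward 0
```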
lottin 4 hours ago [-]
Looking at the R code in this article, I'm having a hard time understanding the appeal of tidyverse.
gjf 3 hours ago [-]
Author here; I think I understand where you might be coming from. I find the functional nature of R, combined with pipes, incredibly powerful and elegant to work with.

OTOH, in a pipeline you're mutating/summarising/joining a data frame, and it can be really difficult to look at it and keep track of what state the data is in. I try my best to write in a way that makes the state of the data clear (hence the tables I spread throughout the post), but I do acknowledge it can be inscrutable.

lottin 3 hours ago [-]
A "pipe" is simply a composition of functions. Tidyverse adds a different syntax for doing function composition, using the pipe operator, which I don't particularly like. My general objection to Tidyverse is that it tries to reinvent everything but the end result is a language that is less practical and less transparent than standard R.
mi_lk 3 hours ago [-]
Can you rewrite some of those snippets in standard R w/o Tidyverse? Curious what it would look like
apwheele 1 hour ago [-]
I mean, for the main simulation I would do it like this:

    set.seed(10)
    n <- 10000; samp_size <- 60
    
    # Population: 10,000 draws from each of six distributions
    df <- data.frame(
        uniform = runif(n, min = -20, max = 20),
        normal = rnorm(n, mean = 0, sd = 4),
        binomial = rbinom(n, size = 1, prob = .5),
        beta = rbeta(n, shape1 = .9, shape2 = .5),
        exponential = rexp(n, .4),
        chisquare = rchisq(n, df = 2)
    )
    
    # One simulation: sample samp_size rows, take each column's mean
    sf <- function(df, samp_size) {
        sdf <- df[sample.int(nrow(df), samp_size), ]
        colMeans(sdf)
    }
    
    # 20,000 simulations; t() gives one row of six means per simulation
    sim <- t(replicate(20000, sf(df, samp_size)))
I am old, so I do not like tidyverse either; I can concede it's a matter of personal preference though. (Personally I do not agree with the lattice vs ggplot comment, for example.)
RA_Fisher 4 hours ago [-]
Why? The tidyverse is so readable, elegant, compositional, functional and declarative. It allows me to produce a lot more, at higher quality, than I could without it. ggplot2 is the best visualization software hands down, and dplyr leverages Unix’s famous point-free programming style (which reduces the surface area for errors).
lottin 4 hours ago [-]
I disagree. In this example tidyverse looks convoluted compared to just using an array and apply. ggplot2 is okay but we already had lattice. Lattice does everything ggplot2 does and produces much better-looking plots IMO.
RA_Fisher 47 minutes ago [-]
I like simplicity and I love a good base R idiom, but there's a lot less consistency in base R compared to the tidyverse (and that comes with a productivity penalty).

Lattice is really low-level. It's like doing vis with matplotlib (requires a lot of time and hair-pulling). Higher level interfaces boost productivity.

ekianjo 4 hours ago [-]
The equivalent in any other language would be an ugly, unreadable, inconsistent mess.
tucnak 4 hours ago [-]
Obligatory 3Blue1Brown reference

https://www.youtube.com/watch?v=zeJD6dqJ5lo

oriettaxx 4 hours ago [-]
and the Galton Board https://en.m.wikipedia.org/wiki/Galton_board

(yes, that Galton who invented eugenics)
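A Galton board is easy to simulate (a rough Python sketch, my own): each ball's final bin is Binomial(rows, 1/2), which the CLT bends toward a bell curve as the number of peg rows grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each ball bounces left/right with probability 1/2 at every peg, so
# its final bin is Binomial(rows, 1/2); approximately normal with
# mean rows/2 and sd sqrt(rows)/2 once rows is large.
rows, balls = 50, 100000
bins = rng.binomial(rows, 0.5, size=balls)

print(round(float(bins.mean()), 1))  # near rows/2 = 25
print(round(float(bins.std()), 2))   # near sqrt(rows)/2, ~3.54
```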

gjf 3 hours ago [-]
Very much an inspiration and resource when composing the post.