4 Comments

This is a great summary of a frustration I've had in modeling and simulation for years. The biases we code into our measures are often self-fulfilling. I always go through the assumptions of a study and ask: if this assumption weren't true, would the model still work?

If the answer is no, then it's not a good assumption.

Assumptions help to bracket complex problem spaces, but if an assumption has only a roughly 50/50 chance of holding, it needs to be baked into the model itself, not ignored.

A prime, non-AI example of this is climate models, which notoriously treat the cloud feedback cycle as static. The problem is that clouds are highly dynamic and react to small changes. Of course making them static will show something. If that same something disappears when you allow clouds to be clouds, then the finding didn't exist to begin with.

AI suffers the same problems.
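
Here's a minimal sketch of that "bake the uncertain assumption in" point, in Python. The toy model, parameter names, and numbers are all invented for illustration; it just contrasts fixing an uncertain feedback at one convenient value against sampling it over a plausible range and checking whether the finding survives.

```python
import random
import statistics

def run_model(feedback: float, forcing: float = 1.0, noise_sd: float = 0.05) -> float:
    """Toy response model: the 'finding' is the simulated response to a forcing.

    `feedback` amplifies (or damps) the forcing. Everything here is made up
    purely to illustrate the sensitivity check, not any real climate model.
    """
    return forcing * (1.0 + feedback) + random.gauss(0.0, noise_sd)

random.seed(0)

# Static assumption: fix the uncertain feedback at a single convenient value.
static_result = run_model(feedback=0.4)

# Baking the uncertainty in: sample the feedback over its plausible range
# (an arbitrary spread here) and look at the whole distribution of outcomes
# instead of one number.
sampled = [run_model(feedback=random.uniform(-0.4, 0.8)) for _ in range(10_000)]

print(f"static-assumption result: {static_result:.2f}")
print(f"with uncertainty baked in: mean={statistics.mean(sampled):.2f}, "
      f"sd={statistics.stdev(sampled):.2f}")
```

If the effect only shows up at the fixed value and washes out across the sampled range, it was an artifact of the assumption rather than a result.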


This is really great content, mate. Thanks for adding a spot of considered, well-reasoned thought to the wash of panic-driven analysis that's around at present.


That's very kind. Thank you


But what about the "Broken Neural Scaling Laws" paper?

arxiv.org/abs/2210.14891

arxiv.org/pdf/2210.14891.pdf
