The Blog of Why

By Richard D. Lange, December 2019

Not long ago I read Judea Pearl's Book of Why to see what all the fuss about causality was about. I'm glad I did. It's one of those books that can so thoroughly change the way you look at certain problems that you forget what it was like thinking in different terms before. I'm pretty sure I used to think of "confounding" as a mysterious evil force that could call any result into question. Having understood ...

Common Misconceptions about Hierarchical Generative Models (Part 3 of 3)

By Richard D. Lange, July 2019

This post is the last in a series on misconceptions, or not-quite-right intuitions, about hierarchical generative models, their pertinence to perception, and their relevance to representations. This post explicitly builds on the foundations of the previous two, so I highly recommend reading Part 1 and Part 2 if you haven't yet!

Intuition 6: it all boils down to simple priors, self-consistency, and marginal lik...

Common Misconceptions about Hierarchical Generative Models (Part 2 of 3)

By Richard D. Lange, March 2019

In the previous post I discussed the first 2 of my 7ish not-quite-true or misleading intuitions about hierarchical generative models. If you haven't read it yet, start there. This post picks up where that one left off.

Background: priors, average-posteriors, and linear Gaussian models

The ideas in this post all rely on a distinction between a model's prior and its average posterior. I find this distinction ...
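As a rough sketch of the distinction this excerpt refers to (in notation I've chosen here for illustration, not necessarily the post's): for a generative model with latents z and observations x, the prior is what the model assumes before seeing any data, while the average posterior is what you get by averaging per-observation inferences over the data you actually encounter.

% Minimal sketch of prior vs. average posterior (my notation; p_data is
% the true data distribution, not part of the model itself).
\begin{align*}
  \text{prior:} \quad & p(z) \\
  \text{posterior for one observation:} \quad & p(z \mid x) \;\propto\; p(x \mid z)\, p(z) \\
  \text{average posterior over data:} \quad & \bar{q}(z) \;=\; \int p_{\text{data}}(x)\, p(z \mid x)\, dx
\end{align*}

Averaging the posterior under the model's own marginal, \(\int p_{\text{model}}(x)\, p(z \mid x)\, dx\) with \(p_{\text{model}}(x) = \int p(x \mid z)\, p(z)\, dz\), always recovers the prior \(p(z)\) by self-consistency; so \(\bar{q}(z)\) matches the prior whenever the model's marginal matches the data distribution, and generally drifts away from it when the two differ.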

Common Misconceptions about Hierarchical Generative Models (Part 1 of 3)

By Richard D. Lange, January 2019

It's generally understood that the brain processes the visual world hierarchically, starting with low-level features like patches of color, boundaries, and textures, then proceeding to a representation of whole scenes consisting of objects, people, and their relations. We also have reason to believe that the brain has learned a probabilistic generative model of the world, in which data come in through the senses, analogous to raw pixels from a camera, and percepts correspond to ...

The New Behaviorism of Deep Neural Networks

By Richard D. Lange, October 2018

About a month ago, I had the chance to attend the CCN conference in Philadelphia. This post is not about all the great talks and posters I saw, the new friends I made, or the fascinating and thought-provoking discussions I had. It's a great conference, but this is a post about a troubling and ironic theme that I heard more than a few times from multiple speakers. The troubling part is that behaviorism is making a comeback. The ironic part is...