So, this was going around lately: researchers put people in an MRI scanner while showing them video, then correlated the scans with the footage to try to judge what they were seeing:
The concept reminded me of some related previous work with cats:
At first, I didn’t see why they added the extra step of building an averaged amalgam of videos instead of directly showing what the computer thought the correlation was, as in the cat video. But they say they correlated against “shapes, edges and motion,” and displaying all of that directly would get visually complex, I suppose.
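For the curious, the "averaged amalgam" idea can be sketched in a few lines. This is a hypothetical illustration with synthetic data, not the researchers' actual pipeline: take the candidate clips whose features best correlated with the measured brain activity, and blend their frames together, weighted by correlation score.

```python
import numpy as np

# Hypothetical sketch of an "averaged amalgam": blend the best-matching
# clips' frames, weighted by how well each clip correlated with the
# measured brain activity. All data below is synthetic.

rng = np.random.default_rng(0)

# Pretend we have 100 candidate clips: 16 frames each, 32x32 grayscale.
clips = rng.random((100, 16, 32, 32))

# Pretend correlation scores between each clip and the brain activity.
scores = rng.random(100)

# Keep the 10 best matches and average them, weighted by score.
top = np.argsort(scores)[-10:]
weights = scores[top] / scores[top].sum()

# Contract the weight vector against the clip axis: result is one
# blended 16-frame clip of shape (16, 32, 32).
amalgam = np.tensordot(weights, clips[top], axes=1)

print(amalgam.shape)  # (16, 32, 32)
```

The blurry, ghostly look of the published reconstructions falls out of exactly this kind of weighted averaging: wherever the top matches disagree, the detail washes out.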
The visual texture of the averaged videos reminded me of the work of this guy, who averaged Playboy playmates and late-night talk show hosts: