Originally posted on March 27, 2015.
At a recent foundation consultation at Stanford, I enjoyed meeting Andrew Meltzoff, the amiable and articulate co-director of the University of Washington’s Institute for Learning and Brain Sciences in my home city (where he lives but a short walk from my former high school).
Meltzoff is known to psychology teachers and students for his many studies of infant imitation, including his classic 1977 Science report on 2- to 3-week-old infants imitating his facial gestures. It was, he reported, a powerful experience to stick out his tongue and have newborns do the same. “This demonstrates to me the essential socialness of human beings.”
I’ve always wondered what newborns are really capable of perceiving visually, and he reminded me that it’s not much—but that their acuity is sharpest at the distance between their mother’s breast and eyes, which was also the distance between his face and the infants’ eyes.
His lab is now reading infants’ brains using the world’s only MEG (magnetoencephalography) machine designed for infant brain imaging, which measures the brain’s magnetic activity more finely than is possible with EEG.
He reports that “When a brain sees, feels, touches, or hears, its neuronal activity generates weak magnetic fields that can be pinpointed and tracked millisecond-by-millisecond by a MEG machine.” That is allowing Meltzoff and his colleagues to visualize an infant’s working brain as the infant listens to language, experiences a simple touch on the hand, or (in future studies) engages in social imitation and cognitive problem solving.
On the horizon, he envisions future studies of how children develop empathy, executive self-control, and identity. He also anticipates exploring how children’s brains process information from two-dimensional digital media versus their three-dimensional everyday world, and how technology can best contribute to children’s development. In such ways, they hope to “help children maximize their learning capabilities.”