This image could be hung in a gallery, but it started life as a tiny chunk of a woman’s brain. In 2014, a woman undergoing surgery for epilepsy had a small piece of her cerebral cortex removed. That cubic millimeter of tissue has allowed Harvard and Google researchers to produce the most detailed wiring diagram of the human brain the world has ever seen.
Biologists and machine-learning experts spent 10 years building an interactive map of the brain tissue, which contains approximately 57,000 cells and 150 million synapses. It shows cells that wrap around themselves, pairs of cells that seem mirrored, and egg-shaped “objects” that, according to the research, defy categorization. This mind-blowingly complex diagram is expected to help drive scientific research forward, from deepening our understanding of human neural circuits to informing potential treatments for neurological disorders.
“If we map things at a very high resolution, see all the connections between different neurons, and analyze that at a large scale, we may be able to identify rules of wiring,” says Daniel Berger, one of the project’s lead researchers and a specialist in connectomics, which is the science of how individual neurons link to form functional networks. “From this, we may be able to make models that mechanistically explain how thinking works or memory is stored.”
Jeff Lichtman, a professor of molecular and cellular biology at Harvard, explains that researchers in his lab, led by Alex Shapson-Coe, created the brain map by taking subcellular pictures of the tissue using electron microscopy. The tissue from the 45-year-old woman’s brain was stained with heavy metals, which bind to lipid membranes in cells. This was done so that cells would be visible under an electron microscope, as heavy metals scatter electrons and make the stained membranes stand out.
The tissue was then embedded in resin so that it could be cut into extremely thin slices, just 34 nanometers thick (for comparison, a typical piece of paper is around 100,000 nanometers thick). This was done, says Berger, to make the mapping easier by turning a 3D problem into a 2D one. The team then took electron microscope images of each 2D slice, which amounted to a mammoth 1.4 petabytes of data.
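A rough back-of-the-envelope calculation shows why the dataset is so large. The slice thickness (34 nanometers), the roughly one-cubic-millimeter volume, and the 1.4-petabyte total come from the article; the lateral pixel size and bytes per pixel in the sketch below are assumptions, chosen only to show that the raw numbers land in the petabyte range.

```python
# Back-of-the-envelope estimate of the imaging data volume.
# From the article: ~1 mm^3 of tissue, 34 nm slice thickness, ~1.4 PB of data.
# Assumed for illustration only: ~4 nm lateral pixel size, 1 byte per grayscale pixel.

mm = 1e-3                        # one millimeter, in meters
slice_thickness = 34e-9          # 34 nanometers, in meters
pixel_size = 4e-9                # assumed lateral resolution, meters per pixel
bytes_per_pixel = 1              # assumed 8-bit grayscale

num_slices = mm / slice_thickness        # ~29,000 slices through 1 mm of tissue
pixels_per_side = mm / pixel_size        # ~250,000 pixels along each edge of a slice
total_bytes = num_slices * pixels_per_side ** 2 * bytes_per_pixel

print(f"slices: {num_slices:,.0f}")
# Lands in the low single-digit petabytes, consistent with the ~1.4 PB reported.
print(f"raw data: {total_bytes / 1e15:.1f} petabytes")
```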
Once the Harvard researchers had these images, they did what many of us do when faced with a problem: They turned to Google. A team at the tech giant led by Viren Jain aligned the 2D images using machine-learning algorithms and produced 3D reconstructions with automatic segmentation, in which components within an image (for example, different cell types) are automatically differentiated and categorized. Some of the segmentation required what Lichtman called “ground-truth data”: Berger, who worked closely with Google’s team, manually traced portions of the tissue to further inform the algorithms.
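Google’s reconstruction relies on large machine-learning models trained on that ground-truth tracing, and none of that pipeline is reproduced here. Purely to illustrate the two steps the passage describes, the sketch below aligns consecutive 2D slices with simple phase correlation and then labels connected bright regions as stand-in “segments.” The functions, the threshold, and the random input data are all assumptions made for the example.

```python
# Toy illustration of (1) aligning 2D slices into a 3D volume and
# (2) segmenting that volume into separate objects. Not the actual pipeline.
import numpy as np
from scipy import ndimage

def estimate_shift(fixed, moving):
    """Estimate the (row, col) translation between two slices via phase correlation."""
    f = np.fft.fft2(fixed)
    m = np.fft.fft2(moving)
    cross = f * np.conj(m)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint back to negative shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

def align_stack(slices):
    """Shift every slice so it lines up with the previous one, then stack into 3D."""
    aligned = [slices[0]]
    for s in slices[1:]:
        dy, dx = estimate_shift(aligned[-1], s)
        aligned.append(ndimage.shift(s, (dy, dx), order=1))
    return np.stack(aligned)             # a 3D volume: (slice, row, col)

def segment(volume, threshold=0.5):
    """Very crude 'segmentation': label connected bright regions in 3D."""
    labels, count = ndimage.label(volume > threshold)
    return labels, count

# Usage with random stand-in data (real slices would be electron-microscope images).
rng = np.random.default_rng(0)
fake_slices = [rng.random((128, 128)) for _ in range(5)]
volume = align_stack(fake_slices)
labels, count = segment(volume)
print(f"{count} candidate objects found")
```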
Digital technology, Berger explains, enabled him to see all the cells in this tissue sample and color them differently depending on their size. Traditional methods of imaging neurons, such as coloring samples with a chemical known as the Golgi stain, which has been used for over a century, leave some elements of nervous tissue hidden.
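The article doesn’t describe the project’s visualization software, so the following is only a minimal sketch of the idea Berger describes: given a labeled segmentation volume (for instance the toy `labels` array from the previous sketch), measure each object’s size in voxels and assign it a color on a simple ramp. The log scaling and the blue-to-red ramp are arbitrary choices for illustration, not the team’s rendering.

```python
# Sketch of coloring segmented cells by size, assuming a 3D array of integer labels.
import numpy as np

def color_by_size(labels):
    """Map each labeled object to an RGB color scaled by the log of its voxel count."""
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    log_sizes = np.log(counts.astype(float))
    span = log_sizes.max() - log_sizes.min()
    t = (log_sizes - log_sizes.min()) / span if span > 0 else np.zeros_like(log_sizes)
    # Simple blue-to-red ramp: small objects blue, large objects red.
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)
    return {int(i): tuple(map(float, c)) for i, c in zip(ids, rgb)}

# Example: reuse the toy `labels` volume from the previous sketch.
# palette = color_by_size(labels)
# print(list(palette.items())[:3])
```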