- Scientists reveal that the brain can retain the equivalent of 4.7 B books
- The human brain can hold one petabyte or 1,000,000,000,000,000 bytes
- A petabyte equates to 4.7 B books, 670 M web pages, or 13.3 years of HD TV recordings
The human brain can retain the equivalent of 4.7 billion books or 670 million web pages, scientists have revealed.
In an article written by Harry Readhead of Metro, it was disclosed that, on average, one synapse can hold roughly 4.7 bits of information. This means the human brain can hold about one petabyte, or 1,000,000,000,000,000 bytes.
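A rough back-of-envelope check shows how the per-synapse figure scales up to a petabyte. The 4.7 bits per synapse is from the article; the total synapse count is an assumed order-of-magnitude value (published estimates vary widely), used here only for illustration:

```python
# Back-of-envelope check of the petabyte figure. The synapse count below is
# an ASSUMPTION for illustration, not a figure from the article; published
# estimates of whole-brain synapse counts span roughly 1e14 to 1e15+.
BITS_PER_SYNAPSE = 4.7        # per-synapse capacity reported in the article
SYNAPSES = 1.7e15             # assumed total synapse count

total_bits = BITS_PER_SYNAPSE * SYNAPSES
total_bytes = total_bits / 8  # 8 bits per byte

print(f"{total_bytes:.1e} bytes")  # on the order of 1e15 bytes, i.e. ~1 petabyte
```

Under these assumptions the numbers land at roughly 10^15 bytes, consistent with the "at least a petabyte" estimate quoted later in the article.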
A petabyte equates to 4.7 billion books, 670 million web pages, 13.3 years of HD TV recordings, or 20 million four-drawer filing cabinets filled with text.
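The equivalences above can be sanity-checked by working backwards: dividing one petabyte by each item count gives the average size per item that the comparison implies. The item counts are from the article; the derived per-item sizes are what the arithmetic implies, not independently sourced figures:

```python
# Derive the implied average size of each item in the article's comparisons.
# Item counts come from the article; per-item sizes are just the quotient.
PETABYTE = 1_000_000_000_000_000  # 1e15 bytes, as defined in the article

items = {
    "book": 4.7e9,
    "web page": 670e6,
    "four-drawer filing cabinet of text": 20e6,
}

for name, count in items.items():
    print(f"one {name}: ~{PETABYTE / count:,.0f} bytes")
# A book comes out to ~213 kB and a web page to ~1.5 MB, both plausible.

# The HD TV comparison implies a sustained bitrate:
seconds = 13.3 * 365.25 * 24 * 3600          # 13.3 years in seconds
rate_mbps = PETABYTE * 8 / seconds / 1e6     # ~19 Mbit/s, a plausible HD bitrate
print(f"implied HD bitrate: ~{rate_mbps:.0f} Mbit/s")
```

All four comparisons turn out to be mutually consistent with a single 10^15-byte total.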
US scientists measured the storage capacity of synapses, the brain connections responsible for storing memories.
“The discovery is a real bombshell in the field of neuroscience,” said Professor Terry Sejnowski.
“We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power,” he said.
“Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web,” added Sejnowski, who is from the Salk Institute for Biological Studies.
Moreover, it was disclosed that 50 petabytes could hold the entire written works of humankind, from the beginning of recorded history, in all languages.
More than one could imagine
Previously, each synapse was thought to be capable of storing just one to two bits for short- and long-term memory in the hippocampus.
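The jump from one or two bits to 4.7 bits per synapse can be read as a jump in the number of distinguishable synaptic strength levels, since n distinguishable states store log2(n) bits. The figure of about 26 size categories is reported in coverage of the Salk study, but treat the exact count here as an assumption:

```python
import math

# n distinguishable states store log2(n) bits. 4.7 bits corresponds to
# roughly 26 distinguishable synapse sizes (the 26-category count is an
# assumption drawn from coverage of the Salk study, not from this article).
def bits_for_states(n: int) -> float:
    return math.log2(n)

print(f"{bits_for_states(26):.2f} bits")  # log2(26) = 4.70
print(f"{bits_for_states(2):.2f} bits")   # the old 1-bit (on/off) picture
print(f"{bits_for_states(4):.2f} bits")   # ...or 2 bits with 4 levels
```

Going from 2-4 levels to ~26 levels per synapse is the "order of magnitude of precision" Sejnowski describes.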
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” noted Sejnowski.
The findings also offer a valuable explanation for the brain's surprising efficiency. The waking adult brain runs on only about 20 watts of continuous power, about as much as a very dim light bulb.
The Salk discovery could help computer scientists build ultraprecise but energy-efficient computers, particularly ones that employ "deep learning" and artificial neural nets, techniques capable of sophisticated learning and analysis such as speech recognition, object recognition and translation.