In this essay, I want to examine the relationships, and the fundamental differences, between the wetware of the human mind and the hardware of the artificial intelligence we hope to create.
Pumping You Up
Recent research from Natalia A. Goriounova's team at the Vrije Universiteit Amsterdam, entitled "A Cellular Basis of Human Intelligence," inspired one vector for the ideas I'll discuss in this brief essay about artificial intelligence.
The Dutch team obtained T1-weighted MRI scans from hundreds of subjects prior to surgery and, using voxel-based morphometry, found a positive correlation between greater cortical thickness of the temporal lobe and higher IQ scores. They also found that both the total dendritic length (TDL) and the complexity of an individual's pyramidal neurons strongly correlate with how smart one is.
Turns out that, metaphorically speaking, being thick-headed is actually a good thing in terms of IQ. The synapses are actually larger in smart people, in much the same way that Arnold Schwarzenegger's muscles are larger than those of most humans. As with other muscles, such extraordinary results are in part due to working out, and in part due to genetics.
Information Substrates
Creating AI is both a software and a hardware challenge. The way this brain research relates to artificial intelligence is that, unlike the human brain, an artificial intelligence isn't necessarily container dependent. Human intelligence is limited by the literal size of our cranium; an AI does not share this hardware limitation. It's far more extensible.
While we literally cannot stuff any more molecules into our heads (thick or not), an artificial intelligence - especially one which “escapes the box” (and Max Tegmark has detailed how this might occur in a number of ways) - most assuredly could.
Which brings us to the question of the physical growth of the hardware substrate of an AI system, and how that might theoretically occur. Two things came to mind when I began considering this. First, methods of data storage, and second, the chemical nature of the substrate itself.
So let’s talk for a moment about data storage methods. Consider the following mediums:
+ DVD – each bit corresponds to whether there is or isn’t a microscopic divot in the plastic surface as read by a laser
+ Hard Drive – each bit corresponds to a point on the surface being either negatively or positively magnetized
+ RAM – each bit corresponds to the positions of certain electrons thereby determining whether a micro capacitor is charged
+ Optical fibre – each bit corresponds to a laser beam being strong or weak at a given time
+ Router – each bit corresponds to voltages
+ Wireless network – each bit corresponds to radio waves
+ Writing – each bit corresponds to a molecule of ink on a piece of paper

We have a tendency to think about information in terms of these storage methods, but in truth, information has a life of its own, independent of its physical substrate. It can be stored on any number of materials. Ultimately, you don't care how these bits of information are represented as physical objects. Therefore we should free ourselves from our present constraints when considering where sentience can be "stored" too.
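To make that substrate independence concrete, here's a minimal Python sketch (the message and encodings are purely illustrative, not taken from any real format) showing the same bits rendered as several different "physical" representations:

```python
# A minimal sketch of substrate independence: the same information,
# rendered as several different "physical" encodings. The message and
# helper names below are illustrative, not from any particular system.

message = "sentience"
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

# DVD-style: pit / no pit
dvd = bits.replace("1", "pit ").replace("0", "land ")

# Hard-drive-style: magnetic polarity
disk = ["+" if b == "1" else "-" for b in bits]

# Writing-style: just the characters on "paper"
paper = message

print(bits[:32], "...")           # the raw bit pattern
print(dvd[:40], "...")            # same bits as pits and lands
print("".join(disk[:32]), "...")  # same bits as magnetic polarities
print(paper)                      # same bits as ink on paper
```

However the bits are embodied, the information they carry is identical.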
Evolving Memory
Our brains are basically an evolved memory device. Back in 2016, Philipp Holliger's team at Cambridge was able to make an RNA molecule that could encode 412 bits of genetic information. It is thought that early life began with short, self-replicating snippets like that. The living organism with the smallest data store is the bacterium Candidatus Carsonella ruddii, which contains roughly 40 kilobytes of information, whereas our human DNA stores about 1.6 GB – comparable to a downloaded movie. Meanwhile your brain as a whole stores about 10 GB electrically and 100 terabytes chemically and biologically.
Computers vastly exceed our memory capacity, and at ever-lower cost. During the past 60 years, memory has gotten roughly 1,000 times cheaper every 20 years. Hard drives are 100 million times cheaper than they were 60 years ago, and RAM has gotten 10 trillion times cheaper. Indeed, if real estate prices had dropped at a similar rate, you could buy New York – all of it – for just 10¢.
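As a rough sanity check on those figures (my own back-of-the-envelope arithmetic, not something from the sources above), compounding a 1,000× drop every 20 years over 60 years lands in the same ballpark:

```python
# Back-of-the-envelope check: if memory gets ~1,000x cheaper every
# 20 years, how much cheaper is it after 60 years?
factor_per_20_years = 1_000
periods = 60 / 20
total_drop = factor_per_20_years ** periods
print(f"Implied drop over 60 years: {total_drop:.0e}x")  # ~1e9

# The figures quoted above (1e8 for hard drives, 1e13 for RAM)
# bracket that rough estimate, which is all this kind of
# order-of-magnitude reasoning is meant to show.
```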
Where we differ from computers is in how memory is retrieved. A computer at present is rigid, and looks up "where" information is stored in a substrate, whereas our brains operate more like a search engine, determining "what" piece of information is stored, and how it RELATES to other things. Auto-associative memory recalls by relation rather than by address. This works for any physical system with multiple stable states, i.e., a network – in the case of our brains, a collection of interconnected neurons and synapses. In 1982, John Hopfield showed that you can store about 138 memories for every 1,000 neurons without causing major confusion.
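To make "recall by relation rather than by address" concrete, here's a minimal auto-associative sketch in Python, in the style of a Hopfield network (the patterns and sizes are arbitrary choices of mine): a few random patterns are stored in the connection weights, and a corrupted cue settles back onto the stored memory it most resembles.

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory (illustrative sketch).
# Memories live in the weights; recall starts from a noisy cue,
# not from an address.

def train(patterns):
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)          # Hebbian outer-product rule
    np.fill_diagonal(w, 0)           # no self-connections
    return w / patterns.shape[0]

def recall(w, cue, steps=10):
    state = cue.copy().astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)   # synchronous update
        state[state == 0] = 1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))   # 3 memories, 100 "neurons"
w = train(patterns)

cue = patterns[0].copy()
flip = rng.choice(100, size=20, replace=False)  # corrupt 20% of the cue
cue[flip] *= -1

restored = recall(w, cue)
print("overlap with stored memory:", int(np.sum(restored == patterns[0])), "/ 100")
```

Three patterns across a hundred neurons sits comfortably under Hopfield's ~138-per-1,000 limit, so the damaged cue snaps back to the original.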
Which brings us to computer neural networks. Basically, a "computation" is a transformation of one memory state into another. A "function" is just a node that churns inputs into outputs: feed it a particular value and it will consistently return the same result every time. Some functions are trivially simple, while others can be incredibly complex. When combined into graphs, they become extremely powerful, very quickly. It's one reason why I'm a big fan of directed acyclic graphs for use in UX/UI development.
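As a toy illustration of how trivially simple functions become powerful once wired into a graph, here's a short Python sketch (node names and values are my own invention) that evaluates a small directed acyclic graph of one-line functions:

```python
# Toy sketch: a few trivial functions, wired into a directed acyclic graph.
# Each node is deterministic -- same inputs, same output, every time.

from graphlib import TopologicalSorter

nodes = {
    "a": (lambda: 3, []),                    # constant inputs
    "b": (lambda: 4, []),
    "square_a": (lambda a: a * a, ["a"]),    # trivial functions...
    "square_b": (lambda b: b * b, ["b"]),
    "sum": (lambda x, y: x + y, ["square_a", "square_b"]),
    "root": (lambda s: s ** 0.5, ["sum"]),   # ...composed into something useful
}

def evaluate(nodes):
    graph = {name: deps for name, (_, deps) in nodes.items()}
    results = {}
    for name in TopologicalSorter(graph).static_order():
        fn, deps = nodes[name]
        results[name] = fn(*(results[d] for d in deps))
    return results

print(evaluate(nodes)["root"])   # 5.0 -- a hypotenuse out of six tiny nodes
```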
Scientists have theorized that the ultimate limit of a computational substrate is some 33 orders of magnitude (10 to the 33rd power) beyond where we are now. Consider how many advances we've already seen in the past 100 years, and then factor that figure into your expectations. In another hundred years, the world will be completely unrecognizable.
The ultimate parallel computation, of course, is quantum computing. For matter to learn, it must be able to rearrange itself independent of precompiled code. That, ironically, may be one of the better definitions of sentience.
Non-Carbon-Based Life & Data Storage
Being a science fiction nerd, all this pondering brought to mind two shows I dearly love, "Babylon 5" and "Star Trek: The Next Generation." In B5, data is stored on crystals, something I found fascinating back in the day when CDs were the prominent means of storage. Additionally, I was reminded of the "Home Soil" episode of Star Trek: TNG, in which a worker at a terraforming colony is killed by a malfunctioning laser.
In the episode, Commander Data discovers that the code of the laser was re-written to fire upon the terraformers, and a crystal near the device is exhibiting peculiar light and radiation patterns, so they bring the crystal onboard the Enterprise to study it. It turns out that the crystal is actually alive, and has evolved to live in a thin layer of saline water just below the surface of the planet, which acted like a conductor for the organism. The terraformers were accidentally killing them with their drilling, so the crystals fought back by recoding the drill, whose computer controls were also made of silica.
On the planet, the silica crystals functioned sort of like a grove of Aspen trees - one crystal was not sentient, but when connected, these individual crystals formed a single organism with a formidable hive-mind intellect. So when they were removed from their home, they grew and took over the computer system of the Enterprise, in order to try to communicate their need to get back home.
That got me thinking: if our AI had a hardware growth capability similar to crystal growth, it could in essence "gain additional mental muscles" - and on steroids. In a matter of minutes on an exponential growth curve, such a crystalline creature could grow so fast that "ugly bags of mostly water" (as the crystalline creatures in Star Trek referred to humans) simply couldn't compete with it.
Generative Brain Design
How would such a growth pattern be accomplished in an efficient manner? This brought to mind another field which interests me, generative design. Being a 3D nerd as well as a science fiction nerd, I began to consider how the field of 3D printing is altering our design constructs.
In generative design, the real medium isn't paint, wood, metal, or marble, but rather computation. Generative designers take processes they find in nature and turn them into design tools. For example, the branching patterns of dendritic crystals are created by diffusion-limited aggregation. The basic state of matter is diffuse motion. Diffusion-limited aggregation is just a simple calculation which describes what happens when particles of matter floating around in a diffuse manner collide with, and stick to, each other.
The fractal branching that results can be found all throughout nature, such as in coral growth. By observing this, generative designers build mathematical algorithms that computationally describe the process, and those calculations can in turn be pruned in various ways to fabricate very complex structures from relatively simple math. In essence, rather than drawing structures, they grow them. Instead of creating static designs, they interact with, and manipulate, complex systems.
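To show just how simple that calculation really is, here's a bare-bones diffusion-limited aggregation sketch in Python (grid size, particle count, and the text rendering are arbitrary choices of mine): particles wander randomly until they touch the growing cluster, stick, and a dendritic branching pattern emerges.

```python
import random

# Bare-bones diffusion-limited aggregation on a grid (illustrative sketch).
# Particles wander randomly; the moment one touches the cluster, it sticks.

SIZE = 61
grid = [[False] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = True            # seed "crystal" in the centre

def touches_cluster(x, y):
    return any(
        0 <= x + dx < SIZE and 0 <= y + dy < SIZE and grid[y + dy][x + dx]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )

random.seed(1)
for _ in range(300):                         # release 300 wandering particles
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    while not touches_cluster(x, y):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x = min(max(x + dx, 0), SIZE - 1)    # diffuse, clamped to the grid
        y = min(max(y + dy, 0), SIZE - 1)
    grid[y][x] = True                        # collide and stick

for row in grid[::2]:                        # crude text rendering of the branches
    print("".join("#" if cell else "." for cell in row[::2]))
```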
Part of what generative designers are exploring is the relationship between growth and form. Specifically, they're observing how nature uses differential growth (different parts intentionally growing at different speeds) to create a wide variety of forms. For instance, a plant can change its shape and the direction it grows in through differential elongation of the cells on either side of a stem, growing faster on one side relative to the other in response to light. So you have a gradient of growth, in response to a gradient of an environmental signal.
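To make the growth-gradient idea concrete, here's a toy sketch in Python (all names and rates are invented for illustration, not taken from any botanical model): a stem built from short segments, where the shaded side elongates slightly faster each step, so the whole stem curves toward the light.

```python
import math

# Toy sketch of differential growth: a "stem" made of short segments.
# Cells on the shaded side elongate faster than cells on the lit side,
# so each segment turns slightly toward the light. All rates are made up.

LIGHT_FROM_RIGHT = True
segments = 30
segment_length = 1.0
bend_per_step = math.radians(3)    # extra turn caused by the growth gradient

x, y, heading = 0.0, 0.0, math.pi / 2   # start growing straight up
points = [(x, y)]
for _ in range(segments):
    if LIGHT_FROM_RIGHT:
        heading -= bend_per_step    # shaded (left) side grows faster -> bend right
    x += segment_length * math.cos(heading)
    y += segment_length * math.sin(heading)
    points.append((round(x, 2), round(y, 2)))

print(points[::5])   # the stem traces a curve toward the light
```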
Which raises the question: "What if an AI neural network could direct itself to grow differentially in a crystalline substrate, via similar fractal branching patterns?"
Controlling the Beast
AIs are not necessarily any more dangerous than humans, but let's face it, even if they never surpass us, humans are dangerous enough. How do you manage this danger in an AI? Shackle it? Is that even possible? Is it even moral to cage a superior intellect that way, even if it is a non-human one? If you have an AI with the same level of intelligence as humans, but which does its thinking a million times faster than you can, the AI can do nearly 3,000 years' worth of thinking in the time it takes you to devote a single day to a problem. And clock speed is dependent upon the hardware substrate, which we've already established is limited in humans by cranium size, but not so for AI.
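The arithmetic behind that gap is worth spelling out (the figures here are my own rough assumptions, chosen only to match the million-fold example above):

```python
# Rough arithmetic behind the "thinking speed" gap (illustrative figures).
speedup = 1_000_000          # AI thinks a million times faster
human_time_days = 1          # you spend one day on the problem
ai_equivalent_years = speedup * human_time_days / 365.25
print(f"{ai_equivalent_years:,.0f} subjective years of thinking")  # ~2,738
```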
By merely giving it one-time access to the internet, we could be talking about a computational advance orders of magnitude faster than we can imagine, and certainly faster than we can unplug it before it's too late. Basically, when your cute little Chihuahua suddenly balloons into a 50-foot St. Bernard, how do you control him? And how can you determine whether such a massive intellect has goals divergent from your own, before it achieves super-intelligence? Again, these are questions we need to find answers for, prior to even remotely considering letting such a general intelligence out into the wild.