The push for superintelligent robots may still seem “over the horizon” to many, but a number of fields are producing the hardware and software that could make the underlying mechanisms possible.
So robots are a next big thing. Artificial intelligence software can now run on consumer CPUs with eight individual cores and RAM in the 64 GB range. The CPU’s L2 cache buffers the data being actively worked on, while RAM stores and retrieves information far more quickly than hard-drive space. And now we have consumer access to solid-state drives, which are much faster than traditional hard drives and have no moving parts. Nvidia’s “NVLink” interconnect bridges the connection between two of the most important components of modern computing, the central processing unit (CPU) and the graphics processing unit (GPU); it was designed for supercomputing and would be an ideal candidate for real-time simulation at the computer level. Quantum processors, meanwhile, promise qubits that can hold a superposition of 1 and 0 simultaneously, which for certain problems means computing can happen much faster than before.
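The storage-hierarchy claim above (RAM beats disk by a wide margin) is easy to sanity-check. The following is a rough, unscientific sketch, not a real benchmark: it times one pass over a few megabytes already resident in memory against reading the same bytes back from a temporary file. Operating-system caching can flatter the disk number, so treat the output as illustrative only; all names here are my own.

```python
import os
import tempfile
import time

def time_memory_read(data: bytes) -> float:
    """Time touching every byte of data that is already in RAM."""
    start = time.perf_counter()
    total = sum(data)  # forces a pass over the whole buffer
    elapsed = time.perf_counter() - start
    assert total >= 0
    return elapsed

def time_disk_read(path: str) -> float:
    """Time reading the same bytes back from the filesystem."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    elapsed = time.perf_counter() - start
    assert len(data) > 0
    return elapsed

# Write a 4 MB payload to a temporary file, then compare the two passes.
payload = os.urandom(4 * 1024 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

ram_t = time_memory_read(payload)
disk_t = time_disk_read(path)
os.unlink(path)  # clean up the temporary file
print(f"RAM pass: {ram_t:.4f}s, disk pass: {disk_t:.4f}s")
```

On a machine with an SSD the gap will be far smaller than against a spinning hard drive, which is exactly the point the paragraph above is making.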
In addition to hardware, software is taking on the new opportunities that hardware opens up. Game developers seem to be pushing the potential the furthest. Real-time physics simulators can now compute responses as they happen, rather than playing back behavior that was pre-programmed. This is the beginning of what we might call “artificial intelligence”. Is it already here? How much programming has to be done before software can “figure things out” for itself? And when we get a working self-contained program/operating system/kernel that can take control of its own hardware, will it ever actually be “self-aware”?
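To make the “computed, not pre-programmed” distinction concrete, here is a minimal sketch of one real-time physics step: a ball falling under gravity and bouncing off the floor. Nothing is scripted in advance; each frame’s state is derived from the previous frame at runtime, using semi-implicit Euler integration at a fixed 60 Hz timestep (a common pattern in game engines, though the constants and names here are my own choices).

```python
GRAVITY = -9.81    # m/s^2, downward acceleration
DT = 1.0 / 60.0    # fixed timestep: one frame at 60 Hz
RESTITUTION = 0.8  # fraction of speed kept after hitting the floor

def step(height: float, velocity: float):
    """Advance the ball by one frame; the response is computed, not scripted."""
    velocity += GRAVITY * DT   # update velocity first (semi-implicit Euler)
    height += velocity * DT    # then position
    if height < 0.0:           # collision response decided on the fly
        height = 0.0
        velocity = -velocity * RESTITUTION
    return height, velocity

# Drop the ball from 2 m at rest and simulate four seconds of frames.
h, v = 2.0, 0.0
for _ in range(240):
    h, v = step(h, v)
print(f"height after 4 s: {h:.3f} m")
```

Because the collision branch fires whenever the state demands it, the same loop handles any starting height or velocity without new code, which is the property the paragraph above is pointing at.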
The Japanese automaker Honda produced a remarkable breakthrough in the robotics field with “Asimo”. But Asimo is slow, and it cannot be compared to a human for a number of reasons. If we were to give Asimo a hardware upgrade, with software to take advantage of it, Asimo would be able to process its surroundings more quickly, understand spoken words more accurately, and perhaps even express synthetic emotions. But the difficulty is that we are approaching the “uncanny valley”, a phrase coined by Masahiro Mori in 1970. Once a robot is nearly human-like, we begin to seriously analyze every minute detail of the object in front of us. We look at things like response time, as well as human-like attributes of communication such as sarcasm and subtle gestures. Part of what makes a human a human is his experience in his own body. A robot, were it to become “self-aware”, would find itself at a drastic disadvantage to its human companions.
The difficulty with trying to create human-like robots involves more than just a colossal budget (the first hurdle). It will take a massive collection of anthropologists, psychologists, sociologists, and historians to teach such a robot. And this is only the explicit knowledge. What about the implicit knowledge that comes from growing up and experiencing life? How does the robot consider what its own existence is? Is that a corner of awareness we simply don’t give it? If we were ever to reach a point where robots interacted independently, what would they have to contribute beyond what we have given them? And since science is moving along a trajectory of seemingly blind discovery, what will the robots decide to “compute” once they have access to mankind’s internet? Will they have developed the moral character to be considerate, respectful, and non-violent? If we instead connected them to a smaller store of human knowledge (an intranet), would that give them enough to learn what they need in order to develop rightly?
These “creatures” would be toddlers with very massive brains. They would be immature despite their advanced computational functions. They would need to develop a sense of moral character. But the question is: can robots develop character? What we run into is a situation that science is not equipped to solve. In fact, psychologists would not be able to do an exceptional job either, because their techniques are relatively systematic as well, relying on statistics gathered from human responses rather than on the causes of those responses. So the place we come to is deeper. We come to the fields of philosophy and theology. What is at the very core of man? What is a soul? How is the soul linked to the human brain and the other vital organs? What would happen if we were to remove the soul from a person? How much is God considered in the field of robot experimentation?
Of course, the correct configuration of hardware and software that might make a robot “self-aware” lies in the future (if it is possible at all). And since the budget is so steep, there needs to be a logistical reason to justify the cost. The mere premise of “discovery” may not attract the $500 billion (or more) needed to give the robot the hardware and precisely tailored software for everything “it needs”.