Where AI Is, and Where It Still Needs To Go
AI technology may be coming soon, but not in the way we think. Artificial intelligence has appeared in movies and books for decades, where we see highly intelligent robots helping, or taking over, the human race in multiple capacities. I will use J.A.R.V.I.S, an AI program from the Avengers movie series, as a fictional example of what AI could look like, and compare him to where AI technology is now. The big question is, will J.A.R.V.I.S become a reality sooner than we think?
In the context of “AI Gone Awry” by Peter Kassan, J.A.R.V.I.S is an example of connectionist AI. Connectionism aims to create human-like output by modeling human-like processes inside. J.A.R.V.I.S acts and talks just as a human would, and even shows a sense of humor through sarcasm. He downloads and learns from information on the internet, and uses it to help the Avengers. According to Kassan, AI of this kind is impossible: “The probability of successfully completing a project 25 million times more complex than Windows is effectively zero” (Kassan 33). J.A.R.V.I.S, however, is exactly that powerful (within the fictional reality of the Marvel universe).
J.A.R.V.I.S also meets the criteria of other theories of intelligence. Sternberg’s theory was developed with anthropological methods and looked at multiple cultures. It was designed to measure human intelligence, but it can also serve as a test of whether an AI program is intelligent. J.A.R.V.I.S fits all three of Sternberg’s categories: analytical intelligence, practical intelligence, and creative intelligence. He is analytical, solving problems for Tony Stark; practical, adapting to all of Stark’s ideas; and creative in how he solves those problems. A specific example of this comes in “Avengers: Age of Ultron,” when he outsmarts Ultron by faking his own “death” and hiding inside his own software while still maintaining the security of nuclear weapons.
J.A.R.V.I.S is the perfect AI program. In a way, he is what Max Tegmark hopes AI will become in his TED Talk “How to get empowered, not overpowered, by AI.” Tegmark believes that Artificial General Intelligence (AGI) will become a reality, and soon. He shows a graphic of where AI is now (playing games like Go and chess) and where it will be once it reaches AGI. J.A.R.V.I.S has already reached the highest potential of AGI: he can translate, conduct science, and hold social interactions with other characters in the movies. Not only does J.A.R.V.I.S operate where Tegmark wants AI to operate, but Stark created him to share his goals of helping the world. Stark trusts J.A.R.V.I.S, and they work together as equals. Stark solved intelligence the right way and got it right the first time; J.A.R.V.I.S is mistake-free. These are all things Tegmark thinks scientists need to do in order to make AI great, and J.A.R.V.I.S is the example of it.
It is clear that this fictional AI program, J.A.R.V.I.S, is the perfect example of what researchers want AI to be, and of where our future might be headed. However, we are still a long way from this becoming a reality. Here are the reasons J.A.R.V.I.S does not yet exist in our world.
In chapter one of Andy Clark’s book “Mindware,” “Meat Machines,” Clark explains how AI researchers treat human intelligence as computation. Brain processes can be mimicked by algorithms in a program, but those algorithms lack a real-time grip on the real world. This means that AI, as of now, is only a replica of how human intelligence works. A good example of this is chatbot programs, which answer questions using a basic input, manipulation, then output algorithm. Chatbots, however, often give odd answers because they lack a sense of the real world.
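The input, manipulation, then output loop can be sketched in a few lines of code. This is a toy illustration, not any real chatbot product; the keywords and replies are made up. The point is that the program only shuffles text patterns, so any question outside its small table exposes its lack of real-world sense.

```python
# A toy chatbot: input -> symbol manipulation -> output.
# The rule table below is entirely illustrative.
RULES = {
    "weather": "I do not have a window, but my table says: sunny.",
    "name": "I am a toy symbol-manipulating program.",
    "hello": "Hello! Ask me about the weather or my name.",
}

def respond(user_input: str) -> str:
    """Manipulate the input symbols: scan for a known keyword, copy out its reply."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # No matching pattern: the program has no real-world knowledge to fall back on.
    return "I do not understand."

print(respond("What is your name?"))
print(respond("Is it raining outside?"))  # "raining" is not in the table
```

The second question is about the weather in any human sense, but because the literal keyword is missing, the program fails, which is exactly the kind of odd answer Clark's point predicts.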
Clark then goes on in the next chapter, “Symbol Systems,” to explain that these AI algorithms are physical symbol systems. Physical symbol systems follow three basic rules: 1) symbolic code is used as information storage, 2) intelligence is depicted as the ability to search a symbolic problem space, and 3) intelligence resides at the level of deliberative thought. With these rules, AI researchers have created programs such as SOAR, a problem-solving program. According to Clark, these programs are impressive, but they lack major components of human intelligence.
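The second rule, intelligence as search through a symbolic problem space, can be illustrated with the classic water-jug puzzle. This sketch is my own toy example, not SOAR itself: states are symbols (the fill levels of two jugs), operators rewrite one state into another, and the program deliberately searches until a goal state appears.

```python
from collections import deque

def jug_search(goal=4, cap_a=3, cap_b=5):
    """Breadth-first search over symbolic (jug_a, jug_b) states
    until some jug holds `goal` liters. Returns the path of states."""
    start = (0, 0)
    seen = {start}
    queue = deque([(start, [start])])
    while queue:
        (a, b), path = queue.popleft()
        if goal in (a, b):
            return path
        # Operators: fill a jug, empty a jug, or pour one into the other.
        pour_ab = min(a, cap_b - b)
        pour_ba = min(b, cap_a - a)
        moves = [
            (cap_a, b), (a, cap_b),          # fill
            (0, b), (a, 0),                  # empty
            (a - pour_ab, b + pour_ab),      # pour a -> b
            (a + pour_ba, b - pour_ba),      # pour b -> a
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [state]))
    return None

print(jug_search())  # a sequence of states ending with a jug holding 4
```

Everything intelligent-looking here happens at the level of explicit, deliberative search over stored symbols, which is precisely the picture of intelligence Clark says these systems embody.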
The first missing component concerns learning, illustrated by Searle’s Chinese Room. In this thought experiment, a man sits in a room and is given a “rule book” and Chinese characters to read and respond to. Using the rule book, the man can produce responses that appear perfectly appropriate. However, according to Searle, this is just an illusion, because the man is not understanding the language; he is only manipulating its symbols. The same goes for basic PSS programs.
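Searle's rule book reduces, in spirit, to a lookup table. In this sketch (the particular phrases are illustrative stand-ins, not from Searle), the "room" answers Chinese questions with fluent Chinese replies, yet nowhere in the program is any meaning stored, only the shapes of symbols paired with other shapes.

```python
# The "rule book": Chinese questions paired with Chinese answers.
# English glosses are in the comments only; the program never sees them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Match the shape of the incoming symbols; copy out the paired reply."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))
```

From outside, the exchange looks like understanding; inside, it is a dictionary lookup, which is Searle's point about symbol manipulation without comprehension.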
PSS programs also lack human emotion and experience. These are a big part of human intelligence, and they give our consciousness the power to do two things all humans do: gather more information until we reach a conclusion, and make inferences based on past knowledge. Clark concludes that if we are going to solve this problem, researchers need to focus on micro-functionalism. This means studying the function of these problem spaces (where information is manipulated) closely enough to replicate it, either in an algorithm or in a neural system. We need to understand how output is created within our own intelligence before we can create an algorithm as clever as J.A.R.V.I.S.
Through Clark’s arguments, we now know where AI is headed and what it is lacking. So where is AI technology now? In her TED Talk “The danger of AI is weirder than you think,” Janelle Shane explains that AI can complete simple tasks, but we are still learning how to communicate with these programs. We assume that the technology we are developing thinks the same way we think, but this is not true. One example is asking a robot program to travel from point A to point B: instead of walking, it rebuilds itself into a tower and falls over, landing right at point B. We would assume the robot would walk forward to point B, but that is not the case. Another example from Shane’s talk is an AI program that generates ice cream flavors; it comes up with outputs such as “Peanut Butter Slime,” not classic Rocky Road. This just proves that we are not close to J.A.R.V.I.S-level technology yet.
AI researchers know a great deal about AI technology and are getting closer to AGI. They have created tools such as Siri, Google Home, and Alexa, all built to help people. But when it comes to a program as complex and human-like as J.A.R.V.I.S, AI technology still has a long way to go.
Breazeal, C. (2010, December). The rise of personal robots. TED Conferences.
Clark, A. (2013). Meat Machines. In Mindware: An introduction to the philosophy of cognitive science. OUP USA.
Clark, A. (2013). Symbol Systems. In Mindware: An introduction to the philosophy of cognitive science. OUP USA.
Kassan, P. (2006). AI Gone Awry. Skeptic, 12, 30–39.
Shane, J. (2019, April). The danger of AI is weirder than you think. TED Conferences.
Tegmark, M. (2017, April). How to get empowered, not overpowered, by AI. TED Conferences.