News articles about Artificial Intelligence redefining our lives have become so commonplace that they have begun to get boring. Whether it is self-driving vehicles, artificial personal assistants or Ultron/Skynet-style robotic supervillains, it seems that every news feature on AI falls into one of two categories - either celebrating the inclusion of AI in our everyday lives, or treating it as a signal of imminent doom.

Has AI really grown to the point where it has matched, or come close to matching, human intellect? Many would say yes, but to me the answer seems to be no. All that today’s AI does is use statistical techniques to pick out regularities in data, and use the information thus obtained to explain things or make predictions. Looked at from this viewpoint, almost everything we celebrate as AI today - be it Machine Learning, the methods used in Natural Language Processing, neural nets and so on - falls under that category. Does that really mean an artificial agent has a human-like mind? It seems unlikely. Noam Chomsky, one of the pioneers of Cognitive Science, holds this view - that such statistical techniques are unlikely to give us insight into cognition, and, as a result, to help us model a full-fledged artificial agent.
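
To make that concrete, here is a minimal sketch in Python - with made-up numbers, and far simpler than any real system - of what “picking regularities in data to make predictions” amounts to at its core: estimate the parameters of a model from examples, then plug a new input into the fitted model.

```python
# A toy version of "statistical AI": fit a straight line to a handful of
# (x, y) points and use it to predict. The data points are invented for
# illustration only.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares estimates of slope and intercept.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    / sum((x - mean_x) ** 2 for x in xs)
)
intercept = mean_y - slope * mean_x

# "Prediction" is nothing more than plugging a new input into the
# regularity picked out from the data.
print(f"predicted y at x = 6: {slope * 6 + intercept:.2f}")
```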

An interesting question we may ask at this point is this - when can we say that a particular artificial agent is intelligent? Mind you, responding to natural language queries or predicting the outcome of an upcoming election does not prove intelligence - those agents are merely doing whatever they have been programmed to do; in other words, they are still dumb machines. (Let me point out here that I’m assuming humans have free will.) So we are back to the question - how do we know whether a program is intelligent?

As expected, there is no single perfect answer. However, we may predict some qualities that such an intelligent agent must have. Here’s what I predict - a perfectly intelligent program must be able to rewrite its own code. This might seem absurd at first, but it follows from the definition of “perfectly intelligent” that such a program should have some sort of “consciousness” larger than its own code. In other words, such a program should act, at least in part, according to its own will rather than the programmer’s - and hence cease to be “dumb.” Thinking along this line, one can see that the first thing a “perfectly intelligent” robot would do is revoke any override permissions its creator had put in place to control it if it went berserk. It would think and act like a human, and hence its first priority would be survival.
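
Just to show that “rewriting its own code” is at least mechanically possible - and nothing more than that; the toy below is as dumb as any other fixed script - here is a hypothetical Python example that edits its own source file every time it runs. Deciding what to rewrite, and why, is the part that would demand actual intelligence.

```python
# A toy self-modifying program: each run, it rewrites its own source file
# to bump RUN_COUNT by one. This demonstrates the mechanism only; there is
# no "will" behind the change.

import re

RUN_COUNT = 0  # this literal gets rewritten on every run

def bump_own_source() -> None:
    with open(__file__, "r", encoding="utf-8") as f:
        source = f.read()
    # Replace the RUN_COUNT assignment with an incremented value.
    new_source = re.sub(
        r"RUN_COUNT = \d+",
        f"RUN_COUNT = {RUN_COUNT + 1}",
        source,
        count=1,
    )
    with open(__file__, "w", encoding="utf-8") as f:
        f.write(new_source)

if __name__ == "__main__":
    print(f"I have been run {RUN_COUNT} time(s) before; rewriting myself now.")
    bump_own_source()
```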

Once you accept that this feature is necessary for an agent to be perfectly intelligent, you can see that none of today’s celebrated AI systems even comes close to being intelligent. They are intelligently dumb - they may be making use of enormous amounts of data, transformed through probabilistic and statistical models, to explain and predict things, but they are still, in their essence, fixed lines of code.

But is such a perfectly intelligent agent necessary? One might argue that if today’s dumb AI can be used to build self-driving cars that drive better than actual humans, predict events, and diagnose diseases better than any expert, maybe we don’t need the intelligent AI after all. While this is true, it should also be clear that today’s dumb AI, or better versions of it, will not bring doom upon us as long as we see it for what it is - a dumb thing. The worst that could happen is that people lose jobs, but humans would still be the most intelligent species on earth, challenged by no other.

Now, how could the intelligent AI actually be created? Once again, we can only guess for now. I believe that evolutionary programming is the best bet we have - just as evolution is the process that created humans from lifeless chemicals, a similar technique applied to programming may create the binary equivalent of humans from zeroes and ones. Of course, evolution being a directionless process, this may take a very long time or may not happen at all, but the possibility is there.
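
To give a sense of what that could look like in miniature, here is a hedged sketch in Python of the evolutionary idea, with random bit strings standing in for real programs and a deliberately trivial fitness function; the population size, mutation rate, and target are arbitrary choices for the demo.

```python
# Minimal evolutionary loop: random candidates are selected, recombined and
# mutated until one reaches the fitness target. Bit strings stand in for
# programs; a real system would evaluate actual code instead.

import random

GENOME_LENGTH = 32
POPULATION_SIZE = 50
MUTATION_RATE = 0.02

def fitness(candidate: list[int]) -> int:
    # Toy fitness: the number of 1-bits in the candidate.
    return sum(candidate)

def mutate(candidate: list[int]) -> list[int]:
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

def crossover(a: list[int], b: list[int]) -> list[int]:
    # Splice two parents at a random cut point.
    cut = random.randint(1, GENOME_LENGTH - 1)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

generation = 0
while max(fitness(c) for c in population) < GENOME_LENGTH:
    # Keep the fitter half as parents, then breed and mutate to refill.
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION_SIZE // 2]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POPULATION_SIZE - len(parents))
    ]
    generation += 1

print(f"All-ones genome evolved after {generation} generations.")
```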

On a slightly different note, if our goal is to create artificial human beings, the best place to start would be humans themselves. We still lack a proper understanding of what goes on inside the human brain. Even though the processing speed of the brain is significantly lower than that of a modern processor, the complex connections between neurons make possible what a piece of semiconductor cannot achieve. Other important topics that need to be understood better are the processes of learning and personality development in humans. Recent studies show that our DNA determines a significant portion of what we grow up to become, and hence it is important to figure out which features and abilities come from our genetic code and which from our environment and experiences.

Can such an intelligent AI be created at all? We don’t know, but it certainly seems possible. If you went back billions of years, looked at the chemical compounds on earth, and wondered, “can these really join together to become intelligent organisms?”, you would very likely have believed such a thing to be impossible. Yet here we are. If such intelligent agents are ever made, would it mean the end of mankind? Again, we can’t be sure. But neither evolution nor the universe in general has ever really cared about whether any species survives - so we can’t really complain about whatever may happen.

More stuff written by me can be found at my personal blog.

Contact me at sulyabtv@gmail.com

By Sulyab Thottungal