Claude Shannon’s article on programming computers to play chess and Alan Turing’s “imitation game” (which later became known as the “Turing Test”) described computers simulating human behavior, opening the door to the modern era of AI.
Talk was cheap back in the 50s. It was easy to describe a vision of artificial intelligence, but the hardware itself was a major stumbling block. Early computers could only execute commands, not store them; lacking this key prerequisite for intelligence, a computer could not remember what it had just done. Computing was also enormously expensive, with leases running up to $200,000 per month. Unless you were a major company willing to dabble in the new science of computing, you could only stand on the sidelines during those early years.
Great Success, Great Setbacks
Over the next two decades, AI began to flourish, though in a series of roller-coaster ups and downs. Algorithms improved greatly, but hardware once again proved an immovable wall. Computers could not store enough information or process it quickly enough, and without that computational power, substantial progress stalled.
In the 80s, AI got its breakthrough from two sources: an expanded algorithmic toolkit and a boost of funding to push the hardware forward. It was here that the concept of ‘deep learning’ was born, allowing computers to learn from experience. Hundreds of millions were poured into a resurgence of AI research, but it was all for naught: the ambitious goals set at the outset were not met, and those who had funded the projects abandoned them. The upside to this cascade of failures was a new generation of software engineers and scientists taking up the mantle of AI research on their own.
Free of government oversight and public hype, AI thrived in the 90s and 00s. Landmark goals were achieved in highly publicized settings, such as IBM’s Deep Blue beating chess champion Garry Kasparov, promoting AI as a legitimate science that deserved attention. Later, Google’s AlphaGo defeated the Chinese Go champion Ke Jie. It was now possible for a machine to make artificially intelligent decisions, and Moore’s Law was no longer a barrier.
AI Is Everywhere
In the age of big data, we find AI in almost every corner of the tech world. Banking, marketing, entertainment, and research labs have all benefited from it. Looking forward, language appears to be AI’s next frontier. Call any major company and you will most likely speak first with an AI system rather than a human. Fluid, real-time conversation between two people speaking different languages is not far off at all. Automated driving has taken off as a potentially disruptive mode of transportation. Yet for all this rapid movement, it remains hard to imagine sentient robots able to hold meaningful conversations within the next 50 years.
It’s Not All Sunshine And Roses Though
Given that timeline, major players in the field today have taken sides in their outlook on AI’s future. Elon Musk has voiced concern over letting AI grow organically on its own, fearing it could become self-aware and sentient. He may not be too far off in his concern. OpenAI took the unusual step of withholding some of its research from public release for fear of potential misuse. Earlier, Facebook shut down a machine-language experiment when two AI programs began talking to each other in a new common language that humans were unable to interpret.
The concern remains that sentience may be attained in the near future, forcing an ethical debate. It is hoped that before that time comes, we will have an open, honest, and serious conversation about machine policy and ethics (however human that may be).