In 1950, the computer scientist and philosopher Alan Turing published a paper, ‘Computing Machinery and Intelligence’, that is often referred to as the origin of modern artificial intelligence.  In it, he described the potential for computers of the future to display human-like capacities such as reasoning, learning, planning and creativity.

Artificial intelligence as a field began developing in the 1950s around the idea that it should be possible to deconstruct intelligent human behaviors into a succession of logical rules, transcribed in algorithms, which machines could follow to display intelligent behavior. As learning is one of the key features of human intelligence, scientists devised ways to train computers to become so familiar with certain topics that they could identify the key components of those topics automatically.  For example, if the objective is to teach a machine to recognize pictures with cats, the computer is fed thousands of pictures, some containing cats and some not. The learning capacity of these machines rests on the ability of the algorithm to find statistical correlations in the data it analyzes, that is to say, interdependencies among variables in the data – or in layman’s terms, finding the cat in the haystack.
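
To make that idea a little more concrete, the sketch below is a minimal illustration, assuming Python with NumPy and scikit-learn available. The ‘pictures’ are synthetic feature vectors with invented cat/not-cat labels rather than real photographs, and the particular model is just one of many possible choices, but the learn-from-labelled-examples-then-predict loop is the one described above.

```python
# Minimal sketch: a classifier is shown many labelled examples and learns
# the statistical regularities that separate "cat" from "not cat".
# The data here is synthetic noise standing in for real image features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "picture" is a vector of 64 numeric features.
n_samples, n_features = 2000, 64
X = rng.normal(size=(n_samples, n_features))

# Invent a hidden rule so the labels genuinely correlate with the features.
true_weights = rng.normal(size=n_features)
y = (X @ true_weights > 0).astype(int)   # 1 = "cat", 0 = "not cat"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # the "fed thousands of pictures" step
print("held-out accuracy:", model.score(X_test, y_test))
```

Running it prints a held-out accuracy well above chance, which is all that ‘learning’ means here: the model has picked up the statistical correlations linking the features to the labels.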

By applying the methods humans use to learn to identify patterns in objects, artificial intelligence leverages a computer’s ability to analyze huge quantities of data to find statistical correlations, in essence deploying human learning methods at scale. Machine logic does differ from human thought patterns, however, and a fundamental aspect of machine learning techniques is that there is no straightforward way to know how a system reaches its decision on a given task.  In computer logic, the associations necessary to fulfill the given task are left to the computer itself to define.  Recently, when Facebook AI researchers enabled bot-to-bot discussion, they had to shut down the experiment because the bots had developed their own “shorthand” for communicating, a by-product of the system optimizing its communication for efficiency.

Today, artificial intelligence can be used to write stories or create artworks such as paintings or musical compositions, as well as to analyze large sets of data.  Advances in AI have established two distinct conceptions of machine intelligence, called “narrow” and “strong”.  Data-driven AI is referred to as ‘narrow AI’ or ‘weak AI’ because it creates machines that are only able to do one task very well: recognize cats, play Go, invent a recipe. Narrow artificial intelligence systems lack common sense and intentionality, and it is still not possible for machines to predict what would come next in a series of images or to grasp the broader context of a scene in a given image.

Strong artificial intelligence, on the other hand, harks back to the original AI quest to create machines able to display the same level of intelligence as humans.  Often referred to as artificial general intelligence or ‘strong AI’, this type of system would perform many different tasks, show common sense and share intentionality. If it were to outstrip human intelligence, strong AI would lead to a ‘technological singularity’, leaving humans in the hands of machines.  And that is where artificial intelligence gets a lot more real.