Machines are increasingly demonstrating the capabilities of a genuine Artificial Intelligence (AI). They can beat chess grandmasters, win million-dollar quiz shows, help diagnose cancer, and may soon even smell the way a human nose does. A look at AI and what it means for our future.
In 2011, Watson, an artificially intelligent computer system developed by IBM, participated in a special edition of the American television quiz program, “Jeopardy!” The show, which places emphasis on lateral thinking, language comprehension and speed, seemed too sophisticated a prospect for computer intelligence. But Watson prevailed in a head-to-head contest against the two biggest prize winners in the show’s history, prompting the question: might we be entering the age of a genuine Artificial Intelligence?
The phrase Artificial Intelligence was coined in the 1950s by the computer scientist John McCarthy, who described it as “the science and engineering of making intelligent machines.” Today, AI is understood to describe the capability of computers and computer software to exhibit intelligent behavior, a quest that has preoccupied researchers for the past half-century.
Early optimism concerning the potential of AI was not matched by technological development. Even though there were, by the 1970s, computers that could prove mathematical theorems, play draughts (checkers) and even speak, these accomplishments were not scalable. Essentially, the machines were only as good as the information provided by their human programmers; one could only get out of them what had already been put in, so to speak.
Moore’s Law and Exponential Growth
But advances in computer hardware technology in the 1980s contributed to a breakthrough in AI research. These advances were anticipated in a 1965 observation by the electrical engineer (and later co-founder of the Intel Corporation) Gordon Moore. Moore’s Law, as the observation has come to be known, relates to the processing power of integrated circuits: Moore hypothesized that the number of transistors in an integrated circuit – and thus the circuit’s computational power – would double every two years. The prediction has held firm until now, underlining the exponential growth and reach of computers and the digital electronics industry as a whole.
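The arithmetic behind Moore’s observation is simple compound doubling. The short sketch below illustrates it; the starting figure is chosen to roughly match the Intel 4004 chip of 1971, and the projection is an idealization of the law rather than historical data:

```python
def transistors(year, base_year=1971, base_count=2300):
    """Project transistor count assuming a doubling every two years.

    base_count=2300 roughly matches the Intel 4004 (1971); the result
    is an idealized Moore's Law projection, not measured data.
    """
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Forty years of doubling every two years is 2^20:
# about a million-fold increase in computational capacity.
print(round(transistors(2011) / transistors(1971)))  # → 1048576
```

Forty years separate the Intel 4004 from Watson’s “Jeopardy!” victory; twenty doublings in that span is what “exponential growth” means in practice.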
With regard to AI, the principal contribution of Moore’s Law concerns the evolution of machine learning: the capacity to develop and implement algorithms – the sets of rules that govern calculations or problem-solving activities by computers – that can actually learn from raw perceptual data rather than already delineated information. Watson’s sophistication at decoding linguistic riddles in order to win “Jeopardy!” demonstrates this capacity succinctly.
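The shift described here – from rules written by a programmer to rules derived from examples – can be shown in miniature. In this toy sketch (the data and the "learning" rule are invented purely for illustration), the program is never told the decision rule; it infers a boundary from labelled examples:

```python
def learn_threshold(examples):
    """Given (value, label) pairs, place a decision boundary midway
    between the mean of each class -- a tiny stand-in for 'learning'
    a rule from data rather than being programmed with one."""
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    return (sum(positives) / len(positives) +
            sum(negatives) / len(negatives)) / 2

# Invented training data: nobody told the machine "the rule is > 5".
data = [(1, False), (2, False), (3, False), (7, True), (8, True), (9, True)]
threshold = learn_threshold(data)
print(threshold)       # → 5.0
print(10 > threshold)  # classifies a new, unseen value  → True
```

Real machine learning systems work on far richer data and far subtler rules, but the principle is the same: the rule comes out of the data, not out of the programmer.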
Big Data: Creating Context
Machine learning, in turn, is facilitated by the use of Big Data: massive data sets that previously resisted analysis and interpretation because existing means of data processing were inadequate. Uwe Neumann, Senior Credit Analyst with Credit Suisse, explains that Big Data creates the framework for a new, sophisticated means of drawing value from information. “Essentially, Big Data is the analysis of structured and unstructured data,” Neumann explains. Structured data, confined within predetermined parameters – think back to the human programmers of early AI machines – is limited in the scope of its applicability to problem-solving. But the use of predictive analytics and other data-scraping techniques, enabled by technological advances, has bestowed upon computers advanced means of assessing, analyzing, evaluating and ranking unstructured data. “Now, it is possible to engage with data that cannot be put into a structure, but can be put into a context,” Neumann emphasizes. This is a significant move towards a functional AI.
From Deep Learning to Smelling – An Artificial Nose
Whilst AI is, in definitional terms, about machines thinking like humans, it is also about endowing machines with human capabilities: the ability to engage responsively with the environment as humans do, assessing, processing and evaluating data to inform the decision-making processes at which humans remain superior to machines. Lavi Secundo, a researcher in the Neurobiology department of Israel’s Weizmann Institute of Science, is part of a team working on an intriguing AI project: the creation of an artificial nose.
There are machines, mainly employed in security contexts, capable of analyzing an environment and identifying the presence of certain chemicals in the atmosphere – a crude approximation of “smelling”. But the Weizmann project is vastly more sophisticated: the development of a machine able to sense volatile molecules and recreate the percept of odor. “What we are attempting to do is to develop a vocabulary of smell, training the artificial nose to describe the environment in general terms, without falling back on specific molecular descriptions,” Secundo explains. The aim is to recreate the olfactory complexity of the human nose, taking machines above and beyond the limited capabilities of chemical recognition. This is possible, he adds, in part thanks to contemporaneous developments in Deep Learning – the use by computers of algorithms that model high-level abstractions in data. These algorithms analyze data in a non-linear yet effective manner, overcoming the limitations that once stymied AI.
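“Non-linear” processing is the key phrase here. A minimal sketch of the idea, stripped of any real training (the weights below are arbitrary illustrative numbers, not a trained model), shows how stacked layers with a non-linear function between them turn raw numbers into higher-level abstractions:

```python
import math

def layer(inputs, weights):
    """One unit: a weighted sum passed through a non-linearity (tanh)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return math.tanh(total)

# Raw sensor readings (invented numbers) pass through two stacked layers;
# each layer's output is a more abstract summary of the layer before it.
raw = [0.5, -1.2, 0.3]
hidden = [layer(raw, [0.4, 0.1, -0.2]),
          layer(raw, [-0.3, 0.8, 0.5])]
output = layer(hidden, [1.0, -1.0])
```

Without the non-linearity, stacking layers would collapse into one linear rule – precisely the kind of limitation that stymied early AI.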
The Future of AI – Or of Humans?
AI is already leading important developments in productivity and efficiency, both in the workplace and in leisure. The question must be asked, though: if these developments mirror the exponential improvements in the technology that gestated AI, would they one day make human beings redundant? Neumann thinks that the question, for the moment, remains unresolved. “If a machine starts to learn faster than a human brain, then there will be a tipping point,” he notes. In the meantime, AI is used as a tool to collate and interpret financial data. “This could put analysts like me out of a job,” he laughs wryly.
Not just financial analysis. In one form or another, AI already supports a range of professions, from law and medicine to (one acknowledges, reluctantly) journalism. If Moore’s Law continues to hold true – and even if the rate of technological advancement slows from its current exponential pace – it might well be that in the not-so-distant future, computers will possess the sophistication to supplant their human creators.
Balancing The Bounty And The Spread
But this may be jumping the gun a little. In their influential 2014 book, “The Second Machine Age”, MIT academics Erik Brynjolfsson and Andrew McAfee argue that the principal challenge is not whether computers will supplant humans, but rather how humans manage the undoubted added value that artificial intelligence and other technological developments will bring into our lives. Specifically, they identify as the immediate challenge the gap between what they term the Bounty, the economic reward of the increased productivity engendered by technology, and the Spread, the concentration of that Bounty in the very highest percentiles of the population. “The technologies we are creating provide vastly more power to change the world,” they write, “but ultimately, the future we get will depend on the choices we make.”
Artificial Intelligence, Human Ingenuity
Artificial Intelligence, one senses, will ultimately reflect the uses to which we put it. Secundo’s artificial nose, for instance, will eventually become an invaluable aid to the medical profession, providing a more sophisticated and perceptive substitute for current diagnostic tests. It will indubitably improve our quality of life. And as for Watson, our intelligent quiz-winning computer? It – or its technology – is now a diagnostic tool at the Memorial Sloan Kettering Cancer Center in New York. It helps doctors and nurses filter the mass of research, genetic data, procedures and drugs used in cancer treatment. And it won’t take the place of a doctor: rather, it provides a range of pinpoint-precise options, backed by the data used to reach its conclusions. AI, given the right human orientation, will become a force for good, it seems.
Akin Ajayi, Journalist