‘By 2029, computers will match human intelligence’

August 14, 2013

Ray Kurzweil, one of the world’s foremost AI gurus, talks about a future in which computers will understand natural language, and nanobots will fight disease


Ray Kurzweil has been described as the “ultimate thinking machine” and a “restless genius”. Currently director of engineering at Google, he is credited with a slew of inventions, including a music synthesizer and a machine that could read to the blind (its first customer was Stevie Wonder). A high-profile public campaigner for Artificial Intelligence, he has predicted that by about 2030 technology will have advanced so far that for every year that passes, one year will be added to human life, by controlling genes and by having nanobots in the bloodstream fight infections. A little-known part of his history is that, as a 19-year-old, he came under the influence of Maharishi Mahesh Yogi after learning about him from the Beatles. He learnt Transcendental Meditation and has kept it up since, “though not regularly”. He delved into Eastern philosophies, and his bestselling books, such as ‘The Singularity is Near’ and the latest, ‘How to Create a Mind’, have sections on “western versus eastern ideas”. In an email interview with Subodh Varma, the 65-year-old Kurzweil throws light on the basics of Artificial Intelligence.
    What is ‘Artificial Intelligence’? Do you include human emotions in ‘intelligence’ or in ‘consciousness’? 
Artificial Intelligence is the science of creating computers that can perform tasks we associate with human intelligence. Our ability to understand and respond appropriately to high-level emotions is the cutting edge of human intelligence and the most intelligent thing we do. Being funny, loving or sexy are very sophisticated behaviours. We want computers to have these capabilities too, so that they can interact with us in helpful ways. Understanding emotion is key to understanding language, and language is key to understanding knowledge.
    How far have we progressed towards developing AI? 
Recent progress has been impressive. IBM’s Watson computer is able to play the TV game Jeopardy!, a broad task involving complicated natural-language queries that include puns, riddles, jokes and metaphors. For example, Watson got this query correct in the rhyme category: “a long tiresome speech delivered by a frothy pie topping.” It correctly responded, “What is a meringue harangue?” Watson scored higher than the best two human players combined. It got its knowledge by reading Wikipedia and other encyclopedias, a total of 200 million pages of natural-language documents. Another good example is Google’s self-driving car. These cars have driven half a million miles without human drivers and without accidents. You can also ask questions of your cellphone using Apple’s Siri or Google Now. We also have good models of how the human neocortex processes information, and we can use these biologically inspired algorithms to build intelligent machines. That is what I am doing now at Google.
    You predict that by 2045 ‘singularity’ will be achieved. What does that mean?
One critical date is 2029. It has been my consistent prediction that by that date computers will match human intelligence and pass the “Turing test,” meaning that they will be indistinguishable from human intelligence. Once they can do that they will necessarily exceed human intelligence because they will be able to read everything on the web and every page of every book. Consider that Watson does not read as well as a human but it makes up for that by reading more pages – 200 million to be exact. I would also point out that the advent of intelligent machines is not intended to compete with biological humans or to displace us but rather to enhance us. We are already enhanced by the devices we carry around and their ability to connect with computer intelligence in the cloud. We will do that directly from our brains by the 2030s. 
    These technologies progress exponentially, doubling in power about every year. That means that by 2045 we will have multiplied our intelligence a billionfold by merging with the AI we are creating. That is such a profound transformation that we borrow a metaphor from physics and call it a singularity.
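    The arithmetic behind that billionfold figure is easy to check. Here is a minimal back-of-the-envelope sketch in Python; the 2015 starting year is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope check of the "billionfold by 2045" claim:
# one doubling per year compounds very quickly.
start_year = 2015              # assumed starting point, for illustration only
doublings = 2045 - start_year  # ~30 annual doublings
factor = 2 ** doublings
print(f"{doublings} doublings -> a factor of about {factor:.2e}")
# 30 doublings -> a factor of about 1.07e+09, i.e. roughly a billionfold
```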
    Can AI become superior to human intelligence? Will it threaten humanity as feared by many? 
I would point out that we are not talking about an invasion of intelligent machines from Mars. We create these tools to extend our own reach; that is the whole point of technology. So it will not be human versus machine. Machines are already enhancing our intelligence, and that will only increase in the future. This is a very democratizing technology. A couple of kids in a college dorm room with their thousand-dollar notebook computers created Google. The same was true of Facebook. A kid in Africa with a smartphone has access to more information than the President of the United States had 15 years ago. That being said, technology has always been a double-edged sword. Fire kept us warm and cooked our food, but it was also used to burn down our villages. So all technologies have a dual creative and destructive potential. But there is no question that we have been helped more than we have been hurt. Just look at how human life expectancy has gone up.

RAY KURZWEIL

ARTIFICIAL INTELLIGENCE: MACHINES WITH A MIND OF THEIR OWN

A ‘thinking’ machine isn’t just the stuff of sci-fi movies. With computer science and neuroscience working together to simulate the human brain, a breakthrough in artificial intelligence may not be too far away

Subodh Varma | TIG 

    Last October, members of an audience at a conference in Tianjin, China, gasped when they heard Microsoft chief research officer Rick Rashid address them in Mandarin. He would speak in English, pause, and the Chinese translation would follow, in his own voice. A machine was converting spoken English into English text, translating that into Chinese text, and then rendering the result as spoken Chinese with Rashid’s own voice characteristics.
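    The pipeline itself is a straightforward chain of three stages. A minimal sketch in Python, with toy stand-in functions (these are illustrative placeholders, not Microsoft’s actual components):

```python
# A toy sketch of the three-stage speech-to-speech pipeline described above.
# Each stage is a placeholder, not a real recognizer, translator or synthesizer.

def recognize_speech(english_audio: str) -> str:
    """Stage 1: spoken English -> English text (stubbed)."""
    return english_audio.strip()

def translate_to_chinese(english_text: str) -> str:
    """Stage 2: English text -> Chinese text (stubbed with a toy lookup)."""
    toy_dictionary = {"hello": "你好", "thank you": "谢谢"}
    return toy_dictionary.get(english_text.lower(), "<unknown>")

def synthesize_in_own_voice(chinese_text: str, voice: str) -> str:
    """Stage 3: Chinese text -> speech in the original speaker's voice (stubbed)."""
    return f"[{voice}'s voice] {chinese_text}"

def speech_to_speech(english_audio: str, voice: str) -> str:
    return synthesize_in_own_voice(
        translate_to_chinese(recognize_speech(english_audio)), voice)

print(speech_to_speech("hello", "Rashid"))   # [Rashid's voice] 你好
```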
    Chinese is a difficult language, and it is structured very differently from English, so the task was formidable. The tech world hailed the new software as a breakthrough. After a long time, it looked as if progress was being made towards liberating Artificial Intelligence (AI), that is, machines that behave like humans, from the sci-fi movie cage.
    There had been milestones like that before. One of the most publicized of such events was in 1997 when an IBM machine called Deep Blue beat world chess champion Garry Kasparov two wins to one, with three draws. The refrigerator-sized computer could analyze 200 million positions in a second. It had a database of 700,000 grandmaster games to draw upon. 
    The world hailed this event as a historic moment: the first real sign that a machine could be more ‘creative’ than a human. But, ultimately, what was Deep Blue doing? It was using brute-force power to ram through millions of calculations to beat an undoubtedly creative and trained mind like Kasparov’s. In 2006, a German program called Deep Fritz, running on just two Intel processors, beat the new world champion, Vladimir Kramnik. Again brute force, but packed into a much more elegant machine.
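    What “brute force” means here is exhaustive game-tree search. A minimal sketch in Python of plain minimax, the core idea behind such engines; real programs add alpha-beta pruning, tuned evaluation functions and opening databases, while this toy version only shows searching every line to a fixed depth:

```python
# Plain minimax: search every sequence of moves `depth` plies deep and
# back up the scores. This is the brute-force core behind engines like
# Deep Blue, minus all of their pruning and chess-specific knowledge.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    child_scores = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in legal
    )
    return max(child_scores) if maximizing else min(child_scores)

# Toy demo: each "move" adds 1 or 2 to a counter; the maximizer wants a
# big total, the minimizer a small one.
best = minimax(0, 3, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)   # 5: maximizer adds 2, minimizer adds 1, maximizer adds 2
```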
    More recently, there was Watson, a machine built by IBM specially to play the American TV game show Jeopardy!, in which contestants have to supply the right questions to answers read out to them. In 2011, Watson took on two of the game’s all-time biggest winners, Brad Rutter and Ken Jennings, and beat them soundly. It contained 200 million pages of content, including the full text of Wikipedia. It had four terabytes of disk storage and 16 terabytes of RAM. It could process 500 gigabytes of data, the equivalent of a million books, in one second. (Tera is trillion, giga is billion.)
    So was it brute-force power again? It was, but there was more to it. Watson had broken the language barrier between humans and machines. Humans speak in natural language, rightly assuming that the listener will fill in the gaps. Scientists had struggled against this barrier for decades because it seemed impossible to build a machine with a databank that could cover all the quirks and styles of speech. But Watson had broken through in what the techies call ‘Natural Language Processing’, although it did falter once or twice in the contest.
    Machines have undoubtedly come a long way towards human intelligence in the past half-century or so. Besides high-visibility achievements like Deep Blue and Watson, computing machines have reached enormous speeds that no human could dream of. Simultaneously, their size and energy consumption have gone down drastically. Many strides have been made in bridging the machine-human chasm, from voice and face recognition to bionic artificial limbs that respond to nerve impulses.
    But there is still a long way to go. When can a machine be called intelligent? There is much controversy over this, and, rather bizarrely, every new advance appears in retrospect not to have reached the level of ‘intelligence’ after all. It is generally agreed that the gold standard for declaring a machine intelligent is that it should pass the Turing test.
    Alan Turing, the British code-breaker and father of computational theory, proposed in a 1950 paper that if a human being cannot distinguish between a machine and a human through interaction, then that machine should be considered intelligent. This is how the thought experiment is visualized: in one room there are a computer running by itself and a human at another computer; in a second room sits the interrogator, a human whose computer is connected to both machines in the first room. The interrogator interacts with both and tries to guess which is operated by a human and which is running on its own. The day a computer manages to fool the interrogator into thinking it is human, the intelligent machine will have been born.
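    The structure of the test is easy to sketch. A toy version in Python (the responders here are trivial stand-ins; a real test would pit a serious conversational program against a person):

```python
# A toy sketch of the imitation game's structure. The "machine" here is a
# trivially unconvincing stand-in; the point is only the protocol: the
# interrogator talks to two hidden parties and must guess which is human.
import random

def human_responder(question):
    return input(f"[relayed to the human] {question}\n> ")

def machine_responder(question):
    return "That is an interesting question."   # placeholder chatbot

def imitation_game(questions):
    parties = [("human", human_responder), ("machine", machine_responder)]
    random.shuffle(parties)                      # hide which party is which
    labeled = dict(zip("AB", parties))
    for q in questions:
        for label, (_, respond) in labeled.items():
            print(f"{label}: {respond(q)}")
    guess = input("Which one is the human, A or B? ").strip().upper()
    if labeled.get(guess, ("",))[0] == "machine":
        print("The interrogator was fooled: the machine passes.")
    else:
        print("The machine was caught.")

imitation_game(["What do you do on a rainy Sunday?"])
```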
    Despite all the spectacular advances, no machine has come anywhere near passing the Turing test.
    Here is a recent innovation that brings out both the achievements and the limits of intelligent machines. Stanford University electrical engineering professor Andrew Y Ng and Google fellow Jeff Dean, working at Google X, a research facility at an undisclosed location in the San Francisco Bay Area, reported last year that they had built an array of 16,000 processors. The system was fed 10 million YouTube thumbnails so that it could ‘watch’ and sort what it saw into 22,000 categories like ‘cats’, ‘humans’, ‘cars’, and so on.
    The researchers called their system an “unsupervised neural network”; that is, it was modeled on the way neurons (brain cells) are organized in the brain, hierarchically and connected to each other. “Unsupervised” because it was not told specifically to “identify cats” or anything like that; it was simply built to analyse and categorise. Because of the sheer number of cute cat videos on the Internet, the system started identifying cats and slotting them. Similarly, it identified ‘humans’. No computer system had ever been able to do this before under “unsupervised” conditions. It looked as if a “learning machine” was finally coming through.
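    To make “unsupervised” concrete: the network is never shown labels; it only learns to re-represent its input, and useful features fall out as a by-product. A minimal sketch in Python with numpy, a tiny one-layer autoencoder on synthetic data (the real system was vastly larger, deeper and trained on actual video frames):

```python
# A tiny one-layer autoencoder: no labels anywhere. The network is trained
# only to reconstruct its input; the hidden layer H is the learned "features".
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 64))            # 500 fake 8x8 "thumbnails", no labels

n_hidden = 16
W_enc = rng.normal(0, 0.1, (64, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, 64))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    H = sigmoid(X @ W_enc)           # encode: hidden features
    X_hat = H @ W_dec                # decode: reconstruction
    err = X_hat - X
    # Gradient descent on the squared reconstruction error.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * H * (1 - H)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction error:", float((err ** 2).mean()))
```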
    But here’s the catch: the system had a success rate of 16%. That is a 70% improvement on the best previous result (which works out to roughly 9.4%, since 9.4 × 1.7 ≈ 16), but it is still way short of what a human eye, with the brain’s visual cortex backing it up, can do. The human visual cortex, located at the very back of the brain, has a million times more connections than the Google system’s network, the researchers themselves admitted.
    “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” Andrew Ng told The New York Times. 
    This approach of modelling computers on the human brain is gaining popularity, and some modicum of success. The seat of thinking, language, motor functions, and spatial and sensory perception lies in the neocortex, the uppermost layer of the brain, consisting of grey cells. It is found only in mammals, and the biggest neocortex relative to the rest of the brain is found in humans. The neocortex has six layers, and it is thought to work as a hierarchical system: neurons start reacting at the lowest (innermost) layer, and each succeeding layer refines the activity to produce the final ‘thought’.
    One of the leading figures in the field of building neural-network-based machines is Geoffrey Hinton, a professor at the University of Toronto who spends half his time at Google. He has developed simple models of virtual neurons connected to each other and layered like the human neocortex.
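    A toy illustration of that layered arrangement, in Python with numpy: a stack of six simple layers, each transforming the output of the one below. The weights here are random, purely for illustration; real models like Hinton’s are trained on data:

```python
# A toy stack of six layers, echoing the neocortex's layered, hierarchical
# organisation: raw input enters at the bottom and each layer re-represents
# the output of the layer below it.
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.normal(0.0, 0.3, (32, 32)) for _ in range(6)]   # six layers

def forward(signal, layers):
    for W in layers:
        signal = np.maximum(0.0, W @ signal)   # transform, keep what "fires"
    return signal

raw_input = rng.random(32)           # the "sensory" signal at the lowest layer
refined = forward(raw_input, layers)
print(refined[:5])                   # the top layer's refined representation
```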
    Of course, neuroscience itself is still “a bit like physics before Newton”, as Bruno Olshausen, director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, put it in Scientific American. Scientists think they have understood only about 15% of how the visual cortex works, and that is just one of the brain’s functions.
    But as the functioning of the brain is revealed, spurred by advances in imaging technology, the feedback loop with computer scientists will expand rapidly. And so will the prospect of thinking machines.