Ella Gale, Ben de Lacy Costello and Andrew Adamatzky: Observation and Characterization of Memristor Current Spikes and their Application to Neuromorphic.


Digital Logic Synthesis for Memristors


AI GETS REAL

Artificial intelligence and science fiction have always gone hand in hand: the sentient HAL 9000 in Arthur C Clarke’s 2001: A Space Odyssey; Marvin, the Paranoid Android in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy; C-3PO and R2-D2 in Star Wars; KITT in Knight Rider; the frightening T-1000 in Terminator 2. The list goes on and on.
    While we’re nowhere close to what sci-fi promises, there are cars like KITT that can navigate by themselves; robots like C-3PO and Marvin that can create the illusion of human-like behaviour; and machines that can beat us humans at the very games we’ve created.

DEEP BLUE
No talk of AI is ever complete without a mention of IBM’s Deep Blue. In 1997, this machine, capable of evaluating 200 million chess positions per second, became the first ever to win a match against a reigning world champion. Grandmaster Garry Kasparov asked for a rematch, but IBM refused, and Deep Blue was dismantled. Since then, other chess programs such as Rybka and Fritz have become just as powerful, but Deep Blue will always be remembered as the first machine to display a form of AI that, till then, was only a part of sci-fi lore.

CHAT BOTS
Many machines have displayed an almost human-like ability to hold a conversation. Bots in chat rooms are a great instance of such programs. In more recent times, SIRI (Speech Interpretation and Recognition Interface), Apple’s digital assistant on the iPhone, has managed to keep users amused and informed with the help of a large natural language database, like the one used by Watson, and some super algorithms. There is even the Loebner Prize, introduced in 1990 by US inventor Hugh Loebner, for programmers whose chat bots pass the Turing Test. But don’t take our word for it; try chatting with a few at www.mitsuku.com, www.cleverbot.com, alice.pandorabots.com and www.personalityforge.com
KISMET 
In the late 1990s, Dr Cynthia Breazeal from MIT began work on Kismet, a robot equipped with cameras and microphones that gave it an artificial sense of vision and hearing. It could detect motion (including eye movement), and use its four cameras to estimate the distance of an object in its visual field. Similarly, its audio system could identify five different emotions in speech: approval, prohibition, attention, comfort and neutral. It was also equipped with 21 motors in its head and neck, with which it could simulate facial expressions through the movement of its ‘ears’, ‘eyebrows’, ‘eyelids’, ‘lips’, ‘jaw’ and ‘head’. The result was an automaton that could react appropriately during its interactions with humans, leaning in when it liked something and withdrawing when it didn’t. Kismet, once touted by the Guinness Book as the ‘World’s Most Emotionally Responsive Robot’, is now an exhibit at the MIT Museum in the US.

WATSON
In 2011, Watson won a three-day Jeopardy! contest against Ken Jennings and Brad Rutter, two of the biggest champions the US show has ever seen. What made Jeopardy! the perfect test for the IBM machine is the way the game is played. Players, armed with buzzers, are given a clue in the form of an answer, and must respond with a question that fits it. To be successful, participants need to discern double meanings of words, puns and hints, something long considered far beyond machine understanding.
    Of course, the ability to process vast amounts of data and respond rapidly was not really a problem for the IBM machine. At the end of three days, the final clue posed to Jennings, Rutter and Watson was in the ‘19th Century Novelists’ category: “William Wilkinson’s ‘An account of the principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.” Watson answered with “Who is Bram Stoker?” to win the contest with a score of $77,147, more than three times the $24,000 of Jennings, who placed second. IBM is now putting this game-show winner to work on pilot projects with organizations in healthcare and finance.
THE DRIVERLESS CAR 
AI is making its presence felt in automotive engineering, and companies like Google and Oshkosh, with its TerraMax programme, are clear frontrunners. Google’s project, which began in 2006, includes a test fleet of vehicles equipped with a system of sensors that makes them virtually crash proof, and programmed to drive at the speed limit stored in their digital maps. To date, the Google team has completed over half a million miles of accident-free, autonomous driving.
    TerraMax’s story began in 2004, when Oshkosh Corp entered its unmanned vehicle technology in a contest for driverless vehicles. Oshkosh is now developing TerraMax for the US Department of Defense, and is even supplying it to the UK military. 
    The technology takes the form of a modular kit that can be integrated into any vehicle. It uses radar and laser beams to measure the vehicle’s distance from other objects; together, these sensors create a map of the surroundings that lets the vehicle navigate almost entirely on its own. Interestingly, as of August 2013, three US states (Nevada, Florida and California) had already passed laws permitting autonomous cars on their roads.

 


‘By 2029, computers will match human intelligence’

August 14, 2013

Ray Kurzweil, one of the world’s foremost AI gurus, talks about a future in which computers will understand natural language, and nanobots will fight disease


Ray Kurzweil has been described as the “ultimate thinking machine” and a “restless genius”. Currently director of engineering at Google, he is credited with a slew of inventions, which include a music synthesizer and a machine that could read to the blind (the first customer was Stevie Wonder). A high-profile public campaigner for Artificial Intelligence, he has predicted that by about 2030, technology will advance so much that for every passing year, one year will be added to human life by controlling genes and having nanobots in the bloodstream fighting infections. A little-known part of his history is that, as a 19-year-old, he came under the influence of Maharishi Mahesh Yogi after learning about him from the Beatles. He learnt Transcendental Meditation and has kept it up since, “though not regularly”. He delved into Eastern philosophies, and his bestselling books, like ‘The Singularity is Near’ and the latest, ‘How to Create a Mind’, have sections on “western versus eastern ideas”. In an email interview with Subodh Varma, the 65-year-old Kurzweil throws light on the basics of Artificial Intelligence.
    What is ‘Artificial Intelligence’? Do you include human emotions in ‘intelligence’ or in ‘consciousness’? 
Artificial Intelligence is the science of creating computers that can perform tasks that we associate with human intelligence. Our ability to understand and respond appropriately to high level emotions is the cutting edge of human intelligence and the most intelligent thing that we do. Being funny or loving or sexy are very sophisticated behaviours. We want computers to have these capabilities also so that they can interact with us in helpful ways. Understanding emotion is key to understanding language and language is key to understanding knowledge. 
    How far have we progressed towards developing AI? 
Recent progress has been impressive. IBM’s Watson computer is able to play the TV game Jeopardy!, a broad task involving complicated natural language queries that include puns, riddles, jokes and metaphors. For example, Watson got this query correct in the rhyme category: “a long tiresome speech delivered by a frothy pie topping.” It correctly responded “What is a meringue harangue?” Watson got a higher score than the best two human players combined. Watson got its knowledge by reading Wikipedia and other encyclopedias, a total of 200 million pages of natural language documents. Another good example is the self-driving car from Google. These cars have driven half a million miles without human drivers or accidents. You can also ask questions of your cellphone by using Apple’s SIRI or Google Now. We also have good models of how the human neocortex processes information and we can use these biologically inspired algorithms to build intelligent machines. That is what I am doing now at Google.
You predict that by 2045 the ‘singularity’ will be achieved. What does that mean?
One critical date is 2029. It has been my consistent prediction that by that date computers will match human intelligence and pass the “Turing test,” meaning that they will be indistinguishable from human intelligence. Once they can do that they will necessarily exceed human intelligence because they will be able to read everything on the web and every page of every book. Consider that Watson does not read as well as a human but it makes up for that by reading more pages – 200 million to be exact. I would also point out that the advent of intelligent machines is not intended to compete with biological humans or to displace us but rather to enhance us. We are already enhanced by the devices we carry around and their ability to connect with computer intelligence in the cloud. We will do that directly from our brains by the 2030s. 
    These technologies progress exponentially, doubling in power about every year. That means that by 2045 we will have multiplied our intelligence a billion fold by merging with the AI we are creating. That is such a profound transformation that we borrow this metaphor from physics and call it a singularity. 
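    As a back-of-the-envelope check on that arithmetic: Kurzweil assumes roughly one doubling per year, and about 30 such doublings separate the mid-2010s from 2045 (the exact baseline year below is our assumption, not his). A two-line sketch:

```python
# Hypothetical sanity check: one capability doubling per year, 2015 to 2045
doublings = 2045 - 2015
print(2 ** doublings)  # 1073741824, i.e. about a billion-fold
```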
    Can AI become superior to human intelligence? Will it threaten humanity as feared by many? 
I would point out that we are not talking about an invasion of intelligent machines from Mars. We create these tools to extend our own reach. That is the whole point of technology. So it will not be human versus machine. Machines are already enhancing our intelligence and that will only increase in the future. This is a very democratizing technology. A couple of kids in a college dorm room with their thousand dollar notebook computers created Google. The same was true of Facebook. A kid in Africa with a smartphone has access to more information than the President of the United States had 15 years ago. That being said, technology has always been a double-edged sword. Fire kept us warm and cooked our food but was also used to burn down our villages. So, all technologies have a dual creative and destructive potential. But there is no question that we have been helped more than we have been hurt. Just look at how human life expectancy has gone up.

RAY KURZWEIL

ARTIFICIAL INTELLIGENCE: MACHINES WITH A MIND OF THEIR OWN

A ‘thinking’ machine isn’t just the stuff of sci-fi movies. With computer science and neuroscience working together to simulate the human brain, a breakthrough in artificial intelligence may not be too far away

Subodh Varma | TIG 

    Last October, members of an audience at a conference in Tianjin, China, gasped when they heard Microsoft chief research officer Rick Rashid address them in Mandarin. He would speak in English, pause, and the Chinese translation would come on, in his own voice. A machine was converting spoken English into English text, translating that into Chinese text, and then rendering it as spoken Chinese with Rashid’s own voice characteristics.
    Chinese is a hard language, structured very differently from English, so the task was formidable. The tech world hailed the new software as a breakthrough. After a long time, it looked as if progress was being made towards liberating Artificial Intelligence (AI), that is, machines that behave like humans, from the sci-fi movie cage.
    There had been milestones like that before. One of the most publicized such events came in 1997, when an IBM machine called Deep Blue beat world chess champion Garry Kasparov two wins to one, with three draws. The refrigerator-sized computer could analyze 200 million positions a second, and had a database of 700,000 grandmaster games to draw upon.
    The world hailed the event as a historic moment, the first real sign that a machine could be more ‘creative’ than a human. But, ultimately, what was Deep Blue doing? It was using brute-force power to ram through millions of calculations to beat an undoubtedly creative and trained mind like Kasparov’s. In 2006, a German program called Deep Fritz, running on just two Intel processors, beat the new world champion Vladimir Kramnik. Brute force again, but packed into a far more elegant machine.
    More recently, there was Watson, a machine built by IBM specially to play the American TV game show Jeopardy!, in which contestants have to give the right questions to answers read out to them. In 2011, Watson took on two of the game’s all-time biggest winners, Brad Rutter and Ken Jennings, and beat them soundly. It contained 200 million pages of content, including the full text of Wikipedia. It had four terabytes of disk storage and 16 terabytes of RAM. It could process 500 gigabytes of data, the equivalent of a million books, in one second. (Tera is trillion, giga is billion.)
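    Those throughput figures are easy to sanity-check; in the quick sketch below, the implied per-book size is our inference, not an IBM specification:

```python
# Watson's quoted throughput: 500 gigabytes per second, described as
# "a million books in one second". What book size does that imply?
bytes_per_second = 500 * 10**9               # giga = billion, as noted above
books_per_second = 1_000_000
print(bytes_per_second // books_per_second)  # 500000 bytes, ~0.5 MB per book
```

Half a megabyte is roughly the plain text of a long novel, so the two claims are at least mutually consistent.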
    So was it brute-force power again? It was, but there was more to it: Watson had broken the language barrier between humans and machines. Humans speak in natural language, rightly assuming that the listener will fill in the gaps. Scientists had struggled against this barrier for decades, because it seemed impossible to build a machine with a databank that could cover all the quirks and styles of speech. But Watson had broken through in what the techies call ‘Natural Language Processing’, although it did falter once or twice in the contest.
    Machines have undoubtedly come a long way towards human intelligence over the past half-century or so. Besides high-visibility achievements like Deep Blue and Watson, computing machines have reached speeds no human could dream of, even as their size and energy consumption have dropped drastically. Many strides have been made in bridging the machine-human chasm, from voice and face recognition to bionic artificial limbs that respond to nerve impulses.
    But there is still a long way to go. When can a machine be called intelligent? There is much controversy over this, and, rather bizarrely, every new advance looks in retrospect like something short of real ‘intelligence’. It is generally agreed that the gold standard for declaring a machine intelligent is that it should pass the Turing test.
    Alan Turing, British code-breaker and father of computational theory, proposed in a 1950 paper that if a human being cannot distinguish between a machine and a human through interaction, then that machine is intelligent. The thought experiment is usually visualized like this: in one room are a computer running by itself and a human at another computer; in a second room sits the interrogator, a human whose computer is connected to both machines in the first room. The interrogator converses with both and tries to guess which is operated by a human and which is running by itself. The day a computer consistently fools the interrogator into thinking it is human, the intelligent machine will have been born.
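    The protocol itself is simple enough to sketch in a few lines of Python. The canned-reply ‘machine’ below is a hypothetical stand-in for a real candidate program; this is an illustration of the setup, not a serious test harness:

```python
import random

def machine_respondent(question: str) -> str:
    # Hypothetical stand-in for the candidate program under test
    return "That is a hard one. What would you say yourself?"

def human_respondent(question: str) -> str:
    return input(f"(answer as the human) {question} > ")

def imitation_game(questions):
    # Hide which label is the machine, as in Turing's setup
    players = [machine_respondent, human_respondent]
    random.shuffle(players)
    respondents = dict(zip("AB", players))
    for q in questions:
        for label, respond in respondents.items():
            print(f"{label}: {respond(q)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    if respondents.get(guess) is machine_respondent:
        print("You spotted the machine.")
    else:
        print("The machine fooled you.")

imitation_game(["What does a rainy day smell like?"])
```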
    Despite all the spectacular advances, no machine has come anywhere near passing the Turing test.
    Here is a recent innovation that brings out both the achievements and the limits of intelligent machines. Stanford University electrical engineering professor Andrew Y Ng and Google fellow Jeff Dean, working at Google X, a research facility at an undisclosed location in the San Francisco Bay Area, reported last year that they had built an array of 16,000 processors. The system was fed 10 million YouTube thumbnails so that it could ‘watch’ and sort what it saw into 22,000 categories like ‘cats’, ‘humans’, ‘cars’, and so on.
    The researchers called their system an “unsupervised neural network”: it was modeled on the way neurons (brain cells) are organized in the brain, hierarchically and connected to each other, and “unsupervised” because it was never told specifically to “identify cats” or anything like that. It was just built to analyse and categorise. Because of the sheer number of cute cat videos on the Internet, the system started identifying cats and slotting them; similarly, it identified ‘humans’. No computer system had ever been able to do this before in “unsupervised” conditions. It looked as if a “learning machine” was finally coming through.
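    The article gives no code, but the core idea, learning structure from unlabelled data, can be shown at toy scale. Below is a minimal single-layer autoencoder in plain NumPy; the ‘patches’ are random noise standing in for video frames, and every size and constant is an illustrative assumption rather than anything from the Google system:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 unlabelled 8x8 "image patches" (random noise standing in for
# YouTube frames); note that no labels such as "cat" are ever supplied
X = rng.random((1000, 64))
n_hidden = 16  # number of feature detectors to be learned

W = rng.normal(scale=0.1, size=(64, n_hidden))
b_h = np.zeros(n_hidden)
b_o = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    h = sigmoid(X @ W + b_h)         # encode: which features are present?
    out = sigmoid(h @ W.T + b_o)     # decode: reconstruct the patch
    err = out - X                    # reconstruction error drives learning
    d_out = err * out * (1 - out)    # gradient at the output pre-activation
    d_h = (d_out @ W) * h * (1 - h)  # backpropagated to the hidden layer
    # Tied weights W receive gradient contributions from both layers
    W -= lr * (X.T @ d_h + d_out.T @ h) / len(X)
    b_h -= lr * d_h.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

# Whatever cuts reconstruction error was discovered without supervision
print("reconstruction MSE:", float((err ** 2).mean()))
```

Stack several such layers, scale the data to millions of images and the hardware to thousands of processors, and you have the general shape, though far from the detail, of what the Google team built.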
    But here’s the catch: the system had a success rate of just 16%. That is 70% better than anything achieved before, yet still way short of what a human eye (with the visual cortex of the brain backing it up) can do. The human visual cortex, located at the very back of the brain, has a million times more connections than the Google system achieved, the researchers themselves admitted.
    “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” Andrew Ng told The New York Times. 
    This approach of modelling computers on the human brain is gaining popularity and some modicum of success. The seat of thinking, language, motor function, and spatial and sensory perception lies in the neocortex, the uppermost layer of the brain, made of grey matter. It is found only in mammals, and humans have the biggest neocortex in relation to the rest of the brain. The neocortex has six layers, thought to form a hierarchy: neurons start reacting at the lowest (innermost) layer, and each succeeding layer refines the activity to produce the final ‘thought’.
    One of the leading figures in the field of building neural-network-based machines is Geoffrey Hinton, a professor at the University of Toronto who spends half his time at Google. He has developed simple models of virtual neurons connected to each other and layered like the human neocortex.
    Of course, neuroscience itself is still “a bit like physics before Newton”, as Bruno Olshausen, director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, put it in Scientific American. Scientists think they have understood only about 15% of how the visual cortex works, and that is just one of the brain’s functions.
    But as the functioning of the brain is revealed, spurred by advances in imaging technology, the feedback loop with computer scientists will expand rapidly. And so will the prospect of thinking machines.

 

Teaching positions


MBA grads can expect pay hikes

 

Survey Says That Talent Will Be Sought After In Non-Traditional Sectors

Sruthy Susan Ullas TNN 


Bangalore: Watch out for a year when demand for MBAs will come from industries that have not traditionally attracted MBA talent, like energy and utilities, health care and pharmaceuticals. Even as hiring by European companies stagnates, growth at Asian companies is robust, with some even considering raising base salaries. This and more in the 2013 GMAC Corporate Recruiters Survey, being released on Wednesday by the Graduate Management Admission Council (GMAC).
    The survey was conducted in partnership with the European Foundation for Management Development (EFMD) and the MBA Career Services Council (MBA CSC). More than 900 employers in 50 countries around the world were included in the survey.
    Compared with countries with high per capita income, it is those with medium per capita income that plan to hire more graduate management talent in 2013, and a greater share of employers in these countries plans to increase base salaries. “European hiring projections reflect employer sentiments about the region’s struggling economy. Although the hiring outlook for MBAs remains stable compared with 2012, hiring projections for all other types of candidates decreased slightly,” said the report.
    The hiring outlook has improved slightly, from 71% of companies that hired in 2012 to 75% that plan to hire in 2013. The share of companies expecting to hire recent bachelor’s-degree candidates without work experience has declined slightly.
    It’s in energy and utilities that demand for recent MBA graduates has spiked the most, growing from 69% of companies hiring in 2012 to 86% in 2013. Demand in the health care industry has also gone up. “The vast majority of companies in this sector are located in the United States—a market reacting to recent changes in federal health care laws,” the report points out.
    If that is the positive scene, hiring is set to decrease in manufacturing: demand for new MBA hires is projected to decline in 2013 both globally and within the United States (86% of US companies hiring in 2013 versus 90% in 2012).
    With uneven economic growth continuing and many national governments implementing strict austerity measures and spending cuts, job growth is slow in the government and nonprofit sectors. While things are stable in the technology sector, hiring in the finance and accounting sector is expected to grow in 2013 for all candidate types.
    The new demand for MBA graduates in emerging sectors is also driving starting salaries up. According to the report, more than half of health care employers plan to increase base salaries for MBAs in 2013, either above inflation (18%) or at the rate of inflation (33%). A quarter of companies in the consulting industry, which has increased its hiring, plan to raise annual starting base salaries above the rate of inflation.
MUCH IN DEMAND 

• Globally, the mean number of recent MBA graduates that companies plan to hire increased from 11.4 in 2012 to a projected 14.6 in 2013 

• Three-fifths (61%) of Asian companies plan to hire an MBA in 2013, up from 54% in 2012 

• US companies show the greatest demand for MBAs worldwide, as 85% of firms plan to hire an MBA in 2013, up from 82% that did so in 2012

 


Bioinformatics and Applied Biotechnology

 

 


Study Abroad Scholarships for Minorities

 

 

 


‘Most studies on neuroscience are unreliable’

April 15, 2013


Subodh Varma TIMES INSIGHT GROUP 

Chocolates can boost brain power. Exercise makes you feel happy. Pomegranate juice will keep your brain healthy. Recent years have seen a flood of such studies. It is being called the Golden Age of neuroscience, the study of how the human brain works. Riding on a combination of imaging technology, computing power and genetics, neuroscientists are dizzy with success, and the money is flowing in: President Obama has announced a $100 million BRAIN Initiative to map every neuron, and the European Commission has given a billion euros to build a computer model of all 86 billion neurons in the human brain.

    But a study published this week in Nature Reviews Neuroscience has thrown a bucket of cold water on the euphoria. It found that most brain-related studies are not reliable and may be exaggerating effects. Scientists from the University of Bristol, UK, teamed up with colleagues from Stanford University, the University of Virginia and the University of Oxford to analyze published neuroscience studies and came to a startling conclusion: the average “statistical power” of these studies was just 20%. This means a typical study has only a one-in-five chance of detecting an effect that is really there. Most scientists regard 80% power as sufficient.
    Kate Button, one of the authors from Bristol University, told TOI that the statistical power of a study is its ability to detect the effect it is looking for. “Power is dependent on both sample size (number of participants) and the size of effect being investigated, with increases in both leading to increased power,” she said.
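    What “power” means is easy to make concrete. The sketch below uses the standard normal approximation for a two-group comparison; the effect size (d = 0.5) and the sample sizes are illustrative choices of ours, not numbers from the paper:

```python
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample test: the probability of
    detecting a true standardized effect d at significance level alpha."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # true effect in standard-error units
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# For a moderate effect (d = 0.5), about 10 subjects per group gives
# roughly the 20% power the review reports; ~64 per group reaches 80%
for n in (10, 20, 64):
    print(f"n = {n:2d} per group -> power = {two_sample_power(0.5, n):.2f}")
```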
    The other problem Button points to is exaggeration of effect: the smaller the sample, the more likely it is that a chance individual variation gets highlighted as a major effect. “Imagine that antidepressants actually improve mood by 10% on average, but we select a group of people and do a study and find that, in our select sample, it improves mood by 20% on average. This would be an overestimation of the true effect.”
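    Button’s antidepressant example is also easy to simulate. In the sketch below, all the numbers (a 10% true effect, 15 subjects per study, the noise level) are illustrative assumptions; the point is only that selecting significant results from small studies inflates the estimate:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
true_effect = 0.10      # mood really improves by 10% on average
n, trials = 15, 20_000  # many small studies

published = []
for _ in range(trials):
    sample = rng.normal(true_effect, 0.3, n)   # noisy individual responses
    if ttest_1samp(sample, 0).pvalue < 0.05:   # only "significant" results...
        published.append(sample.mean())        # ...tend to get reported

print(f"true effect:               {true_effect:.0%}")
print(f"average reported estimate: {np.mean(published):.0%}")  # about 20%
```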
    Put these two effects together and the euphoria about neuroscience studies begins to flag. Button and her colleagues used 49 meta-analyses published in 2011 that had collated the results of 730 studies on neuroscience themes. They analysed 461 brain-imaging studies and found that their statistical power was just 8%, and 41 rat-in-a-maze studies of memory function, whose average power was between 18% and 31%.
