April 16, 2012
Alan Turing was famous for believing computers could act like humans. He devised what is now known as the Turing test, whereby a computer would supply responses to human questioning. If the questioner could not distinguish the answers from those of a real, live person, the computer would have passed the test.
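The protocol can be sketched as a simple loop. This is only a toy illustration, not anything Turing specified; the respondent and judge functions are hypothetical stand-ins:

```python
import random

def human_respondent(question):
    # Hypothetical stand-in for a real person's answer.
    return "I'd say it depends on the context."

def machine_respondent(question):
    # Hypothetical stand-in for the computer under test.
    return "I'd say it depends on the context."

def turing_test(questions, judge):
    """For each question, the judge sees the two answers in a random
    order and guesses (by index) which one came from the machine.
    Returns the fraction of correct guesses."""
    correct = 0
    for q in questions:
        answers = [("human", human_respondent(q)),
                   ("machine", machine_respondent(q))]
        random.shuffle(answers)
        guess = judge(q, [text for _, text in answers])  # 0 or 1
        if answers[guess][0] == "machine":
            correct += 1
    return correct / len(questions)
```

A machine "passes" when the judge's accuracy falls to chance (around 0.5): the answers carry no signal distinguishing machine from human.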
His premise was that since human thinking is logical and computers are based on logic, human behavior should be computable. Nice idea. But it hasn't quite panned out -- at least not yet. Even with recently developed AI technology such as automated customer service assistants, most people can tell when they are talking to a computer.
A recent article in Wired summed up the problem with Turing’s thinking:
That simplistic idea proved ill-founded. Cognition is far more complicated than mid-20th century computer scientists or psychologists had imagined, and logic was woefully insufficient in describing our thoughts. Appearing human turned out to be an insurmountably difficult task, drawing on previously unappreciated human abilities to integrate disparate pieces of information in a fast-changing environment.
But as the Wired piece suggests, the technology might finally be catching up to Turing. Recent advances in AI-type machines, like Google's search engine and IBM's Watson, point to how that might come about.
Current AI relies on connection and probability algorithms. This technology drives the language recognition found both in Google searches and in the DeepQA technology behind IBM's Watson. The systems understand, to a degree, what a human is requesting and produce relevant search results or the most likely answer to a Jeopardy clue.
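The "most likely answer" idea can be illustrated with a toy probabilistic ranker. This is emphatically not Watson's DeepQA pipeline -- just a minimal sketch of the general approach, assuming each candidate answer comes with a snippet of supporting evidence text:

```python
from collections import Counter
import math

def score(question, evidence):
    """Log-probability of the question's words under a unigram model
    built from the evidence text, with add-one smoothing. Candidates
    whose evidence better 'explains' the question score higher."""
    counts = Counter(evidence.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen words
    logp = 0.0
    for word in question.lower().split():
        logp += math.log((counts[word] + 1) / (total + vocab))
    return logp

def best_answer(question, candidates):
    # candidates maps each answer to its evidence text.
    return max(candidates, key=lambda a: score(question, candidates[a]))
```

For example, given the question "capital of france" and candidates "Paris" (evidence mentioning the capital) and "Lyon" (evidence that does not), the ranker prefers "Paris". Real systems layer hundreds of such evidence-scoring signals and combine them, but the principle -- rank candidates by probability rather than deduce answers by logic -- is the same.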
However, those systems rely on limited datasets, which confines their answers to particular domains. Neither Google nor Watson can supply ad-lib responses the way a human can. But, as the Wired article points out, the ability to ingest large datasets and correlate the information seems to be the approach that could scale up.
Robert French, a cognitive scientist at the French National Center for Scientific Research, theorizes that a massive dataset could be the final key: one containing every memory -- olfactory, audio, visual, and other sensory data -- from millions of people. Says French: "These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions."
Recent advances in big data, analytics, and language recognition are enabling the creation of machines far more intelligent than those of even a few years ago. At the right scale, these technologies may indeed lead to systems that can pass the Turing test. While such machines would only be imitating human behavior, they would impact nearly every industry of the modern era.
Full story at Wired