Becoming cognitive

Published on 20/03/2017 | Written by Alison Bolen, Hui Li, Wayne Thompson


What underpins the increasingly in-vogue concept of cognitive computing? Wayne Thompson, Hui Li and Alison Bolen provide a quick start guide…

When machines become cognitive, they can understand requests, connect data points and draw conclusions. They can reason, observe and plan. Leaving for a business trip tomorrow? Your cognitive device will automatically offer weather reports and travel alerts for your destination city.

Planning a birthday celebration? Your cognitive device will help with invitations, make reservations and remind you to pick up the cake. Planning a direct marketing campaign? Your cognitive assistant can instinctively segment your customers into groups for targeted messaging and increased response rates.

In this quick primer on cognitive computing, we’ll explore the basic components of artificial intelligence and describe how various technologies have combined to help machines become more cognitive.

The history of artificial intelligence
Cognitive computing is an outgrowth of artificial intelligence (AI), which originally set out to make computers more useful and more capable of independent reasoning.

But where did AI come from? Well, it didn’t leap from single-player chess games straight into self-driving cars. The field has a long history rooted in military science and statistics, with contributions from philosophy, psychology, math and cognitive science.

Most historians trace the birth of AI to a Dartmouth research project in 1956 that explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and increased the focus on training computers to mimic human reasoning.

For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And in 2003, DARPA produced intelligent personal assistants, long before Google, Amazon or Microsoft tackled similar projects. This work paved the way for the automation and formal reasoning that we see in computers today.

AI contains many subfields, including:

  • Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without being explicitly programmed where to look or what to conclude.
  • A neural network is a kind of machine learning inspired by the workings of the human brain. It’s a computing system made up of interconnected units (like neurons) that processes information by responding to external inputs, relaying information between units. The process requires multiple passes at the data to find connections and derive meaning from undefined data (a minimal sketch appears after this list).
  • Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
  • Computer vision relies on pattern recognition and deep learning to recognise what’s in a picture or video. When machines can process, analyse and understand images, they can capture images or videos in real time and interpret their surroundings.
  • Natural language processing is the ability of computers to analyse, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
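To make the neural network idea above more concrete, here is a minimal sketch in Python (using only NumPy) of a network with a single layer of hidden units learning the XOR function. The layer sizes, learning rate and number of passes are arbitrary choices for illustration, not a production recipe or any particular vendor's implementation.

    import numpy as np

    # A tiny feed-forward neural network trained on the XOR problem.
    # Two inputs, one hidden layer of interconnected units, one output.
    # Hypothetical sizes and learning rate, chosen for illustration only.
    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1 = rng.normal(size=(2, 4))   # connection weights: inputs -> hidden units
    W2 = rng.normal(size=(4, 1))   # connection weights: hidden units -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # "Multiple passes at the data": each pass nudges the connection
    # weights so the network's output moves closer to the targets.
    for epoch in range(5000):
        hidden = sigmoid(X @ W1)        # units respond to external inputs
        output = sigmoid(hidden @ W2)   # information relayed between units
        error = y - output
        # Backpropagation: push the error back through the connections.
        grad_out = error * output * (1 - output)
        grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 += hidden.T @ grad_out * 0.5
        W1 += X.T @ grad_hidden * 0.5

    print(np.round(output, 2))  # approaches [[0], [1], [1], [0]]

The same pattern, scaled up to many layers and far larger datasets, is essentially what the deep learning item above describes.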

How big data plus AI produced cognitive computing
Remember the big data hoopla a few years ago? Advancements in computer processing and data storage made it possible to ingest and analyse more data than ever before. Around the same time, we started producing more and more data by connecting more devices and machines to the internet and streaming large amounts of data from those devices.

With more language and image inputs into our devices, computer speech and image recognition improved. Likewise, machine learning had much more information to learn from. These advancements brought AI closer to its original goal of creating intelligent machines, which we now call cognitive computing.

Where are we today with cognitive computing?
Cognitive computing is the holy grail of AI. With cognitive computing, you can ask a machine questions – out loud – and get answers about sales, inventory, customer retention, fraud detection and much more. The computer can also discover information you never thought to ask for. It will offer a narrative summary of your data and suggest other ways to analyse it. It will also share information related to previous questions from you or anyone else who asked similar questions.

You’ll get the answers on a screen or just conversationally.

How will this play out in the real world? In healthcare, treatment effectiveness can be more quickly determined. In retail, add-on items can be more quickly suggested. In finance, fraud can be prevented instead of just detected. And so much more.

In each of these examples, the machine understands what information is needed, looks at relationships between all the variables, formulates an answer – and automatically communicates it to you with options for follow-up queries.

We have decades of AI research to thank for where we are today. And we have decades of intelligent human-to-machine interactions to come.


ABOUT THE WRITERS//

Wayne Thompson, Hui Li and Alison Bolen work for software company SAS.
