
Artificial Intelligence in 4 Minutes

A Non-Technical Guide

What is AI?

Artificial Intelligence refers to machines performing tasks that would normally require human intelligence. AI breaks down into two categories: narrow AI and general AI. Narrow AI, also known as “weak AI,” performs a specific task, or a set of closely related tasks, that would typically require human intelligence to accomplish.

Artificial General Intelligence (AGI) refers to machines that can perform any intellectual task a human can, and perhaps better. When Elon Musk talks about AI wiping out the human race (still science fiction), he is referring to a potential negative consequence that could follow if the concept of AGI ever becomes reality.

Components of Narrow AI

I. Natural Language Processing (NLP): Ask Siri or Google Assistant a question and that service is able to understand and interpret the question and generate a response in human language by using NLP.

II. Knowledge Representation: Storing and organizing knowledge in a form that a machine can understand and reason over. IBM’s Watson combines advanced knowledge representation techniques with NLP to understand and answer questions across a wide range of topics. Watson takes in the question through NLP, processes the information using knowledge representation techniques, and then outputs the answer using NLP.

💡 What’s the difference between NLP and Knowledge Representation? 💡

Siri takes a question asked in spoken language, interprets the intent of the question, and transforms it into text using NLP. One person might ask “What’s the weather?” while another might ask “What’s the temp?” and the system understands these are both requests for the current weather. Consider a more complex question: “How many teams have come back from a 3–1 deficit in the NBA Finals to win?” (the answer is one: the Cleveland Cavaliers in 2016). Siri doesn’t know that information or have it stored in an internal database. Instead, it passes the query to a search engine (Google), retrieves the top result, and returns a human-like response using NLP.
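The idea that many different phrasings map to one underlying intent can be sketched in a few lines of code. This is a toy illustration only, not how Siri or Google Assistant actually work; real assistants use trained language models rather than keyword lookups, and the intent names here are invented.

```python
# Toy sketch: map several phrasings of a question to one named intent.
# Unknown questions fall back to a web search, as described above.

WEATHER_PHRASES = {"what's the weather", "what's the temp", "how hot is it outside"}

def detect_intent(utterance: str) -> str:
    """Map a spoken question (already transcribed to text) to an intent."""
    cleaned = utterance.lower().strip("?! .")
    if cleaned in WEATHER_PHRASES:
        return "get_current_weather"
    return "fallback_web_search"  # e.g. hand the query off to a search engine

print(detect_intent("What's the weather?"))  # get_current_weather
print(detect_intent("What's the temp?"))     # get_current_weather
```

Both phrasings resolve to the same intent, while the NBA question from above would fall through to the search-engine path.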

While Siri has some elements of knowledge representation, it’s not the same as in the Watson example. Watson applies NLP to understand the content and structure of the question asked. Using knowledge representation, Watson maps the question to potential answers in its database. Watson understands the context, infers meaning, understands the relationships between different pieces of information, and responds to complex questions with a depth and precision that Siri cannot.

Ask Watson that same question, “How many teams have come back from a 3–1 deficit in the NBA Finals to win?” and Watson will analyze an internal database, consider the historical data of NBA games, the specific teams involved, the concept of a “3–1 deficit,” and so forth. Watson is able to provide an answer along with related context, such as the year the comeback occurred, the team who achieved it, and other details.

III. Learning (Machine Learning): Computers learning from data and improving their performance over time. Machine learning algorithms train on data and learn to make predictions or decisions without being specifically programmed for that task.
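To make “learning from data” concrete, here is a minimal sketch that fits a straight line to example points and then predicts an unseen value. The numbers are made up for illustration; real machine-learning systems use far richer models than a single line, but the principle is the same: the program is never told the rule, it infers it from data.

```python
# Minimal "learning from data" sketch: fit a line y = a*x + b to example
# points using least squares, then use the learned line to predict.

def fit_line(xs, ys):
    """Least-squares fit of a line to paired data points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# "Training data" (invented): hours studied vs. test score
hours  = [1, 2, 3, 4, 5]
scores = [52, 60, 68, 76, 84]

a, b = fit_line(hours, scores)
print(round(a * 6 + b))  # predict the score for 6 hours of study -> 92
```

Nowhere did we program the rule “each hour adds 8 points”; the code learned it from the examples, which is the essence of machine learning.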

Instead of programming Watson with specific answers to each potential question, Watson trained on a vast amount of information from encyclopedias, books, research papers, and other data. Watson learned over time how to better answer questions by learning from mistakes, improving performance, and becoming more effective at playing Jeopardy.

IV. Reasoning and problem-solving: Enabling machines to make decisions and find solutions based on available information and rules by analyzing a given situation, understanding the relevant factors, and determining the best course of action.

AlphaGo, a computer program developed by DeepMind, defeated world champion Go player Lee Sedol in 2016. Using sophisticated reasoning and problem-solving algorithms, AlphaGo explored possible moves, evaluated potential outcomes, and selected the move with the highest probability of success.
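Reasoning by exploring moves and evaluating outcomes can be shown on a much smaller game. The sketch below searches the game tree of a simple stone-taking game; it is loosely inspired by game-tree search, while AlphaGo’s actual approach combined deep neural networks with Monte Carlo tree search and is far more sophisticated.

```python
# Toy game: players alternately take 1 or 2 stones; whoever takes the
# last stone wins. The search explores every possible move, checks
# whether it leaves the opponent in a losing position, and picks the
# first winning move it finds.

def best_move(stones: int):
    """Return (move, can_win) for the player about to act."""
    for move in (1, 2):
        if move > stones:
            continue
        if move == stones:               # taking the last stone wins outright
            return move, True
        _, opponent_wins = best_move(stones - move)
        if not opponent_wins:            # leave the opponent a losing position
            return move, True
    return 1, False                      # every line loses; take 1 anyway

print(best_move(7))  # (1, True): leave a multiple of 3 for the opponent
```

With 7 stones the search discovers the winning strategy (always leave the opponent a multiple of 3) purely by reasoning over outcomes, never by being told the rule.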

V. Social Intelligence: Computers understanding and interpreting human emotions, social cues, contexts, and responding in a way that is socially appropriate. Siri uses a basic form of social intelligence to interpret the intent behind user commands and respond in a way that mimics human conversation by using a combination of NLP, machine learning, and pre-programmed responses.

VI. Perception: Perception involves the ability to sense and understand the environment, mimicking human senses like vision (computer vision) or hearing (speech recognition). Robotics falls into this category as well; a great example is Boston Dynamics. Their robots, like Atlas and Spot, use perception systems to interact with and navigate their surroundings, applying computer vision to analyze visual data from cameras and sensors and to perceive objects, obstacles, and terrain in real time. Self-driving vehicles fit here as well. Tesla, Waymo, and Cruise all build technology that must perceive and understand its environment to drive safely and efficiently. Here’s how they do it:

  1. Input Collection: A self-driving vehicle uses a variety of sensors, including cameras, LIDAR, and RADAR to capture information about the surroundings. This could include detecting other vehicles, pedestrians, signs, road markings, and more. Sensors serve as the vehicle’s “eyes” and “ears,” capturing raw data about the environment.

  2. Perception Processing: The AI system interprets the raw sensor data, identifying and categorizing objects such as cars, pedestrians, or road signs. This step often involves machine learning, where the system learns from past data to better interpret current and future data.

  3. Action Planning: Based on this processed perception data and its internal knowledge representation, the vehicle’s AI system decides what actions to take: accelerate, brake, turn, change lanes, and so on. This is a form of reasoning and problem-solving, as the system must make decisions based on its understanding of the environment.

  4. Actuation: The computer manipulates physical hardware to carry out the planned action; planned actions become physical actions. This is where robotics comes into play.
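The four steps above can be sketched as a single loop of small functions. Every sensor value and rule here is invented for illustration; real systems fuse camera, LIDAR, and RADAR data through learned models rather than a single hand-written threshold.

```python
# Highly simplified sketch of the four-stage self-driving loop:
# input collection -> perception processing -> action planning -> actuation.

def collect_input():
    """Step 1: pretend sensor reading (distance to the object ahead, meters)."""
    return {"distance_ahead_m": 8.0}

def perceive(raw):
    """Step 2: turn raw data into a labeled understanding of the scene."""
    return {"obstacle_close": raw["distance_ahead_m"] < 10.0}

def plan(scene):
    """Step 3: reason about the scene and choose an action."""
    return "brake" if scene["obstacle_close"] else "maintain_speed"

def actuate(action):
    """Step 4: translate the decision into a physical command."""
    return f"sending '{action}' command to vehicle controls"

print(actuate(plan(perceive(collect_input()))))
# sending 'brake' command to vehicle controls
```

Each function mirrors one numbered stage: the sensors see an object 8 meters ahead, perception labels it as a close obstacle, planning chooses to brake, and actuation turns that decision into a hardware command.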

Each component of narrow AI serves a specific function and plays a key role in the overall system. While certain examples may combine multiple components of narrow AI, such as perception and reasoning, it is important to note that these individual parts operate within their own domains and do not communicate or learn from each other in the same system. This clear separation and lack of interconnectivity distinguish narrow AI from general AI.

#ai #artificial-intelligence #machine-learning #deep-learning