
Explain it to me like I’m 5: Artificial intelligence

By Adam Eric Junkroski

April 19, 2023

There’s a ton of talk lately about artificial intelligence, and the term has been making its way into daily news stories and conversations, especially with the rise of ChatGPT. Yet, if you search for “artificial intelligence,” or “AI,” you get multiple similar-sounding but slightly different explanations.

So, what is AI?

The simplest definition for AI is perhaps an oversimplification: AI refers to computer technology designed to mimic human problem solving, decision making, pattern recognition, learning and perception.
If you do some digging, you will see other areas AI covers, but many — if not most — will fall into one of these categories. For example, speech recognition is AI, but technically has aspects of pattern recognition and learning associated with it.

What AI is not 

Perhaps as important as what AI is, is what it is not. AI does not indicate self-awareness or sentience, though sophisticated AI responses and actions can seem eerily similar to what you might expect from another human. AI is also not omniscient. Currently, AI is limited to the data sets made available to it, whether those are pre-existing — the archives of National Geographic, for example — or data taken from interactions with people or other computer systems.

Additionally, AI is incapable of several important functions of human intelligence that make social interactions especially complex, such as empathy and intuition. AI is also incapable of independently developing a moral and ethical framework. Similarly, AI has great difficulty making independent inferences. It can mimic inference when programmed with specific responses to specific inputs, but this is not true inference.

Perhaps most importantly, AI isn’t yet able to think in a truly creative way. Humans excel at developing novel, untried approaches to problem solving by applying information gleaned from completely unrelated experiences.

What many people think of when they think of AI is what experts refer to as Artificial General Intelligence (AGI). This refers to a broad, nonspecific ability to act and problem solve as a human would, including adaptability and true reasoning. Some argue self-generated emotional response must be included in AGI as well, which could include aspects of self-preservation and the development of preferences.

AGI does not yet exist. Though AI systems are increasingly capable and complex, they are still a very long way from achieving AGI.

Where does machine learning fit into the picture?

Machine learning is a subset of AI. In machine learning, the program builds its own data set from the inputs it receives, guided by the instructions it’s given. For example, several brands of thermostat can be trained to recognize the days, times or conditions that lead you to change a setting. Over time, they can learn that when you wake up on weekdays at 6 a.m., you like the temperature in the house to be 73 degrees, and they adjust automatically. The system considers multiple factors (day of the week, time and current temperature) to decide what to do. That’s the decision-making part. The input you provided regularly under those conditions is the machine learning part.
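To make the split between “learning” and “deciding” concrete, here is a toy sketch in Python. It is not any real thermostat’s code; the class name, the 70-degree default and the weekday/weekend labels are all made up for illustration. The `observe` method is the machine learning part (building a data set from your inputs), and the `decide` method is the decision-making part (using what was learned to pick a setting).

```python
from collections import defaultdict

class LearningThermostat:
    """Toy model: learns the temperature you tend to pick under given conditions."""

    def __init__(self, default=70):
        self.default = default
        # Maps (day_type, hour) -> list of temperatures the user chose.
        self.history = defaultdict(list)

    def observe(self, day_type, hour, chosen_temp):
        # Machine learning part: build a data set from the user's inputs.
        self.history[(day_type, hour)].append(chosen_temp)

    def decide(self, day_type, hour):
        # Decision-making part: use what was learned to pick a setting,
        # falling back to the default when there is no data yet.
        past = self.history.get((day_type, hour))
        if not past:
            return self.default
        return round(sum(past) / len(past))

stat = LearningThermostat()
for _ in range(5):                 # a week of weekday mornings at 6 a.m.
    stat.observe("weekday", 6, 73)

print(stat.decide("weekday", 6))   # learned preference: 73
print(stat.decide("weekend", 6))   # no data for weekends yet: default 70
```

After a few observations, the system “knows” your weekday-morning preference but still has nothing to go on for weekends, which is exactly the sense in which AI is limited to the data made available to it.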

AI exists without machine learning, but it tends to be limited to very specific tasks. For example, several pattern recognition activities, such as optical character recognition, are technically AI. In the past, however, these systems were generally not capable of learning to identify characters in fonts or writing they weren’t already trained on. But adding machine learning led to handwriting recognition, which enhanced character recognition (though perhaps not with huge success in the beginning, as was the case with the infamous Apple Newton of the early ’90s).

Is there an end goal for AI?

Yes, and it’s not sinister — AI will not turn the Earth into a desolate wasteland filled with cannibalistic metahumans. 

Simply put, the end goal is to develop systems that supplement, rather than replace, human intelligence by taking over tasks that are too menial, dangerous or time-consuming for humans. Have you seen those incredible videos of Boston Dynamics robots in action?

Picture one of these robots with the added abilities we’ve discussed here. 

“Robot, could you wake up the kids and get them ready for school?”

Your robot knows the locations of your kids, understands what waking them entails, knows they need clothes set out, toothbrushes loaded with toothpaste and their bookbags ready. Oh, and it learned yesterday that little Billy had homework last night, so it reminds him to check to make sure it’s completed and in his bag.

Plus, it adapts to this new request by fitting it among the routine tasks already scheduled for the day and is able to resume them once this task is complete. Later, when little Billy realizes he did not finish his homework and attempts to get the robot to do it for him, it understands the ethical implications (from a human perspective) and refuses.

And Billy fails. That’s the end goal of AI — failing Billy. 

OK, maybe not.
 
Adam Eric Junkroski is a marketing, advertising and communications professional specializing in content strategy. With over 25 years of experience, he has worked with a wide variety of clients across the country, including finance, family entertainment, mitigation banking and real estate.