What do you think of when you hear “artificial intelligence” or AI?
Some believe artificial intelligence is a harbinger of doom that will end with robots stealing our jobs and maybe our lives, while others see it emerging as a friend or even family member who can help us be more efficient and lead better lives. (Siri, can you remind me to pay my bills in an hour?) Regardless of where you stand, it can't be denied that artificial intelligence is, and will continue to be, a prominent feature of the modern world.
But where did it all begin—how did we get to this point? Let me tell you a SUPER-abridged bedtime story that includes an interesting cast of characters (both human and machine). For all the nerds out there, this story will be so good you’ll be asking Alexa to add an Alan Turing biography to your Amazon cart.
Let’s hit it.
The History of Artificial Intelligence: Once Upon a Time …
Milk was delivered to people's doors, the first Peanuts comic strip was published, and the first TV remote control was marketed. Yes, it was 1950 in America, and some big cultural things were happening. Just across the pond, Alan Turing was speculating about machine intelligence.
In a 1950 paper, Turing, who was later considered the Father of Computer Science, proposed what became known as the Turing Test to determine a machine’s intelligence. The test involved a person, a computer, and a judge. The judge was tasked with correctly identifying which conversational interactions were human and which were machine, based on responses to questions. (Fun fact: It’s argued that no machine has passed the Turing Test to this day, despite claims that Google Duplex passed it in 2018.)
Though Turing alluded to the concept of artificial intelligence, it didn't receive its name until the mid-1950s, when John McCarthy organized a conference for scientists at Dartmouth College. McCarthy defined the concept as "the science and engineering of making intelligent machines" and later became known as the Father of Artificial Intelligence.
A few years passed, and in 1959 Arthur Samuel started really talking about "machine learning," defining it as a "field of study that gives computers the ability to learn without being explicitly programmed." To my knowledge, he did not become known as the Father of Machine Learning.
In the mid-1960s, one of the first chatbots, ELIZA, was born. Joseph Weizenbaum introduced ELIZA to the natural language processing (NLP) scene, and she was built to mimic a Rogerian psychotherapist. Around the same time, Daniel Bobrow developed STUDENT, a program that could solve algebra word problems, which was a huge accomplishment for natural language processing.
Things are getting good, right? We’re really cooking with gas now!
That is, until work on artificial intelligence stagnated in the mid-1970s and government funding for AI slowed. Despite the cool things happening, scientists had overpromised and underdelivered. This period, known as the AI winter, wasn't looking good for our main characters.
Funding returned years later with renewed hope in AI's potential, but yet another AI winter hit as funders once again doubted the future of AI. This winter lasted into the mid-1990s. Someone, do something quick!
As we all know, AI was never meant to be kept down. Clapping back at the haters, computer scientists unleashed an IBM computer named Deep Blue that defeated the chess champion Garry Kasparov in 1997. Deep Blue was capable of evaluating 200 million positions per second. I guess you could say there was some progress being made in AI.
Fast forward to 2011. If you weren't impressed by a machine winning a game of chess, bound as it is by rules and specific plays, consider this: an IBM machine named Watson took on two reigning Jeopardy! champions and won. Watson could parse complex clues and riddles and answer them correctly.
The same year, Apple's Siri, "the personal assistant," entered our lives with the ability to answer questions, such as whether we need a raincoat for the day. Not to be outdone, over the next few years Google Now, Microsoft's Cortana, and Amazon's Alexa joined Siri.
Here’s where I’d normally say, “And they lived happily ever after. The end.” But this is not a story with an ending—the storyline is still developing across multiple sectors and markets.
Good luck sleeping now, amiright?
Nailing the Lingo: AI, Machine Learning, and Natural Language Processing
Before we get into the importance of AI and marketing, let’s cover some basic artificial intelligence-related lingo that you can bust out at parties and really geek it up (and probably get a date, if you’re looking).
As you can imagine, there are lots of aspects of artificial intelligence. The terms I’ve included are only a sample of those relevant to marketing and fixtures in Brandwatch’s and Crimson Hexagon’s social listening platforms. According to SAS:
- Artificial intelligence “makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks.” How does a machine get there? Through learning, similar to humans.
- Machine learning “is a method of data analysis that automates analytical model building. It’s a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention.”
- Natural language processing is “a branch of artificial intelligence that helps computers understand, interpret, and manipulate human language.”
Human Analysts and AI: The Marketing Dream Team
You already know this, but I have to say it anyway.
We're in the era of hyper-connected customers, where personalization will tip the scales in certain businesses' favor. An important part of ensuring this personalized experience is social listening, or online conversation research. Social listening gives you an "in the moment" understanding of your customers and your brand as it exists "in the wild." You can be a fly on the wall to any conversation happening around you about whatever you're selling.
This is all well and good, but given the vastness of the social media world, it's impossible for lone analysts to crunch the numbers by hand, comprehend all of the conversations happening, and generate meaningful insights in the short turnaround time our ever-connected world requires. That is, unless you literally have an analyst army of Risk proportions to do your bidding.
Enter artificial intelligence.
“The growth of AI, machine learning, and deep learning technology addresses this challenge [of needing quick insights in order to personalize] head-on. This can allow marketers to take personalization to a whole new level—using data as the voice of the customer—so you can match and tailor digital experiences to customer journeys. Deep learning technology provides more personal and accurate data points, enabling more accurate personalization recommendations.”
“Customers today want to be seen, known, and understood. Social listening allows marketers to give them that sense of being known without having to spend thousands of man-hours actually getting to know them.”
—Forbes, describing how marketers use AI to socially listen.
Let’s talk about that.
Of course the work we do at Campus Sonar relies heavily on the human analyst. Part of what makes it possible for us to do our jobs is Brandwatch’s artificial intelligence. But as we’ve said many times before, we’re not a software company. The software gives us a start, then we bring it home.
One way Brandwatch helps Campus Sonar analysts is with its AI analyst, Iris. Iris can monitor brands, identify crises, and examine drivers of conversation; it sticks to identifying peaks and valleys in conversation. Brandwatch also has some behind-the-scenes AI at work in its signals feature, which alerts us to unusual online mention activity when it detects changes. And there's AI built into Brandwatch's analytics: the platform uses NLP and text analytics to identify major topics of conversation, examine sentiment, and better understand emotion. Let's focus on sentiment, because it's important and there are lots of questions about it.
Brandwatch currently takes a rules-based approach to categorizing sentiment. The rules-based approach to text analytics relies on Boolean strings, or on more complex models developed by language pros. This approach means faster analysis, easier mistake identification, and more granular results, and you know exactly what you're getting. However, there are some drawbacks: sometimes there are exceptions to the rules, complex rules don't write themselves overnight, different languages present different challenges, and it's a relatively narrow approach. Most importantly, rules can't catch distinctly human aspects of language, such as sarcasm and slang. You need a human reviewing the AI's work. That's why Brandwatch allows users to change sentiment classifications when they're incorrect. For those of you thinking you can cut some corners, think again: without human assistance, the out-of-the-box accuracy of this feature is about 65 percent. But that's why you have handy analysts (like us!) to clean up sentiment tagging and raise the accuracy even higher.
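Brandwatch's actual rules are proprietary, but to make the idea concrete, a rules-based sentiment classifier boils down to matching text against curated word lists and patterns. Here's a minimal Python sketch; the word lists and scoring are invented for illustration, and real platforms use far larger, professionally built lexicons and Boolean rules:

```python
import re

# Hypothetical keyword lists -- a real rules-based system uses much
# larger, linguist-curated lexicons plus Boolean and proximity rules.
POSITIVE = {"love", "great", "excellent", "happy", "amazing"}
NEGATIVE = {"hate", "awful", "terrible", "sad", "disappointing"}

def rule_based_sentiment(post: str) -> str:
    """Label a post positive/negative/neutral by counting keyword hits."""
    words = re.findall(r"[a-z']+", post.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(rule_based_sentiment("I love this campus, the tour was great!"))
# -> positive
print(rule_based_sentiment("Oh great, ANOTHER parking ticket. I love it."))
# -> positive (sarcasm fools the rules -- this is why humans review the output)
```

The second example is exactly the failure mode described above: the rules see "great" and "love" and miss the sarcasm, so a human analyst has to step in.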
The text analytics approach will change with Brandwatch's merger with Crimson Hexagon. The new "super-platform" will give users the ability to tap into Crimson Hexagon's acclaimed artificial intelligence, which is based on machine learning, not rules. Remember how Brandwatch allows you to manually change the sentiment? In addition to making manual changes, Crimson Hexagon lets you take on the role of teacher and train the AI for the type of sentiment (or really any category) you're looking for, which should increase accuracy. While results depend on the number of posts you've trained and continue to train, the AI's out-of-the-box accuracy is conservatively around 68 percent. That's 68 percent with no posts trained, relying solely on Brandwatch/Crimson Hexagon's IQ to correctly categorize posts. With more time spent "coding" online mentions and consistent training on the front end, there might be less cleanup for the human analyst on the back end, but there will always be cleanup.
By a human. A human who understands the distinctly human aspects of language, context, and culture.
Don't get me wrong, it's not all roses here either. For machine learning to be effective, you need to actually teach it. You may see a dip in precision, and different kinds of documents call for different approaches, but if you take these considerations to heart, you'll be able to train with specific examples, customize your analysis and insights as things change, have more flexibility to discover the nuances of your data, and analyze any language.
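The teach-it-yourself loop described above is the standard supervised-learning pattern: label a handful of posts, let the model learn word statistics, then score new posts against what it learned. Crimson Hexagon's actual algorithms are proprietary, so this is only a toy Python sketch of the idea, using a miniature Naive Bayes classifier and invented training posts:

```python
import math
import re
from collections import Counter, defaultdict

class TinySentimentModel:
    """A toy Naive Bayes classifier that 'learns' sentiment from labeled posts."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # how many posts per label

    def train(self, post: str, label: str) -> None:
        """The 'teacher' step: feed the model a post you've labeled yourself."""
        self.label_counts[label] += 1
        self.word_counts[label].update(re.findall(r"[a-z']+", post.lower()))

    def classify(self, post: str) -> str:
        """Score the post against each label's learned word statistics."""
        words = re.findall(r"[a-z']+", post.lower())
        vocab = {w for counts in self.word_counts.values() for w in counts}
        best_label, best_score = None, float("-inf")
        total_posts = sum(self.label_counts.values())
        for label in self.label_counts:
            total_words = sum(self.word_counts[label].values())
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.label_counts[label] / total_posts)
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1) / (total_words + len(vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented training posts -- in practice you'd label many more.
model = TinySentimentModel()
model.train("love the new dorms, so excited", "positive")
model.train("best campus visit ever", "positive")
model.train("the parking situation is awful", "negative")
model.train("so disappointed in the dining hall", "negative")

print(model.classify("the dorms are awful"))  # -> negative
```

The payoff is the flexibility described above: because the categories come from your labeled examples rather than fixed rules, you can train for any category you care about, in any language, and keep retraining as the conversation changes.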
I’m sure you’re as excited about this merger as I am, my fellow data nerds.
All of this is to say, it's the magical union between AI and analysts that gives you the real insights you need in time to use them. From a Campus Sonar perspective, this heroic team helps you prevent dumpster fires from consuming your campus (or, at the very least, helps you smell the smoke and tells you what's about to burst into flames), drill down into how people feel about your brand and why, and bubble up the things that matter most.
And we (awesome Campus Sonar analysts with Brandwatch’s AI) do this work well. And we do it fast.
So how do we feel when we hear “artificial intelligence” or AI? Delighted.
Don't miss a single post from Campus Sonar—subscribe to our monthly newsletter to get social listening news delivered right to your inbox.
The post Human Analysts + Artificial Intelligence = Marketing Masterminds!!! originally appeared on the Campus Sonar Brain Waves blog.