Listen to Your Sonarian Senses: Lessons Learned from Crisis Monitoring

Crises happen. The ripples are felt in person and online. While talking to every person affected would be helpful, it's not realistic. Social listening, however, gives campuses instant access to online conversations about a particular crisis as they emerge on social sites, in forums, on blogs, in the news, and on other websites. It also allows for unique views into trends within those conversations as the crisis unfolds in real time. All in all, social listening puts campuses in a better position to plan or adjust crisis communication strategies in the moment based on emerging insights. But it's only effective when human analysts team up with technology to write the right query and easily visualize the data.


Let’s step back and talk about campus crises. If you work on a campus, your eye probably just twitched, and chances are you’ve just knocked on wood, crossed yourself, or thrown salt over your shoulder. But deep down you know these actions won’t save you.

Crises come in lots of sizes and flavors. There can be relatively small crises or Varsity Blues-level crises. They can be internal or external. They can involve faculty/staff, students, or donors. They can relate to cheating, weather, errant social media posts, or even the death of a campus community member. They can be serious, something you never thought you’d see.

In each case, the context is different, how it unfolds varies, who gets involved shifts, and what the response looks like changes. One thing is the same though: when it hits, it hits hard, and it often poses a reputational threat that the institution needs to mitigate before it loses control of the story.

That’s where Campus Sonar comes in. We help institutions better understand and strategically manage emerging crises; after all, information is power. We wouldn’t say our approach to crises is formulaic, but we have a system in place for how we deal with these occurrences. We have a process that works. 


However, our most recent crisis monitoring project presented a twist on what we typically deal with, one that required adaptability and quick thinking, along with a reliance on our gut instincts. More than any crisis case before it, this one demonstrated the value of the human analyst in supporting an unfolding crisis on campus.

This particular crisis involved a small private college that had an initial incident several years ago and now awaited a legal judgment. The initial incident only involved a few individuals, but the legal results punished the school as a whole. The client wanted to track how the news was discussed online and any potential reputation-harming coverage of the school. 

There were several factors surrounding the crisis, such as the time between when the initial scandal broke and when the legal judgment was announced, that made the school uncertain about the online response. By examining conversation in real time, the client hoped to use our data and analysis to support their response and communications strategy moving forward.

Queries (AKA the Ladle for Language Soup)

We knew the legal judgment our client was concerned about would be announced in the weeks following our initial contact. When it was, we’d need to quickly turn around dashboards and relevant data points to generate insights that would inform their communications strategy, so we decided to draft an early version of the query. To do this, we had to get our arms around the initial incident and write a query based on that information. We scoured news articles and identified key aspects of the conversation: the who, what, where, when, how, and why. We wrote the keywords and phrases we found into our query; they represented our best thinking on the topics that might emerge in public conversation when the legal announcement was made.
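
To make that concrete, here's a simplified sketch of what an early crisis query might look like, borrowing the fictional "Ms. Marks Medical College" example from later in this post. Boolean syntax varies by social listening platform, and every name and keyword below is invented for illustration; this isn't our actual client query.

    # A simplified, fictional sketch of an early crisis query.
    # Boolean dialects vary by listening platform; this is illustrative only.
    initial_query = (
        '("Ms. Marks Medical College" OR "Marks Medical") '
        'AND (lawsuit OR "legal judgment" OR scandal OR investigation)'
    )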

Nerd Note: Dashboards are repositories of data visualizations where data is broken down and visualized in different ways to clearly depict patterns and trends.

Nerd Note: A query is a request for information from a social listening database or software. It defines the scope of the conversations we collect through social listening.

The thing is: we knew the query wouldn’t be perfect. But it would be enough to get our client a baseline on the scope of conversation, especially if the crisis broke right before the weekend.

Which it did—on a Friday afternoon.

We had alerts in place for the query and executed it the minute the news broke. We left for the weekend confident that our query was in place and the client's data was being collected. We knew we’d need to spend Monday observing the mentions that pulled in to understand how the incident was actually being discussed. This would help us write the most robust and relevant query.

We tweaked around the edges of the Boolean for a day or two, checking its effectiveness by testing searches for the client and the crisis on Twitter. But something didn’t feel right. Even when it appeared our query functioned pretty well … we felt like we were missing something. We edited it again. But our “Sonarian Senses” were tingling, and we couldn’t ignore that something else was going on.

We took a deep dive into how this crisis was being talked about online, regardless of how it was discussed in the past. By observing the proliferation of conversation and qualitatively analyzing it, we distilled patterns in language and rewrote the query again. What we came up with was an extremely complex Boolean string that ended up doubling the volume of conversation we were able to see and analyze related to the crisis. The iterative process of mapping the conversation, translating the patterns into Boolean, testing, editing, waiting, watching ... took hours.

Qualitative analysis in this case was the careful study of language and idiosyncrasies in how online conversation emerged on this topic. Through this we identified trends and patterns in language to then use in our query.

We had some major epiphanies during this process. One of the most relevant and important was that it’s impossible to predict how a crisis will be discussed online, no matter how much information you have going in. One would think (at least we did) that how the conversation was discussed the first time around would be similar to the current discussion. The major players, location, and incident hadn’t changed over time, so why would the language? Wrong …

It turns out people have idiosyncratic ways of talking about emerging stories. They drew attention to the target school in different ways, such as hashtagging the school, directly tagging it, or referencing the state where the school resides. Additionally, with numbers involved, rounding the amounts created key differences in how the story was communicated each time. Finally, we saw that instead of placing blame on the specific individuals involved, the whole university took the heat in the crisis resolution.
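
As a hedged illustration, here's how observations like those might translate into query expansions: misspellings, handles, hashtags, state references, and rounding variants. Again, every name, handle, and dollar figure below is invented, and the exact Boolean syntax depends on the listening platform.

    # An invented sketch of how observed idiosyncrasies could expand a query.
    expanded_query = (
        '(("Ms. Marks Medical College" OR "Ms. Mark Medical College" '  # misspelling variant
        'OR "@MsMarksMedical" OR "#MsMarksMedical" '                    # direct tag and hashtag
        'OR ("medical college" AND Wisconsin)) '                        # state reference, no school name
        'AND ("$4.5 million" OR "$4,500,000" OR "nearly $5 million" '   # rounding variants
        'OR fine OR settlement OR judgment))'
    )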

Just as we relied on our Sonarian senses to guide the technology toward relevant conversation mentions, we relied on those same senses to dig deeper into the data and dashboards to pull out meaningful observations and insights.

Dashboards (AKA Pretty Data Pictures)

Feeling confident we were pulling in all relevant mentions related to the crisis, we created a dashboard of data components that qualitatively and quantitatively contextualized the mentions. These components needed to accurately and holistically portray the important players and channels involved in the crisis, as well as show how the mentions and metrics evolved as time went on. We built a robust dashboard that accomplished these goals, and as the crisis unfolded, we relied on our Sonarian senses to identify new ways to analyze and situate the data.

Nerd Note: Quantitative components provide numerical data for us to observe, analyze, and report on. Qualitative components display broader data patterns observed in language, such as repeated/similar key words or phrases.

There were a few critical examples of data contextualization and insights we discovered as we monitored the daily mentions for our client. These emerged only after we human analysts examined the mentions and distilled important trends for our client.

We noticed that retweets made up a large portion of relevant Twitter mentions compared to original posts, comments, or replies. After further investigation, we discovered a high volume of politically leaning accounts interacting with the story, and that the vast majority of the content was retweets of an initial post. These retweet-heavy accounts acted bot-like, seemingly retweeting content related to specific politically charged topics and people. Over time, this pattern persisted and indicated people weren’t fixating on the crisis: they discussed the facts or the headline of the case and moved on. This was a promising sign for the university.

We also saw a nearly one-to-one ratio of unique authors to tweets, indicating that people weren’t repeatedly discussing the crisis; they mentioned it once or twice and moved on. This was another optimistic sign for the university, as it showed there was no fixation or continued coverage of the crisis among the people discussing it online.
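
For the curious, here's a minimal sketch, with invented placeholder records rather than our production tooling, of how those two signals (retweet share and tweets per unique author) can be computed from a mention export.

    # Minimal sketch with invented placeholder data; a real export from a
    # listening tool carries many more fields per mention.
    from collections import Counter

    mentions = [
        {"author": "acct_a", "type": "retweet"},
        {"author": "acct_b", "type": "original"},
        {"author": "acct_a", "type": "retweet"},
    ]

    # Share of mentions that are retweets rather than original posts.
    retweet_share = sum(m["type"] == "retweet" for m in mentions) / len(mentions)

    # Average tweets per unique author: a value near 1 means most people
    # posted once and moved on, while a much higher value would signal
    # fixation by a small group of accounts.
    tweets_per_author = Counter(m["author"] for m in mentions)
    avg_tweets_per_author = len(mentions) / len(tweets_per_author)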

Finally, we measured the trending and fading topics as the crisis continued to play out. This component examined commonalities of topics across the crisis and displayed them based on their prevalence over time. We typically saw the same topics generally move from trending to fading as the issue dissipated from public conversation; however, one day we noticed a new trending topic, which indicated a proliferation of new mentions containing a specific phrase. Upon further examination, the culprit was actually a single article posted multiple times on the same site with a slightly altered URL. The unique URLs artificially inflated the “prevalence” of the trending phrase, making it appear as though various news articles included it. In reality, this “trend” was isolated to a single post on a particular site, amplified through the use of unique URLs. This deeper dive allowed us to assure our client that despite what the trending topics showed, there was no major change to the data pattern we previously observed.
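
A hedged sketch of the check that unmasked this false trend: normalize away the per-post URL variation before counting how many distinct sources a phrase really came from. The URLs below are invented.

    # Invented URLs illustrating one article reposted under altered links.
    from urllib.parse import urlparse

    urls = [
        "https://news.example.com/story?share=aa11",
        "https://news.example.com/story?share=bb22",
        "https://news.example.com/story?share=cc33",
    ]

    def normalize(url: str) -> str:
        parts = urlparse(url)
        return parts.netloc + parts.path  # drop query strings and fragments

    distinct_sources = {normalize(u) for u in urls}
    print(f"{len(urls)} posts, but only {len(distinct_sources)} distinct source(s)")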

As the crisis went on, we continued to look for new ways to examine the data. With daily mention volume steeply declining after the first several days, the evidence kept pointing in the same direction: the crisis was fading from view, which further affirmed our client’s chosen response strategy of not releasing a statement.
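
As a small sketch of the metric defined in the Nerd Note below, daily mention volume is just a per-day count of the mentions the query pulls in (the dates here are invented):

    # Count mentions per calendar day; a steep day-over-day decline was
    # the evidence that the crisis was fading from public view.
    from collections import Counter
    from datetime import date

    mention_dates = [  # invented dates for illustration
        date(2019, 6, 7), date(2019, 6, 7), date(2019, 6, 7),
        date(2019, 6, 8), date(2019, 6, 8),
        date(2019, 6, 9),
    ]
    daily_volume = Counter(mention_dates)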

Nerd Note: Daily Mention Volume is a metric that counts the number of mentions pulled in by the query within a given day.

In the End ...

As humans monitoring the emerging online crisis conversation, we were nimble enough to spot patterns and trends that required us to adapt our query to identify relevant mentions for our client. Further, after finalizing the query, we leveraged our human judgment not only to monitor the data components displaying key metrics and dig into those patterns, but also to use our knowledge of data and crises to explore data views specific to our client and their unique situation.

TL;DR: Human analysts (and their Sonarian senses) rule during a crisis monitoring situation. Here’s what we learned and what differentiates us from artificial intelligence by itself.

    1. People use creative and idiosyncratic language to annotate a story, and those choices become vital in social listening. You probably wouldn’t have guessed it’s important to include contextual terms like “bamboozled” or something similar in a query, but it actually is.
    2. The ways users or authors talk online about colleges and universities vary, and that matters in a query. The use of @mentions, hashtags, shortened versions of the school name, misspellings of the school, etc., all play an important role in crisis conversations. You wouldn’t want to miss a mention of your campus, “Ms. Marks Medical College,” because you didn’t account for the fact that many will invariably forget the “s” in Marks and then have their content about “Ms. Mark Medical College” retweeted to high heaven. Further, some won’t even use the college name in a heading, but instead refer to the city or state the college or university resides in.
    3. If the crisis involves a numeric value (like a fine or settlement), you bet your bottom dollar there will be rounding. And sometimes that rounding won’t be the way you’d expect ... but you still need to include that in your query.
    4. Blame gets shifted. While specific names were used the first time this crisis rolled around, we rarely saw those folks named the second time. Instead, the entire university got thrown under the bus for the bad behavior of a select few. This meant the hyper-targeted Boolean in our initial query wasn’t effective.
    5. Dashboards tell only part of the story. Humans, not tech alone, contextualize data points. While important pieces of information exist within individual data points, displays of data may be deceiving without situating those findings within the larger story.