How To Use Data to Prioritize In Your Community: A Case Study from FeverBee
July 1, 2021
May 3, 2024

Is prioritizing your community activities a challenge for your team? I’ve known several organizations that tried to take on everything and "do it themselves." Often, they spent years struggling to improve their community in any meaningful way. Sometimes they made easily avoidable missteps. Most often, they kept repeating the same tasks each month, hoping something improved.

They almost never stepped back to consider what was working and what their biggest wins could look like. It’s very hard to see your own blind spots. It’s far easier to stay busy doing tasks that "seem" like the right thing to do.

As a community consultant, I approach organizations with a systematic, rigorous process, one you likely don’t have the time or knowledge to undertake yourself. I’ve shared this case study to highlight what that process looks like and the methods you can use to improve your community today. It focuses on a tech brand with $1 billion+ in revenue, but the core principles can be applied to any community.

Where Do You Begin?

Prioritization can be a daunting task. There are so many ways to think about improving a community that it’s hard to know where to begin.

One of the first things we do with a client is invite them to make a list of all the community activities and initiatives they’re working on. Here’s a recent client example:

Two things are surprising about this list:

  1. How many items are on it. The community manager is working on 19 unique activities each week. That’s too many things.
  2. How normal this is. Over the past decade, I’ve asked around 200 community professionals to make these lists. On average, they list around 15 distinct activities each week. (The record is 41!)

It’s impossible to do great work when you’re dividing your attention into tiny chunks. This raises a question: Why are we all trying to do so many things at once?

The answer is simple: We don’t know what works. If we’re not sure what works, we try to do everything and hope something sticks.

How Do You Decide What To Prioritize?

If your boss told you tomorrow to improve the community, where would you even begin? Do you try to improve the website? Initiate and reply to more discussions? Work on gamification and superuser programs? Host more events and activities? Offer bigger rewards and more promotions?

The problem at this stage is that you’re guessing. Unless you have good data showing what members need, you shouldn’t be doing anything to try to satisfy those needs. For sure, there’s mileage in each of these activities. But how much mileage? And which option gives you the best bang for your buck?

The key is to figure out the few things that matter and optimize everything around them. Doing a few activities extremely well is far more important than doing a dozen or more badly.

Step One: Decide Which Metrics Matter

You shouldn’t be sitting in a dark room trying to decide what metrics matter! You should be proactively reaching out to your members and colleagues to find out what matters to them. Broadly speaking, there are two ways of doing this:

  1. Find out what your organization cares about, and guide the community towards that goal.
  2. Find out what members care about, and guide the community towards those improvements.

With this client, the focus was on the latter, but the process works for both options.

What Do Members Care About?

There are two simple ways to find out what your members care about: surveys and interviews. Each yields useful insights.

We began by issuing a survey and gathering responses. The responses are shown below:

You can see a clear trend in this data. More than anything else, members in this community wanted to get help from others. They weren’t interested in building their reputation, having intimate discussions, or making friends.

Despite what others may think, this is common in the majority of communities I’ve looked at. Most people simply want to be helped. Knowing this, we can settle upon a simple metric for what members want:

  • Metric: How satisfied are members with the community?
  • Method: 1–5 scale asking members how satisfied they are with the community during their visit
  • Frequency: Visible upon every visit to the community
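
To make this metric concrete, here’s a minimal sketch of how the score could be computed from logged ratings. The field names and sample values are invented for illustration; this isn’t the client’s actual tracking code.

```python
from statistics import mean

# Hypothetical log of 1-5 satisfaction ratings submitted during visits.
ratings = [4, 5, 3, 4, 2, 5, 4]

def satisfaction_score(ratings: list[int]) -> float:
    """Average satisfaction on the 1-5 scale, rounded to one decimal place."""
    return round(mean(ratings), 1)

print(satisfaction_score(ratings))  # 3.9
```

Tracked over time, this single number becomes the baseline against which every later intervention can be judged.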

Step Two: Prioritize the Activities That Matter Today

Before trying to make any improvements, we needed to figure out which activities were impacting that metric today. Typically, we can segment the actions of a community team into the following four boxes:

This isn’t a comprehensive list, but you get the idea. Going from the bottom left to the top right, there are activities that are short-term and reach a small audience, long-term and reach a small audience, short-term and reach a big audience, and long-term and reach a big audience. As your community matures, you should be spending less time in the bottom left and more in the top right.

At this stage, our goal was to free up time by cutting several low-impact tasks and seeing what impact that had on member satisfaction.

Based on the initial list of tasks, we used the above framework to cut the tasks that were short-term and reached a small percentage of the audience. This included the "working out loud" discussions, weekly newsletter, podcast, 34 out of 41 groups, and weekly expert AMAs. This saved around 60% of the community team’s time.

Then we waited to assess the impact of these changes. You can see this in the graph below:

There was no visible impact. This is good news! It shows we freed up a lot of time without any noticeable cost. Now we could work out what to do with that spare time.

Step Three: Invest More Time and Effort in the Activities That Matter

Our survey told us not just what mattered to members (getting helpful responses), but also what made a response helpful. We asked whether they cared most about the speed of the response, who the response was from, or the quality of the response.

The results were surprising:

Members primarily wanted high-quality responses from staff members. Speed of response mattered, but nowhere near as much as we expected. Notably, members didn’t care about getting kind and sympathetic responses.

This data was good, but we needed to explore it a little deeper. We interviewed 17 members of the community to better understand what they were feeling and what they wanted. The results are shown below:

The qualitative data uncovered several interesting items we hadn't addressed yet, including:

  • The community is hard to navigate
  • Search doesn’t show relevant results
  • Members frequently land on old posts with outdated information
  • Members struggled with information scattered across different content types (discussions, articles, FAQs, etc.)
  • Members were worried about asking questions when they were "supposed to be the expert" at their organization

Now that we had all the research, we could begin designing solutions.

Phase One: Low-Hanging Fruit

We always like to begin consultancy engagements with things that can be solved without a huge resource investment. Each of the first interventions we designed was matched to a specific issue to tackle:

Each of these deserves some elaboration.

Archiving Old Discussions

We archived (deindexed from search engines, removed from navigation, and hidden from internal search) any discussions that:

  • Received fewer than 10 visits in the past year
  • Received fewer than 2 posts in the past year

The information was still technically in the community, but it was almost impossible to find. This encouraged members to create new discussions on the topic, which helped us keep the content up to date and stopped members from landing on discussions with outdated information.
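
As a rough illustration, the archiving rule can be expressed as a simple filter. This sketch uses invented field names, and it assumes both criteria had to hold for a discussion to be archived:

```python
# Hypothetical discussion records; the field names are illustrative.
discussions = [
    {"id": 101, "visits_past_year": 4, "posts_past_year": 1},
    {"id": 102, "visits_past_year": 250, "posts_past_year": 12},
    {"id": 103, "visits_past_year": 8, "posts_past_year": 0},
]

def should_archive(discussion: dict) -> bool:
    """Flag threads with fewer than 10 visits and fewer than 2 posts in the past year."""
    return (discussion["visits_past_year"] < 10
            and discussion["posts_past_year"] < 2)

to_archive = [d["id"] for d in discussions if should_archive(d)]
print(to_archive)  # [101, 103]
```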

Creating Definitive Resources

Next, we worked to create a series of definitive resources. Definitive guides are powerful for attracting more search traffic over the long term. The DigitalOcean community does a terrific job of this.

It’s also important to weigh the value of carefully created resources against other activities. We once had a debate with a client about whether it was better to host a VIP webinar or create a definitive resource to attract traffic. As you can see below, the event generated far more traffic to the community early on. However, the resource accumulated more traffic over time.

Instead of creating just one, create a number of definitive resources. If you’re not sure which resources to create, look at what questions people are asking on AnswerThePublic.

After we implemented these changes, we waited to see what impact they would have on member satisfaction.

The results showed continual improvement from these "low-hanging fruit" changes. This suggested we were on the right track. Next, it was time to go for the big wins.

Phase Two: The Big Wins

The big wins sit in the top right of the grid I showed earlier. Once again, we let the member feedback guide our proposed changes:

Let’s dive into a few of these in more detail.

Revamping the Taxonomy

When it comes to revamping the taxonomy, you usually have several choices. You can structure community content by:

  • User type (customer, developer, partner, reseller, etc.)
  • Product category (Product 1, Product 2, Product 3, etc.)
  • Sector (retail, B2B, B2C, etc.)
  • Intent (get help, explore, collaborate, etc.)

After reviewing the survey results, speaking to members, and using heat map information from CrazyEgg, we completely removed the intent pathways from the navigation and focused on product and user type. We also ensured all key areas were accessible within a maximum of three clicks.

Adding Cognitive Search

Cognitive search is essentially a federated search tool with some machine learning and pattern matching layered on top to show members the content that is most useful to them.

The Logitech community (above) is a good example of cognitive search in action. Notice how it retrieves information from products, downloads, and the community, and prioritizes the results based on pattern matching.

In communities with a high volume of information, the native search functionality is usually quite poor, and you should consider upgrading to a federated search tool. It's typically only worthwhile for larger communities: fees can begin at around $40k per year.
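
To make the idea concrete, here’s a toy sketch of the federated part: query several content sources and rank the merged results. This is not any vendor’s actual API, and a simple keyword-overlap score stands in for the machine-learning ranking a real cognitive search product would use.

```python
# Toy federated search: merge results from several sources and rank them.
SOURCES = {
    "community": ["How to reset the device", "Firmware update failed"],
    "downloads": ["Firmware update tool v2.1"],
    "docs": ["Device reset guide"],
}

def relevance(query: str, title: str) -> float:
    """Fraction of query words that appear in the title (a crude relevance proxy)."""
    query_words = set(query.lower().split())
    return len(query_words & set(title.lower().split())) / len(query_words)

def federated_search(query: str, top_n: int = 3):
    """Score every title from every source, then return the best matches."""
    results = [(relevance(query, title), source, title)
               for source, titles in SOURCES.items()
               for title in titles]
    return sorted(results, reverse=True)[:top_n]

print(federated_search("firmware update"))
```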

Direct-to-Engineers

We also created a place for members to get help directly from staff on questions a typical member wouldn’t be able to answer. This wasn’t as popular as we imagined, but it reduced some of the frustration members had with posting questions and then being told to file a direct ticket instead.

Once again, we tracked the results to see what happened. You can see the impact below:

At this stage, the member satisfaction line began to curve upward more sharply, which we interpreted as a great sign.

Finally, it was time to go for optimization.

Phase Three: Optimization

Now that we had exhausted the things we knew members wanted, we needed to take a deeper dive into the "satisfaction" number.

At this stage, the community had a 4.1 satisfaction score, but that didn’t mean every discussion was equally helpful. The next step was to dive in and see which parts of the community were getting better scores than others.

Targeting Specific Category Improvements

By breaking the community down into its categories, we could see that performance varied significantly between them. We used a bubble chart to show four items of data at once:

  • Average time to first response
  • Response rate (%)
  • Helpfulness (shown by color)
  • Active threads (shown by bubble size)

You can see this chart below:
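
A chart like this can be produced with a few lines of matplotlib. Here’s a minimal sketch using invented numbers rather than the client’s data:

```python
import matplotlib.pyplot as plt

# Invented example figures for four categories.
categories = ["Product 1", "Product 2", "Developer", "Partner"]
first_response_hrs = [6, 30, 12, 20]    # x-axis: average time to first response
response_rate = [55, 80, 70, 60]        # y-axis: % of threads receiving a reply
helpfulness = [3.2, 4.1, 3.8, 2.9]      # color: average helpfulness (1-5)
active_threads = [400, 900, 250, 120]   # bubble size: number of active threads

plt.scatter(first_response_hrs, response_rate, s=active_threads,
            c=helpfulness, cmap="RdYlGn", alpha=0.7)
for x, y, label in zip(first_response_hrs, response_rate, categories):
    plt.annotate(label, (x, y))
plt.colorbar(label="Helpfulness (1-5)")
plt.xlabel("Average time to first response (hours)")
plt.ylabel("Response rate (%)")
plt.show()
```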

Some categories were performing well on helpfulness, response rate, and time to first response. Others were clearly a cause for concern.

Based on this, we made three clear interventions:

  • Reduce the time to first response in the Product 2 category by assigning virtual agents to support
  • Increase the response rate in the Product 1 category by surfacing unanswered questions on the homepage
  • Improve the quality of response in Developer and Partner categories by recruiting experts to answer questions

You can see the result of these improvements in the chart below. Notice the changes in the Product 2, Product 1, Developer, and (to a lesser extent) Partner categories.

Reviewing Superuser Contributions

Next, we explored a hunch that superusers might not be equal in how they engage and interact with members. By looking at the helpfulness scores on each superuser’s responses, we found that many of the most frequent posters were also posting the least helpful responses, simply to gain the rewards.

Based on this data, we decided to make some specific interventions:

  • When the superuser campaign relaunched, those with low helpfulness scores (below 3) were not invited back
  • Those with helpfulness scores between 3 and 4 were provided with a short training course
  • Superusers could compare themselves not just on the quantity of their responses, but on their helpfulness rating
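
As a rough sketch, the segmentation behind these interventions might look like this (the names and ratings are invented):

```python
from statistics import mean

# Hypothetical per-response helpfulness ratings for each superuser.
superuser_ratings = {
    "alice": [5, 4, 5, 4],
    "bob": [2, 3, 2, 2],
    "cara": [4, 3, 3, 4],
}

def intervention(avg_score: float) -> str:
    """Map an average helpfulness score to the corresponding intervention."""
    if avg_score < 3:
        return "not invited back"
    if avg_score < 4:
        return "short training course"
    return "invited back"

for user, ratings in superuser_ratings.items():
    avg = mean(ratings)
    print(f"{user}: {avg:.2f} -> {intervention(avg)}")
```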

These interventions resulted in slow but steady improvements:

Within seven months, the performance of all superusers had significantly improved. The lowest score on the scale was around a 3 instead of a 2, and the majority of superusers had moved up the scale a little. (The superusers who weren’t invited back caused problems for a few weeks, then faded away.)

However, the biggest impact was among the group who were already scoring well on helpfulness but low on the quantity of responses. When some of the biggest superusers stopped engaging, it created a void that better superusers filled: they began responding to more discussions. The time to first response dipped slightly, but the overall helpfulness score rose significantly.

The Overall Results

Over the year, we kept close track of the interventions we were making and their results. You can see this in the annotated chart below:

Yet even this doesn’t quite capture the magnitude of the improvement. By scaling the graph’s axes to the highest and lowest values in the data, the improvement becomes even clearer:

By following a data-driven approach, we had taken the community from its lowest-ever satisfaction rating to its highest within a year.

This is the secret to community consultancy: you set your own biases aside and ruthlessly follow the data. Even if the results show you something you didn’t expect, you still have to follow them. The data shows you what to prioritize to deliver the best impact for your community.

Once you know what to work on, you need expertise to execute each activity. If you don't have that, consider getting some support from a community consultant.

For more tactical, in-depth guides to community building, you can purchase my new book, Build Your Community, on Amazon today. Learn more about our work at www.feverbee.com.
