What are the most challenging performance metrics to prove in social?


Why are we all so obsessed with boosting engagement rates, when not all interactions are positive? And how dare your boss make brand sentiment your team’s core KPI, when most AI can’t fully understand pragmatics?

Today we’re serving up some insights into the most challenging-to-prove performance metrics in social media – and why they can often be problematic. To the data dynamos among us, this one’s for you…

Performance (noun)

/pəˈfɔːm(ə)ns/

1.     The execution of an action.

2.     Something accomplished or completed.

In the context of social media analytics, when we discuss the performance of a campaign or post, we analyse whether or not the action we took accomplished a certain result or completed a particular task.

However, this becomes an issue when ‘performance’ means different things to different people. This is because there are tonnes of metrics to consider, and we find ourselves asking, ‘which one aligns best with our objective?’

We’ve all been in situations where we’ve had to explain to a designer that while their creative drove 1M impressions, next to nobody clicked through. Or that the engagement rate was through the roof, but only because an angry mob declared war in the comments. Your designer is left scratching their head as you begin slowly reaching for your emergency wine. ‘Social media analytics is an art, not a science,’ you say, before ending the call at the speed of light!

So, what specific challenges do we face when it comes to proving performance? Let’s take you through these maddening metrics.

Challenge #1: “Metrics can be manipulated with spend”

Clients worry that media metrics are easily manipulated through paid tactics, creating distrust in the numbers. This is true. The more you spend, the more people you reach, the more engagements you earn and the more clicks you deliver.

The real problem arises when performance isn’t benchmarked effectively. If you benchmark via engagement rates, delivering fewer impressions can inflate the rate, but that probably isn’t driving greater campaign impact.

Similarly, if you benchmark against gross targets (e.g. 1,000 campaign engagements), any additional spend provided during the campaign should push performance far beyond those targets, so the expected target has to increase as well to remain an accurate barometer of the campaign’s effectiveness. Unfortunately, sometimes the old target stays put. The lesson: always update your targets accordingly throughout the lifetime of a campaign.
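To make both points concrete, here’s a minimal sketch with invented numbers: the same engagement count can produce very different ‘rates’ depending on impressions, and a gross target only stays fair if it scales with spend.

```python
# Illustrative figures only: how the same 5,000 engagements can look like
# two very different "performances" depending on impressions delivered.
def engagement_rate(engagements: int, impressions: int) -> float:
    """Engagement rate as a percentage of impressions served."""
    return engagements / impressions * 100

# Campaign A delivers fewer impressions, so its rate looks better...
rate_a = engagement_rate(5_000, 100_000)   # 5.0%
rate_b = engagement_rate(5_000, 500_000)   # 1.0%

# ...even though Campaign B reached five times as many people.
print(f"A: {rate_a:.1f}%  B: {rate_b:.1f}%")

# Gross targets should scale with spend: if budget doubles mid-flight,
# a fixed target of 1,000 engagements is no longer a fair benchmark.
def scaled_target(original_target: int, original_spend: float,
                  new_spend: float) -> int:
    """Rescale a gross target in proportion to the new budget."""
    return round(original_target * new_spend / original_spend)

print(scaled_target(1_000, 10_000, 20_000))  # 2000
```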

Challenge #2: “Influencers can’t be accurately measured”

Currently, when you think of influencers, you think of Dubai. What we really should be thinking about is attribution. How do we attribute conversions to influencer activity? Followers, reach, content quality and affiliate links, to name a few, are all things to consider when activating influencers. But which of these metrics measures brand value, and which reveals the true cost of influencer marketing?

Although it’s becoming easier to track these figures through paid partnership tags and API integrations, it remains near impossible to track attribution with any real accuracy. One possible solution: switch your focus from attribution to brand awareness, and you might just find influencer marketing more advantageous.

When working with our clients, we always benchmark via cost-efficiency targets (cost per impression, engagement or conversion), rather than manipulable ‘rates’ or ‘gross targets’. Expected returns then adjust to changing spends and always give an accurate reflection of cost-efficiency, which means we can continually optimise our campaigns to drive more of the desired results with your spend.
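The cost-efficiency formulas behind that framework are standard, even though the figures below are made up for illustration: divide spend by the result you care about, and the benchmark scales naturally as budgets change.

```python
# Hypothetical campaign figures; the formulas (CPM, cost per engagement,
# cost per conversion) are standard industry definitions.
from dataclasses import dataclass

@dataclass
class CampaignResults:
    spend: float        # total media spend
    impressions: int
    engagements: int
    conversions: int

    @property
    def cpm(self) -> float:
        """Cost per 1,000 impressions."""
        return self.spend / self.impressions * 1_000

    @property
    def cost_per_engagement(self) -> float:
        return self.spend / self.engagements

    @property
    def cost_per_conversion(self) -> float:
        return self.spend / self.conversions

results = CampaignResults(spend=5_000, impressions=1_000_000,
                          engagements=25_000, conversions=500)
print(f"CPM: £{results.cpm:.2f}")                   # £5.00
print(f"CPE: £{results.cost_per_engagement:.2f}")   # £0.20
print(f"CPA: £{results.cost_per_conversion:.2f}")   # £10.00
```

Because each figure is a ratio of spend to outcome, doubling the budget doesn’t flatter or deflate the benchmark the way a raw rate or gross target can.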

Challenge #3: “AI can’t understand pragmatics and sentiment”

Marmite: you either love it or you hate it. Everyone knows their own preference, but does the brand director at Unilever know, and how do they measure it?

A social media sentiment analysis considers the emotions and opinions around your brand or product. Sentiment tools aim to classify a piece of text as positive, negative or neutral, but left in the hands of AI alone, this can never achieve 100% accuracy, because there’s no universal way to communicate.

Language, and how we use it to express our thoughts, is constantly evolving. AI still struggles to detect sarcasm or irony, two ever-reliable tools of dry British wit, and a customary glance over at Gen Z tells us their contextual use of emojis evolves faster than TikTok gains new users. Our advice: always ask your social listening tool for the sentiment accuracy rate of their product, and compare this to the industry average.
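A deliberately naive word-list classifier makes the sarcasm problem easy to see. Everything here is invented for illustration (real sentiment tools are far more sophisticated), but the failure mode is the same: scoring words in isolation misses the speaker’s intent.

```python
# Toy word-list sentiment classifier; the word sets and scoring are
# made up for illustration, not taken from any real tool.
POSITIVE = {"love", "great", "brilliant", "amazing"}
NEGATIVE = {"hate", "awful", "terrible", "broken"}

def naive_sentiment(text: str) -> str:
    """Score text by counting positive vs negative words."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Word-level scoring labels this sarcastic complaint "positive",
# because "great" and "brilliant" outvote the actual frustration.
print(naive_sentiment("Oh great, my delivery is lost again. Just brilliant."))
# -> positive
```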

Challenge #4: “As privacy concerns rise, attribution becomes more difficult”

Apple recently announced it would continue with its new update for iOS 14, which will affect how conversions are received and processed on social ads. There are multiple implications, from optimising to targeting to reporting. One example is the view-through attribution window, which has been reduced from 7 days to 1 day: if you now see a social ad, don’t click, but come back more than a day later to convert, this is no longer recorded as a conversion for our client.
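A simplified model shows what that window change does to reporting. The dates and the conversion below are invented, but the mechanics are as described above: the same conversion is counted under a 7-day view window and invisible under a 1-day one.

```python
# Hypothetical example of view-through attribution windows; timestamps
# are invented for illustration.
from datetime import datetime, timedelta

def attributed(view_time: datetime, conversion_time: datetime,
               window_days: int) -> bool:
    """Does the conversion fall within `window_days` of the ad view?"""
    gap = conversion_time - view_time
    return timedelta(0) <= gap <= timedelta(days=window_days)

view = datetime(2021, 5, 1, 12, 0)
conversion = view + timedelta(days=3)  # user converts 3 days after seeing the ad

print(attributed(view, conversion, 7))  # True  - counted under the old window
print(attributed(view, conversion, 1))  # False - dropped under the new window
```

The conversion still happened; it simply disappears from the report, which is exactly why budgets set against reported results can end up skewed.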

Apple made this decision to protect our ever-decreasing privacy. But at what cost? Social platforms such as Facebook have said they are worried about the impact this will have on small businesses. For advertisers, this omission of conversions could affect media budgets if we’re no longer able to provide a full and accurate set of results. This is, for sure, an ever-changing story to watch.


Got a performance metric problem? Get in touch with our Social Director Ines at ines@eightandfour.com
