From the graph above we can see an increase in the number of visitors in the April-June 2020 period compared to the previous three months. The graph also clearly shows two peaks in April-June, and we need to investigate their cause. Apart from that, the row of numbers below the graph shows that the number of new visitors, sessions, and page views (the number of pages visited) has increased. However, pages per session, session duration, and bounce rate have all decreased, so we need to investigate why. So, from these data alone, we can already draw several conclusions and are left with two new questions that need answering.
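As a minimal sketch of how one of these derived metrics relates to the raw counts, the snippet below computes pages per session for two periods; the session and page-view totals are invented purely for illustration, not taken from the report above.

```python
# Hypothetical Google Analytics totals for two quarters (invented numbers).
periods = {
    "Jan-Mar 2020": {"sessions": 8_000, "pageviews": 20_000},
    "Apr-Jun 2020": {"sessions": 14_000, "pageviews": 29_400},
}

for name, p in periods.items():
    pages_per_session = p["pageviews"] / p["sessions"]
    print(f"{name}: {pages_per_session:.2f} pages/session")

# Output shows how page views can rise while pages/session falls:
# Jan-Mar 2020: 2.50 pages/session
# Apr-Jun 2020: 2.10 pages/session
```

This is why a rise in total page views and a drop in pages per session can happen at the same time: the number of sessions grew faster than the number of pages viewed.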
Even so, the image above shows only a slice of the most basic data that Google Analytics provides; there are still hundreds of metrics and other data points we can access there. So you can imagine how vast analytics platforms are, filled with mysterious buttons, cool-looking metrics, interesting numbers, and colorful charts. If we are not focused, we can experience what the Javanese call keblinger, or what the British call being overwhelmed: we get lost in the wilderness of data, grow confused, and end up drawing wrong or irrelevant conclusions.
For example, when I write a report on the results of a campaign whose aim is to increase audience awareness of an issue, I will have lots of questions: how many visitors does this site have, and has the number increased over time? Which pages do visitors frequently visit? Does the audience read informative content such as infographics or articles about the issue? If so, how many articles do they read in one visit? Do they read them all the way through? And so on.
However, this information is still not enough to answer whether awareness of the issue has really increased. Whether or not there has been an increase in awareness as a result of our public campaign is a question about outcomes, and data taken from analytics platforms cannot answer outcome questions. To answer them, we need other data.
The most reliable data for assessing the outcome of a public campaign in changing opinions or behavior is a comparison between the baseline (the situation before the campaign starts) and the endline (the situation after the campaign ends). Baseline and endline data are ideally obtained by surveying respondents who represent the group targeted by the campaign. If the campaign aims to increase awareness, we have to check whether the proportion of respondents who have heard of or understand the issue in the endline survey is significantly greater than the proportion in the baseline survey.
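As a minimal sketch of that comparison, the snippet below runs a one-sided two-proportion z-test using the statsmodels library; the respondent counts are hypothetical, invented purely for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical survey results (invented numbers):
# baseline: 120 of 500 respondents had heard of the issue
# endline:  210 of 500 respondents had heard of the issue
aware = [210, 120]          # endline first, then baseline
respondents = [500, 500]

# One-sided test: is endline awareness greater than baseline awareness?
stat, p_value = proportions_ztest(aware, respondents, alternative="larger")

print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Awareness is significantly higher in the endline survey.")
else:
    print("No significant increase in awareness detected.")
```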
Unfortunately, however, public campaigns in Indonesia rarely use baseline and endline surveys. When the situation is not ideal like this, we have to use data from other measurement tools as a proxy, for example social network analysis (SNA), a.k.a. "analyzing the results of eavesdropping on netizens' conversations on social media". With SNA, we can find out whether netizens' conversations about the issue we raise have increased. We can also see what the sentiment is and who is actively talking about the issue. Notice that I wrote the words "change" and "increase": even with proxies such as SNA, we still have to measure the situation before and after the campaign.
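As a minimal sketch of that before-and-after logic, the snippet below counts topic mentions in two time windows from a list of timestamped posts; the post data and field names are hypothetical stand-ins for whatever export your SNA tool provides.

```python
from datetime import date

# Hypothetical export of social media posts mentioning the campaign topic.
posts = [
    {"date": date(2020, 3, 10), "text": "..."},
    {"date": date(2020, 5, 2),  "text": "..."},
    {"date": date(2020, 5, 15), "text": "..."},
]

campaign_start = date(2020, 4, 1)  # assumed campaign start date

before = sum(1 for p in posts if p["date"] < campaign_start)
after = sum(1 for p in posts if p["date"] >= campaign_start)

print(f"Mentions before campaign: {before}")
print(f"Mentions after campaign:  {after}")
```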
Another example of a proxy I have used is the search volume of terms as measured by Google Trends. In Google Trends, we can compare the popularity of our campaign keywords with other keywords in the Google search engine, and compare them before and after the campaign. An increase in keyword popularity can be an indication of increasing public awareness of those keywords and topics.
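As a minimal sketch, the query below uses pytrends, an unofficial third-party Python wrapper for Google Trends; the keywords and timeframe are placeholders you would replace with your own campaign terms.

```python
from pytrends.request import TrendReq

# Connect to Google Trends (pytrends is an unofficial wrapper).
pytrends = TrendReq(hl="en-US", tz=360)

# Placeholder keywords: replace with your campaign terms.
keywords = ["critical thinking", "logical fallacy"]
pytrends.build_payload(keywords, timeframe="2020-01-01 2020-06-30", geo="ID")

# interest_over_time() returns a DataFrame of relative popularity (0-100).
trends = pytrends.interest_over_time()

# Compare average popularity before and after an assumed campaign start.
before = trends.loc[:"2020-03-31", keywords].mean()
after = trends.loc["2020-04-01":, keywords].mean()
print("Average popularity before campaign:\n", before)
print("Average popularity after campaign:\n", after)
```

Note that Google Trends reports relative popularity on a 0-100 scale, not absolute search counts, so it only supports comparisons, not raw volume claims.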
Campaigns to collect donations and campaigns to collect petition signatures do look similar, because both gather support, but their outcome measurements differ. For donation campaigns, analytics data and the number of donations can be used to measure outcomes, because the expected final result is limited to the amount of donations collected.
Meanwhile, measuring a petition signature campaign is more complicated. The number of signatures cannot be considered an outcome, because there is still a bigger goal behind the petition, for example changing regulations, revoking or granting permits to certain institutions, providing compensation, or other demands. Even if the demands are met, we need to ensure that these changes are the impact of the public campaign that we are carrying out.
Apart from that, if the petition signing is done via an external platform such as Change.org, it is actually important for the campaigner to view the analytics to assess the performance of the petition. But I've never done that myself and don't know whether Change.org is willing to open that data. Maybe friends who are running a petition signature campaign can ask Change.org about the availability of this data.
Instructions for analyzing social media analytics
Latih Logika recently opened social media accounts on Twitter and Instagram. Therefore, I have several times made reports on the performance of these accounts to find out what is working and what needs to be improved. The questions that arise relate to the activities we carry out there. For example, what content is the audience most and least interested in? On what days and at what times should content be uploaded? What is the age of the audience: is Latih Logika succeeding in reaching young people? Has the size of our audience (followers, reach) increased over time? What does the curve look like: is there a drastic increase at a certain point? If so, what is the reason, and can it be replicated? A sketch of one such analysis follows below.
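As a minimal sketch of the upload-timing question, the snippet below groups posts by upload day and hour to find when engagement is highest; the column names (posted_at, likes, comments) and the numbers are hypothetical stand-ins for whatever your platform's analytics export actually contains.

```python
import pandas as pd

# Hypothetical export of post-level analytics (invented data).
posts = pd.DataFrame({
    "posted_at": pd.to_datetime([
        "2020-05-01 09:00", "2020-05-02 19:00", "2020-05-08 09:30",
        "2020-05-09 20:00", "2020-05-15 10:00",
    ]),
    "likes": [120, 340, 95, 410, 150],
    "comments": [10, 45, 8, 60, 12],
})

posts["engagement"] = posts["likes"] + posts["comments"]
posts["weekday"] = posts["posted_at"].dt.day_name()
posts["hour"] = posts["posted_at"].dt.hour

# Average engagement per weekday-and-hour slot, highest first.
best_slots = (posts.groupby(["weekday", "hour"])["engagement"]
                   .mean()
                   .sort_values(ascending=False))
print(best_slots.head())
```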
Just like website analytics, social media analytics can't be used to determine the extent to which a campaign has successfully achieved an outcome.