Analytics on Shopify (16 November 2015)

I’ve just been doing some work on Shopify and, with the documentation lacking in some areas, I thought I’d post something showing how it all works and how you can get it set up. Many apologies to all the people whose sites I have used to help me over the last couple of weeks and haven’t referenced here (there were a lot of you, and a lot of ‘oh, I get it now’ moments, resulting in me changing the set-up repeatedly).

For those who don’t know, Shopify is a web shopping system that has a payment gateway of its own so that you don’t have to deal with all that faff (for want of a better description). It allows you to put buy buttons on your site, sell through Facebook and have a shop front of your own that you can use to sell products (or services).

As with any web service (particularly one with money associated with it) you’ll want to know how well it is working and installing analytics is the start of that. However, it isn’t that simple and there are a number of circumstances that we should highlight. You might also want to put third party marketing tags on the pages, which is also slightly more complicated than it could be. I am going to show you how to do this through Google Tag Manager.

 

[Diagram: a stand-alone Shopify shop]

 

In the simple version of Shopify you are building a shop which is intended to be completely stand alone. In this scenario your shop is its own entity, not connected to any other website.

Google Analytics

For this situation the set-up in Google Analytics is simple. In the settings within Shopify there is an online store page. Within this, you can add your Google Analytics tracking ID and choose to enable enhanced ecommerce (if you so wish). What this does is put Google Analytics not just on your shop pages (whether you have your own custom domain or not), but also on the checkout pages, so you can get more detail about how your users interact. It does this through Universal Analytics.

This is very useful because it saves you a whole lot of bother. When a user checks out, they move from your shop pages to the Shopify payment pages, and this represents a ‘break’ in the journey, potentially onto a new domain. Shopify has automatically enabled cross-domain tracking and has tagged the checkout pages with your analytics code.

Just a side note early on here about cross-domain tracking: there is code enabled on the checkout button so that when a user clicks the link, they pass through to a page on checkout.shopify.com with a query string containing their user ID. This means that when Google Analytics drops a new cookie, with a new user ID, on the checkout.shopify.com pages (first-party cookies only), it knows that the two cookies belong to the same person and the same visit. The query string looks like this: _ga=1.119272649.1393383918.xxxxxxxxx.

Because there is cross-domain tracking, you need to make sure your Google Analytics is set up not to treat the checkout domain as a new traffic source. You do this through the referral exclusion list in the Google Analytics admin interface.

This system also passes through a whole load of variables automatically, including when a user has viewed a Product List page, Product Description page, add to basket, checkout, etc, as well as some descriptions of those products. You can see those through the custom events and through the custom ecommerce tracking:

  • Product ID is the SKU entered on the product page in Shopify
  • Product Name is the title of the product followed by the size, as entered on the product page in Shopify
  • Product Category is the Product Type entered on the product page in Shopify
  • Product Brand is the Vendor entered on the product page in Shopify

When you are setting up your products, bear this information in mind, because it will impact how they appear by default in your analytics.
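To make that mapping concrete, here is a minimal sketch of how those fields surface in the enhanced ecommerce data layer on a purchase (the field names follow Google’s enhanced ecommerce schema; the SKU, product details and values are invented for illustration):

<script>
// Illustrative enhanced ecommerce 'purchase' push. The mapping follows
// the list above: id = SKU, name = title plus size, category = Product
// Type, brand = Vendor. All values here are made up.
window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'ecommerce': {
    'purchase': {
      'actionField': { 'id': '1001', 'revenue': '24.99' },
      'products': [{
        'id': 'SKU-1234',
        'name': 'Plain T-Shirt - Large',
        'category': 'T-Shirts',
        'brand': 'Acme Apparel',
        'price': '24.99',
        'quantity': 1
      }]
    }
  }
});
</script>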

Google Tag Manager (or other analytics)

If you want to put third-party marketing tags, or any other sort of analytics, on the pages, then this is where you are going to get a bit stuck.

Shopify won’t let you put custom tags on most of the checkout process. As you can see from the diagram above, you can put custom tags on your part of the site. To do this, go through the Online Store menu, then choose Themes. On this page it will allow you to edit the HTML/CSS, and from there you should be able to find theme.liquid. This is the template that all of your pages run through, so inserting your tags here will load them on all pages before checkout.
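As a rough illustration, the standard GTM container snippet dropped into the head of theme.liquid looks like this (GTM-XXXXXX is a placeholder for your own container ID):

<head>
  <!-- Google Tag Manager: the standard container snippet, with a
       placeholder container ID -->
  <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
  new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
  j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
  '//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
  })(window,document,'script','dataLayer','GTM-XXXXXX');</script>
  <!-- the rest of your theme's head content -->
</head>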

If you need to do tagging that involves cross-domain tracking here, you’re out of luck. To do cross-domain tracking you need analytics tags running on the landing page of the new domain, which you can’t do unless you use the Universal Analytics set-up described above. This means that if you want to use Classic Analytics (for whatever reason) you won’t be able to link transactions back to their source.

If you use an analytics system that allows you to use third party cookies (eg Adobe) then cross domain tracking isn’t an issue.

You can also put tags on the thank you page, which is useful if you use a marketing agency who likes to have third party tags to be able to automate a load of their marketing efforts (and for post impression tracking and that sort of stuff).

To do this, there is a section in Shopify for third-party marketing tags within the Settings => Checkout menu. This allows you to add your Google Tag Manager snippet (or third-party marketing tags, or analytics tags if you don’t want to use Universal Analytics).

[Screenshot: adding additional scripts, third-party tags and Google Tag Manager to Shopify]

However, it isn’t that simple (it never is). The next step in using Google Tag Manager is to put the details of the products into the data layer. Note that Shopify has its own data layer, so you could use that if you really wanted to, but here it is in the GTM data layer, to make life a bit simpler.

You can use my script below, but I’ll explain what it does first. The script has to run before Google Tag Manager, so when you put it in the snippets box make sure it is above the GTM code (or alternatively, make sure your rules for firing tags within GTM use the custom event it includes).

Firstly this code translates Shopify variables to Google Tag Manager standard ecommerce variables (so if for some reason you decided that you did want to use Classic Analytics, you can use these variables here). Within these variables there is the odd bit of fudging – for some reason the price and tax are listed in pence not pounds, so I’ve multiplied a couple of variables by 0.01 to get them in the right values.

Secondly, I’ve added the products in. Some people might buy more than one product, so you have to loop the code so that it loads the variables for each product the user has bought. It then loads the variables just as normal Universal Analytics had them before:

<script>
// Translate Shopify's Liquid checkout variables into the standard
// GTM ecommerce data layer. Prices come through in pence, hence the
// "times: 0.01" to convert them to pounds.
window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'event': 'transactionComplete',
  'transactionId': '{{ order.order_number }}',
  'transactionTotal': {{ total_price | times: 0.01 }},
  'transactionTax': {{ tax_price | times: 0.01 }},
  'transactionShipping': {{ shipping_price | times: 0.01 }},
  'transactionProducts': [
  {% for line_item in line_items %}
    {
      'sku': '{{ line_item.sku }}',
      'name': '{{ line_item.title }}',
      'category': '{{ line_item.type }}',
      'price': {{ line_item.line_price | times: 0.01 }},
      'quantity': {{ line_item.quantity }}
    }{% unless forloop.last %},{% endunless %}
  {% endfor %}]
});
</script>

Having loaded the data layer, you can then set up these values as variables in Google Tag Manager and include them in any third-party marketing tag.
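As an example of that last step: create Data Layer Variables in GTM for the values above (say, transactionId and transactionTotal), plus a trigger on the custom event transactionComplete, and a custom HTML tag can then pass them to a third-party pixel. A sketch, where the pixel URL is made up and the {{...}} tokens are the GTM variables you created:

<script>
// Hypothetical third-party conversion pixel fired on the
// 'transactionComplete' event; {{transactionId}} and
// {{transactionTotal}} are GTM Data Layer Variables.
var pixel = new Image(1, 1);
pixel.src = 'https://tracking.example.com/conversion' +
  '?order=' + encodeURIComponent({{transactionId}}) +
  '&value=' + encodeURIComponent({{transactionTotal}});
</script>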

Shop as part of a site

If your shop is intended to be part of a wider site that already exists, then you potentially need to get your head around cross domain tracking.

There are three broad scenarios:

  1. Your site and your Shopify site both use Universal Analytics and the same domain name (different subdomains and different folders are fine). In this scenario, because you use the top-level domain for your cookies (or at least you should do!), users who move between your site and your Shopify shop will be tracked seamlessly.
  2. Your site and your Shopify site have different domain names, but both use Universal Analytics. In this case you need to make sure that any link from yourdomain.com to yourshopdomain.com has cross-domain tracking enabled. Google has a lot of good documentation on this, but it is of course much easier through Google Tag Manager. You need to make sure cross-domain tracking is enabled on links in both directions, which is why it is useful to have GTM on the Shopify pages; see the sketch after this list.
  3. Your site and your Shopify site have different domain names and you use Classic Analytics on your main site. In this case you are out of luck. Classic Analytics uses different cookies to Universal Analytics, so you cannot track from one to the other. The recommendation is to upgrade to Universal Analytics (easier said than done in many cases). If you really want to continue using Classic Analytics, you will have to live with the fact that your transactions won’t be associated with a marketing source. If you can’t put Universal on your main site, my recommendation is to put Universal on the microsite, use as many landing pages on your shop as you can, and accept that your Shopify stats will only show yourdomain.com as the marketing source, not where visitors came from beforehand.
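For scenario 2, the underlying Universal Analytics set-up looks something like the sketch below (the property ID and shop domain are placeholders; in GTM the equivalent is ticking allowLinker and listing the domains in the Cross Domain Tracking settings of your Google Analytics tag):

<script>
// Minimal analytics.js cross-domain setup on yourdomain.com. The
// linker plugin decorates links to the shop domain with the _ga
// parameter; mirror this setup on the shop side so return links
// are decorated too.
ga('create', 'UA-XXXXXX-Y', 'auto', { 'allowLinker': true });
ga('require', 'linker');
ga('linker:autoLink', ['yourshopdomain.com']);
ga('send', 'pageview');
</script>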

Of course, this is just how you set up the data collection, what you actually do with it is a different matter…

Analytics of Product List Pages (28 September 2015)

Thanks to everyone who came to MeasureCamp a week or so ago. Kelly McClean and I presented a session on the analytics of product list (listing, lister, etc.) pages (PLPs), which was actually a copy of the presentation we gave at the London Digital Analytics Meetup Group earlier this month. Whilst both presentations are basically the two of us talking over a couple of slides, I thought it might be useful to formulate the thoughts and questions into a blog post (and show the slides, of course!).

Why?

Every time you are thinking about measuring something, you should be thinking about why you are measuring it. What do you get out of this? What is the purpose? What does your measurement framework tell you about it?

Usually (and I don’t want to put words into people’s mouths) one of the key goals of the website is to sell products. Therefore when you are thinking about anything in this area your number one thought should be “Does it help me sell more products?”

When looking at PLPs the temptation is to start thinking in terms of ‘micro conversions’ and those steps up the funnel to the sale: did it help me get more product page views, more basket views, more checkouts, etc.? The micro conversions are all well and good, but remember what the purpose of your site is (and it isn’t always just selling things: you might have a preference between products, you might want people to save items for later, you might want people to add more of one product, you might want return visitors to easily access frequently bought products, you might want to showcase new products, etc.).

PLPs are something that you can change to help you get to that end goal. In an eCommerce environment they are one of the early points in a user’s journey (although not always: some users might arrive straight at product pages, so check your analytics) and something that can significantly impact how likely someone is to buy. The things you can change on that page are therefore very important and will make up a number of the segments you’d like in your measurement framework.

What?

The next question you need to ask yourself is what should you be changing them to. Analytics can tell you what works and what doesn’t work, but it can’t tell you what you should test to see if it works or not.

The most important thing is visit intent. What do your users want to do on the pages, and how does that fit in with your business needs?

To do this there is one simple way: Ask your users.

You can do some extensive user testing and get some of your target audience into a room and ask them. You can put up a survey on the site and ask them what they are hoping to achieve. You can stand in the street with a sign outside the office asking for someone to come in and help you for five minutes. You can ask a friend or family member to come in and talk to you (assuming they are relevant). Anything that you can do to talk to some people to find out what they are trying to do will help.

This will help your design decisions in a tangible way. For example:

  • If your users are browsing to see if there is something they like and then they’ll buy, you want to set up your PLPs to let them see the full range of products you have, to generate some interest (so if you have five colours of a product, you’ll want to show just one, with the option to pick colours on the product page)
  • If your users know what they want and just want to find the specific colour/taste/smell/size then you’ll want a separate one for each so that they can see what options there are on screen, so you’ll have a separate option for each type on the product list pages

As well as changing how you set up the main portion of the page, it can also help dictate which facets you would like on the left hand menu, how many products you have, how you present the results in the main frame, etc.

What to Measure?

Everything. Next.

Ok, fine, I’ll give you a bit more detail. Everything that you could possibly change you should measure. If you can’t change it, don’t bother measuring it because reporting against it does you no good at all. But you’ll be amazed at how many things you could change (look on slide 4 above).

Here is a product list page (sorry if this is your site!)

[Image: an example PLP]

 

What do you want to measure here?

  1. Measure anyone who clicks on a facet in the left hand menu (what they clicked on and in what order)
  2. The current state of the facets (because a user might end up on a page with one selected/deselected without clicking on it)
  3. The order that the options are in
  4. The number of products that the user has on the page
  5. The number of products that the user could see in total
  6. The Page they are on (or how far down if it is never ending)
  7. Products on sale, products that are not on sale, products that are out of stock

I could go on, but here we have a list of things that we can immediately make changes to and see if they have an impact on our KPI from above. The reason we would want to measure the use of these is because we can change them in tangible ways that could increase conversion.
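To make one of those concrete, here is a hedged sketch of what a facet-click push into the data layer might look like (the event and variable names are invented for this example, not a standard):

<script>
// Hypothetical data layer push on a facet click: which facet, which
// option, whether it was selected or deselected, and the resulting
// product counts, so conversion can later be segmented by facet use.
window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'event': 'facetClick',
  'facetName': 'Colour',
  'facetValue': 'Blue',
  'facetAction': 'selected',
  'productsShown': 24,
  'productsTotal': 112
});
</script>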

Don’t believe me? Here are some changes:

Firstly, we could rearrange the order of a particular facet to see if it increases conversion, rather than leaving it in the tired old order (of course, it would make more sense to rearrange sizes in descending order, or colours based on the one you have most of or the one most people want):

[Image: reordering the options in a facet]

Reordering facets themselves:

[Image: re-ordering different facets]

Then you can change whether ticked means that it is selected or unselected. Then you can change whether ticked or unticked is set as the default. You can change whether the list is a set of checkboxes or a drop down box:

[Image: changing a facet to a drop-down]

Here we have a number of small changes you can make to your facets whose impact you can measure, not just on whether people use the facets but on your site’s end conversion goal. These are things that you should test through a multivariate testing tool. These small visual changes will have a small but noticeable impact on your site conversion rate and the number of people getting to your end KPI.

The end of the Generalist Web Analyst is here (20 July 2015)

Last week at Web Analytics Wednesday we were treated to the outputs of the latest eConsultancy review of the analytics industry, and on Wednesday this week eConsultancy is hosting a webinar on the future of agencies, so I thought this was an opportune moment to add my tuppence on the situation.

Hopefully Linus Gregoriadis of eConsultancy and Andrew Hood of the report sponsors, Lynchpin, will publish their presentations from Web Analytics Wednesday, but in the meantime you might find that I publish the odd graph here and there in this post (download the report from eConsultancy if you are a member; there is an eConsultancy blog post on some of the data as well).

For those of you who are not au fait with the report: each year eConsultancy runs a survey of its members (and anyone else who wants to respond), asking a series of questions about their use of measurement and analytics. We have five years’ worth of data now, so we can start to see the trends emerging. These research pieces are useful for those who end up in niches and want to understand the wider environment, so that they can keep an eye on the skills they’ll need in future jobs.

[Chart: how companies use customer insight]

What I thought I’d do is give you some insight into where I have seen things change over the years and what the growth in jobs in Analytics means. There were 93 jobs being advertised in January 2011 and 119 in January 2012. Right now, in July 2015 there are 702.

When I first started doing analytics in 2004 (I know, I don’t look that old, do I?) I worked in an eCommerce team that was a hybrid of marketeers and business facing digital people. My role was to do daily, weekly, monthly reports to management and out to business teams, as well as helping the business teams to understand their figures and what they could do with their marketing.

My second analytics role in 2005 was in an eCommerce team that was mostly front end development, design and database management. My role was of reporting, setting up analytics systems, working with database developers to understand customer patterns and making recommendations on conversion rate improvement.

My third analytics role in 2007 was in a search marketing team, where I was tasked with setting up a new tool, training others in the organisation on how to use the tools, reporting to the board, making recommendations on improvements (either to increase volume or conversion) and working with the search team to get more targeted traffic.

My fourth analytics role in 2009 was in an eCommerce team whose primary function was to report customer insight, but also to train others on how to use the tool, making changes to the tagging and doing data modelling to set targets (we’d call it predictive analytics these days).

Then I decided to go and work for an agency, which I have done for a couple of companies, before setting out on my own in 2014.

In 2010 I opined that you should have four key skills to be a web analyst:

  • The information to provide the insight
  • The IT infrastructure and how to alter it (plus all those processes that always go on in IT)
  • The business framework (how you can get funding)
  • The current projects, so that you can push your project

In 2010 I believed this because every job I had worked in had required me to be a generalist – someone who could do everything because there was nobody else (or there were a couple of you and you shared it). Analytics teams of more than two were a thing of the distant future.

In 2015, I’m not so sure you need all of that. I don’t think anyone will ever take the route through to their current role that I did. If you start in a specialist search role, you aren’t going to ever need to know how to do tagging or teach people how to use tools. If you started in a tagging world, you probably won’t need to know the intricacies of each type of marketing. If you come from the business side, then you don’t really need to understand databases or the specifics of each type of marketing.

Because of this, the generalist web analytics professional is a dead career.

But that would always be OK while there are analytics agencies out there, because you could trust that the agency side would still need these generalists to do whatever their client companies required.

What I’ve seen instead is three big things happening:

  1. The companies who used to service the analytics needs have moved up market to attract companies willing to pay the rates of management consultancies (or become management consultancies) – if you are charging £1k+ a day, why bother with small 5 day jobs?
  2. We’ve seen ‘scope creep’ from agencies who used to be on the periphery and now offer those services, whether they be full service agencies offering advice on data, marketing services companies offering advice on tagging, media agencies giving reporting advice, etc. If you work in house, how many times has an agency offered a service that you thought was a bit out of their range?
  3. There has been a massive increase in the number of full time, short term contracts in house. This is a result of organisations realising that they need more than one or two people in the analytics team (Tesco now have 30-odd full time analytics people).

Do not fear dear reader, there is still a role to be played by a generalist, so I’m not going to go away any time soon (since I went out on my own I’ve done some work doing measurement frameworks, some work setting up analytics, some work in SEO, some work doing data analysis, some work doing tagging and some work sitting in tag manager tools). There are still some companies out there who don’t have a full time employee but who aren’t using an agency or had an existing agency scope-creep into the role.

But if you are reading this and wondering where the “So what?” moment is going to be, it can come here:

  • If you are looking for a generalist to come and start web analytics at your organisation, there are still a few of us out there; don’t be afraid to ask for someone to do a couple of days a month!
  • If you are looking for a project, you can find people if you can describe the skill set that you are looking for. They may come at a price, but analytics is hard, so you should be prepared to pay that because of the value you will get back out of it
  • There’s nothing wrong with scope-creep agencies, if they have the skills in that area and it is something that you desire. Be sure to outline the parameters of what you want from the work, the scope of the work they’ll carry out and the time frames. Far too often I have seen an agency whose recommendations amount to spending more money with them, with little logic along the way and a methodology so loosely defined at the start that they were bound to reach that conclusion. Make sure the person with the most knowledge in the building helps set up the project and is involved heavily from the start
  • There are consultants out there and there are employees out there, but they’ll come at a price. Do your due diligence on them and don’t just hire them because they were the only one available; there are companies (or generalists) out there who can tide you over in the short term until you find the right candidate

Final point – there are also a bunch of really nice people in this industry who you can ring up for a ten minute chat who will point you in the right direction without having to pay them. Don’t be afraid to do that!

How Googlebot executing JavaScript will affect you (10 June 2015)

A few weeks ago I did a piece of work looking at the impact of Google’s robots executing JavaScript on pages, so I thought I’d do the decent thing and post it up here for you all to read as well.

I’m going to do a massive bit of oversimplification here and explain how Google works first. Anyone with any experience please don’t shout at me that this is so simple, but we need to start from the beginning:

Googlebot

Googlebot is a robot that goes to page after page on the internet. Its job is to look at the page, find the content (text, images, video, etc.), catalogue it and then move on to the next page in its list.

When you come to a page, you see something like this:

[Screenshot: When Can I Stop’s home page]

Traditionally, when Googlebot came to the site it wouldn’t see this, because it wouldn’t render the page. Instead, the page would appear to it as it does to you when you view the source code:

[Screenshot: When Can I Stop’s home page source code]

If you view the source of a page then you’ll see something similar to this. Effectively this tells your browser what the page should look like, by giving it all the words, the links, where the images are and then pointing at any other files that are important for the browser.

The main two of these are Cascading style sheets (CSS) and JavaScript (JS):

  • Cascading Style Sheets are usually one main file, but sometimes several files that tell your browser how to organise the words and images on the page: which font to use, which size font, what colour font, what colour background, etc. If I want everything that I set up as Title 3 to appear in the same way on all pages, then I can dictate this from the style sheet by including a bit in there that says everything tagged as Title 3 should appear in this way.
  • JavaScript is a way of making things happen dynamically on the page: it’s a set of commands that allow the web browser to do things to the page after it has been rendered. Analytics tools work via JavaScript: a bit of code tells the browser to send a ping off to a web server to say that the page has loaded completely. JavaScript can also do all manner of other things, like making elements float on the page as you scroll or showing adverts in particular places

Traditionally Google would know that the CSS and JavaScript existed, it might download it, but it wouldn’t execute anything in it. More on that in a minute.

Algorithm

Having collected all that information about each page, Google then does its magic ‘algorithm’.

[Image: Google’s magical box]

The magical algorithm is a series of hundreds of different factors that Google runs through each time you do a search, to work out which pages are the most valuable to return as results. These factors include things like the words appearing on the page, the words appearing in links pointing to the page, how quickly the page loads, the words appearing in the title tag of the page, etc. If I knew what they all were and how each was weighted, I’d be making a fortune selling my services.

Fortunately a lot of it is known, because clever people have done a load of testing to see which things make a difference. Google counters this reverse engineering by changing its algorithm frequently (Panda, Penguin and Hummingbird): adding in new factors, removing old ones which no longer show that a page is valuable, or simply changing the weighting of certain factors. What hadn’t really changed before was the way that it crawled the page.

Executing JavaScript

Until now that is (I say ‘now’, they officially announced it about 9 months ago). Now Google is telling you that it is running JavaScript and it is now looking at the page as you would do if you did an ‘inspect element’ command on a section of a page.

[Screenshot: When Can I Stop’s home page viewed via inspect element]

This means that it will be able to see what you do with your JavaScript and CSS. This is a clever move by Google because it wants to replicate the way that a user visits a page.

Of course it isn’t just that simple. Google isn’t just running like this, it is doing it both the new way and the old way:

[Diagram: Google’s algorithm, crawling both the old way and the new way]

This means that you can’t suddenly implement your entire page in JavaScript (apart from anything else, there are privacy-sensitive people who block JavaScript), because Google still wants to see the stuff that is in the source code. But it does mean you can start doing things in JavaScript that you want Google to pay attention to: you can put content in it, you can put links in it, you can put redirects in it.
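For instance, a link added by script like the one below was invisible to the old source-only crawl, but a JavaScript-executing Googlebot can now see and follow it (the URL and the ‘content’ element are made-up examples):

<script>
// Content injected after page load: the old crawler never saw this
// link, a rendering crawler will. Assumes an element with id="content".
var related = document.createElement('a');
related.href = '/2015/06/a-related-post.html';
related.textContent = 'Read the related post';
document.getElementById('content').appendChild(related);
</script>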

The downside is that stuff you did in the past precisely because you didn’t want the search engine to see it is now visible, and Google will index based on it. JavaScript overlays of whatever kind (registration, email newsletters, adverts, etc.) will all now be seen by the search engine.

What we don’t know is that magical algorithm and how much weight Google puts onto the stuff it finds. I suspect that this early into the process, Google doesn’t know what it is going to do with it either, so the old argument should still stand:

Create unique, interesting and relevant content, written in a way that the audience in question will search for it and it doesn’t matter how you present it to the search engine.

Why were the Polls about the 2015 General Election Wrong? (14 May 2015)

Never one to miss an opportunity to jump on a current affairs bandwagon: some of you may have noticed that there was this general election thing that happened in the UK recently, where we voted for a load of people. Some people were quite angry because it turned out that the people who were predicting who was going to win got it a bit wrong.

[Image: YouGov front page: “We got it wrong. Why?”]

Many of the pollsters are doing some soul searching trying to establish how they got it wrong. This article aims to help them and explain to you why they might have got it wrong.

If you are wondering why you should care, this next paragraph is for you. The pollsters who do political polling also do surveys for all sorts of companies, to help them with their strategies and many other things. Even if you haven’t actively used one in your company, you’ll almost certainly have relied on data produced by them, at least at second hand, to inform your decisions. It’s important that you can trust them, and it is important that you understand their errors so that you don’t repeat them when you do your own surveys.

1. Sampling

You can’t ask all the people in a poll. That’s the point of the election. So instead what you do is take a subset of them, ask them what they are going to do and then use that as a representation of what would happen if you did ask them all.

A large part of polling and surveying is trying to make your sample random and representative. Inevitably this is difficult: with a ‘come to me’ approach, those with strong opinions are particularly willing to impart them, while a ‘go to them’ approach might mean you end up lumping the ambivalent in with those who would actually do whatever it is you are asking about.

To solve the problem of inevitable bias in the sampling, pollsters use weighting (more on that later) so that their sample is representative of reality.

There are three broad methods of sampling at the moment, leading to different issues with the way that the questions are asked (more on that later as well):

  1. Internet panels: As used by YouGov (and many others) the panel works by inviting people to sign up to the online panel, then sending a questionnaire invite (usually via email) to a representative sub-set to complete a series of questions. Not everyone will fill one in, so you can’t be sure of an absolute representative sample, so you will still need to weight. The upside is that by doing it via internet you have a great history of the person’s past responses, the downside is that you exclude everyone who isn’t on the internet (or doesn’t like actively participating)
  2. Telephone surveys: As used by ComRes (and many others), the pollster takes a random set of telephone numbers from the phone book (plus some mobiles), randomises a couple of the digits at the end and then rings the person up. Not everyone answers, of course, and depending on when you call you might end up speaking to one demographic more often than another. The upside is that this is much more random (you going to them); the downside is that you miss a growing number of people who don’t have a phone
  3. Face to face surveys: There aren’t really any pollsters that do this any more for political polling, but you might find one for another type of survey. They are more common these days when taking a small number of people through a more in-depth interview. These were largely phased out after the 1992 election, when it was decided that it was impossible to avoid bias in face-to-face questioning

2. Weighting

Having accepted that we aren’t going to get a representative sample, the data is subsequently weighted so that it does represent the total audience.

This is usually done in political polling by using demographic data (age, sex, location, who they voted for last time, etc) and then matching it against published data sets to weight up demographics under-represented in the sample and weight down over-represented demographics.
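As a toy illustration of the mechanics (all figures invented): if 18-24 year olds are 12% of the population but only 6% of your sample, each such respondent gets a weight of 12/6 = 2 and their answers count double. In code:

// Toy demographic weighting: weight = population share / sample share.
// All the figures here are invented for illustration.
var populationShare = { age18to24: 0.12, age25to64: 0.68, age65plus: 0.20 };
var sampleShare     = { age18to24: 0.06, age25to64: 0.70, age65plus: 0.24 };

function weightFor(group) {
  return populationShare[group] / sampleShare[group];
}

console.log(weightFor('age18to24')); // 2.0 - under-represented, counts double
console.log(weightFor('age65plus')); // ~0.83 - over-represented, weighted down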

The ‘shy Tory’ effect that you’ve heard about was largely a result of this weighting. In 1992, as Peter Kellner explains, the weightings were based on census data from 1981 and were largely out of date, resulting in areas more likely to vote Conservative being under-represented in the final weighting.

In 2015 this should no longer be a problem. There is a whole plethora of regularly updated data that can tell you the exact levels of each demographic down to postcode level.

The second way that political polls are weighted is based on a question that is asked of ‘likelihood to vote’. This is an attempt to weed out those who say they are going to vote one way or another, but are largely apathetic. If you look at the datasets from a poll then you’ll see the question where people are asked on a scale of 0 – 10 how likely they are to vote (eg YouGov’s last survey (pdf warning)), these are then weighted differently by different types of pollster (here is a great summary of that).

This is where it gets tricky for the pollsters. In the example from YouGov they had a turnout of 76% (people saying 10/10 that they would vote). This is much higher than the actual turnout; however, the pollsters tend to weight each party equally here (a 20-year-old London man saying 10/10 he’ll vote Labour is treated the same as a 70-year-old Yorkshire woman saying 10/10 she’ll vote Conservative), and whether that is right or not is open to conjecture.

3. Question

The question that you ask is vitally important. Who would have thought it was so easy to ask the same question in different ways? It turns out it is. Here is the question that YouGov asks to get their voting intention weighting and the question that Lord Ashcroft uses:

  • The general election will be held this week. On a scale of 0 (certain NOT to vote) to 10 (absolutely certain to vote), how likely are you to vote in the general election?
  • Many people say that if there was a new general election they probably wouldn’t vote at all, while others say they definitely would vote. Please say how likely you would be to vote if there was another general election?

If that question wasn’t difficult enough, the question on who you are going to vote for is even more difficult to do and more prone to differences:

  • The general election is this Thursday, 7th May, which party will you vote for?
  • At the general election on Thursday, which party will you vote for?

Who would have thought it? Just asking the same question in a different manner can result in different results. You can see the effect of this by looking at Ashcroft’s marginal polls (here is Croydon Central (pdf warning)). In this poll Ashcroft asks his general intention question and then subsequently asks them to think about their own constituency. The results are different:

If there was a general election tomorrow, which party would you vote for?

Conservative 31%
Labour 36%
Liberal Democrat 4%
UKIP 23%
Green 4%
Other 2%

Thinking specifically about your own PARLIAMENTARY constituency at the next General Election and the candidates who are likely to stand FOR ELECTION TO WESTMINSTER there, which party’s candidate do you think you will vote for in your own constituency?

Conservative 33%
Labour 39%
Liberal Democrat 4%
UKIP 19%
Green 4%
Other 1%

The same people literally gave two different sets of answers in the same poll to what is effectively the same question. How important is the question? Very!

4. Answer

If you thought composing the question was difficult, imagine what it is like composing the answers that respondents can give. It’s easier for a general election because there is a finite list of parties. Or at least, so you’d think.

Towards the end of 2014 the pollsters started prompting for UKIP, and in 2015 started prompting for the Green party as well in some cases. Traditionally the answers were just Labour, Conservative and Liberal Democrat. Was this a good idea? The argument against prompting for smaller parties is that it causes their vote share to be overestimated, as people see them as an option but then change their mind come the actual day (see the question above). The argument for it is the opposite: the ballot paper lists all parties in alphabetical order, so people will see them on the ballot paper and may be more likely to vote for them. Reality is probably somewhere in between (and many pollsters decided that prompting or not didn’t make a huge difference).

Then there is the question of which order you list them in, whether this makes a difference online (it does), whether it makes a difference in telephone calls (it almost certainly does) and whether there is a difference between the two (there will be). Tradition is that you randomise the order in each survey, but does that have the effect of homogenising the answers and masking actual differences?

Then how do you list the parties? On the ballot paper it has the person’s name and party. But unless you ask them where they live first and have a handy list then this makes it difficult. Plus you then have to make sure you ask enough people in each seat to not end up with a bias based on which constituencies you have (there are 650 mini general elections, not one big one). If you don’t name the individual, will a person remember on the day that there is this one person that they’ve heard of and will vote for them?

5. Intention

There are no right answers to any of the above (which is why there are so many companies doing it, with such differing results). Having taken all of that into account, there is a subsequent process of working out intention, which different pollsters largely base on the responses their respondents give.

One argument, the ‘shy Tory’ one, is that we should overweight responses from certain demographics because either they are more likely to lie (“I might not vote” but then do) or exaggerate (“I’ll definitely vote” then don’t). The trouble with this approach is that we don’t have an awful lot of evidence to back this up. How do you weight it? Saying you should weight it, without knowing by how much is difficult.

My personal opinion (based on not a lot) is that the errors in the polls were probably caused not by a particular party being more likely to vote when they said they were, but by demographics being more (over 65s) or less (under 35s) likely to vote than they said they were going to. This then translates into different results for parties, because different age groups are more likely to vote than others.

That said, how do you ‘correct’ for that? One suggestion is that the ‘how likely to vote are you?’ question is causing the problems and that may need to change. How? Well that is the million dollar question and the person who answers it will be paid a lot of money by political parties and strategists alike.

The Importance of Tracking Stock in Analytics (8 March 2015)

I’ve been working in retail recently and it has highlighted to me the absolutely massive importance of stock (which half explains the absence of posts on here; I promise I’ll be back more now!). When I worked in BHS’s ‘restaurant’ as a student ten (and a bit more) years ago, one of my jobs was to count the amount of stuff we had left on a Sunday so that the boss could work out what to order for the next month (pre-internet, this was a tedious job of ringing up head office and reeling off a long list of order numbers and the amounts we wanted).

 

Modern day stock management is much different, with interconnectivity between stores, warehouses and databases for the company website.  But the challenges remain the same: order too much and you have to find ways of getting rid of it before the cost of storage overtakes the profit from sale; order too little and you have the missed opportunity of demand for something you don’t have (this is before you think about ordering the right amount in each colour, size, flavour, etc).

 

But you don’t come to this blog to learn the intricacies of how to avoid wasting a load of shepherd’s pie whilst keeping Vera from number 32 happy. When it comes to internet shopping, stock has its impact as well. When I spoke at ProductTank back in August last year, Matthew Curry from LoveHoney talked about how his reporting worked.

 

In Stock, Out of Stock, Off Sale Conversion Rates

 

One thing that Matt talked about at ProductTank was how he dealt with out of stock products. He said that it wasn’t good enough for his team to report product view to sale conversion to him. The reason? If the site couldn’t sell the person the product, then the conversion rate will be directly impacted by those people. Result:

  • In stock products conversion rate: 6%
  • Out of stock product conversion rate: 0%

Of course, we know that the real world isn’t quite as simple as this.  It’s entirely possible that someone could browse through eight products, four in stock, four out of stock, choose the one they like and purchase. What do you do here?  Is that a conversion rate of 12.5%? 25%? 100%?

The reality of course is that in all likelihood you will be looking at visits (if your measurement framework has one objective of selling things and another of selling more things to each buyer, the first will eventually lead down to an ‘order visits/product visits’ metric and the second to a KPI of ‘units per order’). The downside of this is that inevitably your data will look like:

  • Total Products: visit = 10k, conversions = 5%
  • In Stock Products: visits = 9k, conversions = 6%
  • Out of Stock Products: visits = 3k, conversions = 3%

In our situation above, where someone viewed both in stock and out of stock products before buying something, they’re pushing up your out of stock conversion rate.

To add another layer of complication to proceedings, we have to take into consideration the difference between visits to temporarily out of stock products and permanently out of stock (off sale) products.

 

Temporary Out of Stock

 

I have quite a few conversations with people in the office about whether we should have out of stock products visible to users or not. Users can be annoyed if they end up on a product page which is out of stock, or on a product listing page which has out of stock items in it. This will reduce conversion rates even for those who then end up going to other products which are in stock (we’ve done the analysis on this), or the other way around.

But having temporary out of stock on your site holds some advantages:

  1. There is an opportunity for the site to be able to ‘convert’ in other ways. Knowing that you are going to have out of stocks allows you to create a critical success factor or goal of getting users to save the product for when it returns (which can be extended to in stock products)
  2. Maintaining a link through to your products will continue to provide link value to product pages to encourage their rankings in search engines. This means when they do come back in stock, they’ll have maintained their high ranking and traffic
  3. It gives you the opportunity to showcase what products you have (or at least will have in the future) so that users will know the variety of your range. That ‘brand’ will encourage users to come back in the future
  4. You can showcase similar products to promote them

So there is a balance to be had between removing them completely and reducing their visibility enough that they don’t annoy those who accidentally stumble upon them.

This presents you with quite the conundrum when you are reporting: on the one hand you have a KPI for conversion from product, on the other a KPI for save-for-laters, and increasing one could actively decrease the other. One has a demonstrable value, whilst the other is a potential future value. You shouldn’t ignore your potential future value, but you are mostly going to aim for value now.

 

Off Sale Products

 

Off sale products offer you a new challenge. For off sale products you don’t have that opportunity to get your ‘save for later’, because the product is never coming back, and you don’t really care whether you continue to get the traffic because it won’t benefit you directly.

However, numbers three and four above still hold. Removing the pages loses you the ability to get visitors to the site through SEO that you wouldn’t have been able to get otherwise. These pages obviously have some value, but less value than a temporarily out of stock product. With these pages you are probably going to try your hardest to stop people navigating to them on site, while still letting them be found from outside the site.

 

Partial Out of Stock

 

One of the biggest challenges from an analytics perspective is the partial out of stock measure.

What do I mean by partial out of stock? These are people who turn up at a product page where, for some reason, you have the product in one size or colour or shape but not in another. This is an interesting concept, because if I arrive at a product page for a pair of jeans and they don’t have a 34 short (stop it), but only a size 28 long (who wears those?), then as far as I’m concerned it is out of stock. But you’ll have no way of knowing that, because you’ll have demonstrably in stock products.

There are two solutions to tracking this:

  1. Make all your different sizes different product pages. This is good from a tracking perspective and an SEO perspective (unique urls for each product colour, size, etc combination), but not a great user experience (if they don’t have a pink shirt in my size, I might be tempted to buy the yellow one instead)
  2. Have some partial out of stock value that is passed alongside your product view (‘viewed availability’)

Of course a partial out of stock value has to be taken on face value of what it is: it’s telling you how many versions of that product are in and out of stock – it isn’t telling you whether the one that your users wants is in or out of stock.
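A hedged sketch of option two, passing an availability figure alongside the product view (the event and field names are invented for illustration):

<script>
// Hypothetical 'viewed availability' push on a product page view:
// 3 of this product's 12 size/colour variants are currently in stock.
window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'event': 'productView',
  'productSku': 'JEANS-034',
  'variantsInStock': 3,
  'variantsTotal': 12,
  'viewedAvailability': 0.25
});
</script>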

The other thing you want to think about when creating this is what you are going to do with it. Knowing that a product converts at x% when it has 20% availability and at y% when it has 70% availability allows you to make decisions on how much you should worry about stock being low. Of course, 50% of 2 variants is an awful lot different to 50% of 30 variants, so you might also want to distinguish in some way how many of them were available or not.

Limiting collection of data won’t help your privacy, only limiting processing and access will (30 September 2014)

In the wake of some of the biggest scandals relating to privacy in years it is an odd question to ask, but is our attempt to increase our privacy misguided? Jennifer Lawrence and co might disagree (and now that I’m not on Blogger any more I don’t have to worry about stupid visits coming to the site every time I mention a celebrity), but I hope they’ll bear with me for a few moments.

[Photo: Jennifer Lawrence by Gage Skidmore at Comic Con 2013]

Without wanting to sound like a proper politician on this: when something like this happens, we quite often start making changes that don’t address the problem that caused these things to happen in the first place. We don’t really know whether the leaked photos came from iCloud or not, but that hasn’t stopped Apple releasing a new version of iOS with encrypted data.

Equally, our response to perceived (or real; let’s not get into that here) terrorist activity caused by ISIS, ISIL, the artist formerly known as ISIS or whatever they call themselves these days has been varied. Our Home Secretary has announced today that if they’re in government next they’ll resurrect the so-called ‘snoopers’ charter’, and in Australia the Abbott government wants to do the same thing. Tim Berners-Lee, the inventor of the World Wide Web, thinks that the opposite should happen and that we should have our privacy embedded in law.

All these people are missing the point – where we currently are in the world already means we have enough data about people to be able to break their privacy. At MeasureCamp a couple of weeks ago I was hosting a session to work out what Analytics vendors and agencies should do to make life better. Jim Sterne said in that session (and I apologise for paraphrasing)

“It takes a vanishingly small number of data points to get personal information about someone”

This was echoed at a Single Customer View session I went to hosted by DMPG, as Damian Blackden from Device 9 and Joe Reid from Krux talked about how their tools made it incredibly easy to do cross device stitching using fingerprinting (this isn’t new, but it is getting much more advanced than ever before). A vanishingly small number of data points are needed about a user to be able to get personal information about them.

So, given that with modern computing power and ‘big data’ it is relatively easy to get personally identifiable information from a small number of non-personally identifiable metadata points stitched together, why are we worrying about what we are allowed to collect?

Data Collection

The ICO and the European Union have got it badly wrong on cookies. Restricting data collection doesn’t work. It is an illusion of privacy to think that you aren’t being tracked (even with ‘Do Not Track’ or blocking third-party cookies a la Safari). We’re far too advanced now: we can link data points together and get the personally identifiable information, so we are wasting our time limiting what people can collect.

Not only are we wasting our time limiting what can be collected, but we are wasting our time limiting the way it can be collected. If cookies can’t be used, there are other ways of collecting the same data, so this won’t work in the long term. It is playing whack-a-mole with technology, and sooner or later something new will come along to replace whatever is restricted.

[Image: cookie laws are a waste of time]

Blocking third party cookies by default is an even bigger red herring, instead of removing the collection of data, you’re simply handing it over to the big ad network companies: Google, Facebook and Twitter. They’re third party cookies on this website, but you already have their cookies on your browser because you’ve been to their website and got them as a first party cookie.

And therein lies the problem – cookies being first or third party are identified from when they are set, not what they are at the time. I’ve got a Facebook button on this site and that means they know that you are visiting this page.

A far better solution would be increased transparency on what data we are collecting. By forcing websites to explicitly say what they are collecting, users can make the decision on whether they want to use that site or not. Of course this would need to be very up front and users would have to pay attention to it. Opt in doesn’t work (as shown by those media websites who ran advert-less paid subscription alongside free, but ad supported content – nobody wants to pay for something they can get for free). This way, you will only collect data to the level that it doesn’t affect your business.

Data Processing and Access

What we can do is control what sort of processing companies do and who has access to what level of data.

I said that 2013 was going to be the year of the data protection officer. I don’t think I was quite right, but we’re not far from this. Data protection is going to be increasingly important in companies, especially those that deal with sensitive data. This person’s job is not to decide what should be collected and what shouldn’t be collected. We’ve already worked that one out.

This person’s job will be to decide how data should be processed and who should be allowed access to it. Long gone will be the days of live databases where, if you know the right developer, you can have access to an entire company’s customer base’s credit card details. The data will have to be stored in ways that make it impossible for the truly sensitive data to be accessed.

The data that is personally identifiable will have to be stored in a way that makes it impossible for someone outside the systems to stitch it together with data from another system. Access to databases should only be available to those who need it. People who have left the company should have their access revoked as soon as they leave the organisation, or after a period of inactivity (but for the love of god, don’t have passwords expiring while the account is active!).

More importantly, we should not be storing usernames and passwords in plain text. If everyone did this one simple thing, the blushes of Ms Lawrence and others would probably have been spared: their accounts were most likely hacked by someone stealing a third party database of usernames and passwords that had been reused across different accounts.
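To illustrate the point: salted, deliberately slow hashing (rather than reversible encryption) is the standard way to store passwords. Here is a minimal sketch in Python using only the standard library – the iteration count and function names are purely illustrative, not a recommendation for any particular system:

import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune the cost to your own hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Derive a salted PBKDF2 digest; store the salt and digest, never the password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    # Recompute the digest and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

A database stolen from a site doing this contains nothing directly reusable on other accounts, which is exactly the scenario described above.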

SEO considerations when moving from Blogger to WordPress
Fri, 29 Aug 2014 – http://www.whencanistop.com/2014/08/seo-considerations-when-moving-from-blogger-to-wordpress.html

Well I said I’d tell you about the SEO things that I was going to do for this blog when moving from Blogger to WordPress, so here it is (I’ll keep this updated as I do new stuff).

Below you’ll see a graph of my search traffic over the last 12 months.

Whencanistop.com visits from organic search since May 2013

My organic search traffic is down a little on last year, but I’m not overly concerned. The last two weeks represent the new website (of course, the last week is only 4.5 days so far, one of which was a bank holiday). At the moment I’m quite happy, but I think there are some things that need to be worked on.

1. Site URLs

This was a big problem for me for a couple of reasons. Firstly, Blogger decided to build my URLs in a strange way – and a strange way that kept on changing.

Here is an example.  The title of the post is:

The difference between Accuracy and Precision

Here is the URL:

http://www.whencanistop.com/2009/11/difference-between-accuracy-and.html

Now there are five things that are important here:

  1. After the domain name I have a /year/month set up
  2. The ‘slug’, as WordPress calls it, contains the ‘title’ of the post
  3. Blogger limited the number of words in the slug so that it wasn’t too long
  4. Blogger removed common joiner words, but not all of them (eg it removed ‘the’ in the example above, but not ‘and’)
  5. There’s a .html at the end

This was partially solved when I imported all the posts into the site by changing the permalinks to a custom structure:

/%year%/%monthnum%/%postname%.html

This solved the date and the .html, and it solved the postname for some of the posts; the rest I had to go through individually and update. Inevitably I knew there would be some I’d miss, so I’ve been keeping an eye on the 404 error page report in Google Webmaster Tools.

Error page report from Google Webmaster tools for whencanistop.com

The URL I’ve listed up there is one that I picked up this way, so I’ve updated it and let Google know.

The other option at this point is to set up your .htaccess file to point all the old URLs to the new ones using 301 redirects. I’ve got 150-odd posts on here, many of them several years old with many inbound links, so I decided it was easier to keep the URLs the same.
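If you did want to go down the redirect route, the rules could be generated from a URL mapping with a short script. A minimal sketch – the slug mapping and output filename are hypothetical, and the output is Apache mod_alias Redirect directives:

# generate_redirects.py – hypothetical sketch; the mapping would really be
# built from exports of the old and new sites.
REDIRECTS = {
    "/2009/11/difference-between-accuracy-and.html":
        "/2009/11/the-difference-between-accuracy-and-precision.html",
}

with open("redirects.htaccess", "w") as f:
    for old_path, new_path in REDIRECTS.items():
        # Older Apache versions want a full destination URL, so include the domain.
        f.write(f"Redirect 301 {old_path} http://www.whencanistop.com{new_path}\n")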

2. Links within posts

When I imported the blog from Blogger it brought all the links with it, but it updated them to the new URL structure.

That means they are all now wrong, hence the 404 graph above going upwards in red. More or less every post has a link to another post on the site (deliberately so, because I want to showcase the other content, and inline links are the best way of doing that).

Of course, I imported all the posts before I updated the structure of the permalinks, so they’re now all wrong. Even if I had waited until after that, they’d still all have been wrong because of issue number 4 above – the odd thing Blogger did with removing linking words.

I’ve only just spotted this, so I’m going to think about the best way of solving it before I do anything. My suspicion is that there are two options: either go through each post by hand looking for links and updating them, or go into the database and write a script that updates all the links to the new versions in one go. Neither sounds great to me.
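For flavour, here is what the script option might look like – a minimal sketch, assuming a standard WordPress MySQL database with the default wp_posts table and the mysql-connector-python package; the credentials and the example slug fix are placeholders:

import mysql.connector  # assumes the mysql-connector-python package

# Placeholder mapping of wrongly imported links to the correct live URLs.
LINK_FIXES = {
    "/2009/11/the-difference-between-accuracy-and-precision.html":
        "/2009/11/difference-between-accuracy-and.html",
}

# Placeholder credentials; wp_posts/post_content is the standard WordPress schema.
conn = mysql.connector.connect(
    host="localhost", user="wp_user", password="secret", database="wordpress"
)
cur = conn.cursor()
cur.execute("SELECT ID, post_content FROM wp_posts WHERE post_status = 'publish'")

for post_id, content in cur.fetchall():
    fixed = content
    for wrong, right in LINK_FIXES.items():
        fixed = fixed.replace(wrong, right)
    if fixed != content:
        cur.execute(
            "UPDATE wp_posts SET post_content = %s WHERE ID = %s", (fixed, post_id)
        )

conn.commit()
conn.close()

Take a database backup before running anything like this, obviously.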

3. Images

Image search data for whencanistop.com

Images were a big problem for me. As you can see above, I get quite a lot of impressions in search engines through image search: almost 90,000 impressions and over 500 clicks according to Webmaster Tools. The equivalent figures in web search are 52,000 impressions and 2,500 clicks.

The trouble with images is that the old images all had URLs that looked like this:

http://1.bp.blogspot.com/-8zwH6nxCrCY/UyH0Nr-KbNI/AAAAAAAAA_c/ZR0Zcnv3KIw/s1600/SEO_email.jpg

This is a problem with a capital P. There is no way I can replicate those URLs on this site, and I can’t redirect them because they’re not on the same domain. So unfortunately I basically have to give them up and hope that the new image URLs eventually get indexed. Tough luck on that one.

4. ‘Category’ pages

The old site had a load of category pages (for want of a better word) – in Blogger world these are called ‘archive’ pages. Here is an example of one, along with the equivalent category page on the new site:

http://www.whencanistop.com/2012_04_01_archive.html

http://www.whencanistop.com/category/hbx

They were mostly monthly archives, plus pages for the tags that I gave to posts. I’m not upset that these are gone, because they are effectively duplicate content with only a little long-tail search value spread between the different pages.

As it turns out, WordPress does its category pages in much the same way, so that isn’t a big problem.

As with the images, the data suggests this wasn’t a huge volume (3k impressions and 90 clicks in web search, and 30k impressions and 250 clicks in image search, for the archive pages over the last couple of months). I’m not going to lose sleep over that one.

5. Page load speed

Crawl stats from webmaster tools for whencanistop.com

One of the big worries with a move like this is that it will take longer for the site to be crawled. For a start, I’ve moved from hosting controlled by Google to hosting from a company called TSOHost. Will it take longer for Google to fetch my content now? Of course it will.

Unfortunately there isn’t much I can do about that. I think part of the reason for the increased time spent downloading a page is the broken links in the pages, described above. Hopefully once I solve that problem, this one will go away too.

Nonetheless, I’ve been double-checking the page load time on Pingdom and nothing seems too problematic at the moment. A total load time of 1.86s for a page with several images isn’t too bad; most of it loads within 1.6s, apart from some heavy image lightboxes. 0.8s for the server to respond is quite high, though.

Something to keep an eye on for the moment.

What are ‘segments’ in Measurement Frameworks?
Fri, 29 Aug 2014 – http://www.whencanistop.com/2014/08/what-are-segments-in-measurement-frameworks.html

A couple of weeks ago I was privileged to talk at ProductTank in London: a group of product managers who meet monthly to have a few drinks and talk about being a product manager (well worth signing up if you are one!). I was a guest speaker along with Matty Curry of Lovehoney and Peter O’Neill of L3Analytics, discussing web analytics and how to do it. Predictably, I talked about measurement frameworks in non-retail environments and how to use them to work out KPIs from your business objectives and strategy – an important thing for a product manager who wants to be able to report on the success of their product. We ran through some case studies of mostly nameless companies and how they did this.

A Measurement Framework for When Can I Stop

To my surprise one of the big questions that I got subsequently was “What do you mean by segments and how do I work them out?” Since then I’ve had several other conversations about this. My explanation of segments is always:

A segment is anything you can change that will affect your KPIs

Of course the classic example for many companies is your paid search campaigns – you can change them, so you want to look at how each one does individually. However maybe what we need to do is go back a level first.

Dimensions

A dimension is an attribute that takes one of a series of distinct values. Dimensions can relate to the person themselves (eg their age or their sex), to a particular visit (where did they come from to get to the site, what day of the week is it, have they visited before) or to a particular page (what is the URL or title of the page, what category does the product on the page belong to, what was the previous page).

These dimensions are effectively the heart of your reports. When you look at a report, you will have a column that lists the values of a dimension, with subsequent columns for the measures associated with each value (whether that is impressions, visits, conversions, revenue, visitors, etc).

The thing with dimensions is that they tend to be quite custom, depending on what it is that you are doing. That said, some are common and you will find them in most analytics tools:

  • Traffic sources (either as a domain name, or grouped together into types like ‘organic’)
  • Search terms (this is a bit of a cheat though, because it comes from the same data as above)
  • Page URLs (or page titles, or page names, or all of the above)
  • Browsers, operating systems, location, etc (all of these come from information sent by the browser with its request for the page)
  • New or repeat visits (based on whether the user has a cookie or not, usually)

A landing page report tends to be a standard dimension

But for you and your website there are going to be different things that you might want to capture. On my website, for example, I tag each article with a series of categories so that users can find all the posts on a certain subject. I might want to know what sort of articles people are looking at, so I might use the category of the article as my dimension. This tends to be relatively common across publishing sites.

On another project I’ve been working on recently, one of the things we do is get people to log in, and suddenly the information that users give us about themselves becomes a useful dimension. Note that if you do have a registration process (or any form, for that matter), make sure you only collect essential data. You should not be collecting data for your dimensions if it is going to have an impact on your conversion rates. And if you do collect non-essential data, users are more likely to give it up if you prominently tell them why you are collecting it and what you intend to use it for (not in the small print).
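To make the dimension/measure distinction concrete, here is a toy sketch with invented data: the dimension supplies the rows of the report, and the measure is summed against each dimension value:

from collections import defaultdict

# Invented visit-level data: 'category' and 'source' are dimensions,
# 'visits' is the measure.
data = [
    {"category": "analytics", "source": "organic", "visits": 3},
    {"category": "seo", "source": "organic", "visits": 2},
    {"category": "analytics", "source": "social", "visits": 1},
]

# A report is one row per dimension value, with the measure summed against it.
report = defaultdict(int)
for row in data:
    report[row["category"]] += row["visits"]

print(dict(report))  # {'analytics': 4, 'seo': 2}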

Segments

When we talk about segments, what we actually mean is: which particular value (or group of values) from a dimension are we most interested in and want to treat as a separate entity? With a segment as a separate entity, you can then do other things with it – for example, view how it behaves across a different dimension.

As an example, one of my clients last year had a site that was aimed at driving people off to other websites where they could make a purchase (of sorts). The KPI was how many people clicked off to these third party websites. But it turned out that you could only buy the product if you were in the UK or Ireland, so we created a segment for the UK and Ireland. We then used this segment when doing our analysis and reporting, in conjunction with the other dimensions.

Matty Curry talked at ProductTank about how he segmented out-of-stock products out of his figures. That way he had a conversion rate for in-stock products, to see what he needed to do to get more purchases from them. And for the out-of-stock products that people were finding, he could send that list to the warehouse team to stock up on. ‘Is the product in stock’ is the dimension and ‘out of stock’ is the segment.

When I worked at a publisher, we segmented a certain type of article out into a different group. These articles were syndicated from somewhere else and we wanted to see if they actually had an impact. Sure, we could see them in the pages report (and we could have tagged them as a separate dimension if we wanted) to tell how many of them there were. But our hypothesis was that they were difficult to find in organic search and didn’t encourage users to stay on the site because there were few links. We created a segment for these articles and, lo and behold, those things turned out to be true, so we could base our negotiations on that.

Ultimately these are things that you as a business will need to work out, based on how you operate, what you have personnel doing and how much control you have over change in each area.
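Sticking with the UK and Ireland example, a segment is effectively just a filter on a dimension that you then report across. Another toy sketch with invented data:

# Invented data: 'country' is the dimension; UK plus Ireland together form the segment.
visits = [
    {"country": "UK", "clicked_out": True},
    {"country": "Ireland", "clicked_out": False},
    {"country": "France", "clicked_out": True},
    {"country": "UK", "clicked_out": True},
]

segment = [v for v in visits if v["country"] in ("UK", "Ireland")]
click_out_rate = sum(v["clicked_out"] for v in segment) / len(segment)
print(f"Click-out rate for the UK/Ireland segment: {click_out_rate:.0%}")  # 67%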

Creating segments in your analytics system

New visits and repeat visits to this blog

How do you do this in your analytics system? Well, in most systems there is the option of adding a secondary dimension to your existing one, so that you can break down one dimension by another. In this case, you may need to create ‘fake’ dimensions for groups of another dimension (traffic source tends to be one of these ‘fake’ dimensions – the system calculates it in the background based on a rule set of what to put where) so that you can get the segments you need.

Alternatively, you can use the built-in segment creators that exist in most analytics tools today. Google Analytics introduced theirs back in 2008 (I’m still amazed at how old this blog is!) and SiteCatalyst not long after. As always, though, working out what it is that you need to measure, and how much control you have over it, is the most important thing to do first.

Welcome to the New When Can I Stop
Wed, 13 Aug 2014 – http://www.whencanistop.com/2014/08/welcome-to-the-new-when-can-i-stop.html

Welcome to the new When Can I Stop. It looks a bit different, doesn’t it? What is going on with all that, Alec?

Back in 2012, when I last posted about how this blog was performing, one of my key points was that I was aiming for two posts every month. Well, this year, bar this post, there has only been one so far, and in 2013 there were only 14. Last time I wrote, I said I was trying to work out the purpose of this blog, and a change in my situation means the blog has now changed significantly.

What has changed is the strategy. The stated purpose of the previous blog was to use it to gain authority in the area of Analytics and then use that to get myself better jobs. Now that I’ve decided to become a freelancer, that strategy no longer works and I’ve had to think up a new one. The old blog never really worked at driving business my way (it had a way of contacting me in the sidebar, but nobody ever did).

If you imagine that in the new world the strategy is to get people to use me as a freelancer, then the objectives of the blog fall out relatively simply. I still want to be seen as an industry expert, but I also want to tell the frequent visitors about the services I offer and provide a simple place for people to get in contact if they’ve heard about the brand.

A Measurement Framework for When Can I Stop

When Can I Stop is now a business that operates in the Analytics sphere, as opposed to just a blog kept up by someone who works for other companies. Because of that, I needed to change the way the blog worked so that it was more about letting people know that, and getting them closer to the products and services on offer. You’ll see the significance of that: instead of just being a blog, there are now tabs at the top that show what I can offer as a service and my experience. The plan is that this is going to be fluid and develop over time.

The other significant problem with the ‘old’ blog was that I’d taken a templated design from Blogger almost seven years ago and then fudged it over time to suit whatever it was that I wanted to do. Widgets were put in that didn’t work so well, things were moved around, but all the time the background remained the same and so did the colour scheme.

The Old When Can I Stop

In the new When Can I Stop World I needed something that was unique to me, so that when I go off and see prospective customers I can have a brand that they will see on a business card. Creating a brand meant that I had to keep as much as possible from before (so the dots still exist in the ‘o’), but I’ve made the whole thing a lot cleaner and better looking:

When Can I Stop brand

For those who are interested, I’ve also used a new font (it’s called Caviar Dreams) so that it is distinctive (all the titles of posts are in Caviar Dreams as well).

To make the whole thing a bit cleaner, I’ve also changed the background from being dark blue to being light green (it’s actually the green from the second dot in the old style). I’ve dropped the blue for the moment, but that is mainly because it didn’t fit in very well with the rest of the colour scheme going on with the site.

Of course having a brand means that I can now do other things like create templates of documents that follow this branding for when I produce work. The dark blue background of old was nice, but it wasn’t going to win any powerpoint presentation prizes. Now that the overlaying colour is white, I can use the touches of the blue, green and orange in the documents that I produce.

So what is coming next?

The aim is that I will go back to producing more blog posts. Historically, the ones that have done well in terms of visitors have been how-to guides for tools. However, whilst they highlight the expertise I have, they don’t really highlight any skill beyond being able to use the tool. So the posts are going to change tack a little. My core skill is being able to define business objectives from strategy (as I will be doing in a training course for measurecamp5 in September), and this is the service that I’m looking to expand.

You may have already noticed that taking effect. In the menu bar at the top I don’t talk about all the individual tools that I know and how deeply I know them. That’s of no use to you when you only have one tool and want someone with deep knowledge of it. What I can offer is practical ways of using that tool by tying it back to your business objectives, hence one of my main tabs at the top is ‘measurement frameworks’.

Of course this is all going to be fluid and you might find that in a couple of weeks’ time I’m suddenly focusing on something else. At the very least, I intend for it to be a fun journey, so you’re welcome to join me.

In the next couple of weeks I’m going to be doing some posts on here about the pitfalls of moving platforms and the things I’ve done to try and make it so that I don’t lose anything. Keep your eyes peeled!
