Farewelling my smartwatch – a tale of data value

I stopped wearing my smartwatch this week.

Or, to be more accurate, I stopped wearing the latest smartwatch that I have been wearing. I started off with a Fitbit Charge HR (arguably not a smartwatch, I’ll give you that) until that fell to pieces, then moved on to a Sony Smartwatch 3, and then had a brief dalliance with a borrowed Ticwatch S. They now all sit abandoned while I toy with the idea of putting them on eBay, taking them to bits or strapping them to the dog while I’m at work to see just how active he isn’t during the day (greyhounds are good like that).

When I started experimenting with wearables, it was for a pretty solid reason. I was unhappy with my fitness, and I wanted to get an idea of how many steps I was taking in a day, monitor my heart rate when I was exercising, and get some feel for how many calories I was burning in a day (acknowledging how furry these estimates are when measured purely through a wearable device). Starting with the Fitbit, I began to get a sense of my activity level. I also used Google Fit on my phone as another gauge of activity (given that I’ve pretty much always got my phone on me), and started to compare what the different platforms thought I was doing, adding Strava into the mix for more focused activities (read: cycling).

For a while, it really helped. I got to a point where no matter which device or platform I was looking at, I was getting a pretty consistent estimate of how many calories I was burning in a day, and of how that varied between a day full of meetings in the office and a weekend where I might spend a couple of hours out on the bike, then come back to a house full of chores to keep me moving for the rest of the day. This gave me some very tangible boundaries on where my diet needed to be, how far off I was from striking the right balance between intake and expenditure, and what difference it made to fit exercise into my daily life.

But then things started to get annoying. The frequent charging. The catching on my motorbike jacket every time I went to put it on to ride to work. The anxiety of leaving the house without it and realising that my activity wouldn’t be counted (seriously – it’s a bloody fitness tracker, not an ankle bracelet that’s going to send a message to your parole officer if you leave it behind). But mostly, the fact that the data I was getting from it was no longer really telling me anything I didn’t already know. I knew more or less what sort of range of calorie expenditure I’d have throughout the day depending on how active I’d been, and the ‘extras’ like getting notifications of messages and emails just weren’t adding that much in the way of value.

This got me thinking about a question I get asked a lot when I talk about our experiences setting up Flinders Connect, and now redesigning and reworking six more student service environments across the University – what data do you collect about the type and volume of enquiries that you receive?

If I had to answer that question in one sentence, then it would be that we collect just enough to help us understand what we need to understand in order to make good decisions, and no more.

The reason for this at the time was pure pragmatism. There was a theoretical ideal that every single enquiry that arrived – through the website, in person or over the phone – would be manually logged in a single CRM, right down to a person asking for directions to the nearest bathroom, and that the CRM should be the absolute record of every interaction with a student, no matter how small. The problem with this was that (1) we didn’t have the resources to track to this level of detail without significantly slowing down service to customers during peak periods, and (2) it would have sent the service team around the bend with the frustration of capturing data at that level of granularity without any apparent benefit, including times when students didn’t want to give us their names, or had a question so simple that recording the interaction in the CRM would have taken longer than answering it.

In short, each piece of data we collected came at a cost, and the cost of following the ‘purist’ CRM philosophy outweighed the value that was going to be generated from it. If data is the new commodity, then in the same way as any other commodity there is benefit in understanding how valuable it is before you start digging it out of the ground.

The cost of data isn’t limited to the time it takes to extract it, particularly where we can collect data behind the scenes without any conscious effort from the individuals generating it. This is where the discussion gets very large, very fast, once we start talking about (for instance) the ethical cost (how do we determine whether we should even collect the data in the first place?), the service barrier cost (will collecting mandatory data drive away customers who would otherwise access the service?), the security management cost (how much effort is needed to keep the data appropriately secure?), the storage cost (not just space, but ensuring that the data are collected in a way which is connected, reliable and usable) and the interpretation cost (if we have too much, will we ever be able to identify anything meaningful in it?). Of course, the last of these is where machine learning and AI can, and will, make vast impacts (good and bad) as we unleash them on huge datasets to do their own exploration.

In our case, what we did have as a ‘lower cost’ alternative was raw numbers of customers coming through the physical queueing system, which we could add to the enquiries landing in the CRM from online channels, and in turn add to the incoming phone data – all of which was captured behind the scenes without the need for conscious action. A perfect picture? No. But one which still revealed the height of the combined peaks in demand (which all hit at once), showed what proportion of enquiries was coming through each channel, gave us critical information for planning the next cycle, and offered at least some hints at things in the data that warranted further investigation.
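
For the curious, here’s a minimal sketch of what that ‘lower cost’ aggregation can look like in practice. It assumes, purely for illustration, that each channel can export a simple CSV of daily counts; the file names, column names and channel labels below are hypothetical, not our actual systems:

```python
# A sketch of combining per-channel daily counts into one demand picture.
# Hypothetical inputs: one CSV per channel with columns 'date' and 'count'.
import csv
from collections import defaultdict

def load_daily_counts(path):
    """Read a channel's CSV of (date, count) rows into a {date: count} dict."""
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["date"]] = int(row["count"])
    return counts

channels = {
    "in-person queue": load_daily_counts("queue_counts.csv"),
    "online (CRM)": load_daily_counts("crm_online.csv"),
    "phone": load_daily_counts("phone_calls.csv"),
}

# Add the channels together to get total demand per day.
combined = defaultdict(int)
for counts in channels.values():
    for date, n in counts.items():
        combined[date] += n

# Show the five busiest days, with the channel mix for each.
for date in sorted(combined, key=combined.get, reverse=True)[:5]:
    total = combined[date]
    mix = ", ".join(
        f"{name}: {counts.get(date, 0) / total:.0%}"
        for name, counts in channels.items()
    )
    print(f"{date}: {total} enquiries ({mix})")
```

Nothing sophisticated, and deliberately so: even this level of aggregation is enough to see when the peaks land and which channels they arrive through, without logging every individual interaction.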

It gave us enough data to measure some basic service metrics, to compare them over several cycles (one per year in this case, much longer than my one per day on the smartwatch), to have enough staff in place to cope with demand throughout those cycles, and to start many other discussions about where to focus service improvement efforts. This will continue as the whole environment undergoes more change and we start to collect more data, but still – only enough to help us make the decisions we need to make, and always with one eye on the costs involved.
