Zen and the art of Learning Analytics

I was fortunate this week to travel to Dunedin on the South Island of New Zealand to attend the ascilite 2014 conference, and one of the notable aspects of the program for me was the number of papers relating to learning/learner analytics in some shape or form. While there have been papers in this field dating back as far as the 1999 ascilite conference, this was the year that analytics really emerged as one of the dominant topics of conversation. The most encouraging thing for me, though, was that the analytics conversations appeared to be shifting away from the ‘what’ and ‘how’ of analytics to the ‘why’, which is where the real interest in analytics (and most other things) lies.

SIPDE and the art of motorcycling

On the way back to the airport through the truly stunning landscape of the South Island, complete with wide, smooth, curvy roads, I thought back to my days as a motorbike instructor helping learner riders to get their licences. Most of them had never ridden before, so they were not only learning how to coordinate all four of their limbs to make a vehicle go where they wanted it to, but at the same time taking on a plethora of risks that might also have been there as a car driver, but that have a vastly higher impact should they make an error of judgement as a motorcyclist. As part of the theory component of the course, we’d introduce them to the SIPDE model of managing the risks they would encounter on the road, which in brief incorporates the following process:

Scan for risks, continually looking around for potential danger;
Identify risks, by consciously noting that there was something in the vicinity which could bring them undone;
Predict what might happen next, in particular when dealing with larger vehicles and what they might do;
Decide on a path of action, looking at exit routes, speed changes, warnings, or simply remaining aware; and
Execute the path of action, following through on whatever they had chosen as the appropriate action based on the situation before them.

The challenging thing as a motorbike rider is that this all needs to go on inside your head, without the opportunity to consult with anyone else in the split second of solitude you often have in which to make a call on a situation. This made me reflect back a little on one of the comments I heard during a conversation on analytics at the ascilite conference (unfortunately I’ve forgotten who said it), specifically:

‘you can’t just dump a bunch of analytics on academic staff and expect them to understand and act on them.’

Analytics and the art of SIPDE

It struck me that in the field of analytics right now we are probably placing a lot of expectation on academic staff to interpret data from their virtual learning ecosystem and turn it into appropriate actions, without giving them the support they need to understand the data and the inferences that can be drawn from it. This was nicely summed up by Gunn [ref]Gunn, C. (2014). Defining an agenda for learning analytics. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 683-687).[/ref] in her paper on defining an agenda for learning analytics:

‘Perhaps the most important research question for learning analytics at present is: how can available raw data be converted into actionable knowledge for teachers and learning designers who are not technical experts?’ – Gunn

This is where I think something like a SIPDE model could be used quite effectively when transposed into a learner analytics model, though with two important differences:

  1. the introduction of more than one actor into the model to support different stages of the process; and
  2. the closure of a feedback loop to enhance the overall effectiveness of the system (which could also be related back to our motorbike rider, but I’ll leave that to the reader as an exercise).

For the purposes of translating actions into actors, I’ll noun-ify the five steps in the SIPDE model and talk in a little more detail about each.

The Scanner – in this scenario, the Scanner is the system itself. This will often have the LMS (Moodle, Learn, whatever) as the primary data collection point, though how long that will remain the case is another story altogether. That aside, the system is the thing that is continually collecting, aggregating and displaying information in a virtual learning ecosystem. This could be as simple as the generation of one of the very basic log reports in something like Moodle. In an LMS sense, the Scanner is the system itself and the way it collects and presents data.
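
To make that concrete, here is a minimal sketch of the kind of raw report a Scanner might produce: a count of logged events per student, built from a hypothetical CSV export of VLE activity. The file name and the user_id column are assumptions made purely for illustration; a real Moodle installation would draw on its own log tables, which vary between versions.

```python
# A minimal, illustrative Scanner report: events per student from a
# hypothetical VLE log export. The file name and the 'user_id' column
# are assumptions, not a real Moodle schema.
import csv
from collections import Counter

def activity_counts(log_csv_path):
    """Count logged events per student in the export."""
    with open(log_csv_path, newline="") as f:
        return Counter(row["user_id"] for row in csv.DictReader(f))

if __name__ == "__main__":
    for user_id, total in activity_counts("vle_log_export.csv").most_common():
        print(f"{user_id}: {total} events")
```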

The Identifier – The Identifier is the person, or the part of the system, that picks up ‘outliers’ or ‘anomalies’ in the information available to it and nominates them as something potentially out of the ordinary. In many cases this will either be a person running a manual report and spotting something in the data that looks concerning, or a pre-defined query incorporating some kind of tolerance criteria that triggers a red flag to someone that there might be an issue. A simple example would be someone looking at a ranked listing of students according to their logins to a VLE and identifying those who haven’t logged in at all as potential concerns (the Identifier as a person), or a query written to periodically scan the log tables, identify students who haven’t logged in at all, and send a notification to the course coordinator (the Identifier as a system). At a systemic level, this is where much of the effort to date has gone: attempting to create valid criteria that can reliably identify potential issues and bring them to the attention of the Appropriate People, often manifested in some kind of dashboard or exception report.
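
As a sketch of the Identifier-as-a-system case, the fragment below compares a hypothetical enrolment export against the activity log from the Scanner sketch above and flags students with no recorded activity at all. The file names, column names and the final print step (where a real implementation would send a notification to the course coordinator) are assumptions for illustration, not a real Moodle API.

```python
# An illustrative Identifier: flag enrolled students with no log entries.
# File and column names are assumptions; a real system would query the
# LMS log tables and send an actual notification.
import csv

def ids_from_csv(path):
    """Read the set of student ids (user_id column) from a hypothetical export."""
    with open(path, newline="") as f:
        return {row["user_id"] for row in csv.DictReader(f)}

def flag_non_starters(enrolment_csv, log_csv):
    """Enrolled students who do not appear in the activity log at all."""
    return ids_from_csv(enrolment_csv) - ids_from_csv(log_csv)

if __name__ == "__main__":
    for user_id in sorted(flag_non_starters("enrolments.csv", "vle_log_export.csv")):
        # A real Identifier would notify the course coordinator here.
        print(f"Red flag: student {user_id} has not logged in yet")
```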

The Predictor – The Predictor is the interpreter of the data, and this is where I suspect we have a gap in many cases. The Predictor needs to be a person or people with the knowledge required to cover things like the nuances of how the data were collected and constructed, the intended design of the online course environment, and potentially the students involved. In the context of effecting meaningful change, this is possibly the most significant part of the process, as it is the one where the actors involved need to assess the inferences that could be drawn from the data and what options there are for intervention. This step is where Gunn’s question posed earlier comes to the fore – how do we appropriately predict cause based on the data we have available, and who needs to be involved? It is also where involving more than one person becomes a valid option, rather than assuming that the educator should be the sole actor attempting to predict what might be inferable from the red light on a dashboard staring back at them.

The Decider – The Decider is the person or persons who need to make the final call on the appropriate path of action in any data-driven intervention, and this is another area where multiple players could be involved, which will no doubt make it a challenging aspect to formalise. If Student Support believe a student requires some kind of intervention, yet their Lecturer does not believe it is necessary or appropriate, then who comes up trumps? I’ll leave that one for another day, and simply say that there will invariably be a person or persons who need to make the final call based on the information provided, and that there needs to be a very clear accountability model for what this should look like.

The Executioner – Yes, I know, this is unfortunate nomenclature, but sometimes sticking doggedly to a model of terminology in the spirit of consistency will do that to you. The Executioner is the person or persons who act on whatever interventions are deemed appropriate by the Decider, and there may well be more than one of them, depending on the number of actions identified from the interpretation of the data. This is a nice segue into the next part of this post, namely the different areas of action that could be identified by the Predictor and Decider in this process.

SIPDE and the art of intervention

There are four potential areas of intervention that are immediately apparent: student support interventions, learning design enhancements, system enhancements and analytics enhancements. It is worth noting that this is also the point where the feedback loop in the SIPDE model is closed, either within the objects of the analysis or within the model of analysis itself, which we will discuss later.

Student support interventions

I list student support interventions first as this is the area most commonly associated with analytics, in the form of ‘early interventions’: sending ‘nudges’ to students asking whether or not they need help, providing them with links to support channels, and the like. As this area tends to get most of the attention in analytics conversations, I will give it the least, in a show of solidarity with those areas that have a greater chance of ‘flying under the radar’.

Learning design enhancements

As Russell [ref]Russell, C. (2014). Herding cats and measuring elephants: implementing and evaluating an institutional blended and mobile learning strategy. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 211-221).[/ref] states,

‘Learning analytics and institutional data can also be used to facilitate pedagogical evaluations within disciplines, as part of systemic and evidence-based curriculum redesign.’ – Russell

Flipping this concept around are Kennedy et al.[ref]Kennedy, G., Corrin, L., Lockyer, L., Dawson, S., Williams, D., Mulder, R., Khamis, S., & Copeland, S. (2014). Completing the loop: returning learning analytics to teachers. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 436-440).[/ref], who state that:

‘We cannot fully understand students’ learning processes as captured through learning analytics without an understanding of the design of the learning task.’ – Kennedy et al

It stands to reason, then, that learning design modifications and enhancements should be a clear consideration for the Predictor when analysing potential ‘red flags’ in the data. Do the data show a genuine issue with student interaction, or are there elements of the course contributing to low engagement that could be rectified with the support of a learning designer? In cases like this, it may well be a combination of the academic and a skilled learning designer analysing the data to establish the likelihood of course design being a culprit, and working with the Decider (if they are not one and the same person) to map out a plan of action to improve the course. In this sense, analytics and the continuous improvement of online learning environments should in many cases be inextricably linked.

System enhancements

Technology can be just as much a barrier as an enabler for students, and this should be a consideration for whole-of-system custodians (LMS administrators or reference groups) who may well have a role to play as Predictors/Deciders in this model. Identification of system-wide issues is something that would likely take time to infer from analytics data, as it may require aggregating common threads (quantitative and qualitative) over time in the reasons students give for not engaging with an online learning environment. Without dwelling on this as a source of frustration for students, some examples might be difficulty accessing the VLE on mobile devices, poor system performance or stability, or poor interface design leading to confusion for students.

Analytics enhancements (with a word on ethics)

This is the unique element among the four areas, as it relates to analysing the analytics themselves. It provides the closure of the feedback loop mentioned earlier, in the sense that the measures themselves are continually under review. A learning analytics project is therefore never ‘finished’; it is simply en route to its next refinement as the system matures and evolves. This is also an area where the Predictor needs to be someone with an intricate understanding of what data have been collected, how they have been aggregated, filtered and presented, and what technical limitations there might be on the inferences that can be drawn. Enhancements to the analytics themselves could comprise modifications to the queries being run to eliminate ‘false positives’, cross-referencing information within the system (for example, only show me students with low online engagement IF they also have at least one assignment which has passed its due date), or, possibly most importantly, the addition of new questions which build a broader, richer picture of the data. This refinement of questioning may also mature over time as new challenges emerge, or old ones disappear, with the faceless ‘Scanner’ in the relationship on a path of continuous evolution.
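
To illustrate that kind of refinement, the sketch below tightens the earlier ‘hasn’t logged in’ flag by cross-referencing it with assignment data: only students whose activity count falls below an (assumed) engagement threshold AND who have at least one unsubmitted assignment past its due date are flagged. The file names, column names, date format and threshold are all assumptions made for the purposes of the example.

```python
# An illustrative refinement of the analytics themselves: cross-reference
# low engagement with overdue assignments to cut down on false positives.
# File names, column names, the ISO date format and the threshold are
# assumptions, not a real LMS schema.
import csv
from collections import Counter
from datetime import datetime

ENGAGEMENT_THRESHOLD = 5  # assumed minimum event count to count as 'engaged'

def event_counts(log_csv):
    """Count logged events per student in the hypothetical log export."""
    with open(log_csv, newline="") as f:
        return Counter(row["user_id"] for row in csv.DictReader(f))

def overdue_students(assignments_csv, now=None):
    """Students with at least one unsubmitted assignment past its due date."""
    now = now or datetime.now()
    overdue = set()
    with open(assignments_csv, newline="") as f:
        for row in csv.DictReader(f):
            due = datetime.fromisoformat(row["due_date"])  # assumed ISO dates
            if due < now and row["submitted"] == "0":
                overdue.add(row["user_id"])
    return overdue

def refined_flags(log_csv, assignments_csv):
    """Low engagement AND an overdue assignment (absent from the log counts as zero)."""
    counts = event_counts(log_csv)
    return {u for u in overdue_students(assignments_csv)
            if counts.get(u, 0) < ENGAGEMENT_THRESHOLD}

if __name__ == "__main__":
    for user_id in sorted(refined_flags("vle_log_export.csv", "assignments.csv")):
        print(f"Review: student {user_id} has low engagement and an overdue assignment")
```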

It is also in this area that ethics should be given a very clear voice, ensuring that any changes to the ‘Scanner’ are made within the appropriate ethical and legal frameworks. Beattie et al.[ref]Beattie, S., Woodley, C., & Souter, K. (2014). Creepy Analytics and Learner Data Rights. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 421-425).[/ref] sum this up well when they state that:

‘The tripartite relationship between learner, teacher and educational institution exists in a rapidly evolving social, legal and technological environment. Any technological changes can (and do) rupture the fragile balance of respect and trust upon which this relationship is founded.’ – Beattie et al

If there is a single step where ethics should be given a gatekeeper role in the process of analytics, then this is it.

Zen and the art of getting started

As the old saying goes, the longest journey starts with the smallest step. History is littered with Great-Big-Data-Warehouse projects that never made it off the drawing board before being unceremoniously dumped on the scrap heap, and I suspect we will see plenty more analytics projects that start off with a simple vision but rapidly end up attempting to solve all the world’s problems before even getting out of the blocks. Arguably a good basis for action in a new analytics project, reflected broadly in the work of both Beer, Tickner & Jones [ref]Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond: moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242-250).[/ref] and Wohlers & Jamieson[ref]Wohlers, S. D., & Jamieson, J. (2014). “What in me is Dark, Illumine”: developing a semantic URL learning analytics solution for Moodle. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 110-119).[/ref], could be summed up as: start with the simplest questions from the most readily available data that will give at least some meaningful information. Remember that analytics, for all their vast complexity, can start with a question as simple as ‘which of my students haven’t logged in yet?’, the answer to which should be readily available from just about any VLE worth its salt. Yes, complex analytics will require more complex tools, but a sound starting point may well be the simplest questions.

It is also worth mentioning at this point the advantages of the various actors in this model working as collaborative stakeholders in any analytics process. Beer, Tickner & Jones talk about three paths of learner analytics implementations – “doing it to” academics, “doing it for” academics and “doing it with” academics. Their perspective on this can be nicely summed up as:

‘These three approaches described above are not mutually exclusive. Elements of all three approaches are very likely to, and perhaps need to exist within LA implementations.’ – Beer, Tickner & Jones

The model I have outlined in this post very consciously attempts to clearly identify the various actors at play in a learning analytics implementation and ensure that they have a voice at the appropriate stages of the feedback loop. Whilst it does not introduce any ground-breaking concepts, I hope it does at least state a case for the need for multiple ‘Predictors’ in how analytics are treated once they are generated by the Identifier, whoever or whatever they may be. I will round out this post with an excellent quote from Wohlers & Jamieson on the value of collaboration. Wind the clock back perhaps twenty years (or less) and the same conversations were very probably being had, with the word ‘analytics’ replaced by ‘educational technologies’ in the broader sense. I look forward to the evolution of learning analytics to the point that it can deliver the reality to match the rhetoric.

‘(Learning institutions) looking to strengthen their self assessment practices would do well to consider developing analytic capabilities. Conversely, champions of analytics would do well to position investment in analytics alongside self assessment strategies and capability.’ – Wohlers & Jamieson

Addendum: I would like to thank all of the authors referenced in this post for their contributions to the 2014 ascilite conference; their range of perspectives on analytics, and the conversations I was lucky enough to have around the topic, were enough in themselves to make the trip worthwhile.
