Management in the age of AI

Author’s note: This post has been sitting in my Drafts folder since early 2019, and it seems like a good time to release it given my new role. I have not modified anything other than a little formatting and dropping the odd internal note to myself about how I was going to finish off the post; I wanted to leave it as an unfinished thought bubble that is now more relevant than ever.

Perhaps it was turning 45, or having a change of role at the start of this year, but whatever the cause I’ve been reflecting on the fact that I am more or less at the ‘half time’ mark of my professional career. When I started in the workforce twenty years ago, most of the web was little more than a set of static HTML pages, Netscape Navigator was the most popular non-IE browser going around (yes, IE sucked back then too) and the Nokia 9000 Communicator introduced graphics-enabled web browsing to a phone for the first time.

The sheer volume of change in how we interact with the world around us through technology since I got my first job is nothing short of stunning, and it has got me wondering what the tech landscape will look like by the time I retire twenty years from now – give or take a few. Of particular interest is how AI will change the workplace, and even more specifically, how it will change the role of those in leadership and management positions.

Being even more specific, my question boils down to this – by the end of my career, will I need to know how to manage hybrid teams, staffed by a combination of human employees and well-defined AI constructs? If so, what might this look like in practice?

One of my team members shared this video with me earlier this week. The video is six months old, but it illustrates just how close I think we are to needing to deal with my ‘hybrid management’ scenario. Have a look – it is worth the five minutes to make it all the way through.


First things first, I’ve got to give a hat tip to the team who made Genie a reality. To have reached the point shown in the video in what appears to be only a couple of short years must have taken a clear, bold vision, serious focus, the investment to back it all up, and the right people and technology to execute the whole thing. Love it or feel terrified by it, you have to give it credit for being a quantum leap in how universities might support students in the very near future.

What it has prompted me to think about, though, is how my own management methods might need to shift if I had a Genie as part of the team.

I was fortunate enough to see Prof Genevieve Bell, head of the 3A Institute at the ANU, speak a few weeks ago at an event run by our own Flinders New Venture Institute, and the crux of her talk was how we can look back at the first three industrial revolutions as a way of predicting the events of the fourth.

Prof Bell posited that in each of the first three industrial revolutions, the technology surged ahead while many of the social, economic and management norms it impacted were left behind, taking time to play catch-up with the new order. She went on to suggest that the fourth industrial revolution will follow a similar pattern, except that this time the changes in technology will be more (a) abstract and (b) self-directed, thanks to AI determining its own future to some extent.

So what would it mean for me as a leader of workplace teams if the rise of AI delivers almost-sentient artificial beings like Genie, as well as the ability for non-technical users to teach and manage them without the need for specialist IT support?

Firstly, there is the question of learning, and how this new staff member would be ‘trained’. (Note that I never expanded this bit out, but there is a whole post in this alone, which might be why I parked it…)

Back to Genie – somewhere at Deakin, there would need to be someone taking shared responsibility for what Genie says, and over time, this would become more Genie and less human. To take the pathologically bad case, what happens if Genie provides information that causes a student to fail, or worse – self-harm? Who is liable?

To shift this to a more proactive view, how do I, as a manager (if I were working somewhere with a construct like Genie), ensure that this almost-sentient being is appropriately managed and led? Genie doesn’t need much of the industrial relations support that humans do, but will its interactions with me – and our students – teach it the culture of the environment? If Genie learns over time from the collected interactions with its human community, is it just as susceptible to picking up a toxic culture as a human employee?

Wrap up: leadership of AI seems like a major gap in the current world of research, and it is likely that this will be a major game of catch-up for many organisations investing in AI. I’d love to hear from anyone who is researching, or working in, a space like this.
