Why AI needs the Humanities (and how tech conferences are getting it wrong)

I’ve been at my first ‘proper’ IT conference in a while this week, the Digital Government Summit in Perth. The delegates (and vendors) looked like a fairly predictable IT crowd – mostly male, mostly white, mostly middle-aged. In spite of being a perfect demographic fit, I was still one of the odd ones out, and here’s why – I’m not a ‘real’ IT person.

I never have been a ‘real’ IT person, for that matter. The first time I touched a computer was in my mid-teens – word processing on (from memory) a Commodore C64 at high school, connected to a dot matrix printer. Fast forward a few years and I gracefully bowed out of first-year Computer Science at Uni with a WNF; staring blankly at a blinking cursor for days after being told ‘now use EMACS to write an emulator in Ada for a vending machine’ was enough to tell me that I was clearly heading towards oblivion as a coder, and that I should shift my attention to the noble field of mathematics. Fast forward a few more years, and as a failing PhD mathematician I turned in desperation to a short-term employment contract writing VB6 code, and discovered I was still pretty rubbish at coding, even in an easier language.

What I did discover around this time though was that I was a half-decent software tester. For whatever reason, sitting on the interface between end users and developers was a much better fit, and the story of my career kinda wrote itself from there.

But to this day I’m still an IT fraud, and in a room full of people with ‘real’ degrees in IT and backgrounds in networking, security, storage, databases or development, I’m on the periphery.

What I have been able to do, though, after twenty-odd years working as a fraud in the IT sector, is observe how the questions have changed, and state with a fair degree of confidence that because of this we need different people in the room at conferences like the one I attended this week.

How the questions have shifted

Twenty years ago the question felt like it was more often ‘can we use technology to solve this problem?’. Writing code was typically the domain of the ‘IT guy’, a mysterious God who could be benevolent or cruel, and who worked in dark, mystical arts. Often, simply getting a technical solution to a problem at all was a win, no matter how crude we would probably now regard some of our creations of the time.

Over time, the question seemed to shift more to ‘how best do we use technology to solve this problem?’. PRINCE2, PMBoK, ITIL, OOP, Agile – take your pick of the multitude of other acronyms – all methods that attempted (with varying success) to find the optimal way to solve a problem using technology, and then make sure the solution continued to work effectively and reliably. They aimed to bring structure, repeatability and predictability to the ways we design, create and implement a technical solution to a problem. Finding any solution gave way to finding an optimal solution.

I get a sense now that we’re about to tackle a far bigger question, and one which probably cannot be answered by most of the people in that conference room, including the ones up on stage. The question we’ll need to answer, as a species, is ‘how best do we teach technology to solve our problems for us, now that the technology is doing a lot more of the thinking on our behalf?’

A better way to phrase this is perhaps ‘how can we make AI think and act like a better version of humanity than the quite frankly God-awful version that we’ve allowed ourselves to become?’

This question isn’t at all new, but in recent times it has gotten more airplay when even people like Elon Musk raise a flag to say ‘folks, it’s time we started to take this AI stuff seriously’.

There is already evidence that AI can pick up some really, really bad habits from those who create it, whether it be image recognition systems confusing dark-skinned people with gorillas, or chatbots being turned into hate-spewing anti-Semites, or AI simply reinforcing gender biases in an industry that hasn’t covered itself in glory in terms of equality on this front.

On the plus side, we now have some of the major AI players taking the need for transparency in their artificial intelligences very seriously, removing at least some of the risks of unseen biases lurking within AI black boxes.

What needs to change

This is a good start, but transparency is necessary rather than sufficient in our quest to avoid replicating the darker corners of humanity. Transparency may help us to understand how the decisions are being made, but it will not help us teach AI how to make decisions that represent the ‘better angels’ of our nature. That needs different skills, and as I looked around that conference room this week I saw very few of those skills on offer. This needs to change. What we need are the people who are connected with the very essence of humanity, and who can guide technology into the light rather than the dark. What we need are a lot more frauds in the room – in fact, we need people even more on the periphery of technology than I am.


Some pundits are already calling for a bigger focus on the Humanities as the skills needed, and I can foresee a mass shortage in this field if we don’t consciously start to look at these as the skills of the future. As we approach a time when AI becomes far more self-directed and self-sustaining, we need fewer coders and more anthropologists, historians, ethicists, psychologists, philosophers and theologians. We need the people who can help us build new intelligences in a way which will shape the future of humanity for the better, and the clock is ticking.

Conferences like the one I attended this week must gradually morph away from talking about the what of technology, and for that matter the how, and start to put a focus on the why. To do this in an emerging age of AI we need a strong presence from the Humanities on stage, leading the conversation, and we need to listen. If we don’t, then we run the risk of AI continuing to learn its habits from those who are already creating it, driven by capitalism, greed, bias and addiction.

Footnote: thanks to Ania Karzek for providing the inspiration for this post through her presentation at the aforementioned conference.

2 thoughts on “Why AI needs the Humanities (and how tech conferences are getting it wrong)”

  1. Thought provoking, indeed. Though I’ll be interested in learning whether you think we have two different questions mixed up in this article:

    1. How do we prevent AI from being an extension of human bias, and bring more transparency to the way AI is designed?
    2. Are we using AI right? (this question wasn’t asked, but that’s where we should start)

    AI is based on Machine Learning, which is self-learning based on how humans interact with it – which leads to a GIGO (garbage in, garbage out) situation. If we stopped using AI to replace human-to-human interactions (such as chatbots) and used it more for objective outputs (such as using complex weather patterns to predict agricultural output, thus helping save crops where needed, or milking cows), we would save ourselves from ourselves.

    BTW, if AI is concerning…await the AR (augmented reality) typhoon :).
