Artificial intelligence has the potential to transform education, from customising teaching to supporting accessibility. But there are challenges, too.
Earlier this year BJSS and Amazon Web Services (AWS) brought together professionals from several institutions to discuss the impact and potential of artificial intelligence (AI) in the field of higher education.
The attendees included educators, researchers, and technology experts, making for a rich conversation that reflected a range of perspectives.
The session shed light on the challenges and opportunities associated with integrating AI into the academic landscape.
These are my notes on the key insights and takeaways.
AI can help students acquire basic skills and knowledge more quickly. It can help educators develop and customise teaching. It can assist students in their reflective practice. And it can support accessibility and language acquisition.
But it’s important that AI really helps us do things better, not just faster or cheaper.
There are significant challenges around the quality, recency, and appropriateness of the inputs and outputs of AI.
Machine learning, a crucial aspect of AI, is especially relevant in education, and innovative tools like generative AI (GenAI) are emerging as valuable resources – but only when implemented with ‘guardrails’.
That is, checks and controls built in to prevent inappropriate or inaccurate outputs.
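To make that concrete, here is a minimal sketch of the guardrail idea in Python. The model_call function, the blocked-term list, and the review message are all hypothetical placeholders rather than any specific product’s API; real guardrails would be considerably more sophisticated.

```python
# A minimal illustration of the 'guardrails' idea: wrap a generative AI
# call with simple checks on the input and the output.
# model_call and BLOCKED_TERMS are hypothetical placeholders.

BLOCKED_TERMS = {"exam answers", "another student's work"}

def model_call(prompt: str) -> str:
    """Stand-in for a call to a generative AI service."""
    return f"(model response to: {prompt})"

def guarded_response(prompt: str) -> str:
    # Input check: refuse prompts that clearly breach academic integrity.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "This request can't be answered; please speak to your tutor."

    output = model_call(prompt)

    # Output check: hold back responses that should be reviewed by a
    # human before reaching a student.
    if "citation needed" in output.lower():
        return "This answer needs review before it can be shared."

    return output

print(guarded_response("Summarise this week's reading on photosynthesis"))
```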
‘AI in education’ should also include ‘education in AI’. Understanding how AI works is the first step towards using and managing it appropriately.
AI excels at repetitive tasks and analysing patterns in large datasets.
However, it is important to remember that AI cannot create something that does not already exist in the data, in some form.
Despite the temptation to play with AI, which is new and undeniably fascinating, we need to assess whether it really is the right solution for the problem we’re trying to solve.
More straightforward alternatives, such as automation, might be more effective – and have a lower carbon cost.
The opportunities are not only in the practice of education but also on the business side of running educational institutions, and where the two meet. For example, chatbots can augment student support or field enquiries.
The session also emphasised the importance of AI ethics and the risks associated with AI-generated content. How do we maintain academic integrity and avoid biased outputs?
It’s essential to establish proper controls and guidelines to ensure the responsible use of AI in education.
We also need to consider digital equality. Access to AI tools shouldn’t come with financial barriers that exclude individuals or institutions.
Trials and tests within academic groups are critical to build up AI capability in the higher education sector.
These experiments highlight the need for comprehensive governance policies, balanced against the flexibility to innovate.
Getting this right reduces risk, including the particular risk of ‘shadow IT’ – when people experiment with unauthorised, potentially insecure tools and systems because they’re not given the means to do so safely.
We used big questions to frame our discussion – a mix of Liberating Structures’ wicked questions and classic design thinking ‘how might we’ questions.
The discussion between higher education, BJSS and AWS provided a safe space to exchange ideas on the challenges and opportunities presented by AI in education.
AI continues to evolve rapidly. In that context, it’s crucial for educational institutions to stay informed, establish robust governance policies, and collaborate across disciplines. Only by doing this can they harness the full potential of AI safely and responsibly.