
Artificial intelligence is already reshaping how students learn, write, practice and show what they know.
Yet much of the higher education debate is still stuck on a narrow question: “Are students cheating with AI?”
The new season of OES’ ‘The Thought Bubble’ podcast launches this week, featuring a discussion between academic experts and leaders Dr Erin Jancauskas and Sue Kokonis, who argue that this is the wrong place to start.
This launch episode kicks off a six‑part series on assurance of learning in the age of AI, designed for higher education leaders, regulators, academics and sector partners who want to move beyond detection and towards connected, evidence‑based assurance.
Listen to Episode 1 of the new season now:
‘Assurance of Learning in an AI World: What Problem Are We Actually Solving?’
Why assurance of learning in an AI world matters
Hosted by Amanda Ford, Associate Director of Generative AI from Online Education Services (OES), The Thought Bubble asks how universities can still credibly demonstrate that learning has happened in AI‑enabled environments.
Rather than focusing solely on misconduct, the first episode explores a bigger question: How can institutions build defensible evidence of capability over time across a whole course of study?
The episode speaks directly to the quality concerns that academic leaders, policy stakeholders and regulators are raising as AI’s influence rapidly evolves.
The points discussed will be top of mind for those who need to balance academic integrity, equity and innovation, and they connect to ongoing sector conversations about AI policy, regulatory expectations and future‑ready curriculum design.
Learn more about the series here:
The Thought Bubble, Season Two – ‘Assurance of Learning in the Age of AI: A Connected Approach’
AI as both opportunity and disruption
The episode features a candid conversation between OES Chief Academic and Partnerships Officer, Dr Erin Jancauskas, and sector expert and renowned academic adviser, Sue Kokonis.
With a deep understanding of the higher education sector and online learning programs, Erin Jancauskas describes generative AI as a “dual‑edged sword”. He says it opens powerful opportunities to enhance the student experience, while simultaneously disrupting traditional notions of assessment, academic judgement and evidence of learning.
Sue Kokonis brings a complementary perspective, examining assurance of learning through the lens of large‑scale online delivery. She highlights how AI is already embedded in everyday student practice – summarising readings, proofreading, generating quiz questions – and why blunt bans or defaulting to face‑to‑face exams risk harming the very learners that higher education has worked hardest to include.
Together, Erin and Sue reframe AI not as a fringe threat to be managed, but as a mainstream capability challenge that requires universities to adapt their practices.
Alongside this core discussion, the episode also features a conversation with OES’s Director of Government Relations, Rebecca Hall, and sector expert and policy consultant Chris Gartner. Their discussion extends the framing of this topic into the policy and regulatory domain, unpacking what assurance of learning in an AI world means for quality frameworks, compliance expectations and the evolving role of regulators.
This additional perspective grounds the episode firmly in the realities facing the sector, connecting institutional practice with external accountability.
It highlights how governments and regulators are beginning to interpret AI’s impact, and why universities need approaches that are not only pedagogically sound, but also defensible in regulatory settings.
Learn more and access the OES Assurance of Learning Positioning Paper here: www.oes.edu.au/assuranceoflearning
From catching misuse to assuring confidence
A central idea in the episode is the shift from asking “how do we catch misuse?” to “how do we build confidence that graduates have genuinely achieved the learning outcomes their qualifications claim?”
Instead of relying on isolated, high‑stakes assessment events, Erin and Sue make the case for connected assurance of learning – a program‑level approach that brings together:
- Pedagogical assurance: curriculum and assessment design that generates cumulative evidence of capability.
- Relational assurance: educator–student relationships and academic judgement built over time.
- Technical assurance: systems, data and signals that support visibility, traceability and proportionate controls.
This episode introduces OES’s ‘Connected Assurance Framework’ as a practical way to hold these dimensions together without turning the conversation into a technical deep dive.
For a deeper dive into the framework, which was recently published as a policy positioning paper, visit OES’s Assurance of Learning resource hub.
Why redesign beats restriction when the education environment evolves
One of the most memorable moments in the episode is Erin’s “calculator analogy”. Reflecting on his engineering studies, he recalls the sector’s journey from banning calculators in exams to redesigning assessment so that the real test became choosing and applying the right formula and critiquing the result.
Today’s AI disruption, he argues, follows a similar pattern.
Initial lockdown may feel safe, but long‑term confidence in learning will come from assessment redesign, not permanent prohibition. That means rethinking tasks, outcomes and evidence so students can build AI capability and deep disciplinary judgement.
From a policy perspective, this also signals a likely shift in regulatory expectations. As Rebecca and Chris highlight, regulators are less interested in whether AI is used, and more focused on whether institutions can demonstrate that learning outcomes are valid, reliable and verifiable.
Equity, student voice and AI literacy
Sue cautions that rapid returns to in‑person exams as a default integrity response can disproportionately disadvantage online, working and equity‑background students who rely on flexible provision.
At the same time, she invites the sector to see high levels of AI use as a sign that students are already on their AI literacy journey, not as a proxy for misconduct.
The conversation also surfaces under‑discussed risks, including homogenised thinking and the loss of authentic student voice when learners lean too heavily on AI‑generated feedback.
Erin and Sue argue for helping students decide what to offload to AI, and what must remain a distinctly human skillset.
Start the journey with Season 2, Episode 1 of The Thought Bubble podcast
Episode 1 sets up the rest of the season, which will go on to explore where assurance really lives, how design functions as integrity infrastructure, how relationships make learning visible, and how technology can support trust when it is placed in service of pedagogy.
If you are working on AI policy, program design, academic integrity, quality assurance or re‑registration readiness, this episode is a practical starting point.
Listen to The Thought Bubble podcast now on your preferred streaming platform.