Connected assurance in the age of AI
In this Q&A, OES Chief Academic and Partnerships Officer, Dr Erin Jancauskas, and Senior Academic Expert and Advisor, Sue Kokonis, explore what connected assurance looks like in practice – and why equity, student agency and AI literacy have to be part of the story.

As generative AI reshapes how students learn, write and demonstrate capability, higher education is facing a deeper question than “are students cheating?”.
The real challenge is how universities can still stand behind what a qualification claims, in a world where AI is part of everyday study and work.
In OES’s Connected Assurance Framework, this means re-thinking assurance as a connected, program-level system rather than a single checkpoint.
Erin and Sue explain that, in an AI-enabled world, credible assurance depends on:
- treating AI as a capability to be developed, not just a risk to be contained
- designing program-level evidence of learning, rather than relying on single points of control
- preserving equity and flexibility for diverse cohorts
- helping students decide what to offload to AI – and what must remain distinctly human.
Q&A between Dr Erin Jancauskas and Sue Kokonis
This Q&A was captured as part of an interview for OES’s The Thought Bubble podcast; in season 2, the OES team unpacks what it means to assure learning outcomes in higher education.
“Gen AI is a dual-edged sword.”
Q: Why is this such a pivotal moment for assurance of learning?
Erin: Gen AI is a real dual-edged sword, isn’t it? It’s opening up amazing opportunities to improve the student experience, but on the other side it’s also raising real questions about assessment and assurance of learning. We can’t pretend this is not happening.
If we did, it might make some things like assessment easier, but we’d also be denying graduates a set of skills they are absolutely going to need to use and feel comfortable with.
The real issue for us at the moment is that AI is reshaping assessment, academic judgement and the evidence of learning across the sector.
In response, institutions and regulators are seeking greater confidence that graduates have genuinely achieved the learning outcomes their qualifications claim.
The core challenge is not whether learning can be secured at individual assessment points, but whether institutions can credibly evidence learning over time in these AI-enabled environments.
Sue: And we still haven’t seen the full impact of AI yet. Industries have been experimenting, testing and piloting behind the scenes, and we’re starting to see full-scale rollout occurring.
That raises big questions about what we are teaching students, how we’re educating and supporting them, and how we’re ensuring they are capable and competent when they leave with a qualification.
“We can’t respond with blunt tools – equity has to stay in view.”
Q: How does this look from the perspective of online and equity-cohort students?
Sue: It’s interesting if you look at it from the perspective of the online student. At OES, we work with partners in Australia and more broadly, predominantly supporting working professionals, mature-age students, first-in-family students and other non-traditional learners who study online because they’re time-poor and find it difficult to get to a campus.
So, we need to be really careful that the sector doesn’t jump to a knee-jerk reaction where everyone just turns up for face-to-face exams, because that would disproportionately affect equity students.
Erin: Assurance of learning is about institutions being able to say that their graduates have truly mastered the skills a particular degree or qualification says they should have.
So, it’s about putting in place a system that enables institutions to do that.
And it’s not just about securing individual assessments; it’s about being able to credibly evidence learning over time and across a complete program.
“Connected assurance: building the picture pixel by pixel.”
Q: OES has published a positioning paper that describes the OES ‘Connected Assurance Framework’. What does that framework involve?
Sue: This is an area OES has been thinking about a lot as we talk with our partners and with sector leaders who are thinking deeply about this space.
We’ve developed the Connected Assurance Framework, which is a holistic way of approaching programmatic assessment. Do you want to explain what we’re doing there?
Erin: It’s quite simple in concept. There are three components.
There’s technical assurance, which is about making sure we put in place environments where students are demonstrating their knowledge.
There’s relational assurance, which is about strengthening the connection between the educator and the student so there is real knowledge of individual students and their learning progression.
Then there’s pedagogical assurance, which centres on new ways of assessment, authentic assessment, and program-level curriculum and assessment design.
One of our partners at WSU explains this really well using the example of an image that becomes clearer pixel by pixel.
At first, you only see a few pixels, then enough to think it might be a face, then enough to think it might be the Mona Lisa… and then eventually you know it is.
That’s a lovely metaphor to explain the concept of programmatic assessment. You don’t need every single ‘pixel’ of the bigger picture nailed down through an exam or a face-to-face session.
But you do need enough to be able to put your hand on your heart and say this student meets the requirements for the degree we’re about to confer on them.
Sue: It’s a very holistic way of doing it, because it says there are multiple touchpoints and ways of building confidence, rather than one narrow, prescriptive approach that risks getting it wrong. And on the pedagogical side, there are many different ways we can create assessments.
In some cases, we’ll be saying to students that they should be using AI to do this work and to support them. In other assessments, we need them to demonstrate their human capability.
On the technical side, we know so much more now. We have many more touchpoints and far more analytics about students. For example, if a student opens an assignment 20 minutes before submitting it, that’s obviously a red flag. If you have a whole set of those behavioural red flags, they become pixels in the picture and give you a better idea of what might be going on.
Erin: That’s a really important point. In digital environments, whether fully online or part of a hybrid student experience, you often get more complete data and insight into a student’s learning progression through their use of the LMS. In the early days of this conversation there was a lot of hand-wringing about online students and whether they were at particular risk, but I don’t think that’s the right frame.
“From slide rules to calculators to AI: why redesign beats restriction.”
Q: Erin, you often use an analogy from your engineering days. How does that help frame this moment?
Erin: I’ve seen this kind of disruption before…
In my first year of engineering everyone carried a slide rule.
In my second year digital calculators appeared.
In my third year programmable calculators arrived and universities moved immediately to technical lockdown.
You had to hand your calculator in at the door, or prove the memory had been cleared, because assessment at that stage was about memorising a formula and demonstrating you could use a device.
But over time, assessment changed. By fourth year you were given the equation sheet and the challenge was to work out which formula to use, apply it, and comment on the accuracy of the result. It became much more like real engineering practice.
That was a shift from mechanical computing to digital, and now we’re going through another disruption as we move from digital into artificial intelligence.
The journey we’re on is that institutions first move to secure assessment, but over time assessment itself changes. That’s why this is explicitly called out in our Connected Assurance Framework.
Sue: That’s a great example. And it is a retrograde step to try to ban AI or lock everything down in the first instance. What’s critical is that industry is changing too. If we think about this as a shift on the scale of the industrial revolution, then higher education is in an interesting position.
On one hand, employers want graduates with AI capability. On the other hand, they also want critical thinking, deep discipline knowledge, the ability to evaluate AI output, and the ability to work with AI in intelligent and ethical ways. There’s a tension there, alongside questions about whether higher education is really preparing students for that reality.
“80% using AI doesn’t mean 80% cheating.”
Q: How should the sector interpret high rates of AI use among students?
Sue: When people hear that 80 per cent of students are using AI, the knee-jerk reaction is to think 80 per cent are cheating. That’s not the case. We should be saying 80 per cent of students are already on their journey to AI literacy.
Student agency is a really important part of this, because when students leave university, they want to be capable and competent in their chosen career. We need to recognise that this is a genuine concern for students and guide them on how to use AI well.
Our research has been looking at what makes sense to cognitively offload to AI and what should remain part of the human skillset. If a student wants to summarise something, do some editing, quiz themselves, find content, do some coding, or get a precedent in a legal context, those kinds of patterned and repeatable tasks are well suited to large language models. A student should never have to format a reference list manually again.
But we should not let students bypass building the human capability required to evaluate AI output. They still need discipline knowledge, stakeholder engagement, prioritisation, strategy and creative thinking. I think we’ll see a lot more of that in the next couple of years.
So, it’s a mistake to think universities are sitting on their hands. Every university leader we’ve been speaking with is thinking deeply about this and rebuilding curriculum. Just as industry has taken time to scale AI, the sector is doing that work too.
“Don’t outsource what makes you original.”
Q: Are there risks in how students use AI for feedback and writing?
Sue: I was listening to a short clip from Nicholas Thompson, the CEO of The Atlantic, where he referenced research done with Google DeepMind.
It compared student assessments completed before 2021, where feedback came from tutors, with a current cohort where feedback was provided by a large language model.
The findings were really interesting. In general, the large language model made the writing more homogeneous. The editing pushed students in a similar direction, so the work was perfectly fine, opinionated, sharp and clear, but it was also becoming more generic and less human. That’s a critical point.
We need to work with students and say: by all means, draft something and get feedback with AI, but do not outsource the part that makes you original and creative. That’s what employers are looking for in graduates.
Erin: That’s really interesting. So, the human mind still has an edge in creativity and difference. It can take more tangents, whereas there is a tendency for AI-assisted work to become more uniform.
Sue: And if you put your work into AI too early, it can also give you a frame that you then work within. Unless you take a really critical mindset and challenge that frame, it’s very easy to just follow it. We’ve all seen writing that is obviously AI-generated. It’s perfectly valid and the writing is clear, but it doesn’t feel cutting-edge or boundary-pushing.
That’s why we need to educate students and put them in the driver’s seat. The simple question is: do you want to use AI for your whole degree and then arrive in front of an employer with a lightweight skillset that doesn’t set you apart from other graduates?
Watch the clip from The Atlantic here: Nicholas Thompson on AI-optimised writing >
“Assurance, AI and the new frontier.”
Q: Given all of this, what gives you optimism about where the sector is heading?
Erin: I think this is exactly the kind of provocation that drives innovation. It’s a really exciting time. Once we get beyond the fear that this somehow threatens the whole sector and the whole principle of a degree and learning, we can understand that this is a new frontier and that amazing things are going to come out of it.
We’re lucky to work in an organisation that is closely connected with more than one university, because that gives us different perspectives and creates something like a community of practice working together to leverage the opportunities here.
To hear the full conversation between Erin and Sue, listen to Episode 1 of the season:
“Assurance of Learning in an AI World: What Problem Are We Actually Solving?” >
Explore Season 2 of The Thought Bubble >
Read more about OES’s Connected Assurance Framework >