Does QA Really Assure Quality?

This is the third and penultimate post in a series about the nature and value of quality assurance for higher education. The aim is to provoke a bit of thought and debate, mostly for myself and hopefully in the sector. I also wanted to use some of the ideas from my MA dissertation to address wider questions (hence some of this is lifted bodily from that work).

To briefly recap: my first post contextualised the debate and discussed the need to be reasoned and to avoid polarised stances when debating QA. The second looked at how we got to where we are in quality regimes and at some of the broad issues in the field.

This post is a more applied look at what quality assurance means at an institutional level. Rather than dealing with abstractions, I'll try to use a case study as a thought experiment to answer this question.

How QA happens (in theory):

Interactions between students, staff or other stakeholders and the quality regime can be broadly categorised as either ‘events’ or ‘continuous activities’. While quality events grab more headlines and are definitely associated with QA, it is the continuous activities which determine the quality of the teaching and learning experience (see ‘Talking about Quality’ by Prof. John Brennan for a very interesting examination of this topic).

QA events are the reviews, audits and reports that dominate the activities of national QA bodies. Their purpose is to judge activity in an institution at a point in time against a pre-determined metric for a specific purpose: for example, the validation of a new programme and its associated self-evaluation exercises and panel visits. Because they are the most visible face of QA, these events are the most easily analysed and critiqued aspect of quality regimes, and they have the greatest influence on what lecturers, students and the public at large think of QA as a concept.

Continuous quality activities include teaching, examination and research; student feedback; professional development; writing learning outcomes; reporting to managers; external examination; and so on. They are the rituals and routines that make up a large chunk of formalised activity in higher education. Typically there is an administrative tier within institutions that, either centrally or at a disciplinary level, collects, aggregates and reports on the data generated by these aspects of the quality regime. This data is, in theory, used by the institution to track and help improve the quality of teaching, learning and research. The data, and the institution's response to it, are (again, in theory) the major inputs into quality events.

The Key Question

The central question that I would like to see debated is: are the rituals and routines of continuous quality activities actually used to improve quality, or do they simply propagate further rituals, routines and administrative workloads?

In pithy terms: Do quality regimes actually influence quality or do they simply feed the QA beast? Jethro Newton (2000) talks about this idea in ‘Feeding the Beast or Improving Quality: Academics’ perceptions of quality assurance and quality monitoring’.

Trying to break this question down into something answerable is extremely difficult. I suppose what one needs to know is whether the generation, collection and reporting of data on teaching and learning activity constitutes a systematised approach to improving educational quality. Does it at the very least provide a check against idleness and corruption?

It would be extremely worrying (and in a way hilarious) if institutions and governments were expending vast public resources on administrative activities designed to enhance public confidence in the expenditure of public monies in higher education, only for those activities to simulate the very effects they are trying to examine.


An Application of the Question to Student Feedback

Rather than trying to answer these questions in the abstract, I'll address them in relation to a specific continuous quality activity. I will use student feedback as a sort of case study/thought experiment.

Feedback questionnaires, attendance and progression statistics are often used as proxies for student satisfaction, which is seen as an important indicator of teaching quality and a tool for improving it. For some (Paul Ramsden, for instance) it is the most important indicator of quality, because learners are the principal arbiters of the learning environment. At the most basic level, pedagogical theory constructs student satisfaction as a function of students' engagement with learning and their likelihood of achieving curricular outcomes. It doesn't quite work like this in practice because, as everyone in HE knows, students' responses are not static or independent of non-educational concerns (their own interest in the topic, for example) and are frequently used as a superficial assessment of lecturers' performance.

Student feedback does generate data that is collected by the administrative echelons. So is this data used to effect improvements in teaching and learning, or is it used to spawn further administrative exercises? As with everything sociological, the answer is a little bit one way and a little bit the other. The list of 'depending on…' variables is long and practically inexhaustible. There are at least as many ways of collecting student feedback as there are academics.

At the most fundamental level, student feedback should provide a check against idleness and corruption. If a lecturer isn't turning up, or is abusive, or is absolutely and indisputably terrible, then student feedback will identify this, management should be alerted and corrective action taken. So yes, quality can be broadly assured from that perspective. At a more strategic level, feedback is only useful for quality assurance if, after being collected, the data is used to effect change in the teaching and learning environment. Otherwise it is simply collecting data for the sake of reporting, ranking or filling filing cabinets. From this perspective, the activity of seeking and collecting student feedback is not in itself a systematised way of assuring quality, only a systematised way of checking whether it is happening at all. The assurance part comes when lecturers and students get to use student feedback to make informed improvements to teaching and learning.

My own research indicates that lecturers prefer to gather feedback informally through conversations with current and past students. They do see feedback as an aspect of their professional development, but not as an assessment of their performance. As a participant in Louise Morley's study notes: "[students' opinions] are important, but they are important as a clue rather than as a solution in themselves."

It is fair to say that quality teaching (whatever that is) should result in satisfied students, and that their views are an important indicator of this. They are only a useful indicator, though, if they are used by the regime to facilitate improvements in quality teaching… through professional development, maybe?


So… Is Quality Assured?

Simply having a QA regime and going through the motions of continuous quality activities doesn't guarantee that either will result in something the public can have confidence in. They need to be used correctly: as a process, not as a goal.

Learning theory suggests that target-based assessment strategies for academic purposes cause students to simulate the target behaviour. The best example of this I can think of is the Irish Leaving Certificate: it famously examines school leavers' ability to do the Leaving Cert, not their knowledge of the curriculum. For this exact reason, quality experts say that quality indicators only have a shelf life of two years before academics 'get wise' to them.

In the case of student feedback, quality will only be assured if feedback is seen as a tool for improving quality, not as an attempt to achieve 'good' student feedback. The use of the UK's National Student Survey to rank institutions is a good example of a continuous quality activity being used as a metric rather than a tool. Myths abound of institutions bribing first years with free pizza to fill out the survey positively!

Given what QA is supposed to achieve, however (public confidence in HE, etc.), continuous activities do demonstrate a systematic attempt to assess educational quality with the goal of improving it. At the very least they make it very hard for quality to be absolutely dreadful. Whether or not these activities improve quality depends on whether they are used as targets or as tools, and I believe that distinction depends on whether individual staff, students, administrators, managers and all other stakeholders use them correctly and feel a sense of ownership of them.

Continuous quality activities that generate data used by the regime are important for assuring educational quality if, and only if, all of those involved in the activities get to close the loop and use the outcomes of those activities to influence the quality of provision in HE. I have a suspicion that this ownership, or 'closing the loop', issue is one of the biggest causes of difficulty in QA in HE.
