Making QA Disappear: How we might re-think quality assurance for higher education.

June 12, 2013

It has taken me two and a half months to get around to writing and posting this; apologies for that.

This is the final post in a series of four about the nature and value of quality assurance (QA) for higher education (HE). The aim is to prompt some debate and thinking about QA and its role in education. It is largely informed by work I did last year in writing my MA dissertation, but also generally by my interest in the topic, and my belief that the sector in general (and definitely in Ireland, as a result of the creation of QQI) is approaching a point of rethinking the very concept of quality assurance.

This post will make most sense if you’ve read the previous three. In my first post, ‘Golden Age vs. New Dawn’, I talked about the extremes of opinion in this debate and how any useful analysis of QA has to avoid them. In part two I gave a ‘Brief History of Quality’ and a brief look at how we ended up where we are. In my third post I asked ‘Does QA Really Assure Quality?’, and I tried to unpack what QA means at an institutional level and to see if it really works, using student feedback as an example.

This post is more speculative and tries to suggest a way forward in thinking about QA for HE. In it I propose a principle for re-thinking QA and better integrating it into lecturers’ jobs, as a possible way of more effectively realising the benefits of QA.

Academics’ Perspectives

For my MA dissertation (‘Not Convinced: Lecturers’ Conceptions of Quality Assurance’) I interviewed a number of lecturers about their perceptions of quality assurance and their interactions with QA processes in their institution. My intention was to gauge lecturers’ opinions of the quality regime they work in and to establish whether those who work in quality assurance (like me) and those who teach in higher education institutions define quality differently. I found some evidence that they do, and suggested that a more significant body of work is required to tease it out.

Looking back on the discussions I had with lecturers, I realised I had found a few other, unlooked-for things which have begun to interest me about this area. The conversations covered a large range of topics, from their pedagogical ‘styles’ to how they prefer to gather student feedback to what administrative demands were placed on their time. I steered the conversations towards QA and the quality regime fairly regularly and, thinking back, I’ve come to realise that there were three general types of attitude towards continuous quality activities among lecturers. (See the previous post for a definition and discussion of continuous quality activities, or see ‘Talking about Quality’ by Prof. John Brennan for a very interesting examination of this topic.)

Some lecturers demonstrated different attitudes at different points throughout the same conversations.

1. QA is external to the lecturers’ job

Some lecturers (typically those antagonistic towards QA) saw it as something ‘over there’ with institutional management. This feeling that QA is an external imposition tacked onto the job means that they cannot ‘buy into’ QA processes, as they don’t feel ownership of them. They generally had very clear ideas about what QA is when expressing this view, for instance in railing against bibliometrics.

2. QA is integral and essential to the job

When expressing views about the value of QA, at times lecturers became evangelical about the need for QA processes and how their roles as teachers, researchers and administrators wouldn’t be possible without QA systems. This view is most common among believers in quality assurance, or those involved in Quality Committees and institutional management. It was particularly common when discussing the need to improve teaching or curricular content based on student feedback and the demands of the discipline.

3. QA is internal to the lecturers’ job, but invisible

Some of those I spoke to did see the rituals and routines generally considered QA as part of their job, but not as part of QA. This view was expressed in some way by those for whom QA was neither good nor bad, but simply not something they thought about much. What is considered good QA practice is seamlessly integrated into the lecturer’s job, to the extent that they don’t perceive it as QA. Surely this should be a goal of quality managers? I encountered this when I asked questions about teaching styles or professional development.

Across lecturers there is a spectrum of attitude towards QA, ranging from hostility to enthusiasm. There is also a perpendicular spectrum of awareness, ranging from a high degree of familiarity with QA processes to ignorance of them. The diagram below is a rough attempt to represent the opinions outlined above graphically.

[Diagram: lecturers’ attitudes to QA (hostile to enthusiastic) plotted against their awareness of QA processes (low to high).]

The first thing to note is that the South-West quadrant is empty. I suspect that this is because lecturers are only hostile to QA when they see it as externally imposed by management, and are therefore acutely aware of it, as if it were a pathogen invading a body.
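
Since the original image doesn’t reproduce here, the snippet below is a rough sketch, in Python with matplotlib, of how that diagram might be redrawn. The axis labels and the placement of the three attitude types are my own approximation of the description above, not the original figure.

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 6))
ax.axhline(0, color="grey", linewidth=1)  # divides high/low awareness
ax.axvline(0, color="grey", linewidth=1)  # divides hostile/enthusiastic attitude
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_xlabel("Attitude to QA (hostile to enthusiastic)")
ax.set_ylabel("Awareness of QA processes (low to high)")

# Approximate positions of the three attitude types described above
positions = {
    "1. External to the job\n(hostile, highly aware)": (-0.6, 0.7),
    "2. Integral to the job\n(enthusiastic, highly aware)": (0.5, 0.7),
    "3. Invisible within the job\n(neutral, low awareness)": (0.1, -0.6),
}
for label, (x, y) in positions.items():
    ax.plot(x, y, "o")
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(6, 6), fontsize=8)

ax.set_title("Lecturers' attitudes to and awareness of QA (rough sketch)")
plt.tight_layout()
plt.show()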

QA only really adds value when lecturers buy into continuous quality activities (the rituals and routines that generate the data for the quality regime: examination, publication, feedback etc.) and use them to monitor and improve the quality of the educational experience. Whether or not they conceptualise them as part of ‘Quality Assurance’ is immaterial in many ways. In order to get buy-in from those who are both hostile to and aware of QA processes, therefore, one of two things needs to happen:

First, they could be ‘sold’ the value of QA and converted to believing in the concept of managerial involvement in assuring the quality of their activities. Difficult and adversarial, probably.

Or the activities involved need to be divorced from the concept of QA and married instead to the value that they add to the individual’s job.

Making QA Disappear

I remember reading once (though I can’t remember where) that when designing the Kindle, Amazon’s design team’s goal was to have the device “disappear” from the user’s hands as they read. The point was that the Kindle shouldn’t put anything between the reader and the book; rather, readers should forget they’re using it because they’re just enjoying reading (I have to say this totally worked on me, I have been known to try to manually turn the page on my Kindle). Otherwise it’s just another gadget, right?

As things stand, one of the significant difficulties with the concept of QA is that it is a management tool that has been somewhat superimposed on academia. If QA could disappear into the background of teaching, research, examination, writing learning outcomes, reporting and so on, then those activities, while still generating the data and performing their tracking and enhancement functions, would become good practice for professional lecturers rather than extra requirements.

I think that in rethinking QA, the education and training community needs to set itself the goal of making QA disappear into the background of the teaching, learning and research environment.

How to do this? Governance vs. Pedagogy

The trite answer is that I have no idea, but I have a feeling that a certain principle needs to be adopted first.

A common criticism of the literature on QA is that it tends to be focussed on administrative and governance issues rather than curriculum and pedagogical issues. The same is often thought to be true of big quality events and reviews of institutions; the paradigm is an outside body assessing the organisation and governance of the institution or programme in general.

While I understand the need for assessment and agreement of QA procedures, I think that this focus pulls institutional effort away from assuring the quality of the curriculum, which is where individual lecturers’ attention is focussed.

I think that in order for QA to disappear, the focus of QA procedures needs to be more closely aligned to what those teaching think of as important quality issues.

Realising the Value of QA

As I keep reminding myself, the point of QA is to ensure trust and public confidence in education and training provision. It facilitates investment and participation in HE by the public at large. At a micro-level, when those goals are transported into the learning environment, they provide a check against idleness and corruption, accountability for the spending of national resources, and hopefully assurance that the quality of education is good and being continuously improved.

In an imaginary world where QA as a term had disappeared from the day-to-day of lecturing at the coal-face, it would still happen. There would still be an administrative tier within institutions handling QA information. There would still be national-level quality regimes assessing institutions and programmes. Crucially, there would still be a public-facing quality assurance system that would provide that trust and public confidence in the educational experience provided.

The significant difference would be that the tension between lecturers and the administrative regime they work in might be lessened. I don’t doubt that it would still be present; there will always be a trade-off between accountability and academic freedom. Re-focussing on pedagogical and curricular quality might also resurrect an older problem with QA: that assessment is overly intrusive into the teaching, learning and research environment.

Summary

It’s not quite a solution, but something to think about if nothing else. If the goal were to integrate QA into lecturing to the point where QA disappears (as a term, or as a distinct concept), the difficulty of getting lecturers to buy into continuous quality activities would be reduced.

Focusing those activities and the associated events a little more on substantive teaching and learning might be a way of achieving this. It might also help to shed the image of QA as a self-perpetuating bureaucracy.

Post-script

I hope I have struck a balanced position in these posts and that they’ve been a little bit interesting to some of you. I have enjoyed trying to order my thoughts and articulate things informally, without any prejudice. Only a small proportion of the opinions in these posts on the value of QA are supported by real primary evidence. It has mostly been a collection of suspicions, gut feelings and intuitions which I hope resonate a little with people.

My contact details are available on the about page; please get in touch if you are interested in what I’m interested in.

Please feel free to comment.

Does QA Really Assure Quality?

March 19, 2013

This is the 3rd and penultimate in a series of posts about the nature and value of quality assurance for higher education. The aim is to provoke a bit of thought and debate, mostly for myself and hopefully in the sector. I also wanted to use some of the ideas I used in my MA dissertation to address wider questions (hence some of it is lifted bodily from that work).

So to briefly recap, my first post contextualised the debate and discussed the need to be reasoned and to avoid polarised stances when debating QA. The second was a look at how we got to where we are in quality regimes and what some of the broad issues in the field are.

This post is a more applied look at what quality assurance means at an institutional level. Rather than dealing with abstractions I’ll try to use a case study as a thought experiment to answer this question.

How QA happens (in theory):

Interactions between students, staff or other stakeholders and the quality regime can be broadly categorised as either ‘events’ or ‘continuous activities’. While quality events grab more headlines and are definitely associated with QA, it is the continuous activities which determine the quality of the teaching and learning experience (see ‘Talking about Quality’ by Prof. John Brennan for a very interesting examination of this topic).

QA events are the reviews, audits and reports that dominate the activities of national QA bodies. Their purpose is to judge activity in an institution at a point in time against a pre-determined metric: for example, the validation of a new programme and its associated self-evaluation exercises and panel visits. These events are the most obviously associated with the notion of QA and, because they are so visible, the most easily analysed and critiqued aspect of quality regimes. They also have the greatest influence on what lecturers, students and the public at large think of QA as a concept.

Continuous quality activities include teaching, examination and research; student feedback, professional development, writing learning outcomes and reporting to managers; external examination; and so on. They are the rituals and routines that make up a large chunk of formalised activity in higher education. Typically there is an administrative tier within institutions that, either centrally or at a disciplinary level, collects, aggregates and reports on the data generated by these aspects of the quality regime. This data is, in theory, used by the institution to track and help improve the quality of teaching, learning and research. The data, and the institution’s response to the data, are (again, in theory) the major inputs into quality events.

The Key Question

The central question that I would like to see debated is: are the rituals and routines of continuous quality activities actually used to improve quality? Or do they simply propagate further rituals, routines and administrative workloads?

In pithy terms: Do quality regimes actually influence quality or do they simply feed the QA beast? Jethro Newton (2000) talks about this idea in ‘Feeding the Beast or Improving Quality: Academics’ perceptions of quality assurance and quality monitoring’.

Trying to break this question down into something answerable is extremely difficult. I suppose what one needs to know is whether the generation, collection and reporting of data on teaching and learning activity constitutes a systematised approach to improving educational quality. Does it at the very least provide a check against idleness and corruption?

It would be extremely worrying (and in a way hilarious) if institutions and governments were expending vast public resources on administrative activities designed to enhance public confidence in the expenditure of public monies in higher education, when all those activities do is simulate the effects they’re trying to examine.

An Application of the Question to Student Feedback

Rather than trying to answer these questions in abstract, I’ll try to address them in relation to a specific continuous quality activity. I will use student feedback as a sort of case study/thought experiment.

Feedback questionnaires, attendance and progression statistics are often used as a proxy for student satisfaction, which is seen as an important indicator of teaching quality and a tool for improving it. For some (Paul Ramsden, for instance) it is seen as the most important indicator of quality because learners are the principal arbiters of the learning environment. At the most basic level, pedagogical theory constructs student satisfaction as a function of students’ engagement with learning and their likelihood of achieving curricular outcomes. It doesn’t quite work like this in practice because, as everyone in HE knows, students’ responses are not static and independent of non-educational concerns (their own interest in the topic, for example) and are frequently used as a superficial assessment of lecturers’ performances.

Student feedback does generate data that is collected by the administrative echelons. So is this data used to effect improvements in teaching and learning, or is it used to spawn further administrative exercises? As with everything sociological, the answer is a little bit one way and a little bit the other. The list of ‘depending on…’ variables is also long, if not inexhaustible. There are at least as many ways of collecting student feedback as there are academics.

At the most fundamental level, student feedback should provide a check against idleness and corruption. If a lecturer isn’t turning up, or is abusive, or is absolutely and indisputably terrible, then student feedback will identify this, management should be alerted and corrective action taken. So yes, quality can generally be assured from that perspective. At a more strategic level, feedback is only useful for quality assurance if, after being collected, the data is used to effect change in the teaching and learning environment. Otherwise it is simply collecting data for the sake of reporting, ranking or filling filing cabinets. From this perspective the activity of seeking and collecting student feedback is not in itself a systematised way of assuring quality, only a systematised way of checking if it is happening at all. The assurance part comes when lecturers and students get to use student feedback to make informed improvements to teaching and learning.
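
As a purely illustrative aside, the ‘used to effect change’ test can be imagined as a very simple check: was feedback collected, and did it lead to any recorded response? The toy sketch below (in Python) illustrates that distinction; the data structures, module codes and field names are hypothetical, not drawn from any real institutional system.

from dataclasses import dataclass, field

@dataclass
class ModuleFeedback:
    module: str
    responses: int                                 # number of student responses collected
    actions: list = field(default_factory=list)    # recorded changes made in response

def loop_closed(fb: ModuleFeedback) -> bool:
    # Feedback only assures quality if it was both gathered and acted upon.
    return fb.responses > 0 and len(fb.actions) > 0

modules = [
    ModuleFeedback("HIST101", responses=42, actions=["revised reading list"]),
    ModuleFeedback("PHYS202", responses=35, actions=[]),  # data filed, never used
]

for fb in modules:
    status = "loop closed" if loop_closed(fb) else "data collected but not acted on"
    print(f"{fb.module}: {status}")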

My own research indicates that lecturers prefer to gather feedback informally, through conversations with current and past students. They do see feedback as an aspect of their professional development, but not as an assessment of performance. As a participant in Louise Morley’s study notes: “[students’ opinions] are important, but they are important as a clue rather than as a solution in themselves.”

It is fair to say that quality teaching (whatever that is) should result in satisfied students, and their views are an important indicator of this. They are only a useful indicator, though, if they are used by the regime to facilitate improvements in quality teaching… through professional development, maybe?

So… Is Quality Assured?

Simply having a QA regime and going through the motions of continuous quality activities doesn’t guarantee that either will result in something that the public can have confidence in. They need to be used correctly, as a process not as a goal.

Learning theory suggests that target-based assessment strategies for academic purposes cause students to simulate the target behaviour. The best example of this I can think of is the Irish Leaving Certificate: it famously examines school leavers’ ability to do the Leaving Cert, not their knowledge of the curriculum. For this exact reason, quality experts say that quality indicators only have a shelf life of two years before academics ‘get wise’ to them.

In the case of student feedback, quality will only be assured if feedback is seen as a tool for improving quality, not as an attempt to achieve ‘good’ student feedback. The use of the UK’s National Student Survey to rank institutions is a good example of a continuous quality activity being used as a metric rather than a tool. Myths abound of institutions bribing first years with free pizza to fill out the survey positively!

Given what QA is supposed to achieve, however (public confidence in HE, etc.), continuous activities do demonstrate a systemic attempt to assess educational quality with the goal of improving it in mind. At the very least they make it very hard for quality to be absolutely dreadful. Whether or not these activities improve quality depends on whether they are used as targets or as tools. I believe that distinction depends on whether or not individual staff, students, administrators, managers and all other stakeholders use them correctly and feel a sense of ownership of them.

Continuous quality activities that generate data used by the regime are important for assuring educational quality if, and only if, all of those who are involved in the activities get to close the loop and use the outcomes of those activities to influence the quality of provision in HE. I have a suspicion that this ownership or ‘closing the loop’ issue is one of the biggest causes of difficulty in QA in HE.

A Brief History of Quality

March 8, 2013

This is the second in a series of posts about QA for higher education. I hope that this gives the final pieces of context for debate and triggers a bit of thought about the need for discussions about quality.

As will become clear my expertise in the area is largely concentrated on the UK and Irish systems so I would be delighted to hear if my account applies across other systems.

The Beginnings

Ideas of benchmarking and standards are not a particularly new phenomenon in higher education, but they were certainly not part of the ‘Idea of the University’ as John Henry Newman articulated it or the liberalism vs vocationalism debates that dominated thought about higher education in the mid-19th century.

The oldest bespoke form of QA for higher education would, of course, be the external examiner system, which seems to have started when the University of Durham invited Oxford scholars to examine its students in 1832. This was done to demonstrate publicly that Durham’s degrees were of a similar standard to Oxford’s.

However, the first instances of broadly agreed thresholds for post-compulsory education had already occurred by then. The establishment in the early 19th century of professional accreditation bodies (for law, medicine etc) required some kind of articulation of minimum thresholds of learning. Previously these thresholds were implied by membership of bodies achieved through apprenticeship, but not articulated by a formal accrediting institution.

At university level it wasn’t until the 1960s, when globalisation began to really influence higher education, that ideas of benchmarking and standardising educational levels began to become popular in Western nations. Quality assurance as a concept began to emerge then, but it didn’t become fully formed until quite a bit later. (Marilyn Strathern’s book Audit Cultures gives some great background reading and discussion of this issue.)

The Origins of QA in New Managerialism

‘Quality Assurance’ is a term borrowed from manufacturing where it refers to systematic activities designed to ensure that the requirements for a process or product are met.

The appropriation of the concept for use in higher education is the result of a public policy process throughout the 1980s and 1990s that transferred ways of thinking from the private to the public sector. This way of thinking, which became known as new managerialism, is typified by the attitude that an organisation which is economically useful should behave like a business. Under the principles of new managerialism, public sector organisations that contribute in any way to the economy are to be governed by free market economics and principles of efficiency, productivity and output. This, and the growing (and justified) concern about the efficient use of public funds in the 1990s, gave rise to increasingly powerful and structured forms of regulation for higher education. It culminated in the legal establishment of national bodies to both fund and assess universities.

For instance, in the UK the University Grants Committee was established in 1919, then replaced in 1989 by the Universities Funding Council, which in turn was abolished in favour of HEFCE in 1992, a far more structured organisation whose quality assessment division became the QAA in 1997. In Ireland the Higher Education Authority became legally responsible for funding higher education in 1971, and then in 1999 four agencies with responsibility for QA in different parts of the HE sector were established (and have very recently been amalgamated into one organisation, where I’m currently working: QQI).

I believe, based on anecdotal evidence, that the same pattern holds in Australia and New Zealand, and that a similar, though not national, set of systems for standardisation and public oversight emerged in the USA.

Quality Regimes

In an industrial context, QA is about defects, errors and repeatability. In public institutions it is about trust, confidence and reliability. Consequently it is a concept that implies value judgements and means power. As Louise Morley says (in Quality and Power in Higher Education):

“Quality is a political technology functioning as a regime and relay of power.”

A regime is a set of political rules and norms that indicate the way something is governed. Modern usage of the term usually has negative connotations of authoritarianism. This is actually quite appropriate for discussing QA in higher education because the processes used to assess and control higher education are often quite strict and in some ways arbitrary. The term ‘quality regime’ is frequently used in the literature to describe the conditions, rules and norms of audit, benchmarking, learning outcomes and other tools and rituals that make up quality assurance practices.

Ostensibly, quality regimes exist to safeguard the public interest in higher education and to provide confidence-inspiring oversight of how universities spend public money and educate populations. As discussed in my previous blog post, they are often presented by proponents as a transparent process that ensures efficiency, value for money and useful outcomes from a resistant higher education sector. Transparency is used almost as a synonym for integrity, so resistance to participation in the quality regime is constructed as dishonesty.

Another way of looking at quality regimes is as a management practice designed to exert control over academia. One of my heroes, Stefan Collini, talks about how more and more systems are put in place “to ensure that professors must constantly and frequently provide an account of how they spend their time and the ‘value’ of these activities.” This type of accountability militates against flexible and self-directed forms of professionalism and creates a panopticon effect where academics self-police.

Initially, national quality assurance systems involved intrusive audits to measure every facet of activity. The panopticon effect was then so pronounced that levels of insecurity and anxiety, as well as the cost and administrative burden, became prohibitive for the quality regimes themselves. There was something of a backlash against this form of audit culture in the early 2000s and, as a result, responsibility for educational quality was shifted to the institutions themselves and a ‘light touch’ QA system became the norm across the European Higher Education Area.

The Modern Light Touch System

In a light touch QA system the regime judges the effectiveness of an institution’s quality assurance arrangements, as opposed to scrutinising activity itself. The quality of teaching, learning and research is not investigated directly; rather, the regime seeks confirmation that agreed procedures and systems for assuring it are in place and up to scratch within the institution. The light touch approach generally goes:

  1. Self-evaluation – Institutions produce reports on the effectiveness of their own QA procedures against which they can be assessed.
  2. Peer review/Audit – Periodically panels of ‘QA experts’ and/or, depending on the type of review, subject matter experts visit institutions to analyse documentation and to interrogate the people responsible for quality and other stakeholders. They then pass judgement.
  3. Published reports – Finally, the first two stages are combined in a publicly available report that gives a synopsis of the judgements of the quality regimes. If any approval is needed for anything (programme validation for instance) it is either given or denied at this stage along with recommendations.
  4. Follow up – The institution agrees future actions and then reports on their progress.

The word ‘judgement’ is important here because although quality regimes are often described as evidence-based, and they are often discussed in terms of measurement and quantification, in the end no policies or procedures can spirit away the presence of value judgements. A participant in Louise Morley’s study notes:

“The method is not scientific. We supplied the hypothesis, the evidence and the witnesses.”

Looking at the Current Situation

Understanding how QA got to where it is (albeit in a whistlestop fashion) is useful for thinking about the field as it exists at the moment. QA as a concept in higher education is a combination of two very different phenomena. The first is the 180-year-old pragmatic response of academia to the need to assure external stakeholders (professions, the public, the government, students, etc.) of standards and of the value of what it does. The second phenomenon, that of QA systems and processes, is a management tool that was somewhat shoehorned into the sector 150 years later and has been difficult to integrate because of its influence on power relationships and its origins in industrial error-checking.

I think there is a need for QA management systems and for systematically assuring the public(s) of the quality of what goes on in higher education. I feel, though, that the current light touch model is too far removed from the ‘front-line’ of educational quality. With the exception of those academics who become academic managers or who become involved in quality events, this type of QA is something that happens ‘out there’ at an institutional level and doesn’t have any significant impact on the quality of their work. It appears to many that the only influence QA regimes have is to impinge on academic freedom and to increase the administrative burden.

As I will attempt to show in next week’s post, quality assurance as part of the academic’s job involves significantly more than these light touch quality events, and there are aspects of the management technologies that genuinely do add value to the job of the academic. The single biggest issue that I can identify at this point is the balance between adding value and restricting activities; hopefully this can be teased out.

My instincts tell me that while responsibility for quality has been successfully shifted to institutions, ownership of the concept has not yet reached those who are on the front lines of quality (or poor quality) provision.

Golden-Age Vs. New Dawn

March 4, 2013

I am planning to use this blog over the next few weeks to talk about a few issues in quality assurance (QA) in higher education that I’ve looked into as part of the MA dissertation I did last year in International Higher Education. Parts of it will be shameless reproductions of sections of my dissertation (but without proper referencing or stylistic conventions), but I plan to be less cautious in my language and to give more of my own personal opinion.

My intention is to think about, and provoke debate about, the nature and value of QA in higher education, but this is an extremely difficult thing to do: partly because very few things cause the higher education community to lapse into drowsiness as effectively as discussions of QA, but also because of the polarised positions adopted by critics and proponents of regulatory systems in higher education.

I hope to deal with this polarisation in some small way as a contextual exercise, before trying to deal with the actual nature of QA processes.

The Golden-Age View

Critics of the very concept of higher education QA lament the recent direction of change in the university towards one that is more regulated. Generally (as Stefan Collini points out in his excellent book ‘What are Universities For?’) this complaint about the growth of accountability frameworks happens alongside unexplored claims about what it used to be like working in higher education.

Against the rhetoric of metrics and audits driving quality up and ridding institutions of inefficiency, many academics conversationally dismiss QA as a new-fangled modern interference in their work that corrodes the value of academic scholarship. The sentiment appears to be that ‘if only we could do things the good old-fashioned way, we would be able to be good academics again instead of bureaucrats and we would automatically have quality.’

“There is a powerful discourse of loss, damage, contamination and decay in higher education” (Louise Morley, in another good read, ‘Quality and Power in Higher Education’).

It is tempting to adopt this golden-age type of view when critiquing QA because of the apparent difficulties it creates and the fact that it has undoubtedly changed, or made more complex, the jobs of academics. It is also tempting to suggest that the weighty administrative burdens caused by the introduction of QA requirements offset the benefits of the systems.

It doesn’t stand up to much scrutiny though, as it suggests that in the past higher education institutions were perfectly good at assuring the quality of their teaching and research and wouldn’t need oversight if they could return to the way things were then… if, as Collini points out, “there were any agreement on when, exactly, that was.”

There is no evidence to suggest that teaching and research were superior in the past. Similarly, nothing suggests that the added value of QA processes doesn’t offset the extra administrative effort they require.

An unfortunate interpretation of the golden-age view is that in the past quality was assured by restricting participation in higher education to a privileged few. This is obviously not the case in 2013, nor should it be. The basic premise that things were better ‘before’ is only useful in the debate if the ‘before’ is clearly defined and the way in which things were better is obvious.

If there is any merit in the view that things were better in higher education before the growth of regulation and QA of standards of provision, it is not articulated very clearly in the literature or by the complainers that one talks to from time to time.

The New Dawn View

There is an equally powerful discourse in higher education of institutions being fuddy-duddy, irrelevant, old-fashioned and inefficient organisations. Proponents of the regulation of higher education express the view that institutions need modern management technologies and oversight in order to curb elitism.

This new dawn view constructs QA as protecting the public interest from the conservative practices of higher education institutions and ushering in a culture of accountability and quality.

It is a tempting position from several perspectives. There is a very significant public investment in higher education, and institutions should have to provide confidence that they are using that investment for the benefit of the public, not profiting from or wasting it. A globalised economy needs systems to ensure the fitness for purpose of one of its major sources of human capital. Individuals need to be assured that what they get from their time (or careers) in higher education is at or above the proclaimed threshold of quality.

The message is mixed though. Every summer, images of ivory towers filled with port-swilling dons crop up in media articles about university admissions. Public policy documents and white papers frequently point to a need to “drive quality up” (Students at the Heart of the System, DBIS 2011) and to ‘enhance quality and relevance’ (National Strategy for Higher Education to 2030, Hunt report 2011). Paradoxically, those documents and media sources hail higher education institutions as centres of innovation that are vital for driving the economy, culture and citizenship, while some simultaneously present them as obstreperous institutions that need a regulator standing over their shoulder.

Endorsement of the systems and structures of QA is categorised as accountability and honesty, while resistance is equated to elitism or worse. If the case is to be made clearly for the value of QA in higher education, it needs to be open to critique from within and a consistent and fair message needs to be put across.

A Sensible Perspective

The views that I’ve outlined are clearly extremes. Nobody of any significant authority will contend that institutions should not provide a justification for the public investment of money, time and reputation in higher education. Similarly, policy-makers do not genuinely argue that without QA regimes no universities could possibly teach or research well.

As can be expected, a sensible perspective from which to analyse QA regimes is somewhere in the middle. QA must be examined in a realistic context. Higher education as it functions now is more important than how it may have functioned in the past and quality as a concept must be open to critique based on how it affects universities. The value of QA must be weighed up as a balance between a need for accountability and the burden of accounting.

It may be (and probably is) the case that the growth of QA processes and regulation of higher education has damaged some of the functions of academia that previously went unmolested. I’m thinking of freedom of enquiry, time spent on unbounded activities and those sorts of things. What is yet to be firmly established is whether or not those sacrifices have borne fruit in terms of the quality of teaching and research. The chances are that QA systems will turn out to be important ways of doing this; there is, however, very little acknowledgement of the fact that they have a significant influence on the anxieties and complexities of higher education.

Looking at Quality

So this is the context of thinking about quality assurance as I see it. The polar extremes of proponents and critics of QA make it difficult to address the issues. Higher education in 2013 has never functioned as it currently does, so in a way this is a new dawn, but not because of increased regulation. Things certainly have changed as a result of the growth of QA, so perhaps there was an age when things were simpler; although not necessarily a golden one.

February 23, 2013

Alma Matters

I’ve been keeping an eye on the debate happening in UCD over affiliation to USI for the past few days. I’m aware that referendums are happening in DCU & Maynooth as well, but UCD has particularly crossed my radar due to the arguments springing up.

Whilst I am for USI affiliation on balance, I do think that there are valid reasons for wanting to disaffiliate from USI. Mark O’Meara’s essential point in this article was that if you disagree with a group of people, why would you want to be in a union with them? In theory, this argument is perfectly valid, although I disagree with Mark on its application in practice.

It’s a great disappointment, then, to see some of the points being raised by the “no2usi” campaign team in UCD (don’t even get me started on the textspeak).

Some are indeed relevant, but many are badly…

View original post 2,214 more words

The Value of a Strong USI in Irish HE Policy

February 21, 2013

There are several USI disaffiliation referendums coming up in UCD, DCU and Maynooth and, in reading some of the commentary on Twitter, one sentiment struck me in particular:

That USI should be a lobbying and policy-making organisation and not a ‘movement’.

This prompted several questions in my mind about the value of USI and its activities, from the perspective of higher education in Ireland generally.

I’ve come to the conclusion that this position is wrong. USI can very easily be both an effective lobbying and negotiating organisation and a movement that contests political will and argues principled positions in public forums. In fact I strongly suspect that the ‘movement’ side of USI strengthens the professional lobbying and negotiating functions.

As someone who (when involved as Education Officer in 09/10) was never really into the ‘movement’ side of things, I found my work in policy-making and at the negotiating table significantly enhanced by the efforts of my colleagues within USI taking principled and public stances on issues affecting students.

USI has one tool which makes this dynamic work: its ex officio positions at the board tables of the organisations that regulate and set policy on all aspects of HE in Ireland. The officers have to be appointed to these positions by law in most cases, so organisations like the HEA, QQI and the DES want USI officers’ participation to be useful. They know that USI is coming to them armed with policies and solutions which are formed, mandated and constructed through interaction with constituent students’ unions. The national policy makers want to hear from the officer in question because (s)he represents a movement that they can see is deeply interested in the issues they are addressing (or not addressing, as is frequently the case).

USI is not at the negotiating table simply as a ‘representative’ of students. For that you could pick any 19-year-old from the nearest community college to attend the meetings and nod along. USI officers are there to contribute to debates, critique other ideas, propose solutions and make the process of forming policy more effective and its results more beneficial to students. A good officer cannot do this unless they have a principled and strong position from which to make their contributions. It is this form of engagement that is so valuable to the formation of the policies that govern how higher education is managed nationally. This is why policy makers seek out USI’s contribution. They don’t always adopt it or agree with it, but they do listen to it.

Quality and Qualifications Ireland (QQI) is a new organisation responsible for regulating much of post-compulsory education in Ireland. It was created by amalgamating a group of organisations, and one of its first tasks is to develop policies for quality assurance and qualifications across further and higher education. The fact that QQI is actively seeking USI’s input into this process is evidence that policy makers know that USI is a valuable asset within the sector. By tapping into that ‘movement’ they can better serve the institutions and students of Ireland, because USI has a suite of principled positions to take on each issue that will arise. Those positions will be brought to bear by the officers, but driven by a strong political organisation.

I strongly feel that if USI were not both a ‘movement’ and a lobbying and negotiating organisation, it would not be as useful within the higher education sector. If it were only a political movement, then the unfounded cries of “talking shop” and “waste of money” would ring true for its detractors. Similarly if it were only a group of officers who sat at board tables and nodded along or, worse, argued off their own bats, then the feeling that “USI does nothing” would probably be true.

It’s Not Your Fault – Why Policy and University Response to Policy is Failing Graduates

October 25, 2011

For a long time, Irish and British government policy in higher education has been ‘more’ and ‘better’ and ‘wider’ and ‘deeper’: all rhetoric that means ‘more qualifications’.

Tony Blair in 1997 said his top three priorities for the government were “Education, education and education”. In 2008/09 the Irish government had a stated aim to double the number of PhDs in the Irish workforce.

It is widely believed that the way we’re going to get ourselves out of the economic doldrums is by educating more people to a higher level. Norway after the Soviet collapse is often held up as a paradigm of educating your way out of recession.

 In the British sense, this means an obsession with more ‘skills’ for the workforce. In Ireland it means more capacity in the HE system to produce more degrees – hence the pressure on institutions to increase the places on courses.

At the same time, both governments are increasing the burden on the student by increasing fees and pushing for a marketisation or commoditisation of higher education.

Think about the combination of these pressures for a moment:

  1. We need more qualified people to make the country better.

  2. It is your responsibility to bring about these increases by investing in your own education.

It is strange that these pressures persist in a climate where there are few jobs for all these highly qualified graduates. The result is that people are hearing the policy message ‘help the country and yourself out by getting a degree’ and then finding themselves jobless or over-qualified for the jobs they can get. The message being absorbed is that it is our own fault if we don’t have the right skills or the right CV or the right qualification to get ourselves jobs, and hence our fault that there’s a bit of an economic mess going on.

Regarding the economic mess, I don’t want to speculate about who may be at fault. I do know that it is not the fault of recent graduates though.

I do wish to point the finger squarely at policies concerned with increasing throughput in higher education for creating this sense amongst my generation that it is our responsibility to fill ourselves to the gills with competencies and get ourselves jobs. I think there are three main things that need to be challenged to address this:

1. Public Policy on Skills and Higher Education

Governments pushing for more degrees, more qualifications and more ‘generic skills’ is what starts the problem. It’s simple to see why politicians do it: focussing on qualifications is an easy rhetorical option. It’s straightforward to bash out more qualifications and it appears to be progressive and to help the country. The likes of the OECD and UNESCO measure the economic potential of countries by counting qualifications. To a certain extent they assume that every accountancy degree means an employed accountant.

Policy statements like “Skills are the simplest, best, most direct way to boost productivity” put the education cart before the economic development horse.

There is no point in increasing the number of people in higher education and the debt that they carry if there’s nowhere for them to go when they’re finished.

I would like to see governments stop applying pressure on higher education systems to do more, but rather to do better. More degrees in business studies floating around the economy isn’t going to add value to the country – but more societal understanding, better technology and more knowledge transfer will.

2. The Attitude of Recent Graduates

For people (like me) who finished a degree between 2007 and 2010 and found that it’s not just a case of qualifying and sashaying into a job, there is a problem of confidence. We need to stop relentlessly trying to self-improve, re-qualify and do things more correctly to get a job. I think we need to take a deep breath and have a Good Will Hunting moment where we say ‘it’s not our fault’ to ourselves.

3. The Response of Universities

Universities and other higher education institutions need to take the students’ side a bit more on this issue.

This isn’t about fees. This is about governments shifting responsibility for providing opportunities to people in education from themselves to those very people. Universities need to be challenging this position by asserting the role of a university in society with ferocity. Universities are not here to provide skills and tick boxes so that kids can become economic agents. Universities are here to be the tip of the spear of knowledge in our society. Students may wish to go to university for economic reasons, but their belief in the reasons for doing this should be challenged while they are there. They should enter University as potential economic agents and emerge as reflective potential economic agents. It is the world of work that should be responsible for transferring the skills that create functioning economic agents.

In summary, I think that this qualifications issue is used as a scapegoat by policy makers, that universities buy into that idea by accepting that it is their responsibility to produce these ‘skills’, and that graduates then accept the blame for the policy not working by upskilling or reskilling themselves when the first round of it didn’t work.

Educating your way out of a recession only works if you have somewhere to go other than economic stagnation and unemployment. Governments need to realise this, universities need to be more vocal about it, and graduates need to stop blindly trusting in it.

My response to Chapter 3 of the Higher Education White Paper: A better student experience and better-qualified graduates.

September 20, 2011

Section 3.27 starts with the phrase: “Higher education is a good thing in itself.”

As this suggests, higher education has an intrinsic social value. Yet when looking at the student experience, this White Paper completely ignores all aspects of the learning experience other than ‘to prepare students for a rewarding career’, even though it states that this is only one of the purposes of entering HE.

In terms of the student experience, there must be an attempt to protect those ‘good’ things about HE, and I feel that the best way to do this is to focus on the environment within HE institutions, by creating climates for learning.

Given the numbers of international students and the globalised nature of today’s working world, there must be an emphasis on worldly education that equips students to relate to other cultural perspectives. One way of achieving this is by focusing on creating an educational ‘safe space’ for debate, creativity and scholarly endeavour, rather than a production line that churns out ‘well qualified’ graduates.

What society needs is conscientious, skilled thinkers who have had the opportunity to learn about the world and their chosen discipline in an engaging way, rather than youngsters who have spent three years as a ‘bum on a seat’ in a lecture hall or lab, which is the situation UK HE is trending towards.

A major factor that creates this kind of environment is students’ perceptions of how they are dealt with in HE. Approachable lecturers, trust in the assessment procedures and a feeling of engagement with their learning are all things that students find important and that can generate this climate.

The chapter makes an excellent case for a strong risk-based quality framework for HE. While this is vital for ensuring standards, it ignores a whole range of proactive and internally driven measures that can be adopted to achieve quality assurance through quality enhancement.

Many institutions in the UK have excellent opportunities for staff development and even requirements for teaching qualifications. These should be acknowledged and encouraged, as they are the sorts of practices that create the good climate for student learning that has been described.

In summary I would like to see the White Paper acknowledge and commit to preserving the wider socio-cultural values of higher education. In relation to the student experience there must be a commitment to providing a globally relevant education in an environment that is conducive to good student learning through engagement. This kind of engagement can be fostered by the required implementation of quality enhancement procedures that encourage higher education teachers to build climates for learning.

Fictional Rugby Teams

January 25, 2010

In 2006/7 I lived in a house in Dundrum with four former UCD undergrads. They were, respectively, a chef, a sports journalist, a computer science PhD student and a lawyer, but all of them were serious rugby fans.
The house had lots of rituals and rules associated with living in it. Chiefly, the house’s ‘Lists’ were something treated with immense respect. Without going into too much detail, the walls of the living room were covered with various kinds of List.

On weekends with Heineken Cup action on, we would all sit around nursing hangovers, eating Brian the chef’s amazing chilli con carne with cornbread or beef with Guinness stew, and drinking endless cups of tea. When the rugby ended we would cast about for something to talk about and it would invariably come down to the Lists.

A lot of the Lists were fictional rugby teams: for example, a team made up entirely of historical dictators, or of animals of the forest. The debates we had on the rugby teams were always heated and very entertaining. In an attempt to revive some of that debate, here are two teams, one made up of superheroes and the other of supervillains:

Superheroes:
1. The Hulk
2. The Beast
3. The Thing
4. Optimus Prime (Captain)
5. Mr. Fantastic
6. Wolverine
7. Inspector Gadget
8. Iron Man
9. Spiderman
10. Superman
11. The Flash
12. Batman
13. Captain America
14. The Human Torch
15. The Silver Surfer

Supervillains:
1. Bebop
2. Bane
3. The Abomination
4. Megatron
5. Doc Ock
6. The Joker
7. The Punisher
8. Sabretooth
9. Catwoman
10. Ming the Merciless
11.  Black Flash
12. Mr. Freeze
13. Venom
14. The Juggernaut
15. Mr. Sinister

I haven’t gone so far as to do the replacements; that’ll take time. Any contributions are welcome.

Who do you think would win?
Any selection comments?
What is the probability of getting Marvel and DC to agree to do a film of the game?