Scope, transparency and style: system-level quality strategies and the student experience in Australia
As Australia moves to an approach to quality assurance framed around regulation and risk, it is timely to reflect on the merits of external quality audit supported by a fitness-for-purpose approach. This chapter explores the proposition that external review has made a difference in enhancing the higher education student experience in Australia through the scope, transparency and style of the quality-enhancement activities adopted by higher education providers.
Rather than seeking to demonstrate improvement by comparing performance on indicators for the quality of the student experience, this chapter identifies three important dimensions in which quality audit may have improved the quality of the student experience by strengthening the continuous improvement efforts of institutions. The benefits of external review include an expanded scope of activities considered worthy of continuous improvement effort, greater transparency in the activities and outcomes supported by institutions, and qualitative improvement in the approach institutions take to continuous improvement. In short, it is proposed here that external quality audit has supported improvements in the quality of the student experience through improvements in the scope, transparency and style of the quality-enhancement activities adopted by higher education providers in Australia. This chapter addresses these in the context of historical developments in system-level quality assurance strategies in Australian higher education, and is presented in two parts: the first provides an overview of the development and implementation of successive system-level quality initiatives in Australian higher education, while the second compares external review with comparable system-level quality strategies. The chapter concludes by addressing the merits of external review and future prospects for system-level quality strategies in Australia.
External quality audits conducted by the Australian Universities Quality Agency (AUQA) were the principal means of monitoring and reporting on the activities of Australian higher education providers during the decade 2001–11. Reports of audit findings lent transparency to the quality improvement activities of institutions, and external review appears to have encouraged an expansion in the range and scope of the quality-enhancement activities of providers. While it would be difficult to establish conclusively that improvements in the quality of the student experience have been a direct result of external audit, it is easier to see how external quality audit may have had a significant influence on assurance, enhancement and reporting activities relevant to the student experience during this time. In evaluating the contribution of external quality audit during the 2000s, it is worth reviewing developments in system-level higher education quality initiatives in Australia in the lead-up to AUQA’s establishment, before addressing prospects for future development.
AUQA was established in March 2000 as an independent agency jointly supported by Australian Federal, State and Territory Governments. Over the decade 2001–11 AUQA conducted roughly 150 external quality audits of universities and other higher education providers.1 AUQA’s establishment came at what was recognised as a time of increased diversity in the scale, organisation and mode of delivery in higher education, following significant expansion through the 1980s and 1990s (DETYA, 2000). AUQA’s approach was based on encouraging critical self-review on the part of providers, supported by regular external quality audits. While on the one hand AUQA’s establishment signalled a new approach to system-level quality assurance in Australia, on the other it also represented the culmination of efforts to encourage critical self-assessment on the part of higher education providers extending back to at least the late 1970s. The major system-level programs and strategies relevant to this development are outlined below.
The Commonwealth Tertiary Education Commission (CTEC) was established in 1977, effectively replacing the Australian Universities Commission. CTEC established the Evaluations and Investigations Programme (EIP) in 1979. The EIP became renowned for publishing high-quality research papers on specific aspects of Australia’s educational systems, with a view to informing development and improvement in both policy and practice. After CTEC was replaced by the National Board of Employment, Education and Training (NBEET) in 1988 the EIP continued as a branch within the Department of Employment, Education and Training (DEET) and subsequent government departments, publishing targeted policy and research papers through to 2005.
Aims of the EIP initiative included the evaluation of courses, organisational units and resource usage in tertiary institutions. They also included promoting a climate of critical self-assessment within higher education providers, supported by external review. To this end, CTEC commenced a series of major discipline reviews in 1986 to determine standards and examine the quality of university teaching and research in Australia by major field of study. Discipline reviews were given renewed impetus in 1988, when the framework for the ‘national unified system’ was announced (see below). Particular issues noted at the time were the future needs of disciplines, whether teaching and research activities were of an appropriate standard and issues around duplication and institutional efficiency (Dawkins, 1988, p. 86). By the time discipline reviews were discontinued in 1991 these had covered law (Pearce et al., 1987), engineering (Williams, 1988), teacher education in mathematics and science (Speedy et al., 1989), accounting (Mathews et al., 1990), agriculture and related education (Chudleigh et al., 1991) and computing studies and information sciences education (Hudson, 1992).
Follow-up reports found that discipline reviews were viewed overall as an effective stimulus for the adoption of self-review and self-appraisal strategies, with some demonstrable success in this regard (for examples see Whitehead, 1993; Caldwell et al., 1994). In many cases they served as an effective impetus for broadening the scope of activities considered worthy of attention for quality assurance purposes. They also had the effect of raising the profile of quality assurance activities within institutions and lending some transparency to their activities. Despite these advantages, discipline reviews were nevertheless perceived as slow, expensive and lacking mechanisms to promote follow-up activity: they were seen as lacking the means to ensure that review recommendations were subsequently acted upon by institutions, or that quality-enhancement activities were supported and encouraged on an on-going basis (Higher Education Council, 1992; DEST, 2003).
The year 1988 saw significant changes to the higher education landscape in Australia, including major changes to the way institutions were funded, the establishment of a framework for a national unified system of universities and a national government-supported loan scheme for student fees through the Higher Education Contribution Scheme (HECS). The Australian Government also restructured its advisory arrangements at this time, effectively replacing CTEC with NBEET. Unlike CTEC, NBEET was an advisory body, leaving responsibility for program delivery to the relevant government department. Reporting directly to the Minister, the Board also convened separate advisory Councils for Schools, Skills, Research and Higher Education. Significant attention also turned to the efficiency and effectiveness of higher education provision in Australia during the late 1980s and 1990s, and to the means of measuring quality and performance in particular.
Through the 1990s the Australian Government published The Characteristics and Performance of Higher Education Institutions, providing a range of indicators with the aim of highlighting quality and diversity in Australian higher education (DETYA, 2001, pp. 11–12). Indicators published in these reports summarised staff and student characteristics, research performance and other available data on provider activities. These data were also used by institutions to review their own performance (benchmarking both within and across institutions) and by government to monitor quality across the Australian higher education sector. In 1989 the Australian Government convened a group of experts to develop and report on a trial study of indicators useful in evaluating the performance of institutions at the department and faculty level, and of students at the level of academic award, discipline group and field of study (Linke, 1991, p. xi). Their final report, informed by the work of Cave et al. (1991) and Ramsden (1991b), classified a range of performance indicators reflecting dimensions of institutional performance.
The Linke report (Linke, 1991) suggested that performance indicators were of most use as part of a university’s self-assessment process rather than in direct application to funding at the national level. This point was taken up in a 1991 policy statement on higher education in which the Minister for Higher Education, the Hon. Peter Baldwin, MP, outlined that while the government had no intention of prescribing a common set of performance indicators to be used by every university, it was interested in assisting universities to develop quantitative and qualitative indicators of performance in teaching and assessment, as well as for organisational areas such as governance, finance and staff development (Baldwin, 1991, pp. 31–32; DEST, 2003, p. 259). Among the most prominent developments to follow from this has been the use of data from the Course Experience Questionnaire (CEQ) both within institutions and in national benchmarking and funding decisions (Ramsden, 1991a, 1991b; Ramsden and Entwistle, 1981). Financial and staffing data, enrolment trends and data from student surveys have since formed the basis of the means for evaluating and comparing institutional performance in Australian higher education, and since that time have often been taken as institution-wide proxies for the quality of the student experience (Palmer, 2011).
The Higher Education Funding Act (1988) represented a fundamental change in the way higher education was funded in Australia. The government had previously determined operating grants for universities based on provisions in the States Grants (Tertiary Education Assistance) Act (1987). The new Act provided for institutional funding subject to ministerial determination informed by the Educational Profile of the institution. Educational Profiles therefore formed the basis of agreements between the Australian Government and each higher education provider participating in what was to become the national unified system. Beyond informing funding determinations, Profiles also became the principal means for each institution to define its mission, its goals, its role in the sector and its particular areas of activity.
From 1996 institutional Quality Assurance and Improvement Plans would be integrated into the Educational Profiles process. This reflected a renewed emphasis at the time on the idea that maintaining and enhancing quality in higher education could best be achieved if universities were able to operate in a framework of government encouragement without unnecessary intervention. In 1995 the Higher Education Council had reported that many institutions shared the view that national quality strategies should have a less direct relationship to the allocation of funding, and that a greater emphasis should be placed on institutional outcomes. This was accompanied by the view that greater account needed to be taken of the diversity within the system, and of the strengths and needs of newer universities in particular (Higher Education Council, 1995, p. 20). The framework subsequently recommended by the Higher Education Council addressed systems and processes in place as well as outcomes reflected in measures like the CEQ, the Graduate Destination Survey (GDS), student entrance scores, student attrition and progression rates (DEST, 2003). Through this process universities were assessed on a range of qualitative and quantitative indicators of quality and performance, and this information was also used by government in monitoring the viability and sustainability of institutions in the system. All publicly funded Australian universities were required to include Quality Assurance and Improvement Plans as part of their Educational Profiles, a requirement that was later to form part of the Institutional Assessment Framework (IAF) from 2004. By the end of this period the IAF was frequently referred to as a key element in the ‘strategic bilateral engagement’ between government and higher education providers, with the purpose of encouraging institutional quality, accountability and sustainability while at the same time minimising government intervention and reporting requirements.
In 1991 the Australian Government released the Ministerial Statement Higher Education: Quality and Diversity in the 1990s (Baldwin, 1991). The paper addressed weaknesses of previous approaches to quality review at the discipline level, including their effectiveness as a quality assurance strategy, deficiencies in the comparability of findings across institutions and the need for improved quality assurance processes at both the system and institutional levels. The paper also suggested that universities should receive differential funding more closely tied to their performance on an agreed set of indicators of quality and performance. Outcomes from the discussion paper included recommendations for the establishment of what was to become the Committee for Quality Assurance in Higher Education (CQAHE) (established in 1992) and for the allocation of funds over and above base operating grant amounts determined by the assessed performance of each institution (Higher Education Council, 1992).
Convened by the Higher Education Council of NBEET and chaired by University of Queensland Vice Chancellor Brian Wilson, the CQAHE (or Wilson Committee) conducted independent audits of institutions and advised the Australian Government on funding allocations through associated incentive programs between 1992 and 1995. Self-review formed the basis of evaluation, with review at the whole-of-institution rather than discipline level. Reviews addressed quality assurance processes within institutions, evidence of self-evaluation and the quality of outcomes as reflected in the indicators available. They included a site visit from the review team to supplement information presented in quality portfolios prepared by each institution. In evaluation, equal emphasis was placed on evidence of quality processes, self-review and outcomes that could be demonstrated in available indicators. Various incentive funding initiatives were employed as part of this strategy, including the National Priority Reserve Fund Grants, schemes funded through the Committee for the Advancement of University Teaching and through the EIP. Three rounds of independent whole-of-institution audits were conducted between 1993 and 1995. Each round had a specific focus: an overview of teaching, learning, research and community service in 1993; teaching and learning in 1994; and research and community service in 1995 (DEST, 2004, p. 9). Institutions received differential funding based on their performance in these reviews, through the Quality Assurance and Enhancement component of universities’ operating grant. While the relationship between the CQAHE and individual institutions was on a confidential basis, publication of the review reports themselves in part or in full was left as a matter for each institution (Anderson et al., 2000).
Evaluation of the CQAHE program suggested that it had been successful in raising the profile of quality assurance practices within institutions and of the need to monitor, review and make adjustments to institutional processes where gaps were identified. While there were mixed opinions as to how helpful these reviews had been, particularly in light of the consternation about the ranking of institutions in published results, it was perceived that external review had in fact supported considerable advances in establishing effective institutional practices. The whole-of-institution approach had the advantage of involving a much broader share of the university community in self-review activity. Reviews appeared to trigger considerable change in institutional quality assurance practices, including adoption and greater acceptance of continuous improvement and self-evaluation practices. They also appeared to encourage the increased dissemination and use of data from performance measures within each institution (Vidovich and Porter, 1999; Chalmers, 2007). CQAHE reviews have since been offered as a good example of the effective use of external review informed by a fitness-for-purpose approach in evaluating the performance of institutions relative to their mission and aims (Gallagher, 2010, p. 92). CQAHE reviews have also been held up as an example ‘par excellence’ of government achieving system-level quality-improvement policy aims through ‘steering at a distance’, as opposed to adopting a more direct approach through legislation or the use of direct funding incentives (DEST, 2003, pp. 257–8).
The Australian Universities Quality Agency (AUQA) was established in March 2000 as an independent agency jointly supported by Australian Federal, State and Territory Governments, with a brief to monitor, audit and report on quality assurance in Australian higher education. AUQA’s responsibilities included the publication of reports on the outcomes of audit visits, as well as on the quality assurance processes, international standing and relative standards of the Australian higher education system (DETYA, 2001, p. 12). Over the decade 2001–11 the agency conducted roughly 150 external quality audits of universities and other higher education providers. AUQA’s brief followed in many respects from that of the CQAHE in conducting independent audits at the whole-of-institution level, with an approach based on encouraging critical self-review on the part of providers supported by regular external quality audits. The Agency was also responsive to the need to balance the comprehensive assessment of institutions against keeping the bureaucratic requirements, costs and time involved in the review process to a minimum (DEST, 2003, pp. 271–2; Adams et al., 2008).
An important contribution of these reviews was an expansion in the scope of the kinds of activity subject to review, and therefore in turn worthy of evaluation and improvement efforts. While AUQA did not employ an externally prescribed standards framework, the scope of issues given consideration as part of the audit process was informed by the kinds of criteria typical of organisational self-assessment. A good example is the set of criteria adapted by Woodhouse (2006), covering organisational leadership, teaching and learning, research, staffing, facilities, enabling services and community engagement. These criteria were applied in support of improving educational value to students, effective use of resources and capabilities and the contribution made to student development and overall well-being (Woodhouse, 2006, pp. 14–15). The scope of the student experience reflected in audit reports suggests a significant broadening, under this influence, of the matters considered relevant for continuous improvement purposes when compared with previous approaches. A 2009 report by Alcock et al. reviewed the comments, affirmations and recommendations included in AUQA audit reports between 2002 and 2007, identifying learning and student support, transition, student conduct, equity and campus life as key areas, along with targeted student support for particular groups (Alcock et al., 2009, pp. 3–4).
AUQA’s approach to external quality audit was premised on the idea of managing for continuous improvement, auditing for fitness-for-purpose and focussing on institutional efforts in meeting their stated mission and goals. Here each institution was conceived of as an integrated system, with attention during audit being devoted to the nature and effectiveness of systems and strategies for monitoring, evaluating and improving performance relative to each institution’s objectives (Baird, 2007, p. 3). Regular audit visits were supplemented by the self-review activities of providers, including the use of trial audits. Among the benefits of external quality audit that have been identified are increased awareness of quality systems, improved internal communication, improved transparency, increased responsibility and ownership for improvements in quality and improved cooperation within and across academic units (addressed in detail in the following section). While often the source of some anxiety on the part of university staff and management, external audit provided opportunities to focus the attention of the university community on ‘quality’, and appeared to be an effective impetus in bringing management, staff and students together to identify strengths, opportunities and risks among their activities and the outcomes they support, and for broadening participation in planning and review activities.
In 2009 the Australian Government announced the establishment of the Tertiary Education Quality and Standards Agency (TEQSA). This new agency has since assumed the majority of functions previously undertaken by AUQA and State accreditation agencies. The announcement heralded a move to a more standards-based approach. Precisely what this will mean for quality and the student experience will largely be borne out in the development and implementation of the proposed standards framework, and through the definition and measurement of risk. While it remains to be seen just how the marriage of regulatory and audit functions within a single agency will work in practice, the formation of TEQSA represents an opportunity to build on some of the activities developed by AUQA and its predecessors, and to reflect on the merits of the various system-level quality strategies employed and the broader developments in higher education quality governance that have led to its establishment. Aspects of these are compared in detail in the following section.
External review has featured prominently among the main policy levers available to government in promoting and assuring institutional and system-level quality in higher education. However, it has typically been accompanied by additional system-level quality strategies. In understanding the practical impact of external review on the quality of the student experience, it is important to compare external review in policy and practice with other system-level quality strategies. Four broad system-level quality strategies are identified here, namely reporting strategies, performance funding, program incentive funding and external review (as outlined in Table 15.2). Each of these is compared below.
Table 15.2 System-level quality strategies and examples

| Quality strategy | Example |
| --- | --- |
| Reporting strategies | Educational Profiles and Quality Improvement Plans |
| Performance funding | Learning and Teaching Performance Fund |
| Program incentive funding | Incentive funding associated with CQAHE review |
| External review | CQAHE and AUQA audit |
Basic reporting requirements have often featured as an adjunct to other quality strategies, appearing in Australian higher education quality initiatives such as the Educational Profiles process, Quality Assurance and Improvement Plans, the Institutional Assessment Framework and Institutional Performance Portfolios. Supported by varying degrees of transparency, strategies like these have been an effective means of driving institutional improvement efforts, particularly where reporting includes clearly defined indicators for quality and performance. Reporting requirements typify improvement strategies at the ‘action-at-a-distance’ end of the scale, in contrast to strategies involving more direct intervention or review on the part of government. Reporting initiatives like these also have a significant influence on the scope of activities considered worthy of continuous improvement efforts.
Financial performance and enrolments have for some time featured prominently among metrics for system-level evaluation and comparison of institutional performance among higher education providers. Following the Linke report (Linke, 1991), the Australian Government has employed a range of competitive, conditional and performance-based funding mechanisms to support system incentives for improvement in key areas of higher education. These include competitive research grants and performance funding initiatives designed to influence institutional behaviour, and indicators adopted to reflect learning and teaching quality. Over time, the emphasis of higher education performance measures in Australia has shifted from relying on a relatively narrow set of institutional performance indicators to encompass a much broader view of the means by which institutional performance may be reflected.
The more detailed the reporting requirements placed on institutions, the more closely they come to resemble a system-level ‘performance reporting’ quality strategy. As noted above, transparent measures of institutional performance became an increasingly prominent feature of the reporting requirements of Australian higher education providers through the 1990s, giving increasing prominence to measures such as enrolment metrics and student surveys (Palmer, 2011). Powerful system incentives may be supported through reporting requirements on specific measures, without those measures being employed as criteria for the allocation of funding. This is perhaps best illustrated in the recent publication of the first full round of results from the Excellence in Research for Australia (ERA) initiative (ARC, 2011). Among the aims of the Australian Government’s current system-level quality arrangements is to ensure that students have better information about how institutions are performing, and to demonstrate to the community that value for money is being delivered and the national interest is being served (Australian Government, 2009, p. 31). Improved transparency and accountability also featured prominently among justifications for the recent move toward a more standards-based approach. The incentives created through the reporting of performance measures via the proposed My University website may in themselves prove to be an important part of the Australian Government’s quality assurance activities, serving a range of performance reporting objectives and assisting in achieving the right balance between transparency measures, system incentives and performance funding arrangements.
An example of the use of indicators for performance funding purposes as a quality strategy can be found in the use of student satisfaction survey data in the Learning and Teaching Performance Fund introduced by the Australian Government in 2003 (DEST, 2004). While concerns were raised regarding the transparency, appropriateness and rigour associated with the development and use of indicators in the scheme (Access Economics, 2005), the Fund nevertheless succeeded in encouraging a greater focus on the teaching and learning activities of universities. Despite their limitations, the development and publication of institutional indicators for teaching and learning performance drew attention to the relevant activities of providers, recognising the development and use of targeted initiatives in support of on-going improvements in this area. More recently, the Final Report of the Review of Australian Higher Education concluded that transparent, public reporting of such data on an annual basis would be an effective means of providing a focus for further improvements in this area, and that measures relating to both the quality of teaching and the extent of students’ engagement in their education should be included in any framework for assessing institutional performance (Bradley et al., 2008, p. 78).
There are contrasting perspectives on the role of performance funding in system improvement. On the one hand, performance funding can support system improvements by directing a broad range of activities toward a common goal without prescribing how that goal should be met. This creates positive incentives in the area being evaluated as the basis for funding (as, for example, with the increased attention to the participation of students from low socio-economic backgrounds encouraged by reward funding based on enrolment metrics for that group [see Palmer et al., 2011, p. 4]). Such strategies also reflect recognition by government of the importance of the area being evaluated, contributing to parity in approach (if not investment) across different activity areas (as in the case, for example, of higher education teaching and research), and facilitating institutional comparisons on the basis of performance funding measures using equivalent indicators. On this view, in order to sustain incentives for institutional performance, quantitative measures for key attributes, at different levels of aggregation, should cover as many of the key functions of providers as possible and should be associated either directly or indirectly with funding incentives.
However, while certainly transparent, performance funding incentives carry risks in terms of their ‘style’: they can focus the attention of institutions too narrowly on the means of evaluation, potentially at the expense of the broader range of activities that the indicator was originally employed to reflect. The use of quantitative indicators can both contribute to and detract from judgements in managing the characteristics being evaluated, and either way cannot provide a comprehensive measure of educational quality overall. As Linke (1991) put it: ‘to the extent that such indicators may influence institutional practice … they will generate a pressure on institutions to direct their performance to the indicators themselves, regardless of what they reflect, rather than to the underlying issues of educational and research excellence, or indeed to any specific institutional goals’ (Linke, 1991, p. 131). Also often overlooked is the capacity for indicator frameworks to shape the scope for innovation in meeting institutional goals: performance indicators can work either to stimulate or to stultify innovative approaches to achieving their aims. In some cases performance frameworks may simply motivate institutions to manipulate their performance data so as to perform well against the indicators being employed (Chalmers, 2008). Careful judgement is therefore required in employing indicator frameworks, whether for funding purposes or not, as there is an inherent risk that their use may compromise their original aims.
A further criticism of performance funding concerns its underlying assumption that resource allocation is instrumental in improving quality and standards: where inadequate resources are available, overall performance will remain low. From this point of view, performance funding can be held to perpetuate the status quo rather than promote innovation and improvement. Institutions scoring well on funding indicators will continue to do well, even if only in part because of the resources secured with the help of the performance measure, while those performing poorly may continue to do so, struggling to improve relative to their competitors given their relatively smaller share of resources in support of those activities. This challenge again emphasises the importance of the judicious employment of performance indicators where they also influence program funding.
Given the importance of institutional prestige to higher education providers, the impact of performance reporting strategies can be comparable in effect to that of performance funding (but arguably much more economical from the perspective of government). Both are comparable in the way they influence the scope of activities worthy of consideration for continuous improvement efforts. Both lend comparable levels of transparency to the improvement efforts of providers (though of course the indicators used for performance funding tend to be considerably more refined and subject to far greater scrutiny). They may also be seen as comparable in ‘style’, particularly in the way in which they focus the efforts of institutions on those aspects defined, measured and, potentially, reported as part of each quality strategy.
While program incentive funding strategies may appear in many respects comparable to performance funding, their influence is potentially quite different in scope, transparency and style. Program incentive funding is typically contingent on program participation and compliance on the part of institutions, with its influence potentially broader in scope than performance funding programs tied to particular indicators. Program incentive funding initiatives may be effective in lending transparency to the improvement efforts of providers, but only where reporting requirements feature as part of the program. Shortcomings of program incentive strategies include the potential lack of comparability in demonstrable program outcomes between providers, particularly where the measures of performance are not clearly defined. Finally, and perhaps most importantly, program incentive funding may be seen as among the least effective of the main system-level quality improvement strategies in terms of its ‘style’ of influencing institutions. While on the one hand program incentive funding can allow institutions considerable scope in determining the activities worthy of continuous improvement efforts, it risks being so broad as not only to detract from the comparability of outcomes between institutions, but also to weaken the incentive for institutions to improve their practices in the first place, or to sustain improvement on an ongoing basis. Discipline reviews are perhaps the best example of this, having been criticised for generating a great deal of activity and evidence around the practices of institutions while lacking comparability in terms of outcomes, and without having a lasting impact on institutional practice.
External review informed by a fitness-for-purpose approach is typically understood as a systematic examination of an organisation’s activities in line with its objectives. Under this approach each institution is expected to have in place appropriate strategies and objectives in line with its aims, and appropriate processes for monitoring and evaluating aspects of its ‘quality cycle’. External audits conducted during the 1990s were supported by voluntary self-assessment on the part of providers, and quality audits of this style served as an effective mechanism for change. It was noted at the time that this holistic approach to self-review had the advantage of being able to involve much of the university in self-review activities, yielding a range of practical and strategic benefits (DETYA, 2000, p. 2). Self-review continued to comprise an important part of quality assurance activities during the 2000s, featuring as a central aspect of the quality framework supported by AUQA. Self-review not only enabled institutions to develop the means to report the kind of information required by an external quality agency, but also had the potential to support improvements independent of the direct intervention of government or an external agency.
External review has been held to stimulate debate on issues related to quality, to contribute to development of a more professional approach to administration and education support structures and to create new routines and systems for handling data and information on educational performance and quality (Stensaker et al., 2011, p. 465). Shah and Nair point to the way that external quality reviews have supported institutions in examining and monitoring processes in ways that they had not done before. They point to the impact of external quality audit on the way in which problem areas have been identified and addressed (Shah and Nair, 2011). Perhaps the single biggest contribution made by external quality audits has been where they have encouraged the development of a sustained culture of continuous improvement, and where they were an effective catalyst for the development of robust quality systems (Anderson et al., 2000; Shah et al., 2007; Adams et al., 2008, pp. 26–7; Shah and Nair, 2011). According to Shah et al. (2010), external quality audits in Australia have been particularly effective in improving internal quality management systems and processes in universities, embedding quality cycles in strategic planning frameworks and in informing the core activities of higher education providers (Shah and Nair, 2011, p. 143).
Stensaker et al. point out that on some views external quality assurance will only ever be related to structure and process, with little impact filtering through to the actual practices of institutions. They point to a need for a more refined understanding of the dynamics of organisational change in order for external review to be used to best effect (Stensaker et al., 2011). Further to this, Harvey (2002) is often quoted for pointing out that if quality monitoring is seen as an ‘event’ rather than a ‘process’, there is little likelihood of the event making much long-term impact. Rather, it is likely to lead to short-term strategies to improve performance on the measures used, and other strategies for ‘gaming the system’. The more quality assurance priorities become focussed on external requirements, the less lasting the benefits are likely to be (Harvey, 2002). Shah and Nair also found that external audits with an improvement-led culture reflected more positive results in terms of self-assessment, external peer review, improvements and follow-up, while audits with a compliance-driven culture were much less successful in engaging institutional stakeholders in quality and improvement (Shah and Nair, 2011, p. 141).
Reports of the relative success of external review in Australia contrast with some of the views reported about the UK experience of quality audit, where external reviews by the Quality Assurance Agency were seen by some stakeholders as a costly and unduly bureaucratic exercise. External review in this case was held to promote a culture of compliance, discouraging engagement with ideas around quality improvement (Shah and Nair, 2011, p. 142). This was attributed in part to overlapping and burdensome processes, competing notions of quality, a failure to engage learning and transformation and a focus on accountability and compliance rather than on continuous improvement and self-review (Shah and Nair, 2011, p. 142). Further comparisons would be useful in establishing whether there were in fact marked differences between Australian and UK experiences of external quality review, why this might be the case and the extent to which comparable factors were in play.
Finally, external review strategies are sometimes mistakenly held to carry the weight of their influence in their own right, through audit events alone. Viewed alongside the other strategies outlined above, it is easier to see how external review may work in concert with them to support and encourage the continuous improvement efforts of providers to best effect. The success of AUQA audits and other external reviews was, arguably, supported by a clear set of reporting protocols, be that in the public domain or in the quality portfolios used as the basis of review. Transparency was lent to the activities of providers through the publication of audit reports for each institution and the selective publication of evidence from educational profiles and institutional portfolios. In effect, therefore, the scope of external review is potentially very broad, limited largely by the kind of reporting requirements that typically form the backdrop for each review. The measure of success for external review may be found at least to some degree in the quality of publicly available information on the activities and performance of providers, as much as in the ‘style’ of monitoring, improvement and enhancement activity it has been found to promote within the institution before, during and after each audit event.
So has external review actually led to improvements in the quality of the student experience? It would be difficult to establish conclusively that any demonstrable improvements in the quality of the student experience came as a direct result of external audit. In other respects, however, the student experience of quality assurance represents a noteworthy mirror to quality assurance of the student experience, one that in Australia at least reflects mixed results, at best (Patil, 2007; Palmer, 2008; Gvaramadze, 2011). Based on the European experience, students as a group seem less convinced about the positive effects of the now seemingly endless evaluation activities they are asked to participate in (Stensaker et al., 2011, p. 476). Further to this, the somewhat artificial construction of ‘the student voice’ and ‘the student experience’ in current higher education policy and quality assurance circles suggests the risk of these succumbing to the marketing and myth-making activities of universities rather than being legitimate matters for inclusion in the scope of continuous improvement efforts. A more optimistic take may be that featuring as part of the ‘gloss’ of what universities promote is precisely the path to being ‘in scope’ for continuous improvement purposes.
External quality audit has proved to be an effective means for bringing a broad range of activities and outcomes into the scope of review where supported by quality strategies comparable with the aims of review. Despite the limitations and shortcomings noted above, external quality audits can provide an effective vehicle for engagement on quality issues. They also provide a stimulus for innovation in quality assurance and quality enhancement activities. Finally, they offer an opportunity for institutions to demonstrate that the activities and outcomes they support are at or above a reasonable standard, and that they have identified and prioritised resources to address areas where they may be underperforming.
It has been proposed here that external review has made a difference in enhancing the higher education student experience in Australia. Compared with other system-level quality strategies, improvements yielded through external review have included expanding the scope of activities worthy of consideration for continuous improvement efforts, improved transparency in the activities and outcomes supported by institutions and qualitative improvement in the approach taken to continuous improvement within institutions. There are clearly strengths to the fitness-for-purpose approach supported by external audit that are worthy of consideration under a regulatory framework, compared with other system-level quality strategies, and it is not necessarily the case that one need come at the expense of the other.
Merits of external review include effectively encouraging an expansion in the scope of activities worthy of consideration for quality assurance purposes and promoting greater transparency in quality assurance activity and in the broader activities and outcomes supported by institutions. Finally, and perhaps most importantly, external review appears to have been effective in many cases in promoting a culture of self-review on the part of higher education providers in Australia. Overall, external review quality strategies have yielded the greatest improvement in quality in Australian higher education in cases where they have served to promote a culture of innovation and improvement in quality-enhancement activities. They also appear effective in supporting the constructive engagement of stakeholders on quality issues. They have also led to improved transparency in demonstrable evidence not just of outcomes, but also of the improvement and enhancement activities of institutions, supporting an environment where good practice is not only acknowledged but shared.
Is it possible to develop a system-level quality strategy that effectively integrates regulation and standards with a fitness-for-purpose approach? This question remains largely untested in the context of Australian higher education. The answer perhaps lies in removing some of the mis-characterisation of each approach in terms that imply it is opposed to the others. Instrumental in supporting positive outcomes in a regulatory environment will be recognition that each new approach exists against a background of former system-level quality initiatives. While each may be found to have its own strengths and weaknesses, each iteration has more or less sought to build on the strengths of previous approaches, in addition to seeking to address their shortcomings. We should hope that the current iteration is no different.
Alcock, C.A., Cooper, J., Kirk, J., Oyler, K. The Tertiary Student Experience: A Review of Approaches Used on the First Cycle of AUQA Audits 2002–2007 (No. 20). Melbourne, Australia: Australian Universities Quality Agency; 2009.
Anderson, D., Johnson, R., Milligan, B. Quality Assurance and Accreditation in Australian Higher Education: An Assessment of Australian and International Practice. Canberra, Australia: Evaluations and Investigations Programme, Higher Education Division; 2000.
AUQA. Australian Universities Quality Agency (Archive). 2011. Available from: pandora.nla.gov.au/pan/127066/20110826–0004/www.auqa.edu.au/qualityaudit/ [Accessed February 2012].
Baird, J. Quality in and around universities. Paper presented at the Regional Conference on Quality in Higher Education, 10–11 December, Kuala Lumpur; 2007. Available from: http://www.auqa.edu.au/files/presentations/quality_in_and_around_universities.pdf [Accessed February 2012].
Baldwin, P. Higher Education Quality and Diversity in the 1990s: Policy Statement by the Hon. Peter Baldwin, MP, Minister for Higher Education and Employment Services. Canberra: Australian Government Publishing Service; 1991.
Bradley, D., Noonan, P., Nugent, H., Scales, B. Review of Australian Higher Education: Final Report. Canberra, Australia: Department of Education Employment and Workplace Relations, Commonwealth of Australia; 2008.
Chalmers, D. A Review of Australian and International Quality Systems and Indicators of Learning and Teaching. Sydney, Australia: Carrick Institute for Learning and Teaching in Higher Education; 2007.
Committee on the Future of Tertiary Education in Australia. Tertiary Education in Australia: Report of the Committee on the Future of Tertiary Education in Australia to the Australian Universities Commission. Canberra, Australia: AGPS; 1964.
DETYA. Quality of Australian Higher Education: Institutional Quality Assurance and Improvement Plans for the 2001–2003 Triennium (DETYA No. 6666.HERC01A). Canberra, ACT: Department of Education Training and Youth Affairs, Commonwealth of Australia; 2001.
Higher Education Council. The Promotion of Quality and Innovation in Higher Education: Advice of the Higher Education Council on the Use of Discretionary Funds. Canberra, Australia: Australian Government Publishing Service; 1995.
Hudson, H. Report of the Discipline Review of Computing Studies and Information Sciences Education (No. 92/190). Canberra, Australia: Information Industries Education and Training Foundation; Evaluation and Investigations Program; 1992.
Linke, R.D. Performance Indicators in Higher Education: Report of a Trial Evaluation Study Commissioned by the Commonwealth Department of Employment, Education and Training (Vol. 1: Report and Recommendations). Canberra, ACT: Performance Indicators Research Group; Department of Employment, Education and Training; 1991.
Mathews, R., Jackson, M., Brown, P. Accounting in Higher Education. Report of the Review of the Accounting Discipline in Higher Education. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth of Australia; 1990.
Palmer, N. Development of the University Experience Survey: report on findings from secondary sources of information. In: Radloff A., Coates H., James R., Krause K.-L., eds. Report on the Development of the University Experience Survey. Canberra, Australia: Department of Education, Employment and Workplace Relations, 2011.
Pearce, D., Campbell, E., Harding, D. Australian Law Schools: A Discipline Assessment for the Commonwealth Tertiary Education Commission. Canberra, Australia: Evaluations and Investigations Programme, Commonwealth Tertiary Education Commission; 1987.
Ramsden, P. Report on the Course Experience Questionnaire trial. In: Linke R.D., ed. Performance Indicators in Higher Education: Report of a Trial Evaluation Study Commissioned by the Commonwealth Department of Employment, Education and Training (Vol. 2: Supplementary Papers). Canberra: Australian Government Publishing Service, 1991.
Shah, M., Roth, K., Nair, S. Improving the quality of offshore student experience: findings of a decade in three Australian universities. Proceedings of the Australian International Education Conference (AIEC), Sydney, Australia, 12–15 October; 2010. Available from: http://www.aiec.idp.com/pdf/Improving%20the%20Quality%20of%20Offshore%20Student%20Experience_PeerReviewed.pdf [Accessed 29 October 2012].
Shah, M., Skaines, I., Miller, J. Measuring the impact of external quality audit on universities: can external quality audit be credited for improvements and enhancement in student learning? How can we measure? Proceedings of AUQF 2007: Evolution and Renewal in Quality Assurance; 2007:136–142.
Stensaker, B., Langfeldt, L., Harvey, L., Huisman, J., Westerheijden, D. An in-depth study on the impact of external quality assurance. Assessment & Evaluation in Higher Education. 2011; 36(4):465–478.
1Reports of these audits are archived at http://pandora.nla.gov.au/pan/127066/20110826-0004/www.auqa.edu.au/qualityaudit/index.html (AUQA, 2011).