Dr Ann Sardesai

Name (Individual/Organisation)

Dr Ann Sardesai

Responses

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

"Former Australian Research Council (ARC) chief executive Margaret Shiel, who has been enlisted to lead a review (i) of the organisation, appears likely to recommend the scrapping of the Excellence in Research for Australia (ERA) (ii) ."

We wholeheartedly support the termination of the ERA system and estimate that scrapping it would save the sector approximately $250 million annually, accounting for academic time, professional staff time, internal processes and submission requirements, as well as the cost of running the ERA process within the ARC.

"There is no doubt that ERA has been tremendously effective in shifting the focus of Australian research from an emphasis on quantity to quality of outputs, particularly in its early iterations, but it can be argued that ERA has achieved its initial purpose and that the time and resources involved may be better re-directed to other evaluation needs(iii) .

Q8. Concerning ERA and EI:

We support any changes to the evaluation system that remove the current ERA and EI exercises.

Our experience is that putting together submissions for the periodic research evaluation, the Excellence in Research for Australia (ERA), is time-consuming, and that the process leans heavily on peer review rather than on independent measurement. While the results of the exercise do not directly affect university finances (unlike in the UK), they are taken seriously by vice-chancellors and deans as prestige indicators. Consequently, university executives continually set ambitious targets to improve their scores in each discipline.

Many academics regard ERA evaluations with trepidation: the benefits of a good score for their discipline are unclear and possibly non-existent, while the costs of not meeting the targets are potentially severe and even unfair, including the removal of internal research funding allocations, reduced research allocations in workloads, and performance management of academics deemed to be underperforming in research(iv).

Our published research highlights the unintended consequences flowing from the ERA exercise. We briefly outline some of our findings below:

Martin-Sardesai (2016) identifies the critical role played by a vice-chancellor (VC) in translating his vision of a research culture into performance measures designed to interpret government policy, encourage research and provide a means by which research could be measured and assessed. The VC attained good results in the university's first research assessment exercise, ERA 2010, and was acknowledged in 2012 with a significant performance bonus on top of a $1,000,000 salary:

"I have been VC of UniA since 2006. As the chief executive of the university, I have overall responsibility for its academic, research, administrative, financial and development strategies. Strategically, our goal has been to turn UniA into an internationally recognised research university."

However, the perceptions of individual academics about the performance measures developed within the university to meet government research assessment requirements were different (Martin-Sardesai and Guthrie, 2016). Academics responding to a survey on the effects of research performance measures reported increased stress levels and decreased job satisfaction. Performance measures within universities are intended to improve and develop the performance of individual academics, motivating them to acquire new skills and perspectives on research and teaching (ter Bogt and Scapens, 2012). However, the authors found that performance measures were perceived as being used to assess individual academics so that their contributions would secure a strong ERA outcome, rather than as a development tool to assist them.

In investigating the organisational change undertaken by an Australian university in anticipation of, and in response to, ERA 2010, Martin-Sardesai et al. (2017b) reveal that the university prepared for ERA through the appointment of new senior leadership, a new vision, a restructure of faculties and departments, and changes to its research performance measurement system (PMS). The significant transformation of this public university began not with the implementation of ERA in 2010, but with the appointment of a new VC in 2006 and his anticipation of ERA based on his UK experience. Changes were made to the university's vision, mission and internal structures, which led to changes in strategy, budgets and funding, and these in turn drove changes to the performance measures applied to individual academics. Academics voiced concern that the concentration on the ERA system inhibited and constrained their work. They also perceived that the focus on research had adversely affected their teaching, and they were critical of the effects of these performance measures on their day-to-day work.

Performance management systems have been an inevitable consequence of the development of Government Research Evaluations (GREs) of university research and have profoundly affected the working lives of academics. In tracking the development of GREs by critically evaluating their adoption in the UK and Australian higher education sectors (HES), Martin-Sardesai et al. (2017a) show that over 25 years of GREs, and with the globalisation of higher education, UK and Australian public universities have experienced sustained change. There is general agreement among academic observers that governments and institutional management have imposed new centralised and bureaucratic controls on academic labour. The traditional values of professional autonomy and academic freedom are increasingly subordinated to those of the so-called "market". The outcome is a degradation of academic work and the alienation of academics, who are forced continuously to produce more outputs at less cost, and who thereby lose control of the product of their labour in addressing the imperative of bailing their universities out of what has been described as a 'government-induced and senior management-led mess' (Harley, 2000, p. 575).

In examining the perceptions of academic human capital (AHC) towards a university's research PMS developed in response to the national research assessment exercise, Martin-Sardesai and Guthrie (2018) provide empirical evidence informing governments and policymakers of the unintended consequences of ERA. Their study reveals that while senior management's efforts were rewarded with good ERA outcomes for UniA in the 2010 evaluation, academics' perceptions were different: they responded negatively to the use of the ERA system as a PMS to manage AHC.

The ERA system has driven a continuous endeavour to account for research over recent decades (Martin-Sardesai and Guthrie, 2018). PMSs have colonised vast segments of academia and increasingly regulate the conduct of research in the Australian HES. Despite sensitivity to the potentially detrimental effects of an excessive emphasis on PMSs, individual academics are forced to follow the rules of the game (Martin-Sardesai and Guthrie, 2016). While it may be reasonable to believe that institutional incentives are required to increase research productivity, an overemphasis on PMSs has stifled innovation and academic freedom, for instance by prescribing the journals in which academics must publish. Furthermore, universities and academics have gamed the system to portray themselves as achieving their performance targets, a recognised dysfunctional effect of PMSs (see Martin-Sardesai, 2016; Martin-Sardesai and Guthrie, 2016; Martin-Sardesai et al., 2017a; Martin-Sardesai et al., 2017b).

There is also evidence questioning the accuracy and appropriateness of the current assessment process, especially for those disciplines where peer review rather than metrics is used to make the assessments, as the cases below illustrate.

Our scepticism about the ERA peer-review process stems from observing an apparent disconnect between increases in the quantity and quality of research outputs, including publications and income, and the scores awarded to our accounting disciplines.

In this contribution, we provide two case examples drawn from the accounting discipline that raise further concerns about this disconnect between research outputs and quality and the scores awarded through the ERA peer-review process. O'Connell et al. (2020) demonstrate this empirically(v).

Their study examined how the outputs and foci of research in elite accounting disciplines changed over 16 years (2004 to 2019). They analysed all papers published across those years in 20 highly ranked accounting journals (ranked A* or A under the Australian Business Deans Council journal rankings) by Australian accounting academics from disciplines rated highly (three or better) in the research assessment exercise. We then compared the results for this group against the same publication list for two universities, used as case studies in the accounting discipline, that were not rated as "world class" (meaning they received an ERA score below three).

Our primary research findings can be summarised as follows:
* The results exhibit little change across the four ERA assessments for the accounting discipline, with between 11 and 13 universities rated three or above in each round; the vast majority were the same universities.
* The two case study universities (the Macquarie and RMIT accounting schools), which received scores of two across all four rounds, compared favourably with the higher-rated universities. Specifically, on total outputs in the same sample of journals over the period, the two case study universities would have ranked in the middle of the higher-rated group (seventh and sixth places, respectively).
* Research outputs increased significantly across the period for the two case study universities, but their ratings did not change.
* Further analysis of per capita research productivity placed Macquarie and RMIT at the higher end of the higher-rated sample (fifth and fourth, respectively) for the same set of journals.

In looking for possible explanations for O'Connell et al.'s (2020) findings, further analysis was conducted to ascertain what might explain this apparent anomaly over time. We discovered that the two case study disciplines differed markedly from the highly-rated disciplines in the type of research that dominated their outputs. Specifically, while publishing in the same journals, the highly-rated universities were more likely to employ quantitative and positivist research approaches than the two lower-rated case studies.

Specifically, the percentage of quantitative and positivist papers ranged between 57 and 88 per cent for the highly-rated group, whereas the equivalent figures for the two case studies were 41 per cent for Macquarie and just 5 per cent for RMIT. Instead, the two lower-rated universities focused on qualitative approaches drawn from the sociological, interpretative and critical traditions.

The findings also showed that the case study universities adopted a more eclectic set of research topics than the highly-rated disciplines. The highly-rated universities tended to focus on traditional areas of accounting research, such as financial accounting, managerial accounting and auditing, in many cases using North American data (de Villiers et al., 2019). In contrast, the two case study universities published in social and environmental accounting, higher education accounting, performance management control, business ethics, accounting history, and accounting education and professionalisation, mainly with an Australian and international focus (Guthrie et al., 2019).

So, what can we make of these findings?

In summary, the ERA peer-review process has a strong bias in accounting towards awarding high ratings to those sandstone universities (whose academics mainly sat on the ERA panel) that conduct quantitative and positivist research on mainstream topics. Also, ERA scores are "sticky": significant increases in the quality and quantity of research outputs will not necessarily improve a university's scores(vi).

These results are perplexing considering that The Australian newspaper rated RMIT University as "the top publishing Australian school in the Accounting and Taxation Field" in 2019 and named one of its professors the top accounting professor(vii). These awards by The Australian were based on research metrics in the most highly cited journals in the field, not peer review. From 2020, the top publishing Australian school in the Accounting and Taxation field was Macquarie University, and one of its professors was named the top accounting professor in 2020 and 2022. However, the ERA panel rated both RMIT and Macquarie at only 2 throughout the period!

This contribution is not a complaint about how the ERA appeared to get our ratings wrong. Instead, it shows how the peer-review practices used to judge the performance of academics and their publications are, at the least, potentially biased and fragile. The situation is not helped by a substantial lack of transparency in the ERA process, under which there is no requirement to justify the scores awarded to university disciplines externally. The ARC's practice is to release a report at the end of each round providing aggregated data for each discipline and the scores awarded to each university discipline, but no rationale is given for the score assigned to a given university. In our view, the process is akin to a student submitting an assignment and receiving only a mark, with no written comments, alongside a report on how the class performed overall. No university would condone or support such a practice(viii).

References
de Villiers, C., Dumay, J. and Maroun, W. (2019), "Qualitative accounting research: dispelling myths and developing a new research agenda", Accounting & Finance, Vol. 59 No. 3, pp. 1459-1487.
Guthrie, J., Parker, L. D., Dumay, J. and Milne, M. J. (2019), "What counts for quality in interdisciplinary accounting research in the next decade: A critical review and reflection", Accounting, Auditing & Accountability Journal, Vol. 32 No. 1, pp. 2-25.
Harley, S. (2000), "Accountants divided: Research selectivity and accounting labour in UK Universities", Critical Perspectives on Accounting, Vol. 11 No. 5, pp. 549-582.
Martin-Sardesai, A. (2016), "Institutional entrepreneurship and management control systems", Pacific Accounting Review, Vol. 28 No. 4, pp. 458-470.
Martin-Sardesai, A. and Guthrie, J. (2016), "Australian academics' perceptions on research evaluation exercise", Amity Journal of Management Research, Vol. 1 No. 2, pp. 1-16.
Martin-Sardesai, A. and Guthrie, J. (2018), "Accounting for construction of research quality in Australia's research assessment exercise", in S. Widener, M. Epstein and F. Verbeeten (Eds), Performance Measurement and Management Control: The Relevance of Performance Measurement and Management Control Research, Emerald Group Publishing Ltd., UK, pp. 221-241.
Martin-Sardesai, A., Irvine, H., Tooley, S. and Guthrie, J. (2017a), "Government research evaluations and academic freedom: a UK and Australian comparison", Higher Education Research & Development, Vol. 36 No. 2, pp. 372-385.
Martin-Sardesai, A., Irvine, H., Tooley, S. and Guthrie, J. (2017b), "Organisational change in an Australian university: Responses to a research assessment exercise", The British Accounting Review, Vol. 49 No. 4, pp. 399-412.
ter Bogt, H. and Scapens, R. W. (2012), "Performance management in Universities: effects of the transition to more quantitative measurement systems", European Accounting Review, Vol. 21 No. 3, pp. 451-497.

i (https://www.timeshighereducation.com/news/australia-postpones-research-assessment-exercise)
ii (https://www.timeshighereducation.com/news/excellence-research-australia-2019-results-announced)
iii ERA, Review of the Australian Research Council Consultation paper, 10 November 2022.
iv Martin-Sardesai, A., Guthrie, J. and Parker, L. (2020), "The neoliberal reality of higher education in Australia: how accountingisation is corporatising knowledge", Meditari Accountancy Research, https://doi.org/10.1108/MEDAR-10-2019-0598
v O'Connell, B., De Lange, P., Sangster, A. and Stoner, G. (2020), "Impact of Research Assessment Exercises on Research Approaches and Foci of Accounting Disciplines in Australia", Accounting, Auditing & Accountability Journal, Vol. 33 No. 6, pp. 1277-1302, https://doi.org/10.1108/AAAJ-12-2019-4293
vi Guthrie, J., Parker, L., Dumay, J. and Milne, M. (2019), "What counts for quality in interdisciplinary accounting research in the next decade: A critical review and reflection", Accounting, Auditing & Accountability Journal, Vol. 32 No. 1, pp. 2-25.
vii The Australian (2019), "Business, Economics & Management: Australia's Research Field Leaders", 25 September.
viii Guthrie, J. and O'Connell, B. (2022), "Time to re-think peer review in ERA", Campus Morning Mail, 11 April 2022, https://campusmorningmail.com.au/news/time-to-re-think-peer-reports-in-era/

Submission received

02 December 2022

Publishing statement

Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission. Your submission will need to meet government accessibility requirements.