Anonymous #06


Name (Individual/Organisation)

Anonymous #06

Responses

Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?

For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?

If so, what scope, functions and role?

If not, please suggest alternative ways to clarify and define these functions.

Legislation is hard to change. I would suggest that the legal framework should not be too narrowly defined, as a lively research environment requires flexibility and the capacity to respond to current events and needs. However, I think that support for basic research should be enshrined as the critical function and purpose of ARC funds. Industry engagement is important, but Australian industry needs to step up and invest more in R&D. The state should prioritise support for basic research, particularly that which cannot be funded in other ways.

I think the primary role of the ARC in relation to research funding should be ensuring the funding of good researchers regardless of where they work. Direct funding of institutions would probably be a more rational model, except for the fact that the money has historically flowed towards the ‘big’ research institutions (scale attracts scale). Researchers at all our institutions produce remarkable research and contribute to Australian society and the economy, and the ARC at least allows for that work to be supported and recognised. It also encourages cross-institution collaboration, which is critical.

However, I think that the ARC should largely not be directing the ‘ideas’ that are funded; this should flow from the academy and from society’s interests.

Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?

If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model.

Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.

Any governance model where power seems to concentrate in an individual (like a CEO or Minister) gives the appearance of a lack of fairness and transparency, and so a board seems more equitable. However, I suspect some of this appearance of unfairness is largely due to how this role is positioned in relation to peer review, and greater delimitation of powers would probably be more important here. I don’t think that a board that could actively intervene in who got funded would be received better.

Overall I think that the board model is better in giving confidence that academic concerns will shape the operation of the ARC.

Q3. How could the Act be improved to ensure academic and research expertise is obtained and maintained to support the ARC?

How could this be done without the Act becoming overly prescriptive?

I think this is an example where less is more. A clear statement that funding decisions should be guided by academic and research expertise (not politics), and that places emphasis on the value of that advice, would suffice. You could include a statement that makes a commitment to the separation of universities and the state as a fundamental principle of academic freedom – the UNESCO declaration on academic freedom (1997) has some useful statements in that respect.

Q4. Should the ARC Act be amended to consolidate the pre-eminence or importance of peer review?

Please provide any specific suggestions you may have for amendment of the Act, and/or for non-legislative measures.

Enshrine academic freedom as a principle.

Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?

One of the central roles that the university plays in a liberal democracy is as a critic – a critic of the state, of business, and of society. I would argue that academics, certainly, and much of the public expect the university to step into that role, and that its willingness to do so is viewed as a measure of the health of the sector and of democracy more widely. The current relationship between the sector, the state and business has severely restricted the capacity of the university to enact this role, and it is ultimately this that is acting as the long-term driver of the decline in the sector’s social licence to operate. If we want to improve this, then we need to open up greater space for the university to ‘be’ the university of the public imagination – the place of brave critics who hold our society to account, who demand change based on the science and put that before politics, who think about truth and justice before keeping their stakeholders happy. This means adequate funding, but it also means placing some very clear boundaries between the university and its other stakeholders such that we protect its independence.

The Linkage scheme poses a critical risk to public confidence in science, and that is rarely taken into account. Big tobacco’s funding of cancer research was an ethical disaster, but it also fundamentally undermined the reputation of scientific research with the public. Universities and researchers do work hard to place barriers between themselves and their partners when conducting research, but even when they do this well, it is largely invisible to the public. How do we convince the public of the validity of research that emerges from these relationships? Perhaps more importantly, it is very difficult for researchers to produce results that undermine their industry relationships. If they wish for their contracts to continue and for research partnerships to exist in the future, they need to produce results that keep industry partners happy. That is a huge amount of pressure to place on individual researchers, not least given that so much of the research sector is on precarious contracts – they are being asked to give up their livelihoods to follow the research. One good scandal is all it would take to seriously undermine not only the Linkage scheme but also the work we do with industry. For this reason, I think this scheme should not be normalised as a way of funding research, and researchers and industry partners should be asked to articulate the ways in which they will support the independence of the research and ensure the validity and robustness of results.

The relationship between the university sector and the state, particularly defence, also needs further interrogation. The university sector should be able to freely criticise government policy and practice without worrying about being punished by an incumbent government. The capacity for a minister to arbitrarily remove or provide funding with little transparency is a very large disciplinary stick, and it does not follow best practice in other democratic nations. That Defence contracts have so little transparency (for obvious reasons) is also of significant concern, because it makes it harder to demonstrate to the public that there is appropriate distance between the university and the state, as would be expected in a thriving democracy. That this money now plays such a large role in the research sector is a critical issue, because Defence research happening in one part of the university (e.g. the sciences) becomes a dampener on academic freedom in other parts (e.g. political science) that take on the role of the critic. I would suggest that attention should be paid to the funding mix in particular institutions, such that the weight of one stakeholder cannot act to shape or obstruct the democratic functions of the university as a critic. It also seems likely that quite a lot of Defence funding could be channelled through more transparent funding schemes without harming national security.

Finally, a lot of the emphasis on social licence has been placed on the idea that the public need to see real-world impact – solutions to critical contemporary problems – to believe something deserves money. But I am not sure this is the case. Much of the public values new knowledge for its own sake, as an investment in culture, society and the future, and recognises that new inventions come from strange places. Moreover, a lot of the noise on this comes from conservative critics who fail to recognise that a ‘solving critical problems’ framework will never prioritise conservative perspectives on knowledge-making (as you need to frame something as a problem that needs fixing). A less reductive framing of knowledge-making might actually see more research that aligns with conservative values, and so enable a greater breadth of political perspectives in the academy.

Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?

I agree that all the listed examples are areas of concern. My biggest gripe, however, is timescale. I know that other countries also have long timescales, but it’s really hard to plan your life when you need to wait a year to find out whether you’ve got a fellowship or grant. It is not clear why it needs to take so long. I note that some countries do a selection sift before peer review (e.g. the college ranks applications so that only the top set go for review), which removes the burden of so much review. This isn’t quicker, but it is less burdensome. I would prefer a quicker process, however, so that we could turn things around faster.

The ARC scheme gives out relatively small amounts of money when compared to its international competitors. The scope and scale of what is asked for then appears unnecessary; the application is longer and more burdensome than elsewhere, and also relative to the investment being asked for. I would also note that the ‘finish’ of ARC applications is much higher than those I review internationally (and I review for five countries and sit on two colleges of experts), which suggests we are spending an inordinate amount of time on polish that other schemes just don’t expect. I suspect this is because of the pressure on the schemes and possibly a high level of compliance requirements (I know, for example, that in the UK if you do something trivial that makes you non-compliant, like font size, they just send it back for you to fix and resubmit). It could be that we just relax a lot of compliance requirements that don’t add much value.

Q7. What improvements could be made:

(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?

(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?

Please include examples of success or best practice from other countries or communities if you have direct experience of these.

More deadlines throughout the year so that there isn’t such a long wait to have to apply and so that there is less pressure on everyone applying at the same moment.

Quicker turnaround of decision-making.

Shorter applications, especially for small amounts of money.

Fewer rules, so less pressure on compliance checking.

Removal of the benefit section/NIT – the UK has done this and many other countries have a much lower burden here.

I am not sure legislation is the best place to dictate the details.

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

The main benefit of ERA/EI without a link to funding is to enable and support research in institutions and areas that would otherwise be allowed to deteriorate, and that is not an insignificant concern in an economy with limited support for research, where areas like the humanities don’t affect things like international rankings. I don’t believe that there is a cheap way to do this, though; it is extremely burdensome and doesn’t achieve much.

Data-driven approaches are a problem. Many areas, not least the humanities, are very poorly indexed, and their impact window often falls outside the three- or five-year windows of citation counts. This means that what is measured is of poor quality and not a good guide to research excellence. This approach is also bad for innovation. Citation counts are higher in densely populated fields, and that generally means fields that are long established. Newer fields, especially in their early stages, are massively disadvantaged here, and that is anti-innovation and encourages conservative research practice. Moreover, an emphasis on publication in general has led to significant academic integrity issues as people try to secure their careers in a competitive job market. The ARC should consider the impacts of its accountability practices on behaviour and the direction of research.

There are also a large number of bodies already measuring academic success (e.g. THE, QS), whose rankings are already being massively gamed by universities. Do we really need to spend even more money on this? Surely accountability could be measured in the general ‘health’ of our institutions and viewed from a rounded perspective that considers the general mission of the institution?

You could measure it through ARC applications and their quality, using the scoring system that already exists and mapping across FOR codes – use your data twice. This would encourage universities to support applications across their fields and to provide time for that, which would require internal investment in research areas. If it’s just a proxy, it’s as good as any.

Q9. With respect to the ARC’s capability to evaluate research excellence and impact:

(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?

(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?

(c) Would a data-driven methodology assist in fulfilling this purpose?

I think that this seriously underestimates how poorly metrics sell to the public (and to researchers, who are highly sceptical of data that always acts as a proxy for excellence and never excellence itself). The average member of the public has relatively poor numeracy and so rarely appreciates enumerated success measures. The decline of social licence has mapped quite closely onto the rise of public management and accountability measures, in no small part, I would suggest, because the language of metrics turns people off. No one cares how many publications an institution produces, and very few people know what a citation count is, let alone whether it is high or low. Measuring things in terms of a FOR code is probably as abstruse as it’s possible to get – it’s an administrative grouping and has very little relevance beyond that. Indeed, this is one of the reasons we have had pressure to prove impact. People want to hear about how someone has cured cancer or discovered some new element. They want to hear about ideas and their impacts, how they enrich what students learn and how society evolves, or how they might help their business, and for most people this becomes more important than scale (even though scale drives impact). We need to turn back to the actual research, to talking about what researchers achieve and what they are restricted from achieving by various constraints. And we need to recognise that sometimes local research that has no international market will be the most significant for an Australian audience, because it is what they need to solve their local problems. None of our evaluation capability produces these types of information or makes what we do visible to non-experts (and by that I mean the only people who can interpret this data are senior Australian academics – not even our PhD students can interpret this stuff).

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

We should change the name of the Discovery programme. The Doctrine of Discovery was the legal doctrine that allowed invaders to treat Indigenous land in Australia and across the globe as ‘terra nullius’. (Discovery was also the name of one of the ships on Captain Cook’s final voyage, on which he was killed.) There is now a significant academic literature by Indigenous scholars that critiques a model of science framed through ‘discovery’, arguing that it replicates a Eurocentric and imperialist mindset in knowledge-making – see Linda Tuhiwai Smith, Decolonizing Methodologies, as an example. Moreover, it is a model of being in the world that is now under critique in public discourse. See, for example, internet memes that say ‘I didn’t steal your meme; I discovered it’, and also this song (which is just fab): https://www.youtube.com/watch?v=9Jd3JZb7WVA If we are serious about decolonising our institutions and creating a space that is safe for Indigenous scholars to work, we cannot do so under a racist umbrella.

Submission received

21 November 2022

Publishing statement

Yes, I would like my submission to be published but my and/or the organisation's details kept anonymous. Your submission will need to meet government accessibility requirements.