Thomas Stace


Name (Individual/Organisation)

Thomas Stace

Responses

Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?

For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?

If so, what scope, functions and role?

If not, please suggest alternative ways to clarify and define these functions.

[Note: I have written a narrative document, attached separately as a PDF, that puts these comments in context.]

The ARC is the central basis on which many researchers build their research careers. It is also a substantial source of funding for the university sector. So it is crucial for the ARC to communicate and enable the outcomes for which the Federal Government provides funding. The ARC has historically maintained a reputation for funding basic research, but this reputation has become clouded in recent years by a growing emphasis on impact, engagement and outcomes.
This is a big problem. Basic research has long-term benefits that are extremely hard to anticipate.
The ARC needs to embrace this dichotomy:
If an ARC funding program is for basic research, then the national benefit or public interest is essentially irrelevant, and should neither be assessed nor raised in the application process.
If a program is applied, with anticipated public value, then it is the Federal Government’s obligation to identify this in its policy settings, and set up systems to fund these effectively.
Further, when there are clear national-interest outcomes, whether economic, public or cultural, the ARC typically does not receive funding from the relevant government organ to support this activity; instead these activities necessarily displace other research directions.

Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?

If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model.

Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.

The ARC has no mechanism for researchers to engage directly with it. The approved pathway for researchers to approach the ARC is via university research offices, which represent the interests of the institutions. This can create clear institutional conflicts of interest where the interests of the university are not aligned with those of the researchers or the Federal Government.
More broadly, the inability for ordinary researchers to engage with the ARC makes it opaque. This engenders distrust, degrades confidence, and makes the ARC unresponsive to research needs.
The ARC could adopt a variety of models for managing direct engagement with the researchers that it funds. For example, program managers who have personal engagement with specific projects are common in US funding models. These individuals have domain knowledge, and are important sources of information on agency interests, priorities and resources, allowing them to proactively coordinate research directions. Alternatively, an ombudsman could be established so that researchers can provide feedback to the ARC in cases where maladministration is suspected.
Whichever model is adopted, the ARC needs to facilitate direct engagement with researchers as a means of improving transparency and confidence in its administration.

Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?

My specific experience is in attempting to secure economic benefit from my research. This is affected by the ARC's legal assignment of Intellectual Property (IP) to universities, the ARC's National Principles of Intellectual Property Management for Publicly Funded Research, and the disparate practices at universities around IP management.
In my own experience (described in part in my submission to the University Research Commercialisation Scheme consultation, see page 1 therein), universities are not incentivised to behave in a way that enhances Australia's economic benefit from research commercialisation, for example, acting to prevent the formation of potential startup companies based on ARC-funded research. The ARC's National Principles espouse that, in order to "foster the most beneficial use of Australian research and development to secure value for industries, government, researchers and the community", universities that receive ARC funds will:
“foster the most valuable use of this IP by … making the IP openly accessible through licensing and accessibility arrangements which allow for its use”
Provide “means to help researchers identify IP that should be protected and/or commercialised, or IP that could benefit innovation/the economy of Australia by being made freely available”
“Define the ways in which benefits from the development and exploitation of the IP will be allocated…”
Provide “guidance in relation to potential conflicts concerning IP management…”
Provide “assistance to researchers in fulfilling their obligations and responsibilities as well as rewarding and encouraging their participation in any subsequent exploitation process…”
Provide “support for researchers so that they can recognise when their discoveries may provide benefit through promotion and open dissemination or when they may have potential commercial or other public value”
Provide “advice to the creators of the IP on the options available for either commercialising the IP”
These principles are admirable, but are essentially self-assessed for compliance by each university, being “approved by the institution's governing body”.
In my experience, the practices that Australian universities adopt are largely undocumented in their policies and procedures, and are often contrary to the National Principles.
Certainly, there is a strong incentive for any given institution to preference its own short-term interests (e.g. cash flow from IP licensing) over the longer-term economic benefits (e.g. formation of new research-intensive businesses) that the ARC is attempting to promote through the National Principles. In some cases, these practices are directly antithetical to the national interest.

Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?

The ARC serves a critical function in the Australian research sector. It is vital that this function is performed efficiently, effectively, and ethically.
Efficiency: In the consultation paper, the ARC describes the imperative to be efficient. However, as in engineering, the notion of efficiency depends on choices about the numerator, for example the quantity of funds under management, and the denominator, for example the cost of administration. If the denominator is the direct administrative cost of running the ARC, then the efficiency of the organisation is extremely high, since these costs are small compared to the funds disbursed.
However, the ARC relies on very substantial work products of parties external to the ARC itself, such as:
University research offices, who spend substantial resources on application compliance checking, grant-writing assistance, and grant management obligations.
External reviewers, who provide detailed assessments of their peers’ proposals. These consume significant amounts of time, and represent an opportunity cost since they compete with other duties such as teaching, service, supervision, and indeed actual research.
The College of Experts, who I understand are not remunerated for their service.
Grant applicants themselves, who spend significant amounts of time applying for grants with low probabilities of success.
Relative to the ARC, these are externalities, but they do represent real opportunity costs to the research community. If these externalities were included in the efficiency calculation, the real efficiency of administering the ARC would change.
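To illustrate how sensitive the headline figure is to this choice, the sketch below computes the ratio of funds allocated to administrative cost, first counting only the ARC's direct costs and then including the externalised costs described above. All dollar figures in the sketch are hypothetical placeholders, not ARC data; the point is the structure of the calculation, not the numbers.

```python
# Illustrative sketch only: every figure below is a hypothetical placeholder,
# not ARC data, chosen simply to show how the "efficiency" ratio moves once
# externalised costs are counted in the denominator.

funds_allocated = 800e6        # hypothetical: annual funding allocated ($)
arc_admin_cost = 30e6          # hypothetical: ARC's direct administrative cost ($)

# Hypothetical externalised costs borne outside the ARC itself
research_office_cost = 50e6    # compliance checking, grant-writing support, grant management
reviewer_time_cost = 40e6      # unpaid external assessor and College of Experts time
applicant_time_cost = 120e6    # researcher time spent preparing applications, most unsuccessful

narrow_efficiency = funds_allocated / arc_admin_cost
total_admin = arc_admin_cost + research_office_cost + reviewer_time_cost + applicant_time_cost
broad_efficiency = funds_allocated / total_admin

print(f"Narrow efficiency (funds per $ of ARC admin cost):   {narrow_efficiency:.1f}")
print(f"Broad efficiency (funds per $ of total admin cost):  {broad_efficiency:.1f}")
```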
There are many opportunities for improvement; however, these cannot all be assessed by an external party. I will address some examples here:
Rejoinders: there is a widespread belief amongst my peers that, in the review process, the rejoinder to referees' comments is extremely inefficient. It is opaque, and it is received wisdom that the external referees' scores (which are not known to applicants) count for relatively little in the internal ARC evaluation formula. The assessment of the College of Experts counts (I believe) for much more, but no indication of the Expert assessment is available to the applicant through the rejoinder process. So applicants find themselves responding to assessors' comments that have relatively little direct effect on the actual numerical ranking, and so the opacity is multiplied.
Institutional compliance: university research offices spend substantial effort on compliance checks, which in my experience amount to validating font sizes in figures and captions, margin widths, account balances in Project Costs, and so on. These take time, are tedious, and have zero correlation with the value of the proposed research. That the process requires these manual administrative checks is indicative of a thoroughly broken system. Yet in spite of these detailed checks, debacles like the "pre-print" scandals of 2021 emerged. This caused enormous, ongoing actual and reputational harm to Australian research, which I observed directly as a research supervisor.
Applicants and Assessors: ARC applicants spend a great deal of time preparing detailed, well-argued grant proposals. These go into what can only be described as a lottery system with a fixed total payout. In my experience as both applicant and assessor, the outcomes are extremely poorly correlated with the submissions, and so the effort in writing grants is merely the cost of entry into the lotto.
As an assessor, I have seen applications that scored poorly and were not funded in one round, only to be funded in a subsequent round, presumably with high scores.
Ultimately, this is a consequence of the mismatch between applications and available funding, which brings with it various perverse outcomes. For example, there is a real opportunity-cost to serious researchers who spend a summer writing grants, during which time they are not doing research. For another example, random success in a round will favourably bias a researcher’s career and enhance their success later; random failure has the opposite effect.
I can think of various solutions to the resource-constraint problem: (1) increase available funding to accommodate all "fundable" applications, i.e. those that are of sufficient quality to merit funding; (2) define different assessment criteria by which to award proposals, such as objective randomness in an actual lottery, or alignment with predefined national interests; or (3) discard prospective applications entirely, and provide future research funding on a retrospective basis, for example on the basis of past publication record assessed on its merits. There may be others, but the current system is worse than any of these three proposals.
Structural inefficiencies: The current process has built-in inefficiencies. Their actual effect is something that should be determined empirically.
For example, the ARC has a very long turn-around time for funding successful applications. In the case of the “standard” Discovery Project, this is at least 12 months from project conception to funding announcement, with additional time then required to hire suitable people, procure equipment etc. It can easily become 18 months before a funded project can begin, on a project that is funded for only 3 years.
As an anecdote, for the recent Industry Laureate Fellowship process, an industry partner expressed shock that it would take the ARC as long as 6 months to make a funding decision. The joke is that this is half the time expected for most ARC decisions.
This temporal lag for successful projects interacts badly with the infrequent, annual application cycle. As a researcher, I cannot apply when the time is ripe; only when the portal is open. With this structure, the rational choice for my colleagues is to put in any proposal during the application period, thereby artificially inflating the absolute number of applications per year, rather than submitting a well-timed application that is best suited to the project.
There are two very significant implications of this reasoning: (1) the ARC should accept applications all year round for "standard" schemes like the Discovery Projects, and (2) the decision time should be shortened dramatically, to around 8 weeks. With these constraints lifted, my belief is that the absolute number of applications would fall, improving the overall success rate and reducing the pressure on applicants to submit proposals on an artificially imposed timeline.
Absolute administrative burden: The ARC has implicitly identified that there are structural problems in its processes. For example, the Centres of Excellence program has a two-step process: a "lightweight" Expression of Interest (EOI) stage, and a much more cumbersome full application. However, the EOI can easily be 100+ pages long, and the full application over 600 pages long. (I am a CI and a former Deputy Director of an ARC CoE, so have specific insights.) With 100+ EOI submissions and 20 full submissions, the ARC generates >20k pages of applications per round. This program allocates ARC funding of about $60M/pa over 9 successful CoE proposals, so I estimate that the CoE program allocates less than $3k/(page submitted)/(year of funding).
ARC Discovery Projects, which are roughly 100 pages long, have a success rate of around 20%, a lifetime of 3 years, and total funding of around $500k per project, so the DP program allocates about ($300 of funding)/(page submitted)/(year of funding).
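To make the arithmetic behind these two estimates explicit, the short back-of-envelope sketch below reproduces them using only the approximate figures quoted above (page counts, success rates and funding levels are rounded, as in the text).

```python
# Back-of-envelope check of the figures quoted above, using only the
# approximate numbers given in this submission.

# Centres of Excellence round
coe_pages = 100 * 100 + 20 * 600          # ~100 EOIs of ~100 pages + 20 full applications of ~600 pages
coe_funding_per_year = 60e6               # ~$60M per annum across the 9 successful CoEs
coe_rate = coe_funding_per_year / coe_pages
print(f"CoE: ~${coe_rate:,.0f} allocated per page submitted, per year of funding")  # ~$2,700, i.e. < $3k

# Discovery Projects
dp_pages_per_funded_project = 100 / 0.20  # ~100 pages per application, ~20% success rate
dp_funding_per_year = 500e3 / 3           # ~$500k per project over a 3-year lifetime
dp_rate = dp_funding_per_year / dp_pages_per_funded_project
print(f"DP:  ~${dp_rate:,.0f} allocated per page submitted, per year of funding")   # ~$330
```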
These allocations are both very low. Assessors cannot reasonably judge 600+ page applications for CoEs, nor 100+ page applications for standard Discovery Projects. The ARC should publish a targeted rate of application overhead for each program. In my opinion, the ARC should aim for a minimum administrative target rate of at least ($10k of funding allocated)/(page submitted)/(year of funding), with economies of scale for large programs like CoEs.
Recommendation: That the ARC rationalise its objectives (e.g. efficient, meritorious and timely funding allocations), and then design systems to achieve these objectives.

Q7. What improvements could be made:

(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?

(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?

Please include examples of success or best practice from other countries or communities if you have direct experience of these.

The ARC has no mechanism for researchers to engage directly with it. The approved pathway for researchers to approach the ARC is via university research offices, which represent the interests of the institutions. This can create clear institutional conflicts of interest where the interests of the university are not aligned with those of the researchers or the Federal Government.
More broadly, the inability for ordinary researchers to engage with the ARC makes it opaque. This engenders distrust, degrades confidence, and makes the ARC unresponsive to research needs.
The ARC could adopt a variety of models for managing direct engagement with the researchers that it funds. For example, program managers who have personal engagement with specific projects are common in US funding models. These individuals have domain knowledge, and are important sources of information on agency interests, priorities and resources, allowing them to proactively coordinate research directions. Alternatively, an ombudsman could be established so that researchers can provide feedback to the ARC in cases where maladministration is suspected.
Whichever model is adopted, the ARC needs to facilitate direct engagement with researchers as a means of improving transparency and confidence in its administration.

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

The Engagement and Impact Assessment, in my relatively limited experience of the process, risks significant selection bias. The university solicits case studies of impact outcomes from ARC-funded research, and so what is reported is a set of specific examples of success. However this is not systematic, and it necessarily misses all the failures: opportunities that were lost because of poor management choices and practices.
Instead of building impact assessments around potentially idiosyncratic case studies, the assessment process should focus on institutional practices and pathways. For example, at the end of the assessment, these questions should be answered:
Is IP really openly accessible, e.g. to license back to the inventor or to other businesses, or are there hidden hurdles in the way?
What practices does the host institution actually implement to assist researchers in building impact?
What does the host institution do to support researchers in recognising benefit from their discoveries?
What guidance and assistance does the institution provide to manage conflicts of interest, or does it simply seek to avoid CoI at all costs?
The answers to these questions can certainly be illustrated by case studies, but anecdotal instances do not provide answers in themselves.

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

My specific experience is in attempting to secure economic benefit from my research. This is affected by the ARC's legal assignment of Intellectual Property (IP) to universities, the ARC's National Principles of Intellectual Property Management for Publicly Funded Research, and the disparate practices at universities around IP management.
In my own experience (described in part in my submission to the University Research Commercialisation Scheme consultation, see page 1 therein), universities are not incentivised to behave in a way that enhances Australia's economic benefit from research commercialisation, for example, acting to prevent the formation of potential startup companies based on ARC-funded research. The ARC's National Principles espouse that, in order to "foster the most beneficial use of Australian research and development to secure value for industries, government, researchers and the community", universities that receive ARC funds will:
“foster the most valuable use of this IP by … making the IP openly accessible through licensing and accessibility arrangements which allow for its use”
Provide “means to help researchers identify IP that should be protected and/or commercialised, or IP that could benefit innovation/the economy of Australia by being made freely available”
“Define the ways in which benefits from the development and exploitation of the IP will be allocated…”
Provide “guidance in relation to potential conflicts concerning IP management…”
Provide “assistance to researchers in fulfilling their obligations and responsibilities as well as rewarding and encouraging their participation in any subsequent exploitation process…”
Provide “support for researchers so that they can recognise when their discoveries may provide benefit through promotion and open dissemination or when they may have potential commercial or other public value”
Provide “advice to the creators of the IP on the options available for either commercialising the IP”
These principles are admirable, but are essentially self-assessed for compliance by each university, being “approved by the institution's governing body”.
In my experience, the practices that Australian universities adopt are largely undocumented in their policies and procedures, and are often contrary to the National Principles.
Certainly, there is a strong incentive for any given institution to preference its own short-term interests (e.g. cash flow from IP licensing) over the longer-term economic benefits (e.g. formation of new research-intensive businesses) that the ARC is attempting to promote through the National Principles. In some cases, these practices are directly antithetical to the national interest.

Submission received

23 November 2022

Publishing statement

Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission. Your submission will need to meet government accessibility requirements.