Anonymous #18


Name (Individual/Organisation)

Anonymous #18

Responses

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

I have the following general comments; some of them are connected.

1) The system is flawed in the following sense: some disciplines (like pure mathematics) don't require major grant funding for their research projects. However, universities obviously want grant income, so they push everyone to apply. As a consequence, there are projects that receive $300K-$500K even though they could also be done without grant funding. (In pure mathematics, for example, the only resources required are quality research time and a small amount of travel money.) When the general public looks at such projects, it is understandable that they question whether spending this money was necessary -- and rightly so!

2) As a consequence of what I described in 1), researchers are often asked to apply for as many grants as they are allowed to. This puts additional pressure on everyone and reduces the overall success rate. I would suggest that the maximum number of concurrent grants/applications be one, not two.

3) Conflict of interest: I am often asked to assess DP applications in my area in the same round in which I have applied for a DP myself. This is a serious conflict of interest and should be avoided.

4) The application process is very onerous; researchers spend weeks writing and polishing their proposals. (It is particularly onerous for overseas PIs, who have no experience with the ARC.) Yet the success rate (18-20%) is very low. The process should be simplified: other countries (such as New Zealand) use a two-tier process in which the first round requires only a short document and the second round requires a detailed application, but with a success rate of 50% or above at that stage.

5) Transparency: Very often the assessor reports don't seem to be directly linked to the final outcome (which is fair enough, because these reports form only part of the whole process). However, I know of several cases where a project received excellent assessor reports ("project of the highest quality", "excellent team of researchers", ...), yet the final ranking was in the bottom 50% of unsuccessful grants. It is very hard to understand how this happened, and there really should be much more transparency in the decision process (e.g. written feedback from the panel, panel marks, ...).

6) Related to 5), applicants often spend significant time following the (micro-managed) guidelines when preparing an application, yet quite often assessor reports contain comments like "I wished for more details on ...". It is very difficult to please everyone: the panel requires a good top-level description, the experts require all the technical details, and the guidelines prescribe a rigid structure. Also related to 3), the competition is very high, and I often get the feeling that grumpy colleagues mark down proposals (with subjective remarks such as "I feel more details are required ...") because they haven't been successful for some time, or because they have received similarly vague or dismissive comments in the past.

7) Based on my discussions with several College of Experts members, the final decision process seems to involve Carriage 1 (the panel member tasked with supporting the proposal) explaining the project and the assessments to the panel, after which everyone votes. My understanding is that not every panel member has read the proposal, so many members rely on the presentation skills of Carriage 1. If Carriage 1 cannot convincingly sell the proposal, then it seems likely the vote is negative. This part of the decision process seems to be a serious bottleneck -- and it might be the reason for the discrepancy observed in 5).

8) The revision of the National Interest Test (NIT): it has been sold as making the process simpler and allowing researchers to better explain why their research is relevant. I don't see how any of this is true; on the contrary, the additional DVCR sign-off makes it more onerous -- and for technical disciplines (like pure mathematics), no assessor will look at the NIT anyway (this relates back to 1). On the other hand, one spends hours on the NIT, including several rounds of feedback from the department, faculty (and university). This is just one of the many pieces that feel like a waste of time. It should be reviewed seriously.

Submission received

06 December 2022

Publishing statement

Yes, I would like my submission to be published but my and/or the organisation's details kept anonymous. Your submission will need to meet government accessibility requirements.