Anonymous #43

Name (Individual/Organisation)

Anonymous #43

Responses

Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?

For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?

If so, what scope, functions and role?

If not, please suggest alternative ways to clarify and define these functions.

The ARC Act needs to be amended to clarify the role of the ARC in supporting both fundamental and applied research in the sciences, social sciences, humanities and the Arts (and other areas not covered by NHMRC funding). It should also ensure that the ARC remains the national funder of independent basic research in Australia, and is funded to a level comparable with peer nations.

The Act should clarify the ARC's decision-making power and its leadership in setting the direction of research. Funding decisions should be delegated by the Minister to the ARC CEO to protect against political interference, as is done under the NHMRC Act. Ministers should only be able to provide suggestions of a general nature, which should not be treated as funding directions.

Particularly within HASS disciplines, there is a critical requirement to fund fundamental and basic research. As such, continued support for the Discovery research program is paramount.

There is room to add to the suite of existing fellowship programs. The DECRA, Future Fellowships and Laureate programs should be retained and their funding boosted, as these are critical to retaining and attracting global talent. A key observation is that DECRAs seem to be straying further from being an effective post-doctoral scheme, as awards are increasingly going to individuals with substantial post-PhD experience. Options should be explored to support scholars who have recently completed their PhDs.

The ARC should support innovative and challenging ideas and creative thinking, which requires tolerance of risk and uncertainty. In recent times, the assessment process appears less tolerant of risk, leading to conservatism and a bias towards funding projects with predictable outcomes. Other schemes and avenues exist for that kind of research, and the balance of ARC funding should be shifted back towards fundamental research that is unlikely to be funded elsewhere because of its perceived risk.

Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?

If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model.

Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.

The current ARC governance model is not adequate along a number of dimensions, particularly its susceptibility to political influence and the absence of oversight from an independent and appropriately skilled body. As proposed in the discussion paper, the ARC should have a board comprising research leaders, representatives of Indigenous communities and lay members. Factors needing consideration include the process for nominating potential members, voting arrangements and so on; members should not simply be appointed by the Minister, otherwise the potential for political influence remains.

The board would oversee the scope of the role and decision-making power of the CEO, as in the NHMRC Act. This would ensure a balance between scientific drive and community values, providing assurance to the Australian public and reducing political interference in what should be scientific decisions.

A number of other governance changes are required to ensure the objectivity of funding decisions. For example, College of Experts members should be precluded from seeking funding for the duration of their College membership. This would avoid situations that have historically arisen where funding is awarded to Experts during their tenure.

Q3. How could the Act be improved to ensure academic and research expertise is obtained and maintained to support the ARC?

How could this be done without the Act becoming overly prescriptive?

As alluded to in the previous question, a board comprising research leaders, given oversight of appointments, could ensure that these are made on scientific merit rather than political leanings. Removing political interference from scientific funding decisions will allow academics to be more engaged and invested in the outcomes.

The active engagement of researchers is required for peer-review processes to function smoothly and efficiently. At present, the workloads associated with reviewing and with participating in the College of Experts (CoE) and ERA panels are absorbed by institutions and individuals, with an uncertain return. Consideration should be given to compensation, to ensure that high-quality research leaders continue to make themselves available for these roles. Any steps the ARC could take to ease the administrative burden for individuals, and their institutions, would also be welcome.

It is important for public acceptance that the ARC, including its CEO and Directors, is of the highest calibre and has a current understanding of, and good standing within, the academy. Effort also needs to be made to ensure that individuals of the best calibre are encouraged to make themselves available for work on the CoE, and that these individuals are selected. Attention should be paid to individual disciplines that have little or no representation on the CoE, as this can lead to a loss of trust and confidence in the outcomes of research funding programs among researchers.

Q4. Should the ARC Act be amended to consolidate the pre-eminence or importance of peer review?

Please provide any specific suggestions you may have for amendment of the Act, and/or for non-legislative measures.

By removing Ministerial discretion in allocating funding and instating a Board to oversee the decision-making of the CEO, peer review would again become pre-eminent. Ensuring that funding decisions are based on peer-review assessments could be included under the responsibilities of the CEO and the Board in the ARC Act.

The pre-eminence of peer review in all determinations in respect of research funding and research assessment exercises is a respected international norm for determining academic research quality. Any departure from that norm may suggest, rightly or wrongly, that there is a political dimension to determinations. The appearance of a political dimension risks undermining the authority of the ARC with the research community and the general public.

Where grants are deemed to have met the required standards for funding (as per the relevant rules and guidelines) and are recommended for funding after the peer-review process, they should not be overturned by the Minister. Any ministerial intervention that may affect a funding outcome should be accounted for to parliament, as outlined in the Consultation Paper.

Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?

National Interest Test (NIT) statements should be read and understood in conjunction with the rest of the application. As such, we welcome the recent decision to incorporate review of the NIT as part of the peer-assessment process.

NIT statements do not suit all types of research or funding programs, particularly those aimed at supporting basic and fundamental research, such as the Discovery Program. Where research is fundamental in nature, conveying its basis in plain English while retaining critical scientific accuracy is difficult. Additionally, with this type of research, researchers should not have to predict ways that the research might be beneficial to the community, but should be able to focus on how the research adds to the field’s knowledge base; ensuring Australia’s place as a global leader in a research discipline should be acknowledged as being in the national interest.

A plain English statement of the basis and expected benefits of the proposed research (including adding to the knowledge in a field), that any given educated member of the public can understand, would be better suited for the variety of research supported by the ARC.

Having lay members as part of the Board overseeing the ARC activities would also represent a way to strengthen its social licence for public funding of research.

Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?

The application and review process for competitive grant schemes requires extensive involvement. Some of the requirements are complex and particularly burdensome:

- ROPE statements would benefit from clearer guidance on what is and is not relevant to issues such as career interruptions, and the required responses could be more concise.

- The overall length of application sections, and overly prescriptive formatting rules.

- NIT statements, as discussed in the previous section.

- Funding below the requested budget leads to repeated re-working of budgets and re-scoping of projects, wasting time for academics and Research Offices.

- Numerous and frequent requests for academics to be involved in the review process.

Additionally, procedures for appeals or complaints against a funding decision are not clear. It can be confusing and time-consuming for academics and Research Offices to assess whether there is a case for appeal on the grounds of administrative error (which is the only type of appeal allowed under current rules).

Q7. What improvements could be made:

(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?

(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?

Please include examples of success or best practice from other countries or communities if you have direct experience of these.

Competitive grants awarded by the ARC should fully fund research. At its inception, the Australian dual funding system was designed so that competitive, federally funded research grants would cover the direct costs of research, with the remaining (indirect) costs met by research block grants. In the current funding environment, universities are forced to cross-subsidise research with other funds, including international student fees, which are now declining. For the ARC (and other Commonwealth government funding bodies) to truly promote excellence, funded grants should fully cover the costs of research, even if that results in fewer projects being funded. We expect this will be a focus of the upcoming Universities Accord discussions.

Changes to guidelines that would facilitate the review process while maintaining high standards and encouraging academic engagement with the process include:

- Moving to a two-stage application process, with an initial expression of interest (EoI) stage with simplified submission requirements used to identify promising proposals, which are then invited to proceed to a full application. This would greatly improve the overall efficiency of the scheme by reducing the effort spent preparing and assessing detailed, lengthy applications that are not competitive. Two-stage processes are commonplace in other countries' systems and in other grant schemes in Australia. This may lead to a greater number of applications (in the form of EoIs), but the overall effort would still be substantially lower.

It may be useful to trial this approach for Fellowships, where heavy weight is placed on a single element (the researcher's track record) and where preparing the application requires a resource-intensive support pipeline. An EoI stage may reduce the number of people who invest time in a process where their track record alone would likely preclude success.

- Simplifying the budget information required in applications. Budget lines could be estimated in broad categories, in contrast to the current requirement for a seemingly exact, highly detailed cost plan that in reality will typically be modified in any case. Where the ARC reduces the budget from that requested, it should provide reasoned guidance as to which elements of the budget should be cut.

- The ARC evaluation and decision process should be sped up. Shorter ARC turnaround times would allow research to be implemented more quickly and provide better options for retaining research staff. Changes that incentivise involvement in the CoE and in peer review would help ease the workload for each individual involved. We acknowledge and welcome recent efforts by the ARC to provide estimated times for the release of outcomes.

- The ARC should provide detailed feedback for each application, including scores and rankings against all criteria from reviewers and the panel, as well as relevant qualitative information. Qualitative information would include comments on aspects that were crucial to the ARC's funding decision, especially where minimum standards for funding are not met. Information on whether re-submission of the project in modified form would be encouraged would be particularly helpful. This could greatly improve the quality of resubmitted proposals and improve efficiency by allowing researchers to make a better-informed decision about whether to pursue some proposals further. It would also help cement the review and decision-making process as a transparent one.

- Implement a quality-control review of assessors that includes historical analysis of average scores, to identify situations where individual assessors are scoring outside normal bands.

- The use of international reviewers would strengthen the integrity of the grant allocation process; this is a regular element of European national funding schemes, for example. If executed correctly, it would not only significantly increase the size and quality of the assessor pool, but also avoid the need for applications to be reviewed by people far outside the discipline.

- The ARC should provide improved guidance on the application of the Medical Research Policy and how it is judged for each submitted proposal. It is currently not clear at what stage in the peer-review process a decision is made on the ineligibility of applications under this policy, or who makes that decision. Information pertinent to this is not available through the appeals process, as it is considered a committee decision rather than an administrative matter. As such, there is no clear path to appeal an ineligibility decision under the Medical Research Policy, even when a clear justification (as per the guidelines) of how the work adheres to the rules is provided in the application. Some form of feedback would help academics and Research Offices better understand the Policy and provide better advice to potential applicants.

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

We support a focus on quality and excellence, including in the absence of a link to funding. However, the frequency and requirements of each assessment could be significantly reduced, and many parts of it automated. There are clear limitations to the existing evaluation framework:

- It is common practice to plan to publish in highly ranked UK or US journals to increase readership and likely citations. Papers often need to be changed to suit international audiences, and it can also take much longer to publish in these highly ranked journals, slowing the release of discoveries to the rest of academia and more broadly. Australian public policy is one area that has particularly suffered from the focus on publishing internationally relevant research. Additionally, using citation metrics devalues research of local or national importance.

Similarly, in many HASS disciplines where long-format peer-reviewed articles and books are the norm, research outcomes can take a long time to be published. In such disciplines the ARC could produce a data set on a dashboard and give institutions a set period to amend and curate it. This would still leave research fields where Non-Traditional Research Outputs are the norm requiring institutions to generate and submit data on outputs (as they do now). These steps would reduce the burden on institutions in aggregate.

- Basing the evaluations on Field of Research (FoR) codes increases the administrative burden of these exercises and does not reflect the true nature of research. Research happens within academic units and is led by strategy and programs tied to those units. Today, many of the world's big problems are being solved by multi-disciplinary teams, which are under-valued in the ERA and EI exercises. ERA also does not reflect new or emerging areas, an example of which is cybernetics. Under the existing ERA methodology, the ANU School of Cybernetics is currently spread across multiple FoR codes, including 46 Information and Computing Sciences, 44 Human Society and 51 Physical Sciences. Cybernetics research cannot be judged fairly in ERA, even though it promises to change the way humans interact with technology.

- The use of FoR codes also confuses the evaluation of Engagement and Impact. Engagement occurs in the context of departmental strategies within university programs; it is not restricted to one FoR. To comply with the reporting program, universities are forced to try to make department-level strategies appear deliberate across the institution. Furthermore, the impact of a given research outcome does not necessarily occur in the FoR in which the outcome was first published. Expecting universities to report impact by FoR is a false delineation in many instances.

- Current definitions of impact ignore the potential for:
i) impact to predate the research output: for example, when an academic makes a suggestion that is adopted by outsiders and proves highly impactful, and the success of, and potential improvements to, the initial idea then forms the basis of a paper (e.g. HECS);
ii) research to have impact that is several steps removed: basic research that gives applied researchers the theoretical framework within which to undertake testing that demonstrates the value of changes in end-user practice and, ultimately, precipitates that change. Despite having had impact, the basic research could not be deemed impactful under the current definitions.

Similarly, the currently acceptable forms of evidence of impact preclude many impactful pieces of HASS research from being submitted as impact cases. For example, academics whose research has shaped government policy are often not acknowledged as such in policy documents, consultation papers, etc., and, unless they are able to obtain written confirmation from the other stakeholders involved, it is difficult to provide causal evidence.

ERA in its current format is a significant imposition on universities and provides very little benefit on its current three-year cycle. While a “data driven” or machine-led approach may cost less, it would introduce significant biases into the results. AI (artificial intelligence) is reasonably accurate at identifying research that sits within a single discipline, but it is less precise the more multi-disciplinary the work is. Further, AI cannot apportion outputs across FoR codes as required by the current methodology. Additionally, citation benchmarks are developed using journal FoR codes, whereas an AI would classify papers based on article content. Thus, a “data driven” or machine learning-driven approach cannot yet provide even evaluation across all discipline areas, with HASS areas being greatly disadvantaged.

Universities generally review their research groups on a rolling cycle to provide assurance to the Council and Executive that they are performing at the expected level. These exercises are a valuable avenue for evaluating the impact of research outcomes against the university's goals. External reviewers are engaged to deliver this work, and the review materials are prepared by the academic leaders supported by central services. Materials include a discussion of the strategy of the research group; reflection on its performance, including metrics or other relevant indicators; and, where impact is a strategic purpose of the unit, activities engaging with end-users and impact case studies. The Review Committee evaluates the materials against the strategy, engages with the academic leadership of the group and provides expert opinion, which often includes a statement about the group's excellence on the national or international stage. Further, reported recommendations can be taken on board and executed within the academic unit, leading to better and more tangible outcomes than receiving a single number that applies across the whole institution, as is currently the case with ERA results.

The Federal Government could tap into these existing processes to gain knowledge of areas producing research of international excellence and of those working to solve national problems. These reviews could provide self-certification that a research program is meeting expectations, providing assurance to the Government that public funds are being well spent.

Q9. With respect to the ARC’s capability to evaluate research excellence and impact:

(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?

(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?

(c) Would a data-driven methodology assist in fulfilling this purpose?

The ARC is not well placed to evaluate all research areas. For example, Indigenous Studies research should be evaluated by groups that already work with and are accepted by Indigenous communities and that already provide significant leadership in this area, such as AIATSIS. In reference to the previous section, if the ARC were to have a defined function to assess research quality, engagement and impact, this is a significant limitation that would need to be addressed appropriately.

If the limitations of the FoR structure were bypassed, the ARC's research evaluation skill set could be used to build a map of topic clusters that identifies expertise. Potential collaborators and industry could use such a map to see which institutions are undertaking particular research, with links back to the various open-access repositories to enable the identification and distribution of world-leading research.

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

All of the suggested changes would help improve the efficiency of ARC processes and raise the quality of interactions between researchers and the ARC. They would help to counteract disengagement by researchers from the ARC system, which, on the basis of anecdotal observation, is widespread owing to the unnecessarily laborious application requirements and the lack of meaningful feedback about assessment outcomes.

Submission received

14 December 2022

Publishing statement

Yes, I would like my submission to be published but my and/or the organisation's details kept anonymous. Your submission will need to meet government accessibility requirements.