Elsevier


Name (Individual/Organisation)

Elsevier

Responses

Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?

For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?

If so, what scope, functions and role?

If not, please suggest alternative ways to clarify and define these functions.

Not appropriate for Elsevier to respond to this item

Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?

If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model.

Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.

Not appropriate for Elsevier to respond to this item

Q3. How could the Act be improved to ensure academic and research expertise is obtained and maintained to support the ARC?

How could this be done without the Act becoming overly prescriptive?

Not appropriate for Elsevier to respond to this item

Q4. Should the ARC Act be amended to consolidate the pre-eminence or importance of peer review?

Please provide any specific suggestions you may have for amendment of the Act, and/or for non-legislative measures.

Not appropriate for Elsevier to respond to this item

Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?

We observe a trend in which governments and their funding bodies are looking at how to demonstrate the relevance of research to the public (its social and economic impact) and to ensure that stakeholders such as local communities are engaged in shaping and framing the research agenda. While these two components are related, we focus our response specifically on the latter.

Engagement with stakeholders is often undertaken through deliberative actions at the strategic level to foster more open, consultative processes by which funding programmes are developed. Many of these consultation processes are still focused on the research community (academic and corporate), but one of their objectives, even if implicit, is to strengthen the social licence for funding. The processes may involve specific scenario techniques such as foresight (e.g., the BOHEMIA project for Horizon Europe: https://research-and-innovation.ec.europa.eu/strategy/support-policy-making/shaping-eu-research-and-innovation-policy/foresight/bohemia_en). The UK government, through the Government Office for Science, has operated a foresight process for over 20 years (https://www.gov.uk/government/collections/foresight-projects). The studies are in-depth, typically taking 12 months to complete, involve wide consultation and are influential in setting funding priorities for the UK Research Councils (now UKRI), who are typically tasked, at least in part, with delivering the recommendations. There are also examples where lay input has been explicitly sought: in 2011, for example, BBSRC and EPSRC undertook a Public Dialogue process before formulating their joint programme in Synthetic Biology (https://www.ukri.org/publications/synthetic-biology-public-dialogue/).

In addition to these consultative processes aimed at eliciting wider input for shaping priorities, we also see a trend for major government agencies to increase the diversity of thought within their governing bodies or advisory structures. This is typically achieved by advertising for individuals with specific skills and backgrounds to apply to those bodies. Indeed, in some cases, specific advisory bodies may be set up with the sole objective of ensuring that the broader societal issues surrounding a funding agency's portfolio have a forum in which they can be discussed and a formal route into the governance structure.

In addition to its work on foresight, the European Union was also instrumental in developing the concept of Responsible Research and Innovation (RRI: https://en.wikipedia.org/wiki/Responsible_Research_and_Innovation). RRI involves holding research to high ethical standards, ensuring gender equality in the scientific community, investing policymakers with the responsibility to avoid harmful effects of innovation, and engaging the communities affected by innovation. In the Netherlands, the primary funding agency, NWO, has established a Responsible Innovation research programme (NWO-MVI) to identify the ethical and societal aspects of technological innovations at an early stage so that these can be considered in the design process (https://www.nwo.nl/en/researchprogrammes/responsible-innovation). Many of these initiatives serve to clarify the expectations public sector funders have of their communities and provide guidance on how those funders expect their communities to conduct their research.

In the UK, funders such as EPSRC have asked their research community to include in their research proposals commentary on how the research team can ensure their research follows the principles of "responsible innovation". EPSRC's expectations are set out in an approach called AREA (Anticipate, Reflect, Engage, Act: https://www.ukri.org/about-us/epsrc/our-policies-and-standards/framework-for-responsible-innovation/), and researchers are encouraged to use it in formulating their research ideas and plans. EPSRC also allows research teams to include requests for funding to support their responsible innovation plans. It must be stressed, though, that the assessment of responsible innovation actions in research proposals is still undertaken through peer review.

Indeed, at the project level, we have not observed cases where wider stakeholder groups, such as community groups, are engaged in project prioritisation. Peer review processes involving the research community itself (in academia or business) remain the norm. However, we should note that in the US and Canada the concept of "merit review" is better established, and it points to the consideration of factors other than scientific excellence as critical assessment criteria. It has been suggested that the NSF "broader impacts" funding criteria are similar to the European Union's RRI concept (https://link.springer.com/article/10.1007/s11948-013-9480-1).

In summary, our observation is that large public sector funders in many of the larger economies are increasingly engaged in considerations of social licence. Their actions include establishing frameworks for eliciting input into their own strategic decision-making and encouraging their communities to adopt responsible research and innovation (and its equivalents).

Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?

Not appropriate for Elsevier to respond to this item

Q7. What improvements could be made:

(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?

(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?

Please include examples of success or best practice from other countries or communities if you have direct experience of these.

a. Within other sections of our response we have provided evidence of programmes and projects that have helped build stronger global research partnerships, and we draw your attention to those responses.
b. Not appropriate for Elsevier to respond to this item

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

a. This is a question where it is difficult to provide definitive advice, but we can offer some general observations that may be helpful. In those regions where the funding dynamic is based on competitive research proposals (e.g., the USA and Canada), national assessments are much less prevalent. For those countries and regions where "block funding" of institutions forms part of the research ecosystem, national assessment exercises are becoming increasingly prevalent. This is often also related to a similar interest in developing more evidence-based policy approaches: national assessments are seen as providing a solid evidence base for broader decision- and policy-making.
A further important observation is that while many national assessments have a direct or indirect link to funding, that is not the only value they drive. Even where those links are absent (e.g., the UK's Knowledge Exchange Framework), the framework appears to drive institutions to take a more structured approach to managing the area being assessed in order to improve their relative performance. We observe the more successful institutions using the outcomes of such national assessments in their marketing to reinforce specific brand positions. In this respect, national assessments can be a positive force for change even where they have no direct link to funding.

In summary, the case for introducing a national assessment must reflect the broader policy environment in which it is being proposed; a national assessment will be a cost-effective policy instrument only for some countries. However, where there is a need to encourage more systematic management of research and innovation activities, or to build a national research profile, it can be a very powerful intervention, and one where we still see momentum building across the world's research and innovation funding communities.

b. Elsevier has been involved in three recent projects that illustrate how some governments and funders seek to improve the management of their research ecosystem without increasing the administrative burden on their research community.

In 2016, the Government of Japan included the objective of promoting evidence-based policymaking in funding scientific and technological research within its Fifth Basic Science and Technology Plan. Specifically, the Council on Science, Technology and Innovation within the Cabinet Office (CAO) of the Government of Japan is responsible for developing an Evidence-Based Policy Making system (EBPMS). The system, called e-CSTI, was introduced for government use in February 2020 and was later extended to national universities and national research and development agencies in July 2020.

In undertaking the development of the system, CAO was conscious of the need to improve the coverage of information regarding the Japanese research ecosystem whilst minimising the burden on the community. CAO contracted with Elsevier to integrate Japanese-language research outputs, held by the Japan Science and Technology Agency (JST), with Scopus, Elsevier's abstract and citation database. This enabled CAO to develop a much richer view of the Japanese research ecosystem, specifically covering research content previously unavailable in any systematic way. The e-CSTI system effectively supports evaluation of the programmes and organisations within the Japanese research ecosystem and provides greater visibility of the activities and outcomes within the nation's portfolio. In integrating the local research data with Scopus, the research community was not required to undertake any additional data inputs: the ingestion, disambiguation and entity linking of that data were all achieved in an automated way. As a result of that success, the project has subsequently been expanded to integrate patent data from the world's leading patent authorities and to cover alternative metrics (through PlumX).
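To illustrate the kind of automated, researcher-burden-free record linking described above (this is a simplified sketch, not the actual e-CSTI or Scopus pipeline, whose details are proprietary), a matching step might cascade from a strong identifier to a weaker normalised key. All field names and records below are invented for illustration:

```python
# Hedged sketch: automated linking of local research records to an
# international index without researcher input. Match on DOI first,
# then fall back to a normalised (title, year) key.
def norm_title(title):
    """Normalise a title so minor punctuation/case differences still match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def link_records(local_records, index_records):
    """Map local record ids to index record ids."""
    by_doi = {r["doi"]: r["id"] for r in index_records if r.get("doi")}
    by_key = {(norm_title(r["title"]), r["year"]): r["id"] for r in index_records}
    links = {}
    for rec in local_records:
        match = by_doi.get(rec.get("doi"))
        if match is None:
            match = by_key.get((norm_title(rec["title"]), rec["year"]))
        if match is not None:
            links[rec["id"]] = match
    return links
```

A production system would of course add fuzzier matching and disambiguation of people and organisations, but the principle is the same: linkage is computed from data that already exists, so no new inputs are requested from researchers.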

The second example concerns the Egyptian National Science and Technology Information Network (ENSTINET), which is responsible for using data to develop the Egyptian research ecosystem. ENSTINET had already set up an information repository, the Egyptian Knowledge Bank (EKB), but the data it contained required greater structure and broader context to support the priority use cases. Elsevier therefore undertook to take the research outputs within the EKB, process them to provide the additional structure, link them with the data in Scopus and generate metrics on the critical entities in the research ecosystem (journals, people and organisations). The project is still ongoing, with Phase 1 due to complete in April 2024, at which point ENSTINET will have information available to help them evaluate their journal portfolio, identify experts in their research community and benchmark their research organisations. Again, the objectives of this project are being achieved without the need for the research community to provide additional inputs or validations; the work is an automated process using data that already exists.
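Once outputs are linked to entities such as journals or organisations, entity-level indicators of the kind mentioned above fall out of a single pass over the linked records. The sketch below shows the idea in minimal form; the records, fields and metrics are illustrative assumptions, not the actual EKB indicators:

```python
# Hedged sketch of metric generation over linked output records:
# per-entity output counts and mean citations, computed in one pass.
from collections import defaultdict

def entity_metrics(records, entity_field):
    """Aggregate output count and citations-per-output for each entity."""
    totals = defaultdict(lambda: {"outputs": 0, "citations": 0})
    for rec in records:
        bucket = totals[rec[entity_field]]
        bucket["outputs"] += 1
        bucket["citations"] += rec["citations"]
    return {
        entity: {"outputs": t["outputs"],
                 "citations_per_output": t["citations"] / t["outputs"]}
        for entity, t in totals.items()
    }
```

The same aggregation can be run with `entity_field` set to a journal, person or organisation identifier, which is what makes reliable entity linking the prerequisite for any benchmarking use case.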

Our final example concerns projects we have been conducting with the Australian Research Council. We are currently working on a project exploring the automated classification of research outputs using the 2020 Field of Research classifications. The initial outcomes of our research suggest that the automated approach is likely to be effective for a significant number of outputs, provided research abstracts are available as part of the input data. We look forward to presenting the complete conclusions at the end of January 2023.
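To make the idea concrete, a minimal abstract-based classifier might score an output against seed vocabulary for each Field of Research code using cosine similarity. The seed texts and codes below are purely illustrative assumptions, not the ARC project's actual method or data:

```python
# Hedged sketch: assigning Field of Research (FoR) codes to outputs from
# their abstracts via bag-of-words cosine similarity against labelled
# seed text. Seed abstracts and codes are invented for illustration.
from collections import Counter
import math
import re

def vectorise(text):
    """Lowercase bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical seed abstracts already labelled with 2020 FoR codes.
seed = {
    "4602 Artificial intelligence": "neural networks machine learning training data models",
    "3101 Biochemistry and cell biology": "protein cell enzyme membrane signalling pathway",
}
centroids = {code: vectorise(text) for code, text in seed.items()}

def classify(abstract):
    """Return the FoR code whose seed centroid is most similar to the abstract."""
    return max(centroids, key=lambda c: cosine(centroids[c], vectorise(abstract)))
```

This toy version also makes clear why abstracts matter: with titles alone, the term-frequency vectors are too sparse for the similarity scores to separate codes reliably.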

These examples illustrate what we at Elsevier believe will be an increasingly important feature of international research management. To be most effective and efficient, countries must optimise their research ecosystem holistically. That requires the successful integration of disparate datasets, in an automated way, with the key research entities, such as people and organisations, reliably and accurately combined in support of a wide range of use cases.

c. Not appropriate for Elsevier to respond to this item

d. Not appropriate for Elsevier to respond to this item

Q9. With respect to the ARC’s capability to evaluate research excellence and impact:

(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?

(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?

(c) Would a data-driven methodology assist in fulfilling this purpose?

a. The ARC is well-positioned to evolve the current evaluation frameworks for assessing research excellence, impact and engagement. Through its international network and its knowledge of research assessment methodologies – spanning quantitative and qualitative indicators, including peer review – the ARC can lead (or at least co-lead) initiatives aimed at developing and implementing new scalable, sustainable, realistic and transparent methodologies.
The ARC can drive a national agenda for establishing a nationwide infrastructure for evaluating research and impact. The word ‘infrastructure’ should be interpreted in the broadest context, encompassing, but not limited to:
- A data platform that includes all data related to research input, throughput, output (traditional and non-traditional) and outcome (impact).
- A system that deploys relevant and up-to-date technologies to exploit the full potential of the data by implementing the relevant indicators, attributing outputs to the appropriate entities and classifying them at the required level of discipline granularity. Such a system should also be usable by all relevant stakeholders: researchers, universities, other funding organisations, industry and government (ministries).
- Linked data: with inputs, such as funding grants, linked to outputs and outcomes, including non-traditional research outputs (NTROs), the ARC and other funding agencies will be able to analyse and monitor the return on, and impact of, their funding allocations and inform future rounds. The knowledge graphs created from the linked data in this platform will also reveal emerging research, inter- and multidisciplinary research, and innovation.
- A service-oriented architecture, through which the platform and system can facilitate data validation, interaction with university CRIS systems and repositories, and integration with the global databases required for international benchmarking and context.
- A platform allowing for a data-driven approach while facilitating qualitative review of research where needed.
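In minimal form, the grant-to-output-to-outcome linkage described in the points above can be represented as a small graph that supports attribution queries. The entities, identifiers and edge types below are hypothetical, chosen only to show the shape of such a knowledge graph:

```python
# Hedged sketch of a linked-data view: grants fund outputs, outputs lead
# to outcomes; traversing the links attributes outcomes back to grants.
# All identifiers are invented for illustration.
from collections import defaultdict

class ResearchGraph:
    def __init__(self):
        self.funded = defaultdict(set)   # grant id -> output ids
        self.led_to = defaultdict(set)   # output id -> outcome ids

    def add_output(self, grant, output):
        self.funded[grant].add(output)

    def add_outcome(self, output, outcome):
        self.led_to[output].add(outcome)

    def outcomes_of(self, grant):
        """All outcomes reachable from a grant via its funded outputs."""
        return {oc for out in self.funded[grant] for oc in self.led_to[out]}
```

A real platform would hold many more entity and edge types (people, organisations, NTROs), but the principle is the same: once the links exist, return-on-investment questions become graph traversals rather than new data collection exercises.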

Such infrastructure could be supported by the SEER system or its evolution, and would reduce the administrative burden on stakeholders of collecting and organising research data. In our response to question 8, we highlighted examples of how similar data platforms are being developed and implemented by comparable organisations in other parts of the world.

b. Large national initiatives impacting many stakeholders require leadership from the ARC to ensure:
- The initiative is community-endorsed. All stakeholders should be incentivised to embrace its challenges and objectives. Engaging the evaluated entities will create a network effect across all end users, who will then share best practices and contribute solutions.
- Transparency around data sources, the methodologies adopted and the caveats of the chosen approach. It is critical that end users can reproduce indicators, access the data attributed to their institutions, and understand the processes used for collecting information and the quality assurance measures that ensure data integrity.
- Simplicity, which is crucial to securing engagement from the wider community. The approaches for assessing research and impact should be simple enough for end users while minimising the scope for error.
- Approaches chosen with a long-term vision. A sustainable approach allows for continuity and for monitoring of evaluation results over a long period, and should allow for easy transition and evolution to other methods as technology and the research agenda change.
- Approaches that, while reflecting the specificities of the local context (country), align with the global research evaluation agenda, thereby enabling international comparisons and benchmarking with ease.

c. Data across the research cycle (input, throughput and output) is increasingly becoming available thanks to technology. Indeed, technology has evolved much faster than research communities have so far been able to adopt AI and machine learning. These technological changes provide an excellent opportunity to automate many processes around data collection, transforming unstructured data into enriched linked data and enabling massive entity-based knowledge graphs. Such knowledge graphs can materially assist in fulfilling the purpose of assessing research excellence and impact. As with the responsible use of indicators, a data-driven approach should be supplemented with expert review, especially in areas where data is not available at scale. Data on non-traditional research outputs and on research outcomes are yet to be systematically generated and captured in the way traditional research outputs are; in these areas, expert review will likely remain the leading approach.

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

Not appropriate for Elsevier to respond to this item

Submission received

14 December 2022

Publishing statement

Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission. Your submission will need to meet government accessibility requirements.