Gather the evidence you need

This step takes the measures you identified in your evaluation framework and works out where you can gather evidence on each. Evidence can be gathered in different ways, each with its own pros and cons, and being aware of them will help keep your evaluation robust.

By the end of this step you will have:

  • Identified what evidence and information you need.
  • Used existing evidence where possible and relevant.
  • Gathered additional evidence where you need to.

Identify what evidence and information you need

Now it all starts to get real and concrete. This can be a very rewarding step in the process, as you begin to gather real facts. This step introduces some technical terms; each is clearly explained, and understanding them will help make your evaluation credible and useful for others.

Your evaluation framework outlines the information you need, so use it as the starting point for thinking about what kind of evidence you’ll need.

For example, if your evaluation framework has several questions about how engaged students were in a particular class, then collecting information from students about their experiences could be useful for several measures. Some more examples for each category of the evaluation framework include:

Design
  • Example measure(s): design decisions
  • Potential evidence sources: any paperwork / documentation from setting up the initiative; staff who were involved in establishing the initiative

Implementation
  • Example measure(s): did rollout make sure the initiative reached the right number of participants (e.g. students / teachers)? Did the initiative deliver on its intentions?
  • Potential evidence sources: compare plans for implementation with the experiences of staff and business partners

Outputs
  • Example measure(s): things produced; people involved; attendance numbers; website visits
  • Potential evidence sources: business partners may hold information on things produced; attendance numbers collected by staff or business partner; web page analytics, e.g. downloads, page visits

Impacts — engagement direct measures
  • Example measure(s): student attentiveness; enrolment in STEM subjects
  • Potential evidence sources: asking students whether they enjoy the class, or observing the class; subject and year level enrolments

Impacts — achievement direct measures
  • Example measure(s): test scores; improvement in skills
  • Potential evidence sources: school results; teachers’ perspectives on depth of skills

Impacts — inferences based on behaviours
  • Example measure(s): demonstration of positive behaviours
  • Potential evidence sources: observing teachers and students in class; asking teachers and students how they think they behave

Impacts — inferences based on beliefs
  • Example measure(s): student satisfaction; teacher satisfaction
  • Potential evidence sources: asking teachers and students about what they believe

You can use the Template: Information sources for your evaluation to enter the measures for your evaluation and identify potential evidence sources.
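
If you would rather keep this mapping in a simple file alongside the template, a minimal sketch in Python could look like the following. The field names and example rows are our own, not taken from the official template.

```python
# Sketch: one row per measure, linking each measure to its evidence sources.
# Field names and example content are illustrative only.
import csv

rows = [
    {"component": "Outputs", "measure": "Attendance numbers",
     "evidence_sources": "Attendance records collected by staff or business partner"},
    {"component": "Impacts - engagement", "measure": "Student attentiveness",
     "evidence_sources": "Class observation; student survey"},
]

with open("information_sources.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["component", "measure", "evidence_sources"])
    writer.writeheader()
    writer.writerows(rows)
```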

Baseline evidence

You will also need to consider what ‘baseline’ evidence you require to make decisions about change. Baseline evidence is the initial collection of information on what things were like before the initiative, e.g. student achievement results. It is used as a basis for comparison with data gathered later in the evaluation, or with later evaluations, so you can see what may have changed because of the initiative. For example, baseline evidence might indicate that 30% of students in Year 9 science could describe a STEM career before the initiative; when you gather the same information later, you will be able to see how much that number has risen or fallen.
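
As a minimal sketch of that comparison (the counts below are invented for illustration), the arithmetic is simply:

```python
# Sketch: comparing a baseline proportion with a follow-up measurement.
# All numbers are hypothetical; substitute your own counts.

baseline_yes, baseline_total = 30, 100   # 30% could describe a STEM career before
followup_yes, followup_total = 52, 100   # measured again after the initiative

baseline_rate = baseline_yes / baseline_total
followup_rate = followup_yes / followup_total
change = followup_rate - baseline_rate

print(f"Baseline: {baseline_rate:.0%}, follow-up: {followup_rate:.0%}, "
      f"change: {change:+.0%}")
# Baseline: 30%, follow-up: 52%, change: +22%
```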

Useful pieces of information to collect for the baseline evidence often include:

  • Any supporting information or paperwork used to set up the initiative, e.g. documentation showing the school was underperforming on a particular NAPLAN measure.
  • For achievement: school results / test scores, differences in ability between classes.
  • For engagement: teachers’ reporting, class enrolments, absenteeism, the number of students choosing to participate in or opt out of STEM activities and subjects.

The extent of information you collect to establish a baseline is up to you. However, if you want to link the initiative to causing a change or impact, you will want a good idea of what your school or students were like before the initiative, so you can identify the change easily.

Longitudinal evidence — how things change over time

If you are interested in gathering longitudinal evidence, i.e. evidence that covers a longer time period than your specific evaluation, you will need to run multiple rounds of fieldwork with the same cohort of students over time. While this can be a rich source of information about the experiences and outcomes of those students, the process is also time- and resource-intensive. If you can, it’s a good idea to involve a university and run this as a research program.
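
If you do collect longitudinal evidence, storing each round of fieldwork as a separate record makes change over time straightforward to compute. A minimal sketch, with invented student IDs and scores:

```python
# Sketch: long-format records for the same cohort across fieldwork rounds.
# Student IDs and engagement scores are invented for illustration.

records = [
    {"student_id": "S01", "round": 1, "engagement_score": 3},
    {"student_id": "S01", "round": 2, "engagement_score": 4},
    {"student_id": "S02", "round": 1, "engagement_score": 2},
    {"student_id": "S02", "round": 2, "engagement_score": 2},
]

# Collect each student's scores in round order.
by_student = {}
for r in sorted(records, key=lambda r: (r["student_id"], r["round"])):
    by_student.setdefault(r["student_id"], []).append(r["engagement_score"])

# Change between the first and last round each student appears in.
for student, scores in by_student.items():
    print(student, "change:", scores[-1] - scores[0])
```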

Use existing evidence where possible and relevant

Where you have options about your evidence sources, you can save time and energy by making the most of information you already have or can easily get:

  • Ask staff at schools or industry about costs of the initiative.
  • Look at documents around the start of the initiative for their expectations of time and budget.
  • Consider evidence that is automatically collected. For example, if your initiative is web-based you may be able to use web analytics (see the sketch after this list). An introduction to analytics, and how to use them, is available on the Web strategy for everyone website.
  • Consider evidence that is already collected, but potentially for a different purpose, e.g. student class attendance, student work samples or a question in a school census.
  • Think about things that you could easily count, e.g. mentions on social media.
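
As an illustration of the web analytics point above, here is a minimal sketch that tallies page visits from an exported CSV. The file name and column names are hypothetical, since exports vary between analytics tools:

```python
# Sketch: totalling page visits from a hypothetical analytics export.
# Assumes a CSV with "page" and "visits" columns; adjust to your tool's format.
import csv
from collections import Counter

visits = Counter()
with open("analytics_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        visits[row["page"]] += int(row["visits"])

# Show the five most-visited pages.
for page, total in visits.most_common(5):
    print(page, total)
```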

Tips for working with existing evidence:

  • Evidence can be harder to gather than you think. It’s important to make a plan early about what information you need, and how you might get it. You should also think about whether the evidence will be complete or have gaps, and how to ensure you get everything you need.
  • Evidence is not always available at the level you need it for. For example, evidence might be available at a national level but not a state level, or at a school level but not a year level or class level. This is a good example of when you might want to use proxies instead of direct measures.
  • Make sure the information is relevant to your timeframes. If you want to gather evidence about the impact of the initiative, ensure this covers the length of the initiative and the timeframe in which you would expect to see impacts.
  • If you collect existing data, it may be difficult to organise into the form you need. For example, you may be able to access student work samples (with appropriate ethical approvals) but may have to work hard to draw conclusions from them. Factor this into your thinking about whether you want to gather this evidence.
  • There may be limited existing evidence. If this is the case, consider whether you could answer your evaluation question with the evidence you’re able to collect (see below for information on collecting your own evidence).
  • There may be more evidence available than you need, so you will need to exercise discretion. Prioritise the evidence that most closely matches what you are trying to measure, and ignore the rest. A good rule of thumb: if you can’t imagine a graph you would want to draw based on the evidence, it isn’t worth collecting.

Gather your own evidence where existing evidence is limited

There are several good reasons to capture new evidence:

  • Available evidence is insufficient to answer your key evaluation question.
  • You want to confirm or test what you find in existing evidence. 
  • You want to be able to go further / deeper than existing evidence allows.
  • You can set up a way of capturing evidence that future evaluations can use.

Consider gathering your own evidence where you are particularly interested in capturing information about the initiative’s qualities and people’s opinions of it, for example whether they feel more confident or knowledgeable.

Evaluations use four main tools to gather new evidence; using these tools is commonly called ‘fieldwork’.

Surveys
  • Use when you want: to collect information from a large group of people
  • Groups particularly suited: staff, parents or students
  • Useful for measuring: student or staff satisfaction or outcomes; effectiveness of inputs (e.g. PD)
  • Less useful when: you want to capture deeper insights

Interviews
  • Use when you want: insightful answers that can’t easily be categorised, personal experiences, real-life examples
  • Groups particularly suited: school leadership and teachers, business staff, students
  • Useful for measuring: commenting on student engagement and achievement
  • Less useful when: you are interested in hearing from many similar stakeholders and timeframes are short, e.g. a classroom of students

Focus groups
  • Use when you want: to collect information that cannot be easily answered in survey form, particularly from larger groups
  • Groups particularly suited: students (collects several perspectives at once, less pressure on students)
  • Useful for measuring: student engagement at a more detailed level, e.g. why did you enjoy this activity over others?
  • Less useful when: you have stakeholders who may prefer anonymity

Observations
  • Use when you want: to understand what’s happening ‘in practice’ or ‘on the ground’ in an initiative
  • Groups particularly suited: students, or teachers / staff
  • Useful for measuring: student engagement
  • Less useful when: you are trying to understand specific attitudes or perspectives about an initiative
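
Once survey responses come back, summarising them can be as simple as counting. A minimal sketch, using invented five-point ratings:

```python
# Sketch: tallying five-point survey responses for one question.
# The ratings below are invented for illustration.
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]   # 1 = strongly disagree ... 5 = strongly agree

counts = Counter(responses)
mean = sum(responses) / len(responses)

print("Mean rating:", round(mean, 2))
for rating in sorted(counts):
    print(rating, "x", counts[rating])
```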

Gathering your own evidence also has its limitations. It takes a lot of time and effort: you need to plan and co-ordinate the fieldwork, and create materials for it, e.g. an interview or focus group guide.

Tips for gathering your own evidence:

  • Don’t over-commit. Use these tools where they are relevant and important to answer your evaluation question. Don’t try to do too much e.g. too many interviews, especially in a short timeframe.
  • Choose the right people to participate. Only pick people who are important and useful for answering your evaluation question. For example, if an evaluation is about equity in science achievement, you would interview both high-performing and low-performing students.
  • Think about how you will ensure a good representation of different views. You can either engage with lots of people and assume that you get diverse responses as a result, or deliberately pick a range of people who represent different groups / perspectives. If your strategy is to engage with lots of people, you may want to use statistical methods to decide how many people to engage with (see the sketch after this list). See the article on Sample size: A practical introduction.
  • Give participants plenty of notice. Let them know about the evaluation and what you need from them as soon as possible. This will allow them to plan their time e.g. parental consent forms for students.
  • Make it as easy as possible for stakeholders. Don’t ask for more time than you need. Don’t ask survey questions that will be hard to answer.
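
For the sample-size question mentioned above, a commonly used formula for estimating a proportion (with a finite-population correction) can be sketched as follows. Treat it as a starting point and check the linked article, since it assumes simple random sampling:

```python
# Sketch: a common sample-size formula for estimating a proportion,
# with a finite-population correction. Assumes simple random sampling.
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion at ~95% confidence (z = 1.96)."""
    # Base sample size for an effectively infinite population.
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Finite-population correction: smaller populations need fewer responses.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(200))   # e.g. a cohort of 200 students -> about 132 responses
```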

You should also be aware of the key ways to make sure your evidence-gathering process is ethical and robust. Check you are compliant with ethical standards in educational research. You may need to go through an ethical approvals process before you complete research in schools. A national application form is now available, which includes directions to state and territory guidance.

You should also become familiar with common biases that frequently come up when gathering new evidence:

  • Sampling bias: You might choose a group of people (also known as a sample) to participate in research who don’t actually reflect the whole group of people who took part in the initiative. For example, interviewing only high-performing students to evaluate a program that serves both high-achieving and low-achieving students. You can overcome sampling bias by engaging with a fair representation of different people, as described in the list of tips above and in the sketch after this list.
  • Self-selection bias: Some participants in research (e.g. people with strong views — either negative or positive) are more likely to agree to participate in research (especially surveys).
  • Acquiescence bias: Some participants in research (especially students) may be overly positive about an initiative when speaking to you. They may perceive you as an authority figure, and think they need to say the ‘right thing’. 
  • Gaming behaviours: Some participants in research may respond in a way that doesn’t reflect their true views on a question because they (consciously or unconsciously) want to influence the outcome of the evaluation. For example, if an initiative is enjoyable but has little educational value, students might exaggerate the educational value because they want the program to continue so they can take part again.
  • Cultural bias: You might form incorrect conclusions on the basis of cultural assumptions that you project onto people from different cultures. For example, a non-Aboriginal and Torres Strait Islander evaluator may fail to understand elements of a STEM initiative designed to engage Aboriginal and Torres Strait Islander students during a class observation.
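
As mentioned under sampling bias, one way to get fair representation is stratified sampling: draw the same number of participants at random from each group you care about. A minimal sketch, with invented student IDs and strata:

```python
# Sketch: stratified random sampling so each group is fairly represented.
# Student IDs and strata are invented for illustration.
import random

students = {
    "high_performing": ["H1", "H2", "H3", "H4", "H5", "H6"],
    "low_performing":  ["L1", "L2", "L3", "L4", "L5", "L6"],
}

random.seed(1)  # reproducible selection for this example
sample = {group: random.sample(members, 3) for group, members in students.items()}
print(sample)   # three interviewees drawn from each performance band
```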