In our experience, and from speaking to others, the why of evaluation is easy to establish, but the how can be challenging. The work is dynamic and often long-term, and resourcing for evaluation is frequently light.
An ongoing cycle of integrated learning is an essential feature of a place-based approach. Reflection and adaptation occur throughout the work. Nevertheless, it is useful at the end to reflect on a project in its entirety, jointly evaluating the overall process and impacts to capture the learnings and opportunities for future work. This should involve all the key stakeholders, including local participants.
Some of the methods we have used to review work include:
- Focus groups and reflective group brainstorms at the end of a funded project
A whiteboard or large sheets of paper, participatory facilitation, and a few good questions are all you need to generate a robust discussion. These sessions allowed us to drill deeply into feedback and learnings, and to check our understanding.
- One-on-one interviews
Similar to focus groups, interviews offer the opportunity to drill down into meaning in a more confidential and personal way. They suit people who cannot attend face-to-face focus groups, or who find it difficult to contribute to a group discussion.
- Online surveys
Online surveys are generally quick to run, and can be sent to a wide range of people who may not be able to attend face-to-face meetings.
- Desktop review
Reviewing meeting minutes, notes from reflective conversations, and any reports generated during the work helps pull together data useful for evaluation and future planning.
Some broad questions that might support reviewing and evaluation activities include:
- Have we increased understanding of place-based practice?
- Have new collaborations and ways of working emerged?
- Have there been impacts for people using or affected by services and systems?
Evaluating place-based approaches is a challenge. Evaluation in general takes significant time and resources, and in the context of place-based approaches it can be even more demanding. Place-based approaches often seek to improve ways of working across a system and, at their most complex, to overcome deeply entrenched issues. Creating an evidence base for systemic change is notoriously difficult, and it can take many years before shifts in population indicators appear. Shared data and measurement embedded in a place-based approach can support later evaluation, but they too take significant time and resources to establish. It can also be hard to identify all the impacts associated with a place-based approach: shifts in thinking and models of working, increases in leadership capability, and visioning and ideas generation can have ripple effects across the community that are difficult to track.
It is clear that evaluation must occur in stages, with resources provided to:
- evaluate the quality of the features and process of the place-based approach (the overall facilitation, level of community engagement, greater ability to work together, appropriate governance structures, and commitment to continuous learning)
- evaluate the improvements in systemic working, and individually evaluate the many activities and projects that form part of systems change
- review and track long-term outcome indicators at the population level.
This level of evaluation is costly – it is commonly recommended that between 10 and 20 per cent of a program budget is set aside for continuous learning and evaluation.