
Evaluation methodology

Standardising impact measurement for well-being programmes

Our Well-being programme evaluation enabled us to explore the best way to measure the success of well-being projects. Here we share the details of our well-being methodology, which can give you an insight into how we carry out our evaluations.

Standardised measures

Producing standard measures of well-being enables us to evaluate and compare projects more easily, and this is something we wanted to do for the National Well-being Evaluation. The evaluation saw the creation of a set of tools: tested and validated questionnaires. We surveyed a robust, stratified sample and used project case studies to complement the evidence.

The use of both qualitative and quantitative data, together with findings from other research, contributed to our robust final report.

Questionnaires

Our Well-being evaluation included an extensive survey across projects and programmes. To be able to compare and collate this information more easily, we used a uniform approach. We used tested and validated questions and scales, including:

  • International Physical Activity Questionnaire
  • Centre for Epidemiologic Studies Depression Scale
  • Warwick-Edinburgh Mental Well-being Scale.

Using a Core+ model, we could adapt questionnaires to collect the most useful information. Our standardised core questions were used across the research, or were mirrored for specific groups, and we could then supplement them with additional questions for specific projects and areas.
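The Core+ idea can be pictured as a shared core list with a project-specific supplement appended. This is an illustrative sketch only; the question texts and function names below are hypothetical, not taken from the actual questionnaires.

```python
# Hypothetical illustration of the Core+ model: every survey shares
# the same core questions, and each project appends its own supplement.
CORE_QUESTIONS = [
    "Overall, how satisfied are you with your life nowadays?",  # invented example text
    "How often did you take part in physical activity last week?",  # invented example text
]

def build_questionnaire(core, supplement):
    """Combine the shared core with project-specific questions,
    keeping the core first so results stay comparable across projects."""
    return list(core) + list(supplement)

# A hypothetical healthy-eating project adds one supplementary question.
healthy_eating = build_questionnaire(
    CORE_QUESTIONS,
    ["How many portions of fruit and vegetables did you eat yesterday?"],
)
```

Because the core block is identical everywhere, responses to it can be pooled and compared across all projects, while the supplement captures project-specific detail.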

You can read and use our questionnaires and accompanying handbook for evaluation well-being projects under Publications below. Please note, however, that nef and CLES carried out the data checking and complex analysis for our report. We are not able to support you about the use of questionnaires and the handbook.

Sample design

We evaluated 17 Well-being portfolios and 2 Changing Spaces programmes. To select survey participants we used a stratified sampling method. After projects were selected, their project managers were briefed on how to randomly select participants. Doing this ensured the evaluation surveyed a representative group. That’s important as it means our results reflect the impact across the wider programme, not just on specific groups.
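The random selection step project managers were briefed on can be sketched as a simple draw in which every registered participant has an equal chance of inclusion. This is a minimal sketch of the general technique; the function name, register format, and sample size here are assumptions for illustration, not the evaluation's actual procedure.

```python
import random

def select_participants(register, k, seed=None):
    """Draw k participants at random from a project's register,
    giving every person an equal chance of being surveyed."""
    rng = random.Random(seed)  # seeded here only to make the example repeatable
    return rng.sample(register, k)

# Hypothetical register of 100 participants; survey a random 5.
register = [f"participant-{i}" for i in range(1, 101)]
chosen = select_participants(register, 5, seed=42)
```

Selecting at random, rather than letting managers pick volunteers, is what makes the surveyed group representative of the wider programme.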

Survey use

Project managers surveyed participants three times:

  • Entry: at the start of the participant’s engagement with the project
  • Exit: when the participant’s time with the project was coming to an end
  • Follow-up: three to six months after leaving the project.

Tracking the same participants at each stage helped measure any changes to their well-being.

Sample size

Sufficient sample sizes are needed for robust research and to give us confidence in our findings. In total, 5,805 questionnaires were completed for the Well-being evaluation:

  • 3,269 entry questionnaires
  • 1,964 exit questionnaires
  • 572 follow-up questionnaires

The accuracy of a sample is often expressed as a ‘confidence interval’, or margin of error. The confidence intervals for the entry, exit and follow-up surveys were 1.71, 2.21 and 4.1 respectively, which are considered robust for our evaluation.
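The report does not state how these figures were derived; as a plausibility check, they match the standard 95% margin of error for a survey proportion at the worst case p = 0.5 applied to the three sample sizes above. The sketch below assumes that confidence level and formula, which is our assumption rather than something the report confirms.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    in percentage points. p=0.5 is the worst case (widest interval);
    z=1.96 is the critical value for 95% confidence (assumed here)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Sample sizes from the Well-being evaluation surveys.
for label, n in [("entry", 3269), ("exit", 1964), ("follow-up", 572)]:
    print(f"{label}: n={n}, margin of error = {margin_of_error(n):.2f}")
```

Under these assumptions the three sample sizes give margins of roughly 1.71, 2.21 and 4.10 percentage points, matching the reported figures, and showing why the smaller follow-up sample carries the widest interval.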

Case studies

Case studies allow us to explore certain areas and projects in more detail. For the Well-being evaluation, this focused on the connections between different strands of well-being and their role in raising individuals’ well-being levels. Case studies can also help reflect the diversity of projects within a funding programme, and ensure projects are included even where it might be difficult for them to administer questionnaires.

Publications

Evaluating well-being: How BIG has measured impact
Well-being evaluation tools: A 'how to' handbook

Questionnaires:
