What is it?

Designing an evaluation approach and method that is realistic and fit for purpose.

Why do this?

  • Your evaluation method needs to be realistic and something that you can carry out within the constraints of your project. You are unlikely to have a limitless evaluation budget, so your design needs to make the most effective use of the resources available.

How might you do this?

  • Consider an evaluation design that is pragmatic about what it can achieve

    In order to provide value, you need to ensure that your evaluation will be:

    • Useful – does the evaluation serve a purpose? Will it be responsive to stakeholder information needs?
    • Feasible – what time, resources and expertise are available to you?
    • Accurate – what information do you need to make your decisions?
    • Ethical – will the evaluation cause harm or distress to your target audience?
  • Decide on which data collection methods you will use.

    Data collection is usually quantitative (questionnaires, street surveys, telephone interviews, or web-based surveys) or qualitative (focus groups and individual in-depth interviews). While quantitative research provides ‘hard’, reliable evidence (what, when, how many and how much), qualitative research allows participants to speak for themselves (how and why), and is helpful in exploring complex and interrelated issues, such as people’s attitudes. In essence, quantitative research relates to facts, and qualitative to beliefs.

  • It can be useful to combine both methods in your research design


  • Be pragmatic. You will always have to make choices about, and limit, the number of variables that you are able to measure
  • Remember that many stakeholders may hold information that will be useful for your evaluation

    It is important to consider data collection as early as possible, so that the information you need is available in the form in which you need it

  • Remain objective and ensure that the research is seen as independent. Having governance procedures in place can help.

    For example, managing the evaluation yourself can create conflicts of interest (such as the pressure to be seen to have a successful project clashing with the actual results). Forming a small advisory group of a few social scientists from a local university or college to advise on your evaluation is one option that can be considered. Alternatively, the evaluation can be commissioned externally.


  • Be clear about the measurement indicators, and the research methods and tools that you will use to gather information
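
    Once indicators are agreed, even simple tooling can make reporting consistent. As an illustrative sketch only (the indicator, response data, and function name below are hypothetical, not taken from any real evaluation), a quantitative indicator from a yes/no survey question might be summarised like this:

    ```python
    from collections import Counter

    # Hypothetical responses to one survey question, e.g. "Were you aware
    # of the intervention?" — illustrative data only.
    responses = ["yes", "no", "yes", "yes", "no", "yes", "yes", "no"]

    def indicator_summary(responses, positive="yes"):
        """Return the sample size, count and proportion of positive
        responses for a single yes/no indicator."""
        counts = Counter(responses)
        total = sum(counts.values())
        positives = counts.get(positive, 0)
        return {"n": total, "positive": positives, "proportion": positives / total}

    print(indicator_summary(responses))
    # → {'n': 8, 'positive': 5, 'proportion': 0.625}
    ```

    Recording each indicator this way (sample size alongside the headline figure) makes it easier for stakeholders to judge how much weight the result can bear.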

Intended Outcome

  • Written specification of how the intervention will be evaluated, highlighting key performance indicators and the ways in which measurement information will be collected