Event summary prepared by Claire Currie.

 

Local and national perspectives on evaluation were brought together during a one-day conference hosted by the Nuffield Trust in June 2015.

Dr Martin Bardsley, Director of Research at the Nuffield Trust and conference Chair, explained in his opening address how there is increasing interest in evidence-based health policy and in seeing return on investment – and evaluation has a key role in this.

The conference brought people together to consider how to best foster collaboration between national evaluators and local areas.

This summary draws together the key points from the conference and considers the extent to which a balance has been struck between local and national evaluation.

The conference began with an overview of the current strategic direction of evaluation, followed by sessions outlining innovative methods of quantitative and qualitative evaluation, with local and national examples. 

 
Ten points to consider when planning an evaluation, from Professor Nicholas Mays

Professor Nicholas Mays, Professor of Health Policy at the London School of Hygiene and Tropical Medicine, opened the day by giving an overview of the direction and purpose of evaluation. He cautioned against diving into evaluation without first thinking about the purpose, audience for the results, and potential impact of the findings. Points that he made included:

  • Evaluation can be important and persuasive in guiding policy direction, and evaluation of policy can provide substantial insight for future decisions – particularly pertinent when there is a plethora of new models of care.
  • There is a tendency to over-emphasise study design, methods and robust approaches, while often not enough attention is given to the context in which the evaluation will be used.
  • There are three broad purposes of evaluation: piloting to test effectiveness (does it work?), informing implementation (how should it be done?) and supporting policy development. 

He also set out ten points to consider when planning an evaluation (see figure above as well as the full presentation).

Moving on to methods of evaluation, two presentations then outlined innovative approaches to quantitative evaluation.

Gustaf Edgren, Scientific Advisor, Health Navigator Ltd and Associate Professor of Epidemiology, Karolinska Institutet, Sweden, described a method of using randomised controlled trials (RCTs) in complex settings.

In Sweden, a small proportion of the population accounts for a large proportion of health care costs (1% of the population accounts for 30% of costs, and 10% for 75%). A case management intervention was planned with the aim of reducing emergency department attendances by these ‘frequent flyers’. Gustaf and his colleagues wanted to use an RCT to produce good quality evidence for the intervention, but were constrained by governance arrangements which meant they were unable to conduct research in a health care service setting. They overcame this by agreeing to use Zelen’s design for the RCT, rather than the traditional design (see slide and presentation).

 

While Gustaf recognised that using Zelen’s design may lead to ethical dilemmas concerning consent, he also saw the advantages of the method, in that it:

  • provides an alternative when it is not possible to conduct research in a clinical setting
  • presents a robust method in terms of generating good quality evidence
  • reflects the real-world effectiveness of an intervention rather than its efficacy under trial conditions
  • allows for an iterative design through frequent interim analyses, which enable modifications to be made to the intervention – this meant the programme could adapt in real time to optimise its effectiveness and was therefore more likely to achieve its objectives
  • is cheaper than a traditional RCT. 

The evaluation carried out by Gustaf and colleagues demonstrated that the intervention did indeed reduce emergency department attendances. A published study gives more detail of this work.
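To make the logic of the design concrete, the sketch below simulates a Zelen-style (single-consent) trial in Python: the whole eligible population is randomised before consent, consent is sought only from those allocated to the intervention, and the arms are compared as randomised using routinely collected outcomes. This is a minimal illustration with invented numbers – the sample size, uptake and effect size are assumptions – and not the analysis used in the Swedish study.

```python
# Minimal sketch of a Zelen (single-consent) randomised design, using simulated data.
# All numbers are invented for illustration; this is not the Swedish study's analysis.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                   # eligible 'frequent flyers'

# 1. Randomise everyone *before* any consent is sought.
arm = rng.integers(0, 2, size=n)             # 0 = usual care, 1 = offered case management

# 2. Consent is sought only in the intervention arm; some people decline.
consented = (arm == 1) & (rng.random(n) < 0.7)   # assumed 70% uptake

# 3. Simulate emergency department attendances over follow-up from routine data.
baseline_rate = 4.0                          # assumed mean ED visits per year
effect = 0.75                                # assumed rate ratio among those actually treated
rate = np.where(consented, baseline_rate * effect, baseline_rate)
ed_visits = rng.poisson(rate)

# 4. Intention-to-treat comparison: analyse by allocated arm, not by uptake,
#    which preserves the benefit of randomisation.
print(f"Mean ED visits - usual care arm:   {ed_visits[arm == 0].mean():.2f}")
print(f"Mean ED visits - intervention arm: {ed_visits[arm == 1].mean():.2f}")
```

Because people who decline are still analysed in the arm to which they were allocated, the estimate reflects the effect of offering the programme in practice, which is one reason the design is seen as capturing real-world effect.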


Cono Ariti, Senior Research Analyst at the Nuffield Trust, then described the retrospective matched control design used by the Nuffield Trust in conducting quantitative evaluation.

Cono first set out the problems associated with carrying out evaluations of health models and interventions in practice. He highlighted that, while RCTs are the ‘gold standard’ for testing the effectiveness of an intervention, this method is often not feasible or ethical, and its generalisability is limited. However, if an observational study is undertaken instead, typically no natural experiment exists and there is often no comparable control group. Cono also explained that evaluation is often only thought of after an intervention has been implemented. These issues have led to the development of alternative, often innovative, methods of evaluation.

 

A retrospective matched control design uses routine databases to match individuals who have received an intervention with a control. The advantages of this method are that it:

  • makes it possible to plan the evaluation retrospectively, after an intervention has already been implemented
  • presents a robust method in terms of generating good quality evidence
  • reflects the real-world effectiveness of an intervention rather than its efficacy under trial conditions
  • is cheaper than a traditional RCT.

To undertake this method locally, Cono set out some key questions to consider in the slide above.

The conference audience asked about how evaluators in local areas could be encouraged and trained to apply these methods more widely. The Nuffield Trust has written a concise guide on the steps that need to be undertaken to use this method of evaluation. Many of the initial steps can be carried out locally with minimal external support, but it may then be necessary to draw on statistical expertise to carry out the specific analytical methods.
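As a rough illustration of what the analytical step involves, the sketch below applies a generic propensity-score matching approach to simulated routine data: a model estimates each person's probability of receiving the intervention from covariates such as age and prior admissions, each treated person is matched to the nearest untreated person on that score, and outcomes are then compared across the matched groups. The data, variable names and effect sizes are invented, and this is a sketch of one common matching technique rather than the Nuffield Trust's own method.

```python
# Simplified sketch of a retrospective matched control analysis on routine data.
# Generic propensity-score matching with nearest neighbours; data are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "age": rng.integers(40, 95, n),
    "prior_admissions": rng.poisson(1.5, n),
    "comorbidities": rng.integers(0, 6, n),
})

# Intervention uptake depends on the covariates, so a naive comparison would be biased.
logit = -4 + 0.03 * df["age"] + 0.4 * df["prior_admissions"] + 0.2 * df["comorbidities"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Outcome: emergency admissions in the following year (a modest benefit is assumed).
df["outcome"] = rng.poisson(np.clip(
    0.5 + 0.3 * df["prior_admissions"] - 0.2 * df["treated"], 0.05, None))

covariates = ["age", "prior_admissions", "comorbidities"]

# 1. Estimate each person's propensity to receive the intervention.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated person to the nearest untreated person by propensity score
#    (matching with replacement, for simplicity).
treated = df[df["treated"]]
controls = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[idx.ravel()]

# 3. Compare outcomes between the matched cohorts.
print(f"Treated mean admissions:         {treated['outcome'].mean():.2f}")
print(f"Matched control mean admissions: {matched_controls['outcome'].mean():.2f}")
```

In practice, checking that the matched groups are well balanced on the covariates is as important as the final comparison, which is where the statistical expertise mentioned above comes in.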

Attention then turned to using qualitative methods of evaluation.

Professor Alicia O’Cathain, Professor of Health Services Research at the University of Sheffield, explained that qualitative methods can help answer the ‘why’ questions – which can be very useful for complex interventions. She pointed out that qualitative evaluation is about understanding, not measuring, and that qualitative methods can address problems with quantitative evaluation by:

  • explaining findings (context, mechanisms of action, implementation) when the quantitative evaluation has not identified an effect
  • preventing a quantitative evaluation from going ahead by identifying problems at the pilot stage
  • answering the question ‘it works, but what is “it”?’.

Alicia also provided her thoughts on how to maximise the value of qualitative evaluation:

  1. Do it early; otherwise the learning only applies to future trials.
  2. Publish learning for a specific trial or future trials.
  3. Think beyond interviews: there are a range of methods, for example non-participant observation.
  4. Try iterative, dynamic or participatory approaches at the feasibility phase.
  5. Evaluation is not just for complex interventions.
  6. Think about the range of work that could benefit from evaluation.

Alicia also highlighted the MRC framework for developing and evaluating complex interventions as a useful resource.


Ruth Thorlby, Acting Director of Policy at the Nuffield Trust, then presented an example of a highly complex mixed method evaluation looking at six volunteer projects across England. This ongoing piece of work is funded by the Cabinet Office.

Ruth conceptualised qualitative evaluation as investigating ‘what works, for whom and under what circumstances’ as described by Tilley.

The qualitative element of this evaluation sought to gather information on staff and patient experiences. This example highlights that, although a realist approach to qualitative evaluation does not require extensive theoretical expertise, there are some important considerations, such as the need to:

  • focus on the theories behind the intervention, the reality of the teams/individuals implementing it, how it might change as it is implemented and the changes happening around it
  • focus on the visible and the hidden (underlying) factors which help explain findings
  • ensure any user experience tools are relevant and useable.

Ruth also highlighted the development of a new user reported tool to measure older people’s experiences of care coordination, which has taken 18 months to develop and has been funded by the Aetna Foundation. This work has been undertaken by the Nuffield Trust, Picker Institute Europe, The King’s Fund, the International Foundation for Integrated Care and National Voices. 


Professor Martin Marshall, Professor of Healthcare Improvement at University College London (UCL), then spoke about ‘researchers in residence’ as an example of a method that is moving research closer to practice. He explained that this is a particularly useful method of qualitative research because:

  • lots of research is available but is not accessible to those on the ground
  • evaluation findings are rarely useful to those on the ground
  • researchers and those working on the ground are interested in different questions – a researcher in residence helps bridge that gap.

The role of the researcher in residence is to:

  • be a core member of the operational team
  • bring a focus on negotiating rather than imposing evidence
  • bring specific expertise such as understanding evidence and its interpretation, theories of change, evaluation, and using data.

Dr Laura Eyre, who is currently a researcher in residence at UCL, said that this model focuses less on whether the programme works and more on how to use research evidence to optimise effectiveness of the programme. She described the role as ‘holding up a mirror’ to what is going on.

Read more about this model.

 


Several examples of approaches to evaluation by both local areas and national evaluators were then presented and discussed during parallel sessions. These were themed around:

  • Using innovative methods to obtain patient views, including:
    • Self-report questionnaires in work being led by the Nuffield Trust to evaluate stroke survivor and carer outcomes.
    • Using trained volunteers to conduct peer-to-peer interviews in Wakefield.
    • An evaluation of measuring patient activation in the NHS, being led by NHS England and the University of Leicester.
  • Locally available linked patient-level data – the successes and challenges experienced by three areas (Kent, Leicestershire and Waltham Forest East London and City (WELC)), as well as those experienced by the Nuffield Trust as a national evaluator.
  • The value of using a mixed-method approach, such as in evaluating:
    • a complex integrated care system in North West London by the Nuffield Trust
    • a Prime Minister’s Challenge Fund scheme to improve access to services by the Nuffield Trust
    • a ‘real-life’ quality improvement programme in primary care by the North Thames CLAHRC (Collaboration for Leadership in Applied Health Research and Care).

In these parallel sessions an appetite for undertaking evaluation at local level was evident. Among the opportunities offered by evaluation that local areas highlighted were:

  • A greater understanding of whether services were meeting anticipated aims, which should then inform commissioning.
  • Quantitative evaluation: locally linking data across a greater range of sources sooner than would be achieved on a national scale, which it is hoped will support integration plans. Examples where this is already underway were heard from Kent, Leicestershire and WELC.
  • Qualitative evaluation: can help embed patient involvement into practice as individuals keep in touch and can feed useful information into an organisation on an on-going basis, for example in Wakefield.

However, several barriers to local evaluation were also discussed:

  • The additional cost that evaluation adds to implementation.
  • Reluctance to commit to long-term evaluation, as results are needed more immediately to inform commissioning. 
  • Reluctance to commit to evaluation before being sure that the intervention will be successfully implemented, for example if it relies on recruiting volunteers.

Quantitative methods:

  • Complex arrangements and processes are required for data linkage between sources, and these demand time and perseverance to navigate.
  • There are stringent requirements concerning consent to enable individual level data to be used for evaluation (and EU legislative changes are also expected).
  • Expertise is required to analyse the data (one local area has put in place super users to support others).
  • Capacity is needed to undertake the work to set up locally linked data and any analytical work.
  • Poor data quality and completeness can limit the ability to undertake evaluation.

Qualitative methods:

  • Can be very time consuming.
  • There may be literacy and language barriers.
  • Expertise in a range of methods is needed.
  • Those who volunteer to participate may be unrepresentative of the population.
  • The method selected may be unsystematic and may reflect a small number of views.
  • If volunteers are used to conduct peer-to-peer interviews, recruitment, training and support are needed.
  • Interpreting data collected from individuals can cause ethical dilemmas.

These discussions also raised some suggestions of what support local areas need to undertake evaluation, including:

  • Help in breaking down the barriers that are widely experienced, such as governance arrangements.
  • Guidance on the choice of method to use to suit the purpose of the evaluation.
  • Dissemination of resources which can be used or adapted for use as local evaluation tools (including national audits).
  • Facilitation of a learning set through which experiences can be shared.
  • Encouraging prioritisation of evaluation through organisational structures.
  • Technical help – which will be needed at some point as local areas can only get so far in-house.
  • Evaluations of complex systems, interventions or multi-level programmes being carried out.
  • Guidance on how to translate evidence into actions that can be applied.

Charles Tallack, Head of Operational Research and Evaluation at NHS England, described the plans for a national evaluation of the Vanguard sites established following the Five Year Forward View. The aim of the Vanguard programme is to tackle the care and quality gap described in the Forward View. In developing the approach to this national evaluation, Charles met with representatives from the Vanguard sites to gather their views, as well as with a range of national evaluators. The purpose of the evaluation is to identify models of care that improve outcomes and efficiency, and that are replicable and can be shared and disseminated. 

As Charles described, it is hoped the Vanguard sites will contribute to the national evaluation by:

  • basing their model of care on a sound theory of change and a logic model
  • basing their model of care on evidence and best practice
  • describing clearly and accurately what they have done
  • working to a rapid cycle of evaluation and improvement to optimise the model of care
  • assessing the model of care against the local context.

Charles went on to say that he expected the national role of the Vanguards in evaluation to be:

  • providing support and sharing evaluation expertise
  • identifying a logic model
  • disseminating learning
  • leading on resolving issues that are common to all areas, where appropriate.

Some of the challenges with complex evaluation at national level include:

  • the need to agree a clear purpose for the evaluation, including whether it is aiming to evaluate implementation or policy development
  • understanding the line between research and evaluation to ensure that due ethical consideration is given to proposals
  • governance issues regarding data access and linkage
  • the lack of nationally available, relevant, routinely collected patient experience measures
  • the expectation that these new models will make a big difference over a short period of time – it usually takes a long time to get an answer as implementation needs to happen first.

However, local areas can work together to help make sure the national evaluation gets it right by:

  • identifying local metrics that could potentially be used nationally
  • developing patient experience measures
  • identifying or suggesting more nuanced measures than emergency admissions.

The vision

The final session of the conference was chaired by Nigel Edwards, Chief Executive of the Nuffield Trust. He facilitated a session to draw together the learning from the day and identify how we can bridge the gap between evaluation principles and practice. The panel members were Professor Martin Marshall; Dr Elizabeth Orton, Consultant and Associate Professor in Public Health, Leicestershire County Council and The University of Nottingham; Charles Tallack; and Dr Lesley Wye, National Institute for Health Research (NIHR) Knowledge Mobilisation Fellow, University of Bristol.

The discussion looked at the need to:

  • use the evaluation method most appropriate to meet the purpose of the evaluation
  • use evaluation findings to inform practice
  • share expertise, methods and skills required to undertake good quality evaluation
  • utilise expert capacity in evaluation in the most efficient way.

The panel discussed a number of challenges raised by the audience including:

  1. Lack of available training. While some training on evaluation methods is delivered as part of a Master’s in Public Health course or other formal qualifications, there is little training on the complexities of methods of evaluation. Training should include skills in both quantitative and qualitative evaluation.
  2. Capacity in the workforce. Identifying where the skills are in the local health and social care systems to carry out evaluation is difficult. Resources in public health teams are often stretched and there is variable capacity in Commissioning Support Units (CSUs). CLAHRCs and Academic Health Science Networks (AHSNs) are ideally placed but there is more demand than capacity. One conference delegate highlighted local authority community insight teams as a potentially untapped resource.
  3. Cost. The cost of outsourcing evaluations to external organisations with expertise in the area is often unaffordable for local areas. Therefore, as far as possible, it is preferred that evaluations are undertaken in-house. For evaluations of national programmes, there is the issue of who pays for the evaluation.
  4. Complexity of the system. Complex data governance arrangements are a huge barrier, both nationally and locally, to carrying out evaluations. The complex health and care system also creates barriers to developing the working relationships that are key to implementing change, and patient experience measures are underdeveloped.
  5. Lack of available metrics aligned with policy direction. Despite the national direction being to deliver integrated, patient-centred care, patient experience metrics are underdeveloped, and evaluating progress in this area has therefore been hindered.  

The panel discussion demonstrated that there is huge enthusiasm for evaluating new models of care and health initiatives. The challenge is how to capture that enthusiasm and support local and national teams to carry out evaluations when appropriate, using robust methods and making best use of the resources available to them, ultimately generating findings that will support service development and improve patient care.

Suggestions of how the vision can be achieved

  • Hold honest discussions around the purpose and expectations of an evaluation locally when planning an evaluation, including what action will be taken if the findings are not positive.
  • Identify local resources that may be able to support evaluation, such as insight teams in local authorities and third sector analysts.
  • Develop local capacity in evaluative methods, for example by attending training courses on both quantitative and qualitative methods (offered by some universities and CLAHRCs, for example).
  • Develop networks which may be able to build capacity through shared learning. The South West already does this to some extent.  
  • Improve collaboration between national and local evaluators, with the aim of optimising the use of evaluation resources.
  • Continue to develop methods to enable nuanced approaches to evaluation.
  • Consider whether a national role to provide leadership for evaluation would be useful.


Suggested citation: Currie, C. (2015) Evaluation of complex care summit: Event summary. Nuffield Trust.