11. Monitoring, quality assurance, evaluation and reporting

Chapter editors: Inger Uhlén, Jill Carlton, Jan Kik

a. Introduction: context and importance

Monitoring, quality assurance, evaluation and reporting are important parts of a screening programme. The WHO guide on screening programmes states that “Monitoring and evaluating screening programmes at regular intervals are essential.”1 Without good quality data, it is impossible to assess the effectiveness of a screening programme.

However, one of the most important conclusions of the EUSCREEN study is that, even in countries where screening programmes are otherwise well-organised, there appears to be a lack of data collection and data availability. Even when data are collected, these are often inaccessible and generally do not seem to be aggregated and analysed. Considering this unavailability of data, it is not surprising that quality assurance, evaluation and reporting are not being systematically undertaken in most countries, if at all.

Quality assurance is a continuous process that should take place at all steps in the screening pathway. Each step in the pathway can be compromised by insufficient quality. Evaluation takes place at the end of a screening cycle to assess the results of the programme, to determine whether the objectives have been met and to identify possible problems and areas for improvement. Reporting also takes place at the end of a cycle, to account for the results of the programme and provide advice to policy makers based on these results. Effective monitoring of a screening programme is necessary to be able to carry out quality assurance, evaluation and reporting.

 

b. Monitoring

The importance of monitoring cannot be overstated. It is imperative that monitoring is included when a screening programme is implemented.

A prerequisite for effective monitoring is high quality data collection: complete and consistent registration of all relevant steps in the screening process. These include the screening itself, repeat screening where applicable, and referral and treatment (if applicable).

A good quality database, in which all relevant data are registered by personnel capable of ensuring the quality of the data, is an absolute necessity. The database is the central ‘hub’ of the screening programme in this respect, as illustrated in the flow chart.

Another general requirement for monitoring is that, within a programme, screening is performed everywhere according to the same protocol and data are registered in the same way in order to be comparable. Centralised management of the screening programme may make this easier.

Consistent and uniform registration of data is important. Throughout the programme, consistent terminology and consistent definitions should be used. Screening results have to be recorded in uniform values (specified in the screening protocol) and the database should only accept values in the correct format (for example only ‘1,0’ and not ‘1’ or ‘1.0’ or ‘one’). The database should also use mandatory fields, dropdown fields and logic checks (to prevent errors such as a screening date prior to a child’s date of birth) as much as possible to further minimise the risk of data entry errors.
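
As an illustration of such checks, the sketch below shows how a database could refuse records with missing mandatory fields, values outside the protocol, or a screening date before the date of birth. The field names, allowed values and choice of language (Python) are illustrative assumptions only, not taken from any actual screening database.

```python
from datetime import date

# Illustrative only: field names and allowed values are assumptions,
# not taken from any actual screening database or protocol.
MANDATORY_FIELDS = {"child_id", "date_of_birth", "screening_date", "result"}
ALLOWED_RESULTS = {"pass", "refer"}  # dropdown-style: only protocol values accepted

def validate_record(record: dict) -> list[str]:
    """Return a list of data entry errors for one screening record."""
    missing = MANDATORY_FIELDS - set(record)
    if missing:
        return [f"missing mandatory field: {field}" for field in sorted(missing)]

    errors = []
    # Dropdown-style check: only values specified in the screening protocol.
    if record["result"] not in ALLOWED_RESULTS:
        errors.append(f"invalid result value: {record['result']!r}")
    # Logic check: a screening date before the date of birth is impossible.
    if record["screening_date"] < record["date_of_birth"]:
        errors.append("screening date is before date of birth")
    return errors

# Example: this record triggers the logic check.
print(validate_record({
    "child_id": "A0001",
    "date_of_birth": date(2024, 5, 10),
    "screening_date": date(2024, 5, 1),
    "result": "pass",
}))
```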

All data should be timely, complete, unique, valid, consistent and accurate. A checklist that can assist in assessing data quality can be found here.

It is very important that not only the results of screening are registered, but also the results of follow-up (diagnostic assessment). Without these, the quality of the screening, such as the rate of false positives, cannot be assessed. Additionally, the results of treatment have to be registered in order to make it possible to analyse the cost-effectiveness of a screening programme.
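
A small worked example, using entirely hypothetical numbers, shows why follow-up results are indispensable: the proportion of false positives among referrals can only be calculated once diagnostic outcomes are registered.

```python
# Hypothetical counts for illustration only.
referred = 500                # children referred by the screening
confirmed = 150               # referrals confirmed at diagnostic assessment
lost_to_follow_up = 50        # referrals with no registered diagnostic result

with_outcome = referred - lost_to_follow_up
false_positives = with_outcome - confirmed

positive_predictive_value = confirmed / with_outcome      # 150 / 450 = 33%
false_positive_fraction = false_positives / with_outcome  # 300 / 450 = 67%

print(f"PPV among referrals with a known outcome: {positive_predictive_value:.0%}")
print(f"False positives among referrals with a known outcome: {false_positive_fraction:.0%}")
```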

The above means that a screening programme should not only incorporate effective measures to ensure high follow-up, but also ensure that the results of follow-up are reported and registered. An example of a database structure, for vision screening, can be found in Appendix 4.

In EU countries, all data must be registered in compliance with the General Data Protection Regulation (GDPR) (see also chapter 4e on legal considerations). Results have to be verifiable, though, in case of suspected mistakes or even fraud. This means results have to be traceable to screener and child by using codes, with all personal identifying information omitted; the codes and the connected personal information have to be stored separately from the database.
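
The sketch below illustrates this separation in principle: the screening database holds only pseudonymous codes, while the link between codes and personal information is kept in a separate, access-controlled store. All names, structures and the choice of language are illustrative assumptions, not a description of any specific system.

```python
import secrets

# Illustrative only: a minimal separation of identifying data from screening results.
identity_store = {}   # code -> personal identifying information (kept separately, access-controlled)
screening_db = []     # screening records containing only the pseudonymous code

def register_child(name: str, date_of_birth: str) -> str:
    """Create a pseudonymous code and store identifying data in the separate store."""
    code = secrets.token_hex(8)
    identity_store[code] = {"name": name, "date_of_birth": date_of_birth}
    return code

def register_result(code: str, result: str) -> None:
    """Register a screening result against the code only, without personal data."""
    screening_db.append({"code": code, "result": result})

code = register_child("Example Child", "2024-05-10")
register_result(code, "refer")
print(screening_db)  # contains only the code; traceable via the separate store if verification is needed
```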

 

Example

In the Netherlands, all organisations that perform neonatal hearing screening have access to the same database, CANG (Central Administration system Neonatal Hearing screening). The entire data registration process is digital: screening results are uploaded straight from the screening devices to the database. This process enables real-time quality assurance. If applicable, results of diagnostic examinations and interventions are entered in this database as well. Because the data thus collected are uniform and reliable, they can subsequently be employed for evaluation and reporting purposes. An example of a report can be found here (in Dutch only).

 

It is also important to note that a database is only as good as the data entered, which should be correct and complete. Great care should be taken to ensure that all data on screening, referrals and treatment are collected, because without these data effective monitoring is not possible. This requires competent and knowledgeable administrative staff. It is always very important that everyone in the screening pathway not only knows which data should be registered, but also why. When people understand the purpose of what they are doing, they will do it more conscientiously than when they do not.

 

c. Setting benchmarks and defining Key Performance Indicators (KPIs)

Benchmarks provide critical context for the goals of a screening programme. Benchmarks establish what ‘normal’ results would be for a specific screening programme (for example, newborn hearing screening) in a comparable context (for example, a country or region with a certain level of development, standard of living and population density). By comparing the results of a screening programme with the benchmarks it is possible to say whether the programme is performing as would be expected, or below or above expectations.

Benchmarks are measured through Key Performance Indicators (KPIs). KPIs are numeric measures of performance for specific parts of a screening programme. For most screening programmes KPIs would include coverage, referral rates, follow-up rates and treatment results. Some KPIs can be programme-specific: for example, a specific KPI for a newborn hearing screening programme would be the number of children screened within a certain number of days after birth. An example of KPIs for a newborn hearing screening programme can be found here.
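
To make this concrete, the sketch below compares a small set of KPI values against target values. All figures, target values and KPI names are hypothetical illustrations, not recommended benchmarks.

```python
# Hypothetical programme results and targets, for illustration only.
results = {
    "coverage": 0.92,          # screened / eligible
    "referral_rate": 0.04,     # referred / screened
    "follow_up_rate": 0.85,    # referrals with a registered diagnostic result / referred
}
targets = {
    "coverage": 0.95,
    "referral_rate": 0.04,     # here treated as a maximum acceptable rate
    "follow_up_rate": 0.90,
}

for kpi, value in results.items():
    target = targets[kpi]
    # For the referral rate, lower is better; for the other KPIs, higher is better.
    ok = value <= target if kpi == "referral_rate" else value >= target
    status = "meets target" if ok else "does not meet target"
    print(f"{kpi}: {value:.0%} (target {target:.0%}) - {status}")
```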

 

d. Quality assurance

Quality assurance is a means to assure and improve quality throughout the screening pathway. This includes the assessment of the delivered quality as well as the identification of specific problems and barriers and of measures to overcome these.

The screening protocol should provide written, verifiable standards for screening and referral. All professionals who are part of the screening pathway should be intimately familiar with these standards and training and feedback should be based on these standards. Responsibilities should be clearly defined and all professionals should be accountable for the part of the screening pathway that is their responsibility.

Regular data analysis is recommended to check for outliers in the data, such as excessive referral rates. This can partly be automated by building alerts into the database that automatically message a supervisor when inconsistent data are entered (for example when a child with screening values that warrant referral is registered as having passed the test, or the opposite). Once the possibility of a database (entry) error is ruled out, the anomaly should be investigated. The same goes for anomalies such as screeners with consistently very low or very high referral rates, or erratic screening results.
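
The sketch below illustrates two such automated checks: flagging records where the registered outcome contradicts the measured value, and flagging screeners whose referral rates fall outside an expected range. The threshold, field names and rate limits are purely illustrative assumptions.

```python
# Illustrative only: thresholds and field names are assumptions, not protocol values.
REFER_THRESHOLD = 40   # hypothetical: measured values above this warrant referral

def check_consistency(record: dict) -> str | None:
    """Flag records where the registered outcome contradicts the measured value."""
    value, outcome = record["value"], record["outcome"]
    if value > REFER_THRESHOLD and outcome == "pass":
        return f"child {record['code']}: value {value} warrants referral but 'pass' was registered"
    if value <= REFER_THRESHOLD and outcome == "refer":
        return f"child {record['code']}: value {value} does not warrant referral but 'refer' was registered"
    return None

def flag_outlier_screeners(referral_rates: dict, low: float = 0.01, high: float = 0.10) -> list[str]:
    """Flag screeners whose referral rate falls outside an expected range."""
    return [s for s, rate in referral_rates.items() if rate < low or rate > high]

print(check_consistency({"code": "A0001", "value": 55, "outcome": "pass"}))
print(flag_outlier_screeners({"screener_1": 0.03, "screener_2": 0.17, "screener_3": 0.004}))
```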

All screeners should, at regular intervals, receive feedback on the quality of their referrals based on the results of the diagnostic examinations of the children they referred.

While quality assurance is obviously important, it should be kept in mind that an abundance of quality assurance can prove counterproductive. If too much emphasis is put on preventing children with the target condition from passing the test, for example, screeners may become anxious or defensive and, unconsciously, go too far in erring on the side of caution, leading to very high rates of false positives. There should be a reasonable balance between sensitivity and specificity, and care should also be taken to prevent screeners from becoming demoralised2.

There are no universally accepted definitions of quality assurance, but below are two examples of approaches to the concept:

New Zealand’s National Screening Unit identifies the following four dimensions of quality3:

  • equity and access: the extent to which people are able to receive a service on the basis of need, mindful of factors such as socioeconomic status, ethnicity, age, impairment or gender
  • safety: the extent to which harm is kept to a minimum
  • efficiency: the extent to which a service gives the greatest possible benefit for the resources used
  • effectiveness: the extent to which a service achieves an expected and measurable benefit

Muir Gray and Austoker identified these five preconditions for successful quality assurance4:

  • the right culture
  • the existence of explicit standards of good performance
  • an information system that allows each professional and programme to compare their performance with that of others and with the explicit standards
  • authority to take action if a quality problem is identified
  • clear lines of responsibility in managing the process of quality assurance itself

 

 

e. Evaluation

Evaluation is the assessment of the performance of a screening programme in the broadest sense, determining how well it is achieving its goals. Evaluation is performed by analysing the available data and comparing the actual results with predetermined KPIs (the targeted results).

Evaluation can be expanded with complementary quantitative or qualitative research, for example questionnaires for, or interviews with, professionals in the screening pathway or parents of screened children. Such additional research can shed more light on issues identified through the data analysis. The extent of the evaluation is of course also dependent on the available means.
The information generated on the effectiveness and cost-effectiveness of the programme can be used to inform policy makers and decision makers.

Although evaluation is not the same as quality assurance, appropriate evaluation will be able to assist with quality assurance and quality improvement by identifying aspects of the programme that need to be revised or improved. When changes in the screening programme are implemented, based on observed issues, the effects of these changes should also be monitored and evaluated, to assess whether they achieved the desired effects.

 

 

Note that all aspects of a screening programme should be evaluated: not just the screening itself, but also, for example, the communication of the programme to the public, the organisational structure, the reduction in prevalence of the targeted condition in the eligible population, and so forth.

 

f. Reporting and dissemination

The data analysis performed for evaluation can also be used for reporting and dissemination. Reporting is providing an account of the results of the screening programme. This is done on the basis of analysis of the data and can be concise or extensive, depending on the available data and the target audience.

Reporting

The primary audience of reporting will be governments and other stakeholders, especially those who contributed financially to the realisation of the programme. These parties will obviously have an interest in the results of the programme and, especially, its cost-effectiveness. Based on the reported results, policy advice can also be provided on the future of the screening programme. Once a screening programme has been established, reporting can be done periodically. This makes it possible to compare results from subsequent cycles and gain insight into whether the results of the programme are improving.

Dissemination

In addition to reporting aimed at governments and stakeholders, reports can be written for the general public. These may be adapted in content and language to make them accessible to a wider audience, and may serve to disseminate knowledge of screening among the general public. Reporting the results of the screening programme to the general public also contributes to public trust in, acceptance of, and support for the programme. This is important because public support is imperative to the success of a screening programme.

 

g. Checklist

Below is a short checklist with questions to address before commencing monitoring (and therefore before implementing a screening programme):

  • are there sufficient funds for monitoring?
  • are qualified personnel available to perform the monitoring tasks?
  • is it clear who is responsible for all aspects of monitoring?
  • is there a database of sufficient quality in place and, if technical problems should occur, is support available?
  • has the scope of the monitoring been defined (strictly based on data collected during the screening, or supplemented with additional research to gain more insight into relevant processes)?
  • has enough attention been paid to making sure all data collection is compliant with relevant privacy legislation?
  • are all professionals who are part of the screening pathway aware of their responsibilities?
  • is there a clear “chain of command” or paper trail, so that policy makers and funders can have confidence in the data they use – but also know where it then goes (for example to ministries or publications)?

  1. WHO Regional Office for Europe (2020): Screening programmes: a short guide. Increase effectiveness, maximize benefits and minimize harm. Copenhagen: WHO Regional Office for Europe.
  2. Muir Gray JA & Austoker J (1998): Quality assurance in screening programmes. British Medical Bulletin 54(4):983–992.
  3. National Screening Unit (2005): Improving Quality: A Framework for Screening Programmes in New Zealand. Auckland: National Screening Unit.
  4. Muir Gray JA & Austoker J (1998): Quality assurance in screening programmes. British Medical Bulletin 54(4):983–992.