About the MetaSENse database

The MetaSENse toolkit is designed to help teachers and school leaders decide which targeted interventions can be implemented to help students with Special Educational Needs and Disabilities (SEND) learn.

The toolkit does not make definitive claims about what works; rather, the information it provides is based on existing evidence, from randomised controlled trials (RCTs)[1] and quasi-experimental design (QED)[2] studies, of what might work in different school contexts.

Targeted interventions (either Tier 2 or Tier 3) are interventions that go beyond good-quality teaching. Tier 2 interventions are often provided in small-group sessions in the classroom during independent work, or at times that do not conflict with other critical content areas. Tier 3 interventions provide intensive sessions for individual students with more significant needs, or whose needs are not sufficiently met by Tier 2 supports. The database only includes targeted interventions that are “manualised” and named (i.e., they have a published and accessible manual). Many named targeted interventions may help students with SEND learn, but not all have been evaluated in research. Those included in the toolkit have been evaluated in studies that used RCTs or QEDs.

The toolkit currently focuses on studies that have evaluated what might work to support reading, writing, mathematical abilities, science and overall attainment. Where studies also evaluated other outcomes, such as behaviour or wellbeing, these are mentioned.

The database is updated on a regular basis. If you are aware of a study that should be included in the MetaSENse database, or spot any issues, please email us at:

Guide to using our toolkit

You can search for a particular named targeted intervention using the search function. If you are not yet sure what you are looking for, you can instead explore different approaches using the filter settings. Once you have set the filters, a list of named approaches will be provided. We recommend that you look beyond this list: it links to a detailed page for each approach which describes in more detail what the approach is, who it is targeted at, how it works, and what impact you can expect, as well as the evidence for it. As each school is different, we suggest that you also draw on your professional experience to make an informed decision about whether or not to adopt an approach. Links to further information about each approach are provided where available.

Please see our three demonstration videos:

  • Does a targeted approach like Cogmed really work?

  • What targeted interventions are successful in raising mathematical abilities in students with SEND in KS1?

  • What interventions may help students with ADHD?

The description of each approach is based on information that is freely available. If you think any of this information is incorrect, you can inform us via email at xxxxx.

Evidence Rating

The evidence of what works is assessed in two ways.

  • Quality of the evidence:

The quality of the studies was assessed using adapted versions of the Joanna Briggs Institute quality assessment tools for quasi-experimental (Barker et al., 2024) and RCT (Barker et al., 2023) study designs. The RCT quality assessment tool included 12 questions and the QED tool included 10 (see Appendix below). Each question received a score of 0 (criteria not met), 1 (criteria partially met) or 2 (criteria fully met). The quality of each study is rated as high, moderate or low according to its total score (see Table 1 below; a short scoring sketch follows the table).

Table 1. Total score thresholds for study quality for RCTs and QEDs

Study design | Low quality | Moderate quality | High quality
RCT          | 0-9         | 10-17            | 18-24
QED          | 0-8         | 9-15             | 16-20
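To make the scoring concrete, here is a minimal sketch in Python, illustrative rather than MetaSENse's own code, of how per-question scores of 0, 1 or 2 sum to a total that is then mapped to the rating bands in Table 1. The function name and structure are our own assumptions.

```python
# Illustrative sketch of the Table 1 rating bands (not MetaSENse's actual code).

def quality_rating(scores: list[int], design: str) -> str:
    """Rate a study from its per-question scores (each 0, 1 or 2).

    `design` is "RCT" (12 questions, max 24) or "QED" (10 questions, max 20).
    """
    n_questions = {"RCT": 12, "QED": 10}[design]
    if len(scores) != n_questions or any(s not in (0, 1, 2) for s in scores):
        raise ValueError("expected one score of 0, 1 or 2 per question")

    total = sum(scores)
    # Upper bounds of the low and moderate bands, from Table 1.
    low_max, moderate_max = {"RCT": (9, 17), "QED": (8, 15)}[design]
    if total <= low_max:
        return "low"
    if total <= moderate_max:
        return "moderate"
    return "high"

# Example: an RCT scoring 2 on eight questions and 1 on four (total 20).
print(quality_rating([2] * 8 + [1] * 4, "RCT"))  # -> "high"
```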


  • Impact of the intervention:

The impact of the intervention refers to the effect size of the study. Means and standard deviations were extracted from the included studies to calculate a standardised mean difference effect size. All standardised mean difference effect sizes were reported as Hedges' g to correct for potential small-sample bias, as most studies included fewer than 50 participants. Where studies did not report the necessary descriptive statistics but did report a measure of intervention effect (e.g., regression coefficients), we converted this into a standardised mean difference effect size (see the full list of formulae in Lipsey & Wilson, 2001).
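As an illustration of the calculation described above, here is a minimal Python sketch, not the project's actual code: Cohen's d from group means and standard deviations, followed by the standard small-sample correction that yields Hedges' g, plus one common Lipsey and Wilson (2001) style conversion for when only a t statistic is reported. Function and variable names are illustrative.

```python
import math

def hedges_g(m_treat: float, sd_treat: float, n_treat: int,
             m_ctrl: float, sd_ctrl: float, n_ctrl: int) -> float:
    """Standardised mean difference (Hedges' g) from group summary statistics."""
    # Pooled standard deviation across the two groups.
    pooled_var = ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2) \
                 / (n_treat + n_ctrl - 2)
    d = (m_treat - m_ctrl) / math.sqrt(pooled_var)  # Cohen's d
    # Small-sample correction factor; matters most when n < 50,
    # as in most of the included studies.
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)
    return j * d

def d_from_t(t: float, n_treat: int, n_ctrl: int) -> float:
    # One standard conversion when means/SDs are missing: recover d
    # from an independent-samples t statistic.
    return t * math.sqrt(1 / n_treat + 1 / n_ctrl)

# Example: intervention group M=105, SD=15, n=20 vs control M=100, SD=15, n=20.
print(round(hedges_g(105, 15, 20, 100, 15, 20), 3))  # -> 0.327
```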

Progress in months

The progress-in-months figure is based on the strength of the effect size, using the conversion tables from the Education Endowment Foundation's Teaching and Learning Toolkit (2018), which can be found here:

https://educationendowmentfoundation.org.uk/education-evidence/teaching-learning-toolkit [last accessed 10/06/2024].
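To show the mechanics of this conversion step only, here is a small sketch of a banded lookup from effect size to months of progress. The band boundaries below are hypothetical placeholders, not the EEF's published thresholds; the real values are those in the conversion tables linked above.

```python
# PLACEHOLDER bands for illustration only; the actual thresholds are in the
# EEF Teaching and Learning Toolkit conversion tables linked above.
ILLUSTRATIVE_BANDS = [
    # (exclusive upper bound on effect size, months of progress)
    (0.10, 1),
    (0.20, 2),
    (0.35, 3),
    (float("inf"), 4),
]

def months_of_progress(g: float) -> int:
    """Map an effect size to months of additional progress via a banded lookup."""
    for upper, months in ILLUSTRATIVE_BANDS:
        if g < upper:
            return months

print(months_of_progress(0.33))  # -> 3 (with the placeholder bands)
```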

Appendix: Study quality assessment questions

These assessment questions were adapted from the Joanna Briggs Institute quality assessment tools (Barker et al., 2023, 2024).

Questions to assess RCTs

Q1. Was true randomization used for assignment of participants to treatment groups?

Q2. Was allocation to treatment groups concealed?

Q3. Were treatment groups similar at the baseline?

Q4. Were treatment groups treated identically other than the intervention of interest?

Q5. Were outcome assessors blind to treatment assignment?

Q6. Were outcomes measured in the same way for treatment groups?

Q7. Were outcomes measured in a reliable way?

Q8. Was follow up complete and if not, were differences between groups in terms of their follow up adequately described and analysed?

Q9. Were participants analysed in the groups to which they were randomised?

Q10. Was appropriate statistical analysis used?

Q11. Was the trial design appropriate and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial?

Q12. Was the implementation of the study described, and if so, was an acceptable level of fidelity achieved in the delivery of the intervention?

Assessment questions for QEDs

Q1. Is it clear in the study what is the ‘cause’ and what is the ‘effect’ (i.e., there is no confusion about which variable comes first)?

Q2. Were participants included in any comparisons similar?

Q3. Were the participants included in any comparisons receiving similar treatment/care, other than the exposure or intervention of interest?

Q4. Was there a control group?

Q5. Were there multiple measurements of the outcome both pre and post the intervention/exposure?

Q6. Were the outcomes of participants included in any comparisons measured in the same way?

Q7. Were outcomes measured in a reliable way?

Q8. Was follow up complete and if not, were differences between groups in terms of their follow up adequately described and analysed?

Q9. Was appropriate statistical analysis used?

Q10. Was the implementation of the study described, and if so, was an acceptable level of fidelity achieved in the delivery of the intervention?

Notes

[1] Randomised controlled trials (RCTs) are seen as the ‘gold standard’ way of evaluating what works. In an RCT, participants are randomly assigned to one of two groups: an experimental group, which receives the intervention, or a control group, which receives either business-as-usual support in the classroom or another type of activity that is not of interest (known as an active control).

[2] Quasi-experimental designs (QEDs) are studies in which two groups of participants are matched on one or more characteristics; one group receives the intervention, while the other receives either business-as-usual support or an active control intervention. The difference from RCTs is that in QEDs the groups are not randomly allocated.