
How to conduct a systematic review and meta-analysis

The English terms for this kind of study include systematic review, overview, and meta-analysis. Here, "overview" denotes the general term for a systematic review, and "meta-analysis" denotes a quantitative systematic review. The steps of a systematic review are: formulating the topic; searching for, selecting, and evaluating the relevant material (primary studies); synthesizing the relevant data; and drawing conclusions. Systematic reviews can address questions of treatment, etiology, diagnosis, and prognosis.

1.1 Primary criteria

Most clinical questions concern the treatment, etiology, diagnosis, or prognosis of patients. Unless a review clearly states its topic, we can only guess whether its content bears on the diagnosis and treatment of our patients. If the topic of a review cannot be determined from its title or abstract, it is better to choose another article.

The selection criteria for material should include definitions of the patients, the exposure factors, the outcome measures, and the study designs. Specific requirements for each can be found in the earlier articles in this series on treatment, diagnosis, risk factors, and prognosis.

For reviews on the same topic, different choices of patients, exposure factors, or outcome measures can lead to different conclusions. If the authors state their selection criteria and apply them consistently, they can largely avoid the tendency to select, on the basis of personal experience, only those studies that support their original hypothesis.

1.2 Secondary criteria

A comprehensive search that collects the primary studies meeting the selection criteria is an essential part of a review. It should include searching the relevant databases (such as MEDLINE and EMBASE), following up citations in known articles, and consulting experts. Consulting experts helps avoid missing studies that have not been printed, indexed, or cited, and also helps avoid "publication bias": the tendency for positive studies to be published more readily, which leads to overestimating the effect of the intervention. Unless the paper describes its search method, it is difficult to judge whether studies have been missed.

Even for a review confined to randomized controlled trials, we still need to know whether the quality of the primary studies is good; even when the results of the primary studies agree, we still need to know their validity.

At present there is no accepted standard method for assessing validity. Among the various assessment instruments, some are elaborate checklists and some contain only three or four principles or requirements. The criteria given in the earlier articles in this series can serve as a reference.

Collecting the literature, assessing the validity of the primary studies, and extracting the data are key steps in a review, but they are also where errors or bias most easily arise. If two or more people carry out these steps independently and their agreement is good, the results of the assessment are more credible.

Even with strict inclusion criteria, the primary studies in most systematic reviews still differ in patients, exposures or interventions, outcome measures, and study methods. We must decide whether these differences are large enough to seriously affect the results of the primary studies or the basis for combining their data.

For quantitative data, one criterion for deciding whether results can be combined is that the effect measured by each primary study has the same meaning. In a quantitative systematic review, we can test whether the differences among the study results exceed what random variation alone would produce, and by how much. This statistical analysis is called a test of homogeneity. The more significant the test, the less likely it is that the differences among the primary studies are due to chance alone, although a "statistically significant" inconsistency should be interpreted with caution. Conversely, a non-significant test cannot rule out important inconsistency. Therefore, even when the test of homogeneity is not significant, if the differences among the primary study results are clinically important, the combined (overall) result must still be interpreted carefully. On the other hand, even when the primary study results differ significantly, as long as all the primary studies are of high quality, the pooled result remains the best available estimate of the effect of the intervention or exposure.
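As a sketch of how such a homogeneity test can be computed, the following illustrates Cochran's Q statistic, one common choice; the study effects and variances here are invented for illustration:

```python
def cochran_q(effects, variances):
    """Cochran's Q statistic for testing homogeneity across studies.

    effects: per-study effect estimates (the same measure in each study)
    variances: the variance of each estimate
    Q is compared against a chi-square distribution with k-1 degrees of
    freedom; a large Q suggests more inconsistency than chance alone.
    """
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

# Identical effects give Q = 0 (perfect homogeneity):
q_same = cochran_q([0.5, 0.5, 0.5], [0.1, 0.2, 0.1])
# Differing effects give Q > 0:
q_diff = cochran_q([0.30, 0.25, 0.40], [0.01, 0.02, 0.015])
```

Remember the caveat above: a non-significant Q does not prove the studies are homogeneous, especially when there are few studies.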

2 The meaning of the summary results

2.1 The meaning of the overall (pooled) result of a systematic review

A primary clinical study collects data from individual patients. A systematic review collects the data obtained from primary studies and then analyzes and synthesizes them by quantitative (or qualitative) methods.

Simply comparing the numbers of positive and negative primary studies is not a good way to synthesize their results. A systematic review weights each primary study, typically by sample size, so that studies with large samples carry more weight; the overall result is then a weighted average of the primary study results. Sometimes weights are assigned according to study quality, or the weight of poor-quality studies is set to zero (excluding them), to see whether this adjustment materially changes the overall result.
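A minimal sketch of such a weighted average, with invented trial results and sample sizes (weighting by sample size, as the text describes; inverse-variance weighting is the other common choice):

```python
def pooled_effect(effects, sample_sizes):
    """Weighted average of per-study effects; larger trials get more weight."""
    total = sum(sample_sizes)
    return sum(e * n for e, n in zip(effects, sample_sizes)) / total

# Three hypothetical trials: the 200-patient trial dominates the average.
overall = pooled_effect([0.10, 0.05, 0.08], [200, 50, 100])

# Sensitivity check: setting a poor-quality study's weight to zero
# (excluding it) shows whether that study drives the overall result.
without_second = pooled_effect([0.10, 0.08], [200, 100])
```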

Sometimes the outcome measured in each primary study is the same in nature, but the measurement methods or instruments differ. For example, similar studies may use different scales to measure functional status. If the patients and the interventions are the same, it is still worth estimating the average effect of the intervention on functional status. One way to do this is to combine the results of the primary studies through the "effect size". The effect size is the difference between the mean outcome values of the intervention and control groups in a study divided by the standard deviation. The weighted average effect across studies that used different measurement methods can therefore be calculated on the effect-size scale. You may find the clinical meaning of an effect size hard to interpret; it can help to convert it back into a measure whose diagnostic or therapeutic significance is familiar.
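A sketch of the effect-size calculation described above (the group means and standard deviation are invented):

```python
def effect_size(mean_intervention, mean_control, sd):
    """Standardized effect size: the difference between the group means
    divided by the standard deviation, so that studies using different
    measurement scales can be combined on one dimensionless scale."""
    return (mean_intervention - mean_control) / sd

# Hypothetical functional-status scores: an improvement of 0.5 SD.
es = effect_size(62.0, 58.0, 8.0)
```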

Usually we hope that a systematic review will produce a quantitative synthesis, but quantitative synthesis is sometimes inappropriate because of unexplained heterogeneity among the primary study results or because the primary studies are of poor quality. In that case, the results of the primary studies can be displayed in a table or chart.

The precision of the average effect estimate can be expressed by a confidence interval.
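For example, assuming the pooled estimate is approximately normally distributed, a 95% confidence interval can be sketched from the estimate and its standard error (both values invented here):

```python
def confidence_interval(estimate, standard_error, z=1.96):
    """Approximate 95% confidence interval for a pooled effect,
    assuming the estimator is approximately normal."""
    half_width = z * standard_error
    return (estimate - half_width, estimate + half_width)

# A pooled effect of 0.30 with standard error 0.10:
lower, upper = confidence_interval(0.30, 0.10)
```

A narrower interval means a more precise estimate; an interval that includes zero (or, for ratios, one) means the pooled result is compatible with no effect.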

Because a systematic review includes many individual studies, one of its advantages is that its results come from a variety of patients. If the results of the individual studies are consistent, the results of the systematic review apply to all the types of patients included in those studies. Even so, we should allow some margin when generalizing the results: perhaps your patient is older than the subjects included in the review; and if the drugs used in the individual studies differed, we should consider whether one drug is better than another.

The latter question involves subgroup analysis. One of the most important principles in judging whether to believe the results of a subgroup analysis is to view conclusions drawn from comparisons between studies with suspicion. A hypothesized difference between subgroups is more credible when: (1) the difference in treatment effect is large; (2) the difference is highly statistically significant; (3) the hypothesis was stated before the study began and was one of only a few hypotheses tested; (4) the difference is consistent across the individual studies; and (5) it is supported by indirect evidence. If these conditions are not met, the results of the subgroup analysis are rarely real, and the overall result of the systematic review, not the subgroup result, should be accepted.

Reviews of focused topics and single outcomes are more likely to yield valid results and conclusions, but this does not mean that outcomes not included in the review can be ignored: clinical decisions must consider all clinically important outcomes.

The principle of clinical decision-making is that the expected benefit must outweigh the potential risks and costs. This is obvious for decisions about treatment or prevention; questions of benefit and harm also arise when explaining etiology or prognosis to patients.

[Translated from: Oxman AD, et al. JAMA, 1994, 272: 1367-1371 (in English). Translated by Fu Ying, Wu Ting, and Wang Chong]