

        Importance of reporting quality: An assessment of the COVID-19 meta-analysis laboratory hematology literature

        World Journal of Meta-Analysis, 2020, Issue 4

        John L Frater

        John L Frater, Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO 63110, United States

        Abstract

        Key Words: COVID-19; Meta-analysis; Reporting quality

        INTRODUCTION

        Meta-analysis, the examination of data from multiple independent studies of the same subject, is a useful form of quantitative review that can provide improved statistical power compared to studies with smaller numbers of subjects and demonstrate the presence or lack of consensus regarding a specific scientific question[1]. In recent years, the number of published meta-analyses has increased, particularly in the realm of clinical medicine, and they have become important sources of information for practitioners, especially in areas where information is rapidly evolving.

        In pathology and laboratory medicine, meta-analyses are published less frequently than in other areas of clinical medicine. Kinzler and Zhang, in their survey of the meta-analysis literature in pathology journals compared to medicine journals, note a significantly larger percentage of publication space dedicated to meta-analyses in medicine journals[1]. This is despite the proven high quality of meta-analyses in both journal categories, as evidenced by similar adjusted citation ratios (defined as an article’s citation count divided by the mean citations for the meta-analysis, review, and original research articles published in the same journal in the same year)[1].

        Because meta-analyses are an important source of information for clinicians and others, it is essential that they are formatted to allow the reader to easily assess their strengths and weaknesses. Several checklists have been established by national and international committees, including those of the Institutes of Medicine (IOM), Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA), and Meta-analyses of Observational Studies in Epidemiology (MOOSE)[2-4]. A recent survey by Liu et al[5] using the PRISMA criteria noted that the reporting quality of a sampling of medicine meta-analyses was higher than that of pathology meta-analyses. The overall reporting quality of laboratory hematology-focused meta-analyses was not specifically addressed[5].

        The coronavirus disease-2019 (COVID-19) pandemic, which originated in the city of Wuhan in the Hubei Province of China in December 2019, quickly spread to Europe and then to North America[6,7]. In an effort to study the disease and improve the world health community’s response, over 30000 papers have been added to the medical literature since December 2019, based on a search of the PubMed database for the keyword “COVID-19” conducted on July 16, 2020. In a situation such as this, it is essential for the practicing clinician to have access to reliable studies with good statistical power, hence the need for meta-analyses with high reporting quality. Laboratory hematology is an essential component of the medical response to COVID-19, since several biomarkers of infection derived from the complete blood count (CBC) and coagulation testing are of proven utility in assessing prognosis and likely outcome[8-10]. As in all quickly evolving fields, a large fraction of the accessible COVID-19 medical literature appears in the form of preprint publications. These are manuscripts that are indexed in services such as Google Scholar but have not yet completed the peer-review process. The purpose of this study is two-fold: to assess the reporting quality of COVID-19 meta-analyses focused on laboratory hematology, and to compare the reporting quality of published COVID-19 studies to the preprint literature.

        MATERIALS AND METHODS

        Study selection

        The study selection process is summarized in Figure 1. A search was conducted in PubMed and Google Scholar using the search terms “COVID-19” OR “COVID” OR “SARS-CoV-2” OR “coronavirus” AND “meta-analysis”, which yielded 34 entries in PubMed and 3080 in Google Scholar (total = 3114 studies). Initial screening for letters to the editor, editorials, and non-meta-analysis reviews removed 3029 publications, leaving 85 entries for further consideration. After removal of 27 duplicate entries, 58 publications remained. The full text of each of the remaining 58 studies was examined for content, and 39 studies that fell outside the scope of the analysis were removed, leaving 19 studies for the analysis.

        Checklists

        The studies were separated into published studies (n = 9, Table 1)[11-19] and manuscripts appearing in the preprint literature (n = 10, Table 1)[20-28]. For the purposes of this study, preprint literature refers to manuscripts discoverable in the Google Scholar database that have been submitted for publication and assigned an identifier through a service such as doi.org or preprints.org but have not completed the peer-review process.

        The studies were then evaluated using the IOM, PRISMA, and MOOSE criteria. The IOM has compiled a list of 5 required elements that serve as recommended standards for meta-analysis (Table 2)[2]. The PRISMA group compiled a list of 27 checklist items to facilitate the assessment of the reporting quality of meta-analyses[3]. The MOOSE criteria consist of a 34-point checklist categorized under 5 divisions[4]. The criteria were evaluated for each study, and a numeric score was assigned based on the sum total of positive results for each element of the IOM, PRISMA and MOOSE checklists.
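        As an illustration of the scoring approach described above, the following minimal Python sketch treats each checklist element as a boolean and counts the positive results to obtain a study's score. It is not part of the original analysis; the item labels and values are abbreviated and hypothetical.

```python
# Minimal sketch of checklist scoring: each item is marked True if the study
# reports it; the study's score is the number of satisfied items.
# Item labels and values below are abbreviated/hypothetical.
prisma_items = {
    "title_identifies_report_as_meta_analysis": True,   # PRISMA item 1
    "structured_summary": True,                          # PRISMA item 2
    "rationale": True,                                    # PRISMA item 3
    "risk_of_bias_across_studies_methods": False,         # PRISMA item 15
    # ... remaining items omitted for brevity
}

def checklist_score(items: dict) -> int:
    """Return the number of checklist elements satisfied by a study."""
    return sum(bool(v) for v in items.values())

print(f"{checklist_score(prisma_items)}/{len(prisma_items)} items reported")
```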

        Statistics

        The mean PRISMA and MOOSE scores for the accepted/published and preprint studies were compared using the two-tailed Student t-test, with significance defined as P < 0.05. The PRISMA and MOOSE scores were compared using Pearson’s correlation coefficient. All statistics were calculated using Excel (Microsoft, Redmond, WA, United States).
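        Although the calculations were performed in Excel, an equivalent analysis can be sketched in Python with SciPy. The scores below are illustrative placeholders, not the study data.

```python
from scipy import stats

# Illustrative placeholder PRISMA scores (not the actual study data)
published_prisma = [21, 22, 19, 20, 23, 18, 21, 20, 20]        # n = 9
preprint_prisma  = [22, 19, 18, 21, 23, 20, 19, 22, 18, 20]    # n = 10

# Two-tailed Student t-test (equal variances assumed); significance at P < 0.05
t_stat, p_value = stats.ttest_ind(published_prisma, preprint_prisma, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")

# Pearson correlation between each study's PRISMA and MOOSE scores
prisma_scores = published_prisma + preprint_prisma
moose_scores  = [20, 24, 18, 19, 26, 17, 22, 21, 19,
                 21, 18, 17, 23, 25, 19, 18, 24, 16, 20]
r, _ = stats.pearsonr(prisma_scores, moose_scores)
print(f"Pearson r = {r:.2f}")
```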

        RESULTS

        Qualitative aspects of the identified studies

        Qualitative features of the studies are summarized in Table 1. Most studies (17 of 19, 89%) were based on Chinese patient populations. For the remaining 2 studies, the national origin of the patient populations was not defined, but given the affiliations of the authors, the patient cohorts were also likely from China. The number of patients in each study was highly variable, ranging from 50 to 59254. The hematology data reported in the studies were heterogeneous. The most commonly evaluated tests were the white blood cell count (15 studies), absolute lymphocyte count (15 studies), and platelet count (10 studies).

        Because of the limited number of reporting elements in the IOM checklist (Table 2), a comparison with the PRISMA (Table 3) and MOOSE (Table 4) checklists was not performed. The mean IOM score was 3.8/5 (76%) for all studies. The average scores for the preprint (4.0/5, 80%) and accepted/published (3.5/5, 70%) studies were similar, and there was no statistically significant difference between the two groups (P > 0.05). Among the IOM required elements, the most common deficiencies were failure to explain why a pooled estimate might be useful to decision makers and lack of a sensitivity analysis.

        Due to the larger number of reporting elements in the PRISMA and MOOSE checklists, a more robust comparison could be performed. The average PRISMA score for all studies was 20.3/27, 75% (median = 22/27, 81%). The average scores of the accepted/published (mean = 20.4/27, 76%; median = 21.5/27, 80%) and preprint (mean = 20.2/27, 75%; median = 22/27, 81%) groups were similar (Student t-test, P > 0.05). The most commonly lacking elements were checklist items 15 (methods: risk of bias across studies), 16 (methods: additional analyses), 22 (results: risk of bias across studies), and 23 (results: additional analyses). The average MOOSE score for all studies was 19.9/34, 60% (median = 20/34, 60%). The average scores of the accepted/published (mean = 20.6, 61%; median = 21/34, 62%) and preprint (mean = 19.1, 56%; median = 19, 56%) groups were similar (Student t-test, P > 0.05). The most commonly lacking elements were II.A [Qualifications of searchers (e.g., librarians and investigators)], II.H (Method of addressing articles published in languages other than English), II.I (Method of handling abstracts and unpublished studies), and II.J (Description of any contact with authors).

        Table 1 Articles considered in the analysis

        Table 2 Institutes of Medicine recommended standards for meta-analysis

        Table 3 Preferred Reporting Items for Systematic Reviews and Meta-analyses checklist

        To determine the degree to which the PRISMA and MOOSE scores correlated, analysis using Pearson’s correlation coefficient was performed. The resulting coefficient, 0.39, suggests a weak positive correlation.

        Table 4 Meta-analyses of Observational Studies in Epidemiology criteria checklist

        Checklist item | All studies (n = 19) | Published (n = 9) | Preprint (n = 10)
        2. Justification for exclusion (e.g., exclusion of non-English-language citations) | 3/19 (16%) | 1/9 (11%) | 2/10 (20%)
        3. Assessment of quality of included studies | 12/19 (63%) | 4/9 (44%) | 8/10 (80%)
        V. Reporting of conclusions
        A. Consideration of alternative explanations for observed results | 1/19 (11%) | 0/9 (0%) | 1/10 (10%)
        B. Generalization of the conclusions (i.e., appropriate for the data presented and within the domain of the literature review) | 19/19 (100%) | 9/9 (100%) | 10/10 (100%)
        C. Guidelines for future research | 8/19 (42%) | 6/9 (66%) | 2/10 (20%)

        Figure 1 Study selection flow diagram.

        DISCUSSION

        The use of meta-analysis in the COVID-19 literature

        Narrative, nonquantitative review papers have existed in the medical literature for many years and are an important source of succinct and up-to-date information for clinicians and others interested in patient care. In recognition of the importance of an evidence-based approach to the dissemination of medical information, authors have added increasingly rigorous approaches to their publications to provide quantitative information, minimize bias, identify knowledge gaps regarding a subject, and provide guidance for further growth of the area of study. This trend resulted in the development of the meta-analysis[29].

        Meta-analysis is a modification of, and an attempted improvement on, more traditional forms of review publication. Meta-analysis attempts to move beyond the narrative review process by adding numeric data synthesized from previously published data[30]. By combining data from more than one study, there is an obvious improvement in statistical power. Meta-analysis has been widely employed in the behavioral science and clinical medicine literatures but has been underutilized in the pathology and laboratory medicine literature. Kinzler and Zhang compared the use of meta-analysis in the diagnostic pathology literature to that in the clinical medicine literature and noted that meta-analyses comprised < 1% of diagnostic pathology articles compared to 4%-6% of the clinical medicine literature[1]. Despite their relatively low numbers, meta-analyses in the diagnostic pathology literature were highly cited, with a citation rate similar to that of meta-analyses appearing in the clinical medicine literature[1]. This finding is also noted in the current study: Although numerous studies have been published addressing the laboratory hematologic aspects of COVID-19, the number of meta-analyses is low and comprises < 1% of the published literature in this area.

        To be successful, a meta-analysis must address several elements[29]: (1) The question must be stated unambiguously; (2) A comprehensive search of the medical literature must be performed; (3) The articles identified by the search must be screened; (4) The appropriate data must be extracted from the selected papers; (5) The quality of the information must be assessed, by a review of the contents of the manuscripts and by use of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria[30]; (6) The data in each publication must be assessed for heterogeneity; (7) A summary effect size (e.g., an odds ratio) must be determined and graphical depictions of the data, such as a forest plot, generated; (8) Publication bias must be assessed using a funnel plot or another mechanism; and (9) Subset analyses must be conducted to identify subgroups that capture the summary effect.
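        To make elements (6) and (7) concrete, the following minimal Python sketch pools hypothetical 2 × 2 study counts into a fixed-effect (inverse-variance) summary odds ratio and computes Cochran's Q and I² as simple heterogeneity measures. The counts are invented for illustration and are not drawn from the studies reviewed here.

```python
import math

# Hypothetical 2x2 counts for three studies (illustration only):
# (events_exposed, total_exposed, events_control, total_control)
studies = [
    (30, 100, 20, 100),
    (45, 150, 30, 150),
    (12, 60, 10, 60),
]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))   # log odds ratio for one study
    var = 1 / a + 1 / b + 1 / c + 1 / d    # approximate variance of the log OR
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

# Fixed-effect pooled estimate on the log scale, then back-transformed
pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_or = math.exp(pooled_log_or)

# Cochran's Q and I^2 quantify between-study heterogeneity
q = sum(w * (y - pooled_log_or) ** 2 for w, y in zip(weights, log_ors))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled OR = {pooled_or:.2f}, Q = {q:.2f}, I^2 = {i_squared:.1f}%")
```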

        Purpose of reporting quality analysis and its limits

        Because of the complexity of the design and execution of meta-analyses, there are numerous opportunities to introduce biases and other errors that may significantly alter the outcome. To make the reporting of data and statistical analysis in meta-analyses transparent to the reader, and to clearly communicate the limits of the data used in a study, 3 checklist systems have been promulgated that list the major elements researchers should use to structure their work.

        The first of these systems, the IOM checklist, was created by a committee of the United States Institutes of Medicine. It is a relatively simple 5-point checklist that broadly addresses the reporting of the planning and execution of meta-analyses[2]. The Institutes of Medicine, along with a large number of journals and other publishers, later endorsed the PRISMA statement, which addresses these issues in a more granular fashion[3]. Another checklist, the MOOSE guidelines, may also be applied to evaluate the reporting quality of systematic reviews, including meta-analyses[4]. In the reported literature, PRISMA guidelines are utilized more frequently than MOOSE guidelines. In a survey of the medical literature by Fleming et al[31], the vast majority of publications used PRISMA guidelines, whereas MOOSE guidelines were cited in only 17% of reviews. Fleming et al[31] note that although there is a high degree of overlap between the MOOSE and PRISMA checklists, MOOSE provides more advice about features such as the search strategy and interpretation of the results of the review, both of which may introduce bias if not adequately addressed[31,32].

        In the current study, the most common deficiencies were: (1) Lack of an articulated rationale for why a pooled analysis is necessary; (2) Lack of detail on how the use of data that have not been peer reviewed was addressed; (3) Lack of a sensitivity analysis; and (4) Lack of an assessment of the included studies for bias. Although the rationale for why a meta-analysis is performed is generally obvious (e.g., improved statistical power, identification of a consensus or lack of consensus regarding a specific clinical question), it was not explicitly articulated in a significant number of the studies included in this survey. The lack of transparency about the use of non-English-language literature and of preprint and other non-peer-reviewed materials may be problematic, particularly in COVID-19 studies. Sensitivity analysis is a fundamental element of meta-analysis and provides an estimate of the appropriateness of the assumptions made by the analysis[29]. Bias can be introduced into a study in many ways, most commonly as publication bias, in which the medical literature underrepresents studies with negative findings[29].

        The overall reporting quality in the pathology literature appears to lag behind that of clinical medicine[5]. Liu et al[5] compared the reporting quality of a group of diagnostic pathology meta-analyses to a group published in clinical medicine journals using the PRISMA checklist, and found a statistically significantly higher average PRISMA score for the medicine studies (P < 0.01). The average PRISMA score for the COVID-19 meta-analyses in the current study (20.3/27, 75% of items addressed) is below that of both groups analyzed by Liu et al[5]. This reflects a significant weakness in the COVID-19 meta-analysis laboratory hematology literature, since the potential strength of the meta-analysis approach as a force multiplier for evidence-based medicine requires good reporting quality[5].

        It is important to note that the assessment of reporting quality is not synonymous with the assessment of the methodological quality of a meta-analysis. The purpose of reporting quality guidelines is to provide an appropriate framework so that the authors of meta-analyses and other systematic reviews report their data and statistical analysis in an unambiguous way. The assessment of methodological quality is a separate exercise and can only proceed if the data can be unambiguously extracted from the publication. The methodological assessment of systematic reviews is addressed by other guidelines, such as QUADAS and QUADAS-2[33]. Because of the apparently suboptimal average reporting quality of the COVID-19 laboratory hematology meta-analysis literature, the ability of the reader to assess methodological quality is limited in many cases.

        Preprint literature and its reporting quality

        In academic publishing, a preprint is the version of a manuscript that has been submitted for publication but has not yet completed the peer-review process. In recent years, publishers and others have electronically posted preprint manuscripts to rapidly disseminate scientific knowledge. In addition, studies that have been uploaded to dedicated servers but not submitted for peer review are also included in the category of preprints. Preprints are particularly useful in areas such as COVID-19 research, which are rapidly evolving and of intense clinical and scientific interest.

        Since preprints are widely accessible, it is important for readers to be aware of their quality compared to studies published in the peer-reviewed literature. Although it might be assumed that the reporting quality of the peer-reviewed literature would be higher than that of the comparable preprint literature, since the purpose of peer review is to permit scrutiny of one’s work by experts[34], there have apparently been no studies in the peer-reviewed literature that directly compare the reporting quality of clinical studies in the preprint and published literature. A single study in the preprint literature (Carneiro et al[35]) has attempted to address this question. The authors compared a sample of studies identified on the bioRxiv preprint server with studies identified in a Medline (PubMed interface) search. They also compared a group of preprint studies with their final published versions. Carneiro et al[35] identified a small increase in quality in the published studies compared to the preprint group.

        In the current study, using the PRISMA and MOOSE criteria, a significant difference was not identified comparing the preprint and published studies in the COVID-19 meta-analysis literature. Taken together, these findings suggest that the peer review process itself does not guarantee an improvement in quality, and authors should take the initiative to conform to reporting quality norms.

        CONCLUSION

        This study represents an attempt to assess the overall reporting quality of the laboratory hematology COVID-19 meta-analysis literature. Using the IOM, PRISMA, and MOOSE guidelines, there were consistent deficits in the reporting of bias and sensitivity analysis. The results for the preprint and published literature were similar and suggest that the preprint literature on this subject is not decidedly inferior to the published literature. Because of the suboptimal reporting quality, it is important for clinicians and others to carefully assess the individual studies used in a given meta-analysis for evidence of bias or other methodological flaws that have not been reported by the authors. Although there is a positive correlation between the PRISMA and MOOSE scores, it is relatively weak. This implies that authors of meta-analyses should consider using both systems to increase the strength of the reporting quality of their studies.

        ARTICLE HIGHLIGHTS

        Research background

        Meta-analyses, which are underutilized in pathology and laboratory medicine, combine the data from multiple studies to produce a publication with increased statistical power. It is important for readers of meta-analyses to have the information in these studies reported in a transparent fashion. Hence, the Institutes of Medicine (IOM), Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA), and Meta-analyses of Observational Studies in Epidemiology (MOOSE) checklists have been promulgated to standardize the reporting of meta-analyses.

        Research motivation

        Several parameters evaluated by the hematology laboratory have been identified as potential biomarkers of prognosis and outcome in coronavirus disease 2019 (COVID-19). The data from many of these studies have been pooled and published as meta-analyses. Many of these studies have been identified in the preprint literature (studies that have not yet completed peer review). The reporting quality of this body of work is unknown.

        Research objectives

        The purposes of this study were: (1) To evaluate the reporting quality of laboratory hematology-focused COVID-19 meta-analyses using the IOM, PRISMA, and MOOSE checklists; and (2) To compare the reporting quality of published vs preprint studies.

        Research methods

        Based on a search of the literature, 19 studies were selected for analysis (9 published studies and 10 preprint studies). The reporting quality of the studies was evaluated using the IOM, PRISMA, and MOOSE checklists.

        Research results

        The reporting quality of the published and preprint studies was similar and was inferior to the reporting quality described in comparable surveys of meta-analyses published in the pathology and medicine literature.

        Research conclusions

        Readers of COVID-19 laboratory hematology meta-analyses should be cognizant of the reporting quality problems of these studies and should critically evaluate them before using their findings for patient care.

        Research perspectives

        The issue of reporting quality is of critical importance, and the assessment of reporting quality has been underreported in the medical literature. Studies similar to this one will emphasize that the use of the IOM, PRISMA, and MOOSE checklists is a simple strategy to optimize the overall quality of meta-analyses.
