The overall percent agreement and kappa statistic were 93% and 0.81, respectively. Across the six categories, kappa ranged from 0.51 to 0.84, with the asthma referrals and medication side effects categories showing only moderate agreement. Table 2 summarizes the category-specific and overall percent agreement and kappa results for intra-rater reliability. A test of homogeneity across the category-specific kappa values indicated statistically significant heterogeneity, so it was of interest to examine potential sources of that heterogeneity. The data were aggregated across categories and then stratified by abstractor to compare agreement between time 1 and time 2 for each abstractor (Table 3). Percent agreement by abstractor ranged from 79% to 98%, and intra-rater kappa statistics ranged from 0.21 to 0.94. The chi-square test of homogeneity for differences among the kappas calculated for the 10 abstractors was also statistically significant (χ² = 238, p < 0.0001), indicating that individual differences in abstraction consistency may underlie the heterogeneity of the category-specific kappa values.
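As a point of reference only, and not the study's own analysis code, the following Python sketch illustrates how percent agreement, Cohen's kappa, and a chi-square test of homogeneity across abstractor-specific kappas could be computed. The function names, the example data, and the use of inverse-variance weighting with the standard large-sample homogeneity statistic (weighted sum of squared deviations from the pooled kappa, referred to chi-square with k − 1 degrees of freedom) are illustrative assumptions; each kappa's variance estimate would come from its own large-sample formula.

    # Illustrative sketch (not the study's original code).
    import numpy as np
    from scipy.stats import chi2
    from sklearn.metrics import cohen_kappa_score

    def percent_agreement(time1, time2):
        """Proportion of items coded identically at both abstraction times."""
        t1, t2 = np.asarray(time1), np.asarray(time2)
        return float(np.mean(t1 == t2))

    def kappa_homogeneity_test(kappas, variances):
        """Chi-square test that several independent kappas share one value.

        Each kappa is weighted by the inverse of its estimated variance;
        the weighted sum of squared deviations from the pooled kappa is
        referred to a chi-square distribution with k - 1 df.
        """
        k = np.asarray(kappas, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)
        pooled = np.sum(w * k) / np.sum(w)
        stat = float(np.sum(w * (k - pooled) ** 2))
        df = len(k) - 1
        return stat, float(chi2.sf(stat, df))

    # Hypothetical paired codes for one abstractor (time 1 vs. time 2):
    t1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    t2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    print(percent_agreement(t1, t2))   # 0.8
    print(cohen_kappa_score(t1, t2))   # ~0.58, chance-corrected agreement

With 10 abstractors, the homogeneity statistic would be compared against a chi-square distribution with 9 degrees of freedom, which is the form of test reported above.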
Intra-rater agreement was also examined for all abstractors according to when the chart was abstracted: either at the start of the study (for retrospective patient visits prior to baseline) or at the 6-month follow-up. The kappa statistics were 0.76 (95% CI: 0.73, 0.79) for the retrospectively abstracted charts and 0.82 (95% CI: 0.80, 0.85) for the 6-month follow-up abstraction (data not shown in the tables).

The results of this study showed that the chart abstractors who participated in data collection for the Primary Care Asthma Pilot Project reliably extracted the information contained in the medical charts. Previous studies have likewise reported moderate to substantial intra-rater and inter-rater reliability for medical chart abstraction [6, 18]. The majority of items (27 of 33, 82%) had kappa values above 0.61 when abstracted by the same abstractor or by an abstractor at the same site, indicating that intra-rater and within-site agreement on an item-by-item basis was substantial to excellent despite staff changes at some sites. For inter-rater agreement, 10 of 33 items (30%) had kappa values above 0.61. Fewer items showed substantial inter-rater agreement than intra-rater agreement, which may be attributable to the content of the fictitious charts used for the inter-rater evaluation rather than to differences in the abstraction process or among the abstractors themselves.
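For the kappa point estimates with 95% confidence intervals quoted above, a minimal sketch using the statsmodels implementation is shown below; the 2×2 agreement table values are invented for illustration and do not reproduce the study's data.

    # Illustrative sketch: Cohen's kappa with a 95% confidence interval
    # from a square agreement table (rows = time 1 codes, cols = time 2).
    import numpy as np
    from statsmodels.stats.inter_rater import cohens_kappa

    table = np.array([[120, 10],
                      [12,  58]])   # hypothetical counts
    res = cohens_kappa(table)
    print(res.kappa)                      # point estimate
    print(res.kappa_low, res.kappa_upp)   # 95% confidence limits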