Using The Bland-Altman Method To Measure Agreement With Repeated Measures

Carstensen B, Simpson J, Gurrin LC. Statistical models for assessing agreement in method comparison studies with replicate measurements. Int J Biostat. 2008;4(1):16. Several mixed effects models of varying complexity have also been proposed to quantify the dispersion of the differences and thereby calculate limits of agreement. Some of these models are similar to the CCC in that they model the raw outcome data and contain interaction terms, while other authors propose modelling the differences directly [15, 21, 22, 23, 24]. The relatively simple method recommended by Parker et al. [15], which we use here (see Eq. (3)), models the differences directly with a linear mixed effects model and is highly adaptable to different data structures. Indeed, the method is flexible and versatile enough to accommodate complex variability structures [25, 26]. Stevens NT, Steiner SH, MacKay RJ. Assessing agreement between two measurement systems: an alternative to the limits of agreement approach.
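As a minimal sketch of the idea of estimating limits of agreement from replicated differences (not the authors' exact mixed model of Eq. (3)): for a balanced design, the between-subject and within-subject variance components of the differences can be recovered from a one-way ANOVA decomposition, and the limits of agreement then use the total variance of a single difference. The function name and data layout below are illustrative assumptions.

```python
from statistics import mean

def loa_repeated(differences):
    """Limits of agreement for replicated between-method differences.

    `differences` maps each subject to its list of replicate
    differences; this sketch assumes a balanced design (every
    subject has the same number of replicates m).
    """
    subjects = list(differences.values())
    k = len(subjects)            # number of subjects
    m = len(subjects[0])         # replicates per subject
    all_d = [d for s in subjects for d in s]
    grand = mean(all_d)          # mean difference (bias)

    # One-way ANOVA decomposition of the differences.
    msb = m * sum((mean(s) - grand) ** 2 for s in subjects) / (k - 1)
    msw = sum((d - mean(s)) ** 2 for s in subjects for d in s) / (k * (m - 1))

    var_between = max((msb - msw) / m, 0.0)   # subject variance component
    total_sd = (var_between + msw) ** 0.5     # sd of a single difference

    return grand - 1.96 * total_sd, grand + 1.96 * total_sd

# Usage: two replicate differences for each of three subjects.
lo, hi = loa_repeated({"s1": [0.2, 0.4], "s2": [-0.1, 0.1], "s3": [0.3, 0.5]})
```

A full mixed-model fit (as in Parker et al.) would additionally accommodate unbalanced data and extra random effects; the moment estimator above is the balanced special case.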

Stat Methods Med Res. 2017;26(6):2487-504. One of the most important ways of classifying the different methods is to divide them into scaled agreement indices, which are normalised to lie within a given range (for example, the CCC is scaled to take values between -1 and 1, and the CIA between 0 and 1), and unscaled indices, which allow a direct comparison on the original scale of the data and require the specification of a clinically acceptable difference (CAD; e.g. the LoA and TDI methods). These groups of methods are commonly referred to as scaled and unscaled agreement methods [2], and the latter are sometimes referred to as "pure agreement indices" [40]. Indeed, the CCC is more accurately described as a measure of discrimination than of agreement, since it is designed to estimate the proportion of a system's variance explained by the subject/activity effect and does not require specification of a CAD [41]; it is not a "pure agreement index" [41]. The CCC has the disadvantage of depending heavily on between-subject variability (and, in our case, between-activity variability), and would therefore reach a high value in a population with substantial heterogeneity between subjects or activities even though within-subject agreement might be poor [2, 11, 12]. Conversely, if the differences between subjects and activities are very small, the CCC is unlikely to reach a high value even when within-device agreement is adequate.
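The CCC's sensitivity to between-subject heterogeneity can be demonstrated directly from Lin's formula, CCC = 2·s_xy / (s_x² + s_y² + (μ_x − μ_y)²). The sketch below (function name is an illustrative assumption) applies identical measurement errors to a narrow-range and a wide-range population; only the wide-range data yield a high CCC.

```python
from statistics import mean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired samples."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Same per-subject errors, two populations differing only in spread.
errors = [0.5, -0.5, 0.5, -0.5]
narrow = [1.0, 1.2, 1.4, 1.6]   # homogeneous subjects -> low CCC
wide = [1.0, 4.0, 7.0, 10.0]    # heterogeneous subjects -> high CCC
ccc_narrow = lins_ccc(narrow, [t + e for t, e in zip(narrow, errors)])
ccc_wide = lins_ccc(wide, [t + e for t, e in zip(wide, errors)])
```

Here the agreement between methods is identical in both populations, yet `ccc_wide` far exceeds `ccc_narrow`, which is exactly the behaviour criticised in [2, 11, 12].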

As with the intraclass correlation coefficient (ICC), it is not related to the actual measurement scale or to the size of error that might be clinically acceptable, which complicates interpretation [41]. As described elsewhere [11, 12, 40], it is very easy to obtain an artificially high CCC value, and manipulation of the recording can radically change the CCC estimate. Nevertheless, the variance components are generated automatically in R, which helps in interpreting the overall agreement indices. Barnhart HX. A review of agreement assessment. Wiley StatsRef: Statistics Reference Online; 2018. p. 1-30. doi.org/10.1002/9781118445112.stat01671.pub2. Hamilton C, Stamey JD.
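That the ICC is divorced from the actual measurement scale can be shown with a small sketch: ICC(1) is a ratio of variance components, so rescaling every measurement (e.g. converting units) leaves it unchanged, even though the absolute errors, and hence their clinical acceptability, change tenfold. The helper below is an illustrative one-way balanced-design estimator, not the R variance-components output referred to above.

```python
from statistics import mean

def icc_oneway(groups):
    """ICC(1) from a balanced one-way random-effects ANOVA.

    `groups` is a list of per-subject replicate lists, each of length m.
    """
    k = len(groups)                      # number of subjects
    m = len(groups[0])                   # replicates per subject
    grand = mean(d for g in groups for d in g)
    msb = m * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((d - mean(g)) ** 2 for g in groups for d in g) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

data = [[1.0, 1.2], [2.0, 2.1], [3.0, 3.3]]
in_other_units = [[10 * v for v in g] for g in data]  # same ICC, 10x the error
```

Because both MSB and MSW scale by the same squared factor, `icc_oneway(data)` equals `icc_oneway(in_other_units)`: the index says nothing about whether the error magnitude is clinically acceptable, which is the interpretability problem noted in [41].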

Using a predictive approach to assess agreement between two continuous measurements. J Clin Monit Comput. 2009;23(5):311-4. Rubio N, Parker RA, Drost EM, Pinnock H, Weir CJ, Hanley J, et al. Home monitoring of breathing rate in people with chronic obstructive pulmonary disease: observational study of feasibility, acceptability, and change after exacerbation.