So far, the discussion has assumed that the majority of evaluators are correct, that the minority of evaluators are wrong in their scores, and that all evaluators made a deliberate choice of rating. Jacob Cohen understood that this assumption could be wrong. Indeed, he explicitly stated that "in the typical situation, there is no criterion of 'accuracy' of judgments" (5). Cohen raised the possibility that, at least for some variables, none of the evaluators was sure of the score to enter and that they simply guessed. In this case, any agreement reached is chance agreement rather than genuine agreement. Cohen's kappa was designed to address this concern.

The concept of agreement between evaluators is quite simple, and for many years interrater reliability was measured as the percentage of agreement between data collectors. The quantity 1.00 minus the percent agreement can be understood as the percentage of data that may be erroneous. In other words, if the percent agreement is 0.82, then 1.00 - 0.82 = 0.18, and 18% is the proportion of the research data that may be incorrect. Kappa is a form of correlation coefficient. Correlation coefficients cannot be interpreted directly, but a squared correlation coefficient, called the coefficient of determination (COD), is directly interpretable.
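As a minimal sketch, not taken from the source and using hypothetical ratings, the following Python example computes the percent agreement between two raters, the complementary share of possibly erroneous data, and Cohen's kappa, which corrects the observed agreement for the agreement expected by chance:

```python
# Minimal sketch (hypothetical data): percent agreement vs. Cohen's kappa
# for two raters scoring the same 10 items on a binary scale.
from collections import Counter

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
n = len(rater_a)

# Observed agreement: proportion of items on which both raters give the same score.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement: product of the raters' marginal proportions,
# summed over categories (the correction Cohen introduced).
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                 for c in set(rater_a) | set(rater_b))

kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percent agreement:     {p_observed:.2f}")      # share of matching scores
print(f"Possibly erroneous:    {1 - p_observed:.2f}")  # 1.00 minus percent agreement
print(f"Cohen's kappa:         {kappa:.2f}")           # agreement corrected for chance
```

With these illustrative ratings, the percent agreement is 0.80, so 20% of the data may be erroneous, while kappa (about 0.52) is noticeably lower because part of the observed agreement is attributable to chance.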
The COD is interpreted as the amount of variation in the dependent variable that can be explained by the independent variable. . . .
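As a small illustrative calculation (the numbers are hypothetical, not from the source), squaring a correlation-type coefficient yields the COD:

```python
# Illustrative only: squaring a correlation-type coefficient gives the
# coefficient of determination (COD).
r = 0.82                  # hypothetical correlation between two raters' scores
cod = r ** 2              # proportion of variance explained
print(f"COD: {cod:.2f}")  # -> 0.67, i.e. about 67% of the variation is explained
```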