Cohen’s Kappa is used to measure the level of agreement between two raters or judges who each classify items into mutually exclusive categories.
The formula for Cohen’s Kappa is:
k = (po – pe) / (1 – pe)
where:
- po: Relative observed agreement among raters
- pe: Hypothetical probability of chance agreement
Rather than just calculating the percentage of items that the raters agree on, Cohen’s Kappa attempts to account for the fact that the raters may happen to agree on some items purely by chance.
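For example, if two raters happened to agree on 80% of the items they rated, but their individual rating tendencies mean they would be expected to agree on 60% of items by chance alone, then (using these purely illustrative numbers):
- k = (0.80 – 0.60) / (1 – 0.60)
- k = 0.50
In other words, the raters achieved only half of the agreement beyond chance that was possible.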
The value for Cohen’s Kappa can range from -1 to 1. A value of 1 indicates perfect agreement between the two raters, a value of 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.
The following scale, often attributed to Landis and Koch, is commonly used to interpret different values for Cohen’s Kappa:
- Less than 0: Poor agreement
- 0.00 to 0.20: Slight agreement
- 0.21 to 0.40: Fair agreement
- 0.41 to 0.60: Moderate agreement
- 0.61 to 0.80: Substantial agreement
- 0.81 to 1.00: Almost perfect agreement
The following example shows how to calculate Cohen’s Kappa in Excel.
Example: Calculating Cohen’s Kappa in Excel
Suppose two art museum curators are asked to rate 70 paintings on whether they’re good enough to be shown in a new exhibit.
The curators’ ratings are tallied in a 2×2 table, with Rater 1’s ratings (Yes or No) in the rows and Rater 2’s ratings (Yes or No) in the columns, so that each painting falls into exactly one of the four cells.
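As a minimal sketch of how this might be laid out in a worksheet (a hypothetical arrangement, since the original table is not reproduced here), suppose the four counts sit in cells B2:C3, with Rater 1 in the rows and Rater 2 in the columns:
- B2: number of paintings both raters rated “Yes”
- C2: number rated “Yes” by Rater 1 and “No” by Rater 2
- B3: number rated “No” by Rater 1 and “Yes” by Rater 2
- C3: number of paintings both raters rated “No”
Together the four cells account for all 70 paintings, so SUM(B2:C3) returns 70.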
To calculate Cohen’s Kappa for the two raters, we compute po, pe, and k in turn; the formulas sketched below show one way to do this in Excel.
The po value represents the relative observed agreement between the raters. This is the proportion of all paintings that the raters both rated “Yes” or both rated “No.”
This turns out to be 0.6429.
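Under the hypothetical cell layout described above, po could be computed with a formula along these lines:
- po: =(B2+C3)/SUM(B2:C3)
The numerator adds the two agreement cells (both “Yes” and both “No”), and the denominator is the total number of paintings rated.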
The pe value represents the probability that the raters would agree purely by chance, based on how often each rater said “Yes” and “No” overall.
This turns out to be 0.5.
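Under the same assumed layout, pe could be computed by multiplying the two raters’ marginal “Yes” totals, doing the same for their “No” totals, and dividing the sum of the two products by the squared grand total:
- pe: =((B2+C2)*(B2+B3)+(B3+C3)*(C2+C3))/SUM(B2:C3)^2
Here B2+C2 and B3+C3 are Rater 1’s “Yes” and “No” totals, while B2+B3 and C2+C3 are Rater 2’s “Yes” and “No” totals; dividing by the squared total converts the products of counts into the probability of agreeing by chance.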
The k value represents Cohen’s Kappa, which is calculated as:
- k = (po – pe) / (1 – pe)
- k = (0.6429 – 0.5) / (1 – 0.5)
- k = 0.2857
Cohen’s Kappa turns out to be 0.2857.
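In a worksheet this last step is simple cell arithmetic. If, for example, po were stored in cell F2 and pe in cell F3 (hypothetical cells chosen only for illustration), Cohen’s Kappa could be computed as:
- k: =(F2-F3)/(1-F3)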
Based on the scale from earlier, we would say that the two raters had only a “fair” level of agreement.