

Crosstabulation

Crossing tables? Never heard of them! Now what are they for? The following description of the functionality of the CROSSTAB module in IDRISI continues the last chapter about comparing. We use it solely for image comparison.
For those of you unfamiliar with the crosstabulation concept, I shall start with a basic introduction:

Imagine two maps (raster images) that tell us about the land use of a certain district in two consecutive years. As a basis for future planning we need to learn about the changes that occurred in that district. Did forests change? Did they grow, or has their area decreased? What classes do we meet this year where urban areas were last year? And so on.
We could successfully answer those questions by differencing with OVERLAY and doing RECLASSifications again and again ... but CROSSTAB kills more than two birds with one stone!
Look at our two land-use images (yes, they are simple, I know). The cell values correspond to land-use classes.

LANDUSE 95                LANDUSE 96
 1  1  2  4  4             1  2  2  2  4
 1  2  2  4  4             1  3  2  2  2
 1  3  2  4  2             3  3  3  2  2
 3  3  3  2  2             3  3  3  3  2

These matrices indicate two images with 4 different classes. Feed them both into the CROSSTAB module:

Summary information from CROSSTAB analysis
Cross-tabulation of LANDUSE 95 (columns) against LANDUSE 96 (rows)

                    LANDUSE 95
LANDUSE 96      1     2     3     4   Total
     1          2     0     0     0       2
     2          1     4     0     4       9
     3          1     3     4     0       8
     4          0     0     0     1       1
   Total        4     7     4     5      20
The class numbers in the header row are the classes from our 1995 image, those in the leftmost column from the 1996 image. For the output table CROSSTAB simply counts and sorts combinations! Read the table like this:
'there are 4 cells in 1996 with class 2 that had the same class in 1995', or 'there are 4 cells in 1996 with class 3 that had the same class in 1995, but there is 1 cell with class 3 that had been class 1 and 3 cells that had been class 2 before'. You could also say: '4 cells with class 4 in 1995 shifted to class 2 in 1996; only 1 cell stayed the same'.
The Totals give the overall number of cells per class and image. For example, we had twice the area (= number of cells) of class 1 in 1995 compared to 1996. The diagonal values count occurrences of classes that did not change.
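
By the way, the counting CROSSTAB performs is easy to reproduce outside IDRISI. The following Python sketch (numpy is assumed to be available; the flattened arrays repeat the two example images from above) builds the same cross-tabulation matrix:

    import numpy as np

    # The two example land-use images, flattened row by row
    use95 = np.array([1, 1, 2, 4, 4,
                      1, 2, 2, 4, 4,
                      1, 3, 2, 4, 2,
                      3, 3, 3, 2, 2])
    use96 = np.array([1, 2, 2, 2, 4,
                      1, 3, 2, 2, 2,
                      3, 3, 3, 2, 2,
                      3, 3, 3, 3, 2])

    n_classes = 4
    # crosstab[i, j] counts cells with class j+1 in 1995 and class i+1 in 1996
    crosstab = np.zeros((n_classes, n_classes), dtype=int)
    for c95, c96 in zip(use95, use96):
        crosstab[c96 - 1, c95 - 1] += 1

    print(crosstab)                  # the 4 x 4 table from above
    print(crosstab.sum(axis=0))      # column totals [4 7 4 5] (LANDUSE 95)
    print(crosstab.sum(axis=1))      # row totals    [2 9 8 1] (LANDUSE 96)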

[Figure: CROSSTAB dialog box]
The table explained above is the same one that IDRISI outputs with Output type options 2 and 3 (see the figure of the dialog box to the right). You see four options; the wider the choice, the greater the trouble? Well, option 1 outputs a cross-classification image. Nothing to get into a sweat about: such an image recodes the cells according to the cross-table combinations. We will come back to option 4 a bit later.
As the saying goes, one picture tells you more than a thousand words, so let's have a look at the cross-classification image produced from our simple land-use images outlined before (here I called them use95 and use96; they are the same as LANDUSE 95 and LANDUSE 96 respectively):
[Figure: cross-classification image]
If you compare the image with the cross table, its meaning will become clear. How many yellow areas (cells) do you see? Right, only one! What does the legend (CROSSTAB produces legend and title automatically) say about the yellow one? It has value 2 and stands for all cells that had class 1 in use95 but class 2 in use96.

Now try to identify all the other combinations until the glorious aha experience hits you.

Cross-classification images give us a good idea of the changes that happened over time and also, at least roughly, of their directions. So this is a rather useful tool.
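
In case you wonder how such a recoding might look under the hood, here is a minimal Python sketch that simply numbers every distinct (use95, use96) combination in sorted order. The numbering scheme is my own assumption for illustration, not necessarily what IDRISI does internally, but note that it also maps the 1-to-2 combination (our single yellow area) to value 2:

    import numpy as np

    use95 = np.array([1, 1, 2, 4, 4, 1, 2, 2, 4, 4,
                      1, 3, 2, 4, 2, 3, 3, 3, 2, 2])
    use96 = np.array([1, 2, 2, 2, 4, 1, 3, 2, 2, 2,
                      3, 3, 3, 2, 2, 3, 3, 3, 3, 2])

    # Assign a new sequential value to every distinct class combination
    combos = sorted(set(zip(use95.tolist(), use96.tolist())))
    combo_id = {pair: i + 1 for i, pair in enumerate(combos)}

    # The cross-classification image: one new value per combination
    cross_image = np.array([combo_id[(a, b)] for a, b in zip(use95, use96)])

    # A legend in the spirit of the one CROSSTAB generates automatically
    for (c95, c96), value in combo_id.items():
        print(f"value {value}: class {c95} in use95 -> class {c96} in use96")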
But wait, there is more! IDRISI's CROSSTAB also computes the KAPPA Index of Agreement (KIA) for us if we check the corresponding box. It is also known as KHAT or the KAPPA Coefficient of Agreement. This statistical measure was introduced by the psychologist COHEN in 1960 and adapted for accuracy assessment in the remote sensing field by CONGALTON & MEAD (1983)*.
KIA is a means of testing whether the differences between two images are due to chance or to real (dis)agreement. It is often used to check the accuracy of classified satellite images against 'real' ground-truth data. IDRISI computes an overall index as well as per-category indices.

How are these coefficients calculated? Here is the bundle of formulas:

KAPPA, conceptually:

$$\text{KAPPA} = \frac{\text{observed agreement} - \text{chance agreement}}{1 - \text{chance agreement}}$$

Overall KAPPA**:

$$\hat{K} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} x_{i+} \cdot x_{+i}}{N^{2} - \sum_{i=1}^{r} x_{i+} \cdot x_{+i}}$$

r ... number of rows in the cross-classification table
x_ii ... number of combinations along the diagonal
x_i+ ... total observations in row i
x_+i ... total observations in column i
N ... total number of cells (the number in the lower right corner of our table, 20)

Computationally, IDRISI proceeds as follows to get the overall KAPPA:

observed accuracy = proportion of agreeing units = p0 =

(2 + 4 + 4 + 1) / 20 = 0.55

chance agreement = proportion of units for expected chance agreement (each row total times the matching column total, as proportions, summed over the classes) = pc =
(2/20 * 4/20) + (9/20 * 7/20) + (8/20 * 4/20) + (1/20 * 5/20) = 0.27

KAPPA Index of Agreement = KIA = (p0 - pc) / (1 - pc) =
(0.55 - 0.27) / (1 - 0.27) = 0.383561644
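
To make the arithmetic tangible, here is a short Python sketch (again assuming numpy) that reproduces the overall KAPPA from the cross-tabulation matrix of our example:

    import numpy as np

    # Cross-tabulation from above (rows: LANDUSE 96, columns: LANDUSE 95)
    crosstab = np.array([[2, 0, 0, 0],
                         [1, 4, 0, 4],
                         [1, 3, 4, 0],
                         [0, 0, 0, 1]])

    N = crosstab.sum()                     # 20 cells in total
    p0 = np.trace(crosstab) / N            # observed agreement: 11/20 = 0.55
    row_tot = crosstab.sum(axis=1)         # [2, 9, 8, 1]
    col_tot = crosstab.sum(axis=0)         # [4, 7, 4, 5]
    pc = (row_tot * col_tot).sum() / N**2  # chance agreement: 108/400 = 0.27

    kia = (p0 - pc) / (1 - pc)
    print(round(kia, 9))                   # 0.383561644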

For the per-category KAPPA, IDRISI follows the algorithm introduced to remote sensing by ROSENFIELD & FITZPATRICK-LINS (1986)***:

$$\hat{K}_i = \frac{p_{ii} - p_{i+} \cdot p_{+i}}{p_{i+} - p_{i+} \cdot p_{+i}}$$

p_ii ... proportion of units agreeing in row i / column i
p_i+ ... proportion of units for expected chance agreement in row i
p_+i ... proportion of units for expected chance agreement in column i

Again, a calculation example should blow away the mist! The KIA for class 2, with use96 as the reference image:

pii = (4 / 20) = 0.2
pi+ = (9 / 20) = 0.45
p+i = (7 / 20) = 0.35
KIA = (0.2 - 0.45 * 0.35) / (0.45 - 0.45 * 0.35) = 0.145299145
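
The same in code: a small sketch that applies this formula to each of the four classes of use96 (row-wise, i.e. with use96 as the reference image):

    import numpy as np

    crosstab = np.array([[2, 0, 0, 0],
                         [1, 4, 0, 4],
                         [1, 3, 4, 0],
                         [0, 0, 0, 1]])

    p = crosstab / crosstab.sum()    # proportions instead of raw counts
    p_row = p.sum(axis=1)            # p_i+ for every use96 class
    p_col = p.sum(axis=0)            # p_+i for every use95 class

    for i in range(4):
        pii = p[i, i]
        kia = (pii - p_row[i] * p_col[i]) / (p_row[i] - p_row[i] * p_col[i])
        print(f"class {i + 1}: per-category KIA = {kia:.9f}")

    # class 2 reproduces the 0.145299145 from above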

The KAPPA ranges from 0 to 1: 0 means the agreement is no better than pure chance, 1 means perfect 'true' agreement. (Strictly speaking, values below 0 are possible too; they indicate less agreement than chance would produce.) A value of, say, 0.145 can be read as '14.5 percent better agreement than by chance alone'.

I promised to touch on option 4, so to complete the picture: with it, IDRISI outputs the similarity coefficients Chi Square (plus the degrees of freedom), CRAMER's V, and the overall KAPPA. Check one of the popular statistics textbooks if you are unfamiliar with these.
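
If you want to verify such figures independently, the chi-square test on a contingency table is readily available in scipy, and CRAMER's V follows directly from the Chi Square value. A minimal sketch, assuming scipy is installed (IDRISI's reported numbers may differ slightly depending on how it treats empty classes):

    import numpy as np
    from scipy.stats import chi2_contingency

    crosstab = np.array([[2, 0, 0, 0],
                         [1, 4, 0, 4],
                         [1, 3, 4, 0],
                         [0, 0, 0, 1]])

    chi2, p_value, dof, expected = chi2_contingency(crosstab)
    N = crosstab.sum()
    k = min(crosstab.shape)                    # smaller dimension of the table
    cramers_v = np.sqrt(chi2 / (N * (k - 1)))  # CRAMER's V from Chi Square

    # For this table: Chi Square is about 20.59 with df = 9, V about 0.59
    print(f"Chi Square = {chi2:.2f}, df = {dof}, CRAMER's V = {cramers_v:.2f}")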


* Russell G. CONGALTON & Roy A. MEAD, 1983: A Quantitative Method to Test for Consistency and Correctness in Photointerpretation. Photogrammetric Engineering and Remote Sensing 49(1): 69-74.
** Thomas M. LILLESAND & Ralph W. KIEFER, 1994 (3rd ed.): Remote Sensing and Image Interpretation. John Wiley & Sons, New York.
*** George H. ROSENFIELD & Katherine FITZPATRICK-LINS, 1986: A Coefficient of Agreement as a Measure of Thematic Classification Accuracy. Photogrammetric Engineering and Remote Sensing 52(2): 223-227.
