
Image Classification Methodology

Detailed Classification | Simplified Classification

Detailed Classification

Data Preparation
The 8-bit ADS40 dataset was used; although a 16-bit dataset is available, its file size was too large (900+ MB vs. 400+ MB). Three bands from the color-infrared dataset were used (green, red, near-infrared) along with one band from the true-color dataset (blue). A vegetation index, the normalized difference vegetation index (NDVI), which compares the near-infrared and red bands, was computed and added to the dataset. Because the file resulting from a nine-tile mosaic was much too large to work with, the nine-tile region was divided into three sections covering 53 of the 55 parcels of interest as defined by the CLAM project. Only the center section, section 2, was classified using the detailed methodology; all three sections were classified using the simplified methodology.
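The NDVI band added to the dataset follows the standard formula (NIR − red) / (NIR + red). A minimal sketch of the computation with NumPy, assuming the bands arrive as 8-bit arrays as described above:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero where both bands are 0
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Example with tiny 8-bit band arrays (values are illustrative)
nir = np.array([[200, 50]], dtype=np.uint8)
red = np.array([[50, 50]], dtype=np.uint8)
print(ndvi(nir, red))  # [[0.6 0. ]]
```

Casting to float before the arithmetic matters here: subtracting 8-bit unsigned integers directly would wrap around wherever red exceeds NIR.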

Classification
The 5-band dataset for section 2 of the lower Connecticut River region of interest was loaded into eCognition. Experimentation with segment size and with the color and shape parameters of objects yielded four segmentation levels. Only the raw data bands (blue, green, red, near-infrared) were used for image segmentation.

The first step in classification is to develop rules, but the large variability within the region and within each class made manual rule development overwhelming. To automate it, a program called See5 was used. See5 is data-mining software designed to extract patterns from data: given labeled samples, it produces a decision tree.
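See5 itself is commercial software, but the samples-in, tree-out idea can be illustrated with scikit-learn's `DecisionTreeClassifier` as a stand-in (a CART-style learner, not See5's C5.0 algorithm). The feature values and class labels below are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-segment features: [mean_NIR, mean_red, NDVI]
X = [
    [180, 60, 0.50],   # forest samples
    [170, 55, 0.51],
    [95, 110, -0.07],  # developed samples
    [90, 100, -0.05],
    [20, 30, -0.20],   # water samples
    [22, 28, -0.12],
]
y = ["forest", "forest", "developed", "developed", "water", "water"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Print the learned threshold rules, analogous to reading off a See5 tree
print(export_text(tree, feature_names=["mean_NIR", "mean_red", "NDVI"]))
```

The printed rules are exactly the kind of nested thresholds that were later re-entered by hand in eCognition.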

The largest level was classified first. Preparing data for See5 involved converting the segments from eCognition to polygons and exporting them as a shapefile. The attributes of the exported polygons included layer means, standard deviations and generic shape characteristics. For each land cover class, representative polygons were selected. The table files were opened in Excel and formatted for See5, which was then run on the samples to produce a decision tree.

The See5 decision tree was implemented in eCognition by hand, and the classification result was acceptable. The steps above were repeated for smaller segments: creating the segments, exporting them as a shapefile, selecting polygons representative of the major classes, formatting the table files in Excel and running See5. The rules were implemented in eCognition for level 2. The results were decent; however, there were some problem areas. To improve them, more samples were added to the See5 data and the decision tree was re-created.
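Implementing a decision tree "by hand" amounts to translating each branch into a threshold test. A sketch of what such a translation looks like; the attribute names and cutoff values are hypothetical, not the project's actual rules:

```python
def classify_segment(seg):
    """Hand-coded stand-in for a See5-style decision tree.

    `seg` maps per-segment attribute names (hypothetical) to values,
    mirroring the layer means and standard deviations exported from
    eCognition.
    """
    if seg["ndvi_mean"] > 0.3:
        # Vegetated branch: split by texture in the NIR band
        if seg["nir_stddev"] > 15:
            return "forest"
        return "grass"
    # Non-vegetated branch: bright surfaces read as developed
    if seg["blue_mean"] > 120:
        return "developed"
    return "water"

print(classify_segment({"ndvi_mean": 0.45, "nir_stddev": 20, "blue_mean": 80}))  # forest
```

In eCognition the same branches become class-membership rules rather than a function, but the threshold logic is identical.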

The eCognition rules were tweaked to provide optimal classification results. Finally, hand editing corrected gross errors.

Simplified Classification

Data Preparation
The necessary ADS40 color-infrared image tiles were imported into .img format. A vegetation index was created for each tile and the layers were stacked. The resulting dataset was used to create an eCognition project and to perform segmentation.
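The per-tile index-plus-stack step can be sketched in NumPy; the band values below are made up, and real tiles would be read from .img files with GIS software rather than constructed in memory:

```python
import numpy as np

# Hypothetical 8-bit CIR bands (green, red, NIR) for one small tile
green = np.full((2, 2), 80, dtype=np.uint8)
red = np.full((2, 2), 60, dtype=np.uint8)
nir = np.full((2, 2), 180, dtype=np.uint8)

# Vegetation index for the tile
ndvi = (nir.astype(float) - red) / (nir.astype(float) + red)

# Stack the raw bands and the index into one bands-last array
stacked = np.dstack([green, red, nir, ndvi])
print(stacked.shape)  # (2, 2, 4)
```

Stacking promotes everything to float because of the NDVI layer; keeping the index in the same multi-band file is what lets the segmentation and classification use it alongside the raw bands.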

Classification
The lower Quinnipiac River region was the first to be classified. It consisted of one full tile and one partial tile; the full tile was classified first, then the partial tile, and the two results were merged. Two segmentation levels were used with only four classes (developed, residential, dark developed and water); the distinction between the developed classes is not important for this project. The classification was exported and edited to correct gross errors.

The three sections of the lower Connecticut River region of interest were also classified. For each section, the appropriate data were assembled (three bands plus a vegetation index). Each section was processed separately with the following steps: (1) image segmentation, (2) import the classification hierarchy from the lower Quinnipiac River classification, (3) edit the hierarchy where necessary, (4) export the classification, (5) edit the classification to fix errors.
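The five per-section steps can be sketched as a driver loop. Since the real steps are interactive eCognition operations, everything here is a simplified stand-in: the "hierarchy" is reduced to a single NDVI cutoff and the function names are hypothetical:

```python
def process_section(section_ndvi, base_hierarchy):
    """Schematic stand-in for the five per-section steps; the threshold
    'hierarchy' and all names are assumptions, since the real workflow
    runs inside eCognition's GUI."""
    # (2) import the shared hierarchy, (3) edit per section if needed
    hierarchy = dict(base_hierarchy)
    # (1) segmentation + (4) classification, reduced to a value threshold
    classified = [
        "vegetated" if v >= hierarchy["ndvi_cutoff"] else "non-vegetated"
        for v in section_ndvi
    ]
    # (5) manual error correction would follow here
    return classified

base = {"ndvi_cutoff": 0.3}
print(process_section([0.5, 0.1, 0.4], base))  # ['vegetated', 'non-vegetated', 'vegetated']
```

Copying the base hierarchy per section mirrors step (2): each section starts from the lower Quinnipiac River rules and diverges only where edits are needed.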

