
ENVI Zoom Tutorial:
ENVI Feature Extraction with Supervised Classification

Table of Contents 

Overview of This Tutorial
    Files Used in This Tutorial
Background
    The ENVI Feature Extraction Workflow
Extracting Impervious Surfaces with Supervised Classification
    Opening and Displaying the Image
    Segmenting the Image
    Supervised Classification
    Restoring Training Data
    Improving Classification Results
    Saving your Changes to a New Training Data File
    Creating a Shapefile of Impervious Surfaces
    Exiting ENVI Feature Extraction
References

Overview of This Tutorial 

This tutorial demonstrates how to extract impervious surfaces from a QuickBird multispectral image using supervised 

classification in ENVI Feature Extraction. 

Files Used in This Tutorial  

ENVI Resource DVD: envidata\feature_extraction 
 

File                           Description
qb_colorado                    QuickBird multispectral image, Boulder, CO, USA, captured July 4, 2005
qb_colorado.hdr                Header file for above
qb_colorado_supervised.xml     Training data file for above

 

  

QuickBird files are courtesy of DigitalGlobe and may not be reproduced without explicit permission from DigitalGlobe. 

 

Note: Some IDL and ENVI Zoom features take advantage of graphics hardware that supports the OpenGL 2.0 interface 
to improve rendering performance, if such hardware is present. Your video card should support OpenGL 2.0 or higher to 
use these features. Be sure to update your video card drivers to the most recent version, and set the ENVI Zoom 
preference Use Graphics Card to Accelerate Enhancement Tools to Yes.

 

Background 

ENVI Feature Extraction is a module for extracting information from high-resolution panchromatic or multispectral 

imagery based on spatial, spectral, and texture characteristics. You can extract multiple features at a time such as 

vehicles, buildings, roads, bridges, rivers, lakes, and fields. ENVI Feature Extraction is designed to work with any type of 

image data in an optimized, user-friendly, and reproducible fashion so you can spend less time understanding processing 

details and more time interpreting results.  
 

ENVI Feature Extraction uses an object-based approach to classify imagery. Traditional remote sensing classification 

techniques are pixel-based, meaning that spectral information in each pixel is used to classify imagery. This technique 

works well with hyperspectral data, but it is not ideal for panchromatic or multispectral imagery. With high-resolution 

panchromatic or multispectral imagery, an object-based method offers more flexibility in the types of features to be 

extracted. An object is a region of interest with spatial, spectral (brightness and color), and/or texture characteristics that 
describe the region.


The ENVI Feature Extraction Workflow 

ENVI Feature Extraction is the combined process of segmenting an image into regions of pixels, computing attributes for 

each region to create objects, and classifying the objects (with rule-based or supervised classification) based on 

attributes, to extract features. The overall workflow is summarized in Figure 1. The workflow allows you to go back to 

previous steps if you want to change your settings. 

 

 

Figure 1: Feature Extraction Workflow 

 


Extracting Impervious Surfaces with Supervised Classification 

This tutorial simulates the workflow of a city planner who wants to identify all of the impervious surfaces in a 

neighborhood. Impervious surfaces include paved surfaces, rooftops, and other structures which replace naturally 

pervious soil with impervious materials. The total coverage by impervious surfaces in a given area affects urban air and 

water resources. City government officials often use the area of impervious surface on a given property as input into 

assessing its property tax. You will learn how to use ENVI Feature Extraction to extract impervious surfaces using 
supervised classification and save the results to a polygon shapefile.

 

Supervised classification in ENVI Feature Extraction is an iterative process. Best results are obtained by collecting a wide 

range of training samples, modifying classification parameters, and modifying computed attributes, all while previewing 

results on-the-fly. When your imagery is highly textured and consists of many spatially and spectrally heterogeneous 
features, do not expect a quick, simple, linear workflow.
 

If you need more information about a particular step, click the blue “Tip” links in the Feature Extraction dialog to access 

ENVI Zoom Help. 

Opening and Displaying the Image 

1.  From the menu bar, select File → Open. The Open dialog appears. 
 
2.  Navigate to envidata\feature_extraction and open qb_colorado. This image is a pan-sharpened, 0.6-m spatial 
resolution QuickBird subset, saved to ENVI raster format. QuickBird captured this scene on July 4, 2005.

 

3.  From the menu bar, select Processing → Feature Extraction. The Select Input File dialog appears. 

 

4.  The file qb_colorado is selected by default. Click OK. You can create spectral and spatial subsets for use with 

ENVI Feature Extraction, but you will not use these features in this exercise. The Feature Extraction dialog 

appears. 

Segmenting the Image  

1.  Enable the Preview option to display a Preview Portal. ENVI Zoom segments the image into regions of pixels 

based on their spatial, spectral, and texture information. The Preview Portal shows you the current segmentation 

results for a portion of the image (Figure 2).  

 

You can use the Blend, Flicker, and Swipe tools on the Preview Portal toolbar to view the underlying layer. You 
can also use the Pan, Zoom, and Transparency tools on the main toolbar, although these are for display purposes 

only; they do not affect ENVI Feature Extraction results. You cannot adjust the Contrast, Brightness, Stretch, or 

Sharpen values in a Preview Portal. You can move the Preview Portal around the image or resize it to look at 

different areas. 

 

Tip: If the segments are too light to visualize in the Preview Portal, you can click in the Image window to select 
the image layer, then increase the transparency of the image (using the Transparency slider in the main toolbar). 

 


 

 

Figure 2: ENVI Zoom interface, with the Preview Portal showing segmentation results

 

 

Later in this tutorial, you will restore a previously created training data file for use with supervised classification. 

This training data file was created using a Scale Level of 30, a Merge Level of 90, and no refinement. Training 
data files are tied to specific values for these parameters, so you must use these same segmentation parameters 

in the next few steps. 

 

For more background on choosing effective Scale Level, Merge Level, and Refine parameters, see the ENVI Zoom 

tutorial, ENVI Feature Extraction with Rule-Based Classification.

 

2.  Type 30.0 in the Scale Level field, and click Next to segment the entire image using this value. ENVI Zoom 

creates a Region Means image, adds it to the Layer Manager, and displays it in the Image window. The new layer 
name is qb_coloradoRegionMeans. The Region Means image is a raster file that shows the results of the 
segmentation process. Each segment is assigned the mean band values of all the pixels that belong to that 

region. Feature Extraction proceeds to the Merge step (Step 2 of 4 of the Find Objects task). 
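Note: If you are curious how a Region Means raster is built conceptually, the short Python/NumPy sketch below replaces 
every pixel with the per-band mean of the segment it belongs to. This is only an illustration of the idea, not ENVI 
Zoom's internal implementation, and the array names and shapes are assumptions.

    import numpy as np

    def region_means(image, labels):
        # image:  (rows, cols, bands) array of pixel values
        # labels: (rows, cols) array of integer segment IDs from the segmentation
        out = np.empty_like(image, dtype=float)
        for seg_id in np.unique(labels):
            mask = labels == seg_id
            out[mask] = image[mask].mean(axis=0)  # per-band mean over the segment
        return out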

 

3.  Merging allows you to group similar adjacent segments by re-assembling over-segmented or highly textured 

results. Type 90.0 in the Merge Level field, and click Next. Feature Extraction proceeds to the Refine step 

(Step 3 of 4 of the Find Objects task). 

 


 

Figure 3: Preview Portal displayed over Region Means image. Values of 30 for Scale Level and 90 for Merge Level 
effectively delineate the boundaries of impervious surfaces such as roads, sidewalks, and rooftops.

 
 

4.  The Refine step is an optional, advanced step that uses a technique called thresholding to further adjust the 
segmentation of objects. Thresholding works best with point objects that have a high contrast relative to their 

background (for example, bright aircraft against a dark tarmac). You do not need to perform any thresholding on 

the image to extract impervious surfaces. Accept the default selection of No Thresholding, and click Next. 

Feature Extraction proceeds to the Compute Attributes step (Step 4 of 4 of the Find Objects task). 

 

5.  For this exercise, you will compute all available attributes. Ensure that all attribute categories are selected, and 

click Next. ENVI Zoom computes the attributes; this process takes a few minutes to complete. These attributes 

will be available for supervised classification. If you choose not to compute selected attributes when using ENVI 

Feature Extraction, you will save time in this step but will be unable to use those attributes for classification. 

 

 

Figure 4: Compute Attributes step 

 

 

 


6.  Feature Extraction proceeds to the Extract Features task. Select Classify by selecting examples, and click Next.

 

 

Figure 5: Selecting to perform supervised classification 

 

Supervised Classification 

Supervised classification is the process of using training data to assign objects of unknown identity to one or more known 

features. The more features and training samples you select, the better the results from supervised classification. Training 

data consist of objects that you select as representative samples of known features. 
 

The Extract Features task begins with one undefined feature (Feature_1, Figure 6). As you move the mouse around the 

Region Means image, the objects underlying your cursor are highlighted with the color assigned to that feature. You may 

need to click once in the Image window to activate this function. This is normally how you would begin the process of 

collecting training data. However, for this exercise, you will restore a previously created training data file for extracting 

impervious surfaces and further improve its classification. 
 

 

Figure 6: Supervised classification 

 


Restoring Training Data 

A training data file is an XML file that contains all the parameters used to classify a given image, including the Scale Level, 

Merge Level, Refine parameters, computed attributes, classification algorithm and associated parameters, attribute data, 

and training samples. The training data file you are about to open is a first attempt at extracting impervious surfaces. It 

contains three classes: Impervious, Trees, and Grass_Fields. It was created using all computed attributes and using the 

following classification parameters: 

 

   Classification Algorithm: Support Vector Machine 

   Kernel Type: Polynomial 

   Degree of Kernel Polynomial: 2 

   Bias in Kernel Function: 1.00 

   Gamma in Kernel Function: 1.03 

   Penalty Parameter: 200.00 

   Classification Probability Threshold: 0.00 

 

Simple trial-and-error was used to arrive at these values, which produced a reasonable classification. You will learn more 

about this process later in this tutorial. 

 

1.  Click the Restore Training Data button. The Restore Training Data dialog appears.

 
2.  Navigate to envidata\feature_extraction, select qb_colorado_supervised.xml, and click Open

ENVI Zoom restores and displays the previously created training data.  

 

Notice the various colored objects in the image. A red object, for example, is a training sample representing the 
Impervious feature. The Feature Extraction dialog lists each feature with its representative color. From this dialog, 

can you tell how many training samples were collected for the Impervious feature? 

 

Although you are interested in extracting only one feature (impervious surfaces), you still need to collect training 

data for several different features to obtain the best results. The more features and training samples you provide, 

the more choices the classification tool has in classifying objects. The minimum number of features needed to 
perform classification is two. 

Improving Classification Results 

If you were to use this training data file without any modifications and you exported the Impervious feature to a 
shapefile, the results would be reasonable. But the classification is still not completely accurate. Following are some 

helpful tips for improving the classification and obtaining more accurate impervious surface boundaries. 

 

1.  Click the Preview option in the Feature Extraction dialog. A Preview Portal appears with the current classification 

results. As you make changes to the training data, attributes, and classification parameters in the next several 

steps, the classification results will update in real time. Move the Preview Portal around the image or resize it to 
view results for different areas. 

 

2.  In the Layer Manager (middle-left part of the ENVI Zoom interface), drag the layer named qb_colorado above 
qb_coloradomergedRegionMeans. The original QuickBird image is now the top-most layer and provides a better 
visual representation of the scene than the Region Means image.


 

Figure 7: Moving a layer to the top 

 

Understanding Errors of Commission and Omission in Supervised Classification 

3.  In the upper-right corner of the ENVI Zoom interface, select PixelX,PixelY from the Go To drop-down list. Enter 

710,284 in the Go To field. Press Enter.

  

 

 

 

The image display centers over this pixel coordinate. 

 

4.  Center the Preview Portal over the house and dirt roads in the center part of the image (see Figure 8).  

 

Note: To move the Preview Portal yourself from now on, you may need to select RasterPortal in the Layer 

Manager to make the Preview Portal the active layer. 

 

5.  Using the Transparency slider on the main toolbar, set the transparency to 100% so you can view the original 

image and training samples (Figure 8). You will see that no training samples were taken from the houses or dirt 

roads (just the surrounding grasses/fields, shown as a yellowish color).  

 

 

 

Figure 8: Preview Portal (100% transparency) centered over dirt roads

  

6.  Set the transparency back to 0% in the Preview Portal so you can see the current classification results (Figure 9). 

Notice how the current training data set incorrectly classifies the dirt roads as Impervious. In terms of the 

Impervious feature, this is an example of an error of commission, or a false positive. In other words, the dirt roads 
are not actually impervious surfaces, but they are classified as such.

 


 

Figure 9: Preview Portal (0% transparency) centered over dirt roads. The dirt roads are incorrectly classified as Impervious.

 
7.  Set the transparency of the Preview Portal back to 100%. 

 

8.  Enter 700,1175 in the Go To field in the upper-right corner of the ENVI Zoom interface. Press Enter.  

 

 

 

The image display centers over this pixel coordinate. 

 

9.  Move the Preview Portal over the paved trail shown in Figure 10. Notice that some training data was previously 

collected from most of the trail (shown in red). 

 

 

 

Figure 10: Preview Portal (100% transparency) centered over paved trail. Training data already collected from the trail is shown in red.

 

 

10.  Set the transparency of the Preview Portal to 0% and notice how a section of the trail is misclassified as 
Grass_Fields (Figure 11). In terms of the Impervious feature, this is an error of omission, or a false negative. In 
this case, you want this object to be classified as Impervious, but it is classified as something else.

 


 

This is an 

impervious 

surface that is 

incorrectly 

classified as 

grasses/fields 

(yellow). 

Figure 11: Preview Portal (0% transparency) centered over paved trail 

 

 

In the next series of steps, you will learn two methods for correcting errors of commission and omission: (1) adding a 

new feature called “Dirt Road” and (2) collecting more training data. 

Adding a New Feature and Collecting Training Data 

1.  In the Feature Extraction dialog, click the Add Feature button. A new feature called “Feature_4” is added to the 
list. It contains no objects because you have not yet collected training data for this feature.

 
2.  Right-click on Feature_4 and select Properties. The Properties dialog appears. 

 

3.  Change the Feature Name to Dirt Road, and click OK. You can also change the color if you wish. The 

Feature List should look similar to the following: 

 

 

Figure 12: Feature List 

 

 

4.  Go back to the upper part of the image with the house and dirt road you examined earlier (see Figure 8). 
 

5.  The Dirt Road feature is selected by default. If the Preview Portal is still open, move it out of the way 

before continuing since you cannot select training data through the Preview Portal. Click once in the 

Image window to activate the process of collecting training data for this feature. As you move around the 

image, the objects underlying the cursor are highlighted with the color assigned to the Dirt Road Feature. 

 

6.  Click to select objects representing the dirt road, as shown in Figure 13. The color of the objects changes 
to the feature color, and the feature name updates to show the number of objects you added. Move 

around the entire image and choose a variety of different objects that represent dirt roads. 

 


 

Figure 13: Collecting training data (blue) for dirt roads 

 

 

Here are a few more tips for selecting objects as training data: 

 

•  To select multiple objects at once, click and drag the cursor to draw a box around the objects. ENVI Zoom 
assigns all of the segments that are completely enclosed within the selection box to the feature. Ctrl-Z undoes 
the box selection. Be careful using the selection box because you can easily select too many objects, which 
will slow the performance of the Preview Portal.

•  To remove an individual object from the selection, click on the object. The feature name updates to show one 
less object.

•  To see the boundaries of the individual objects, enable the Show Boundaries option in the Feature Extraction 
dialog.

 

 

 

7.  Notice how the Preview Portal updates to show the new classification results in other parts of the image 

each time you select a new object and add to the training data set. 

  

8.  Center the Preview Portal over the area with the dirt roads. By adding the Dirt Road feature, did the 

classification improve with regard to impervious surfaces? 

 

9.  In the Feature Extraction dialog, select the Impervious feature. 

 
10.  Experiment with selecting more training data for the Impervious feature, and possibly adding a new 

feature of your choice. Evaluate how these changes affect the classification of impervious surfaces. 

 

Modifying Attributes 

 

1.  Click the Attributes tab in the Feature Extraction dialog. The attributes you computed earlier in the Compute 

Attribute step are used to further classify features. The training data file that you restored was created using all 

computed attributes (shown in the Selected Attributes list). Some attributes are more useful than others when 

differentiating objects. Classification results may not be as accurate when you use all attributes equally because 

the irrelevant attributes can introduce noise into the results. 

 

2.  Click the Auto Select Attributes button. ENVI Zoom selects the best attributes to use for classifying 

features. The underlying logic is based on Yang (2007). See the “References” section of this tutorial for more 

information. 

 

Did this improve the classification of impervious surfaces? If not, try experimenting with your own set of 
attributes by following these steps: 

3.  Select one or more attributes from the Selected Attributes list, then click the Unselect button to remove 
them from consideration. Again, the Preview Portal updates to show the changes to the classification.

 


4.  To select individual attributes for classification, expand the Spectral, Texture, and Spatial folders to see their 

respective attributes. Each attribute is shown with an icon. (The “Customized” folder contains the Color Space 
and Band Ratio attributes.) Click the Select button to add the attribute to the Selected Attributes list.

 

5.  Experiment with different combinations of spatial, spectral, texture, and customized attributes to determine the 

best results for classifying impervious surfaces. If you do not have time to select your own attributes, the Auto 

Select Attributes button often provides good results. 

 

Modifying Classification Parameters 

1.  In the Feature Extraction dialog, click the Algorithm tab. 

 

2.  From the Classification Algorithm drop-down list, select K Nearest Neighbor.

 

3.  Click the Update button and examine the classification results in the Preview Portal. How did changing the 

supervised classification algorithm affect the classification of impervious surfaces? 

 

4.  Experiment with the two classification algorithms (Support Vector Machine and K Nearest Neighbor), and try 

different values for each of their associated parameters. Evaluate how these changes affect the classification of 

impervious surfaces, by clicking the Update button to update the Preview Portal. Following are some tips on 
using the parameters. If you want to skip this background information, proceed to Step 5. 

 

K Nearest Neighbor 

K Parameter: This is the number of neighbors considered during classification. K nearest distances are 

used as a majority vote to determine which class the target belongs to. For example, suppose you have four 

classes and you set the K Parameter to 3. ENVI Zoom returns the distances from the target to the three 
nearest neighbors in the training dataset. In this example, assume that the distances are 5.0 (class A), 6.0 

(class A), and 3.0 (class B). The target is assigned to class A because it found two close neighbors in class A 

that “out-vote” the one from class B, even though the class B neighbor is actually closer. Larger values tend 

to reduce the effect of noise and outliers, but they may cause inaccurate classification. Typically, values of 3, 

5, or 7 work well. This is a useful feature of K Nearest Neighbor classification because it can reject outliers or 

noise in the training samples.
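Note: The majority vote described above is easy to sketch in a few lines of Python. This is not ENVI Zoom code; it 
simply reproduces the worked example from the preceding paragraph (K = 3, class A neighbors at distances 5.0 and 
6.0, class B neighbor at distance 3.0).

    from collections import Counter

    def knn_vote(neighbors, k=3):
        # neighbors: list of (distance, class_label) pairs, one per training sample
        nearest = sorted(neighbors)[:k]                 # the K smallest distances
        votes = Counter(label for _, label in nearest)  # count votes per class
        return votes.most_common(1)[0][0]

    # Two class-A neighbors (5.0, 6.0) out-vote a closer class-B neighbor (3.0).
    print(knn_vote([(5.0, "A"), (6.0, "A"), (3.0, "B"), (9.0, "C")], k=3))  # prints A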

 

 

Support Vector Machine 

SVM is a classification system derived from statistical learning theory that provides good classification results from 

complex and noisy data. For more information, see “Applying Support Vector Machine Classification” in ENVI 

Help, or see Hsu, Chang, and Lin (2007). 

 

Kernel Type: The SVM algorithm provides a choice of four kernel types: Linear, Polynomial, Radial Basis 

Function, and Sigmoid. All of these are different ways of mathematically representing a kernel function. 

The Radial Basis Function kernel type (default) works well in most cases. 

 

Linear: K(xi, xj) = xiᵀxj
Polynomial: K(xi, xj) = (γ xiᵀxj + r)^d, γ > 0
Radial Basis Function (RBF): K(xi, xj) = exp(−γ ||xi − xj||²), γ > 0
Sigmoid: K(xi, xj) = tanh(γ xiᵀxj + r)
 

The Gamma in Kernel Function parameter represents the γ value, which is used for all kernel types except 
Linear. The Bias in Kernel Function parameter represents the r value, which is used for the Polynomial and 
Sigmoid kernels.
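Note: To make the roles of these parameters concrete, the Python sketch below writes out the four standard kernel 
formulas with γ (Gamma in Kernel Function), r (Bias in Kernel Function), and d (Degree of Kernel Polynomial) as 
explicit arguments. It only illustrates the formulas listed above and is not ENVI's implementation; the default values 
shown mirror the training data file used in this tutorial.

    import numpy as np

    def svm_kernel(xi, xj, kernel="polynomial", gamma=1.03, r=1.0, d=2):
        # gamma = Gamma in Kernel Function, r = Bias in Kernel Function,
        # d = Degree of Kernel Polynomial
        xi, xj = np.asarray(xi, dtype=float), np.asarray(xj, dtype=float)
        if kernel == "linear":
            return float(xi @ xj)
        if kernel == "polynomial":
            return float((gamma * (xi @ xj) + r) ** d)
        if kernel == "rbf":
            return float(np.exp(-gamma * np.sum((xi - xj) ** 2)))
        if kernel == "sigmoid":
            return float(np.tanh(gamma * (xi @ xj) + r))
        raise ValueError("unknown kernel: " + kernel)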

 

Degree of Kernel Polynomial: Increasing this parameter more accurately delineates the boundary 
between classes. A value of 1 represents a first-degree polynomial function, which is essentially a straight 
line between two classes (or you could use a Linear kernel instead), so this value works well when you have 
two very distinctive classes. In most cases, however, you will be working with imagery that has a high 
degree of variation and mixed pixels. Increasing the polynomial degree causes the algorithm to more 
accurately follow the contours between classes, but you risk fitting the classification to noise.

 

Penalty Parameter: The penalty parameter controls the trade-off between allowing training errors and 

forcing rigid margins. The more you increase this value, the more the parameter suppresses training data 

from “jumping” classes as you make changes to other parameters. Increasing this value also increases the 

cost of misclassifying points and causes ENVI to create a more accurate model that may not generalize 

well. Enter a floating point value greater than 0. 

Classification Probability Threshold: Use this parameter to set the probability that is required for the 

SVM classifier to classify a pixel. Pixels where all rule probabilities are less than this threshold are 

unclassified. The range of values is 0.0 to 1.0. Increasing this value results in more unclassified pixels. 
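Note: The threshold behaves like a simple rejection rule. The following sketch (plain Python, not ENVI code; the 
dictionary of rule probabilities is an assumption) shows the idea: an object is assigned to its highest-probability 
class only if that probability meets the threshold.

    def classify_with_threshold(probabilities, threshold=0.0):
        # probabilities: dict mapping class name -> rule probability for one object
        best_class = max(probabilities, key=probabilities.get)
        if probabilities[best_class] < threshold:
            return "Unclassified"
        return best_class

    print(classify_with_threshold({"Impervious": 0.62, "Trees": 0.25, "Grass_Fields": 0.13},
                                  threshold=0.7))  # prints Unclassified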

 

5.  To restore the default values for all of the parameters, click the Reset button. 

Saving your Changes to a New Training Data File 

If you significantly improved the delineation of impervious surface boundaries by adding features, selecting more training 

data, experimenting with different attributes, and modifying classification parameters, you can choose to save your 

updated training data set to a new training data file: 
 

1.  In the Feature Extraction dialog, click the Features tab. 

 

2.  Click the Save Training Data As button. The Training Data dialog appears.

 

3.  Select an output location and a new file name. Do not overwrite the training data file you restored earlier. This 

allows you to save an “iteration” of a training data set that you like in case you want to make further changes 

later. Click OK.

 

If you ever want to revert to the classification results from the original training data file, you can click the Restore 
Training Data button and select qb_colorado_supervised.xml.

 

Creating a Shapefile of Impervious Surfaces 

1.  Click Next in the Feature Extraction dialog. ENVI Zoom classifies the entire image. Feature Extraction proceeds to 

the Export step. 

 
2.  The Export Vector Results option is selected by default so that you can output each feature to a separate 
shapefile. Because you are only interested in extracting impervious surfaces, leave the Impervious option 
checked and unselect all of the other features.

 

3.  Feature Extraction provides an option to smooth your vector shapefiles using the Douglas-Peucker line-

simplification algorithm (see the “References” section of this tutorial for more information). Line simplification 

works best with highly curved features such as rivers and roads. Select the Smoothing option and leave the 

default Smoothing Threshold value of 1 for this exercise. 
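Note: If you want to see what the Douglas-Peucker algorithm actually does to a polygon boundary, here is a minimal 
Python sketch. It is not the code ENVI Zoom runs; it only illustrates the idea that vertices lying within the threshold 
distance of the line joining a segment's endpoints are dropped, while the farthest vertex is kept and used to split the 
problem recursively.

    import math

    def point_line_distance(p, a, b):
        # Perpendicular distance from point p to the line through a and b
        (x, y), (x1, y1), (x2, y2) = p, a, b
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            return math.hypot(x - x1, y - y1)
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

    def douglas_peucker(points, tolerance):
        # points: list of (x, y) vertices along a line or ring
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= tolerance:
            return [points[0], points[-1]]   # everything in between is dropped
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right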

 

4.  Select an output directory to save your shapefile. By default, the shapefile will be named according to the 

associated Feature name. 

 

5.  Ensure the Display Datasets After Export option is enabled. 

 

6.  Click Next. ENVI Zoom creates a shapefile of the Impervious feature, adds it as a new vector layer to the Layer 

Manager, and displays it in the Image window. 

 


 

 

7.  In the Layer Manager, right-click on the shapefile name and select Properties. The Properties dialog appears. 

 

8.  Double-click inside the Fill Interior field, and select True.

 

9.  Choose a Fill Color, and close the dialog. The polygon shapefile is filled with color, which makes the boundaries 

of impervious surfaces easier to discern. 
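Note: If you later want to work with the exported shapefile outside of ENVI Zoom (for example, to total the impervious 
area for a report), a short script along the following lines can read it. This is only a sketch and not part of the 
tutorial workflow: it assumes the open-source pyshp library is installed, that the output file is named Impervious.shp 
(the actual name depends on your Feature name and output directory), and that the shapefile uses a projected coordinate 
system with coordinates in metres. It also treats every ring as an outer ring, so polygons with holes would be over-counted.

    import shapefile  # the pyshp package (pip install pyshp)

    def ring_area(points):
        # Shoelace formula; valid for projected (planar) coordinates
        area = 0.0
        for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    sf = shapefile.Reader("Impervious.shp")   # hypothetical output file name
    total = 0.0
    for shape in sf.shapes():
        pts = [tuple(p) for p in shape.points]
        starts = list(shape.parts) + [len(pts)]   # start index of each ring
        for begin, end in zip(starts[:-1], starts[1:]):
            total += ring_area(pts[begin:end])
    print("Total impervious area: %.1f square metres" % total)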

Exiting ENVI Feature Extraction 

1.  After you export your classification results, you are presented with a summary of the processing options and 

settings you used throughout the Feature Extraction workflow. The Report tab lists the details of your settings, 

and the Statistics tab gives a summary of your feature name, feature count, and area statistics for the polygon 

shapefile you created. You can save this information to a text file by clicking the Save Text Report button. 

 

2.  After viewing the processing summary, you can click Finish to exit the Feature Extraction workflow. Or, click 

Previous to go back to the Export step and change the output options for classification results.  

 

If you click Previous, any output that you created is removed from the Data Manager and Layer Manager. If you 

click Next from the Export step without making any changes, Feature Extraction will not re-create the output. 

You must make at least one change in the Export step for Feature Extraction to create new shapefiles and/or 

classification images. 

References 

Arnold, C. L., and C. J. Gibbons. (1996). Impervious surface coverage: the emergence of a key environmental indicator. 

Journal of the American Planning Association, Vol. 62. 

 
Douglas, D. H., and T. K. Peucker. (1973). Algorithms for the reduction of the number of points required to represent a 

digitized line or its caricature. Cartographica, Vol. 10, No. 2, pp. 112-122.

 

Hsu, C.-W., Chang, C.-C., and Lin, C.-J. (2007). “A practical guide to support vector classification.” National Taiwan 

University. URL http://ntu.csie.org/~cjlin/papers/guide/guide.pdf

  

Yang, Z. (2007). An interval based attribute ranking technique. Unpublished report, ITT Visual Information Solutions. A 
copy of this paper is available from ITT Visual Information Solutions Technical Support.

 
