8.1.1. batch

class hs_process.batch(base_dir=None, search_ext='.bip', dir_level=0)[source]

Bases: object

Class for batch processing hyperspectral image data. Makes use of segment, spatial_mod, and spec_mod to batch process many datacubes in a given directory. Supports options to save full datacubes and geotiff renders, as well as summary statistics and/or reports for the various tools.

Note

It may be a good idea to review and understand the defaults, hsio, hstools, segment, spatial_mod, and spec_mod classes prior to using the batch module.

Methods Summary

cube_to_spectra([fname_list, base_dir, …])

Calculates the mean and standard deviation for each cube in fname_list and writes the result to a “.spec” file.

segment_band_math([fname_list, base_dir, …])

Batch processing tool to perform band math on multiple datacubes in the same way.

segment_create_mask([fname_list, base_dir, …])

Batch processing tool to create a masked array on many datacubes.

spatial_crop([fname_sheet, base_dir, …])

Iterates through a spreadsheet that provides necessary information about how each image should be cropped and how it should be saved.

spectra_combine([fname_list, base_dir, …])

Batch processing tool to gather all pixels from every image in a directory, compute the mean and standard deviation, and save as a single spectra (i.e., a spectra file is equivalent to a single spectral pixel with no spatial information).

spectra_to_csv([fname_list, base_dir, …])

Reads all the .spec files in a directory and saves their reflectance information to a .csv.

spectra_to_df([fname_list, base_dir, …])

Reads all the .spec files in a directory and returns their data as a pandas.DataFrame object.

spectral_clip([fname_list, base_dir, …])

Batch processing tool to spectrally clip multiple datacubes in the same way.

spectral_smooth([fname_list, base_dir, …])

Batch processing tool to spectrally smooth multiple datacubes in the same way.

Methods Documentation

cube_to_spectra(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='cube_to_spec', name_append='cube-to-spec', geotiff=True, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Calculates the mean and standard deviation for each cube in fname_list and writes the result to a “.spec” file.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir (default: None).

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘cube_to_spec’).

  • name_append (str) – name to append to the filename (default: ‘cube-to-spec’).

  • geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
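The fname_list/base_dir/search_ext/dir_level interplay described above can be illustrated with a small, stdlib-only sketch. Note that find_files here is a hypothetical helper written for illustration; the library's actual file-search logic may differ.

```python
import os

def find_files(base_dir, search_ext='.bip', dir_level=0):
    """Collect files ending in ``search_ext`` up to ``dir_level`` levels
    below ``base_dir``; ``dir_level=None`` searches all levels."""
    matches = []
    base_depth = base_dir.rstrip(os.sep).count(os.sep)
    for root, dirs, files in os.walk(base_dir):
        depth = root.rstrip(os.sep).count(os.sep) - base_depth
        if dir_level is not None and depth >= dir_level:
            dirs[:] = []  # do not descend any deeper than ``dir_level``
        matches.extend(os.path.join(root, f) for f in files
                       if f.endswith(search_ext))
    return sorted(matches)
```

With dir_level=0 (the default), only files directly inside base_dir are returned; passing an explicit fname_list bypasses the search entirely.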

Note

The following batch example builds on the API example results of the spatial_mod.crop_many_gdf function. Please complete the spatial_mod.crop_many_gdf example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.cube_to_spectra to calculate the mean and standard deviation across all pixels for each of the datacubes in base_dir.

>>> hsbatch.cube_to_spectra(base_dir=base_dir, geotiff=False, out_force=True)
Calculating mean spectra: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-cube-to-spec-mean.spec
Calculating mean spectra: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-cube-to-spec-mean.spec
Calculating mean spectra: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-cube-to-spec-mean.spec
...

Use seaborn to visualize the spectra of plots 1011, 1012, and 1013. Notice how hsbatch.io.name_plot is utilized to retrieve the plot ID, and how the “history” tag is referenced from the metadata to determine the number of pixels whose reflectance was averaged to create the mean spectra. Also remember that pixels across the original input image likely represent a combination of soil, vegetation, and shadow.

>>> import seaborn as sns
>>> import re
>>> fname_list = [r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-cube-to-spec-mean.spec',
                  r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-cube-to-spec-mean.spec',
                  r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-cube-to-spec-mean.spec']
>>> colors = ['red', 'green', 'blue']
>>> for fname, color in zip(fname_list, colors):
>>>     hsbatch.io.read_spec(fname)
>>>     meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>>     data = hsbatch.io.spyfile_spec.load().flatten() * 100
>>>     hist = hsbatch.io.spyfile_spec.metadata['history']
>>>     pix_n = re.search('<pixel number: (.*)>', hist).group(1)
>>>     ax = sns.lineplot(x=meta_bands, y=data, color=color, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')')
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.cube_to_spectra`', weight='bold')
../_images/cube_to_spectra.png
segment_band_math(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='band_math', name_append='band-math', geotiff=True, method='ndi', wl1=None, wl2=None, wl3=None, b1=None, b2=None, b3=None, list_range=True, plot_out=True, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to perform band math on multiple datacubes in the same way. batch.segment_band_math is typically used prior to batch.segment_create_mask to generate the images/directory required for the masking process.

Parameters
  • method (str) – Must be one of “ndi” (normalized difference index), “ratio” (simple ratio index), “derivative” (derivative-type index), or “mcari2” (modified chlorophyll absorption ratio index improved). Indicates what kind of band math should be performed on the input datacube. The “ndi” method leverages segment.band_math_ndi(), the “ratio” method leverages segment.band_math_ratio(), and the “derivative” method leverages segment.band_math_derivative(). Please see the segment documentation for more information (default: “ndi”).

  • wl1 (int, float, or list) – the wavelength (or set of wavelengths) to be used as the first parameter of the band math index; if list, then consolidates all bands between two wavelength values by calculating the mean pixel value across all bands in that range (default: None).

  • wl2 (int, float, or list) – the wavelength (or set of wavelengths) to be used as the second parameter of the band math index; if list, then consolidates all bands between two wavelength values by calculating the mean pixel value across all bands in that range (default: None).

  • b1 (int, float, or list) – the band (or set of bands) to be used as the first parameter of the band math index; if list, then consolidates all bands between two band values by calculating the mean pixel value across all bands in that range (default: None).

  • b2 (int, float, or list) – the band (or set of bands) to be used as the second parameter of the band math index; if list, then consolidates all bands between two band values by calculating the mean pixel value across all bands in that range (default: None).

  • list_range (bool) – Whether bands/wavelengths passed as a list are interpreted as a range of bands (True) or as individual bands (False). If list_range is True, b1/wl1 and b2/wl2 should be lists with two items, and all bands/wavelengths between the two values will be used (default: True).

  • plot_out (bool) – whether to save a histogram of the band math result (default: True).

  • geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.
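For reference, the “ndi” and “ratio” methods reduce to simple per-pixel arithmetic. The following pure-Python sketch is illustrative only and is not segment's internal implementation:

```python
def band_math_ndi(b1, b2):
    """Normalized difference index: (b1 - b2) / (b1 + b2)."""
    return (b1 - b2) / (b1 + b2)

def band_math_ratio(b1, b2):
    """Simple ratio index: b1 / b2."""
    return b1 / b2

# e.g., an NDVI-style index from mean reflectance near 800 nm and 670 nm
ndi_val = band_math_ndi(0.50, 0.05)    # 0.45 / 0.55 ≈ 0.818
ratio_val = band_math_ratio(0.50, 0.05)  # ≈ 10.0
```

When list_range is True, each of b1/b2 (or wl1/wl2) is first consolidated to the mean pixel value across the band range before these formulas are applied.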

Note

The following batch example builds on the API example results of the spatial_mod.crop_many_gdf function. Please complete the spatial_mod.crop_many_gdf example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.segment_band_math to compute the MCARI2 (Modified Chlorophyll Absorption Ratio Index Improved; Haboudane et al., 2004) spectral index for each of the datacubes in base_dir. See Harris Geospatial for more information about the MCARI2 spectral index and references to other spectral indices.
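The MCARI2 computation itself can be sketched in plain Python. This assumes the Haboudane et al. (2004) formulation and is illustrative only, not the library's internal code:

```python
import math

def mcari2(r800, r670, r550):
    """MCARI2 (Haboudane et al., 2004) from reflectance on a 0-1 scale."""
    num = 1.5 * (2.5 * (r800 - r670) - 1.3 * (r800 - r550))
    den = math.sqrt((2 * r800 + 1) ** 2
                    - (6 * r800 - 5 * math.sqrt(r670)) - 0.5)
    return num / den

print(round(mcari2(0.50, 0.05, 0.08), 3))  # healthy-canopy-like values -> 0.683
```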

>>> folder_name = 'band_math_mcari2-800-670-550'  # folder name can be modified to be more descriptive in what type of band math is being performed
>>> method = 'mcari2'  # must be one of "ndi", "ratio", "derivative", or "mcari2"
>>> wl1 = 800
>>> wl2 = 670
>>> wl3 = 550
>>> hsbatch.segment_band_math(base_dir=base_dir, folder_name=folder_name,
                              name_append='band-math', geotiff=True,
                              method=method, wl1=wl1, wl2=wl2, wl3=wl3,
                              plot_out=True, out_force=True)
Bands used (``b1``): [198]
Bands used (``b2``): [135]
Bands used (``b3``): [77]
Wavelengths used (``b1``): [799.0016]
Wavelengths used (``b2``): [669.6752]
Wavelengths used (``b3``): [550.6128]
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-band-math-mcari2-800-670-550.bip
...

batch.segment_band_math creates a new folder in base_dir (in this case the new directory is F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550) which contains several data products. The first is band-math-stats.csv: a spreadsheet containing summary statistics for each of the image cubes that were processed via batch.segment_band_math; stats include pixel count, mean, standard deviation, median, and percentiles across all image pixels.

Second is a geotiff file for each of the image cubes after the band math processing. This can be opened in QGIS to visualize in a spatial reference system, or can be opened using any software that supports floating point .tif files.

../_images/segment_band_math_plot_611-band-math-mcari2-800-670-550_tif.png

Third is the band math raster saved in the .hdr file format. Note that the data contained here should be the same as in the .tif file, so it’s a matter of preference as to which may be more useful. This single-band .hdr can also be opened in QGIS.

Fourth is a histogram of the band math data contained in the image. The histogram illustrates the 90th percentile value, which may be useful in the segmentation step (e.g., see batch.segment_create_mask).

../_images/segment_band_math_plot_611-band-math-mcari2-800-670-550.png
segment_create_mask(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, mask_dir=None, base_dir_out=None, folder_name='mask', name_append='mask', geotiff=True, mask_thresh=None, mask_percentile=None, mask_side='lower', out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to create a masked array on many datacubes. batch.segment_create_mask is typically used after batch.segment_band_math to mask all the datacubes in a directory based on the result of the band math process.

Parameters
  • mask_thresh (float or int) – The value for which to mask the array; should be used with side parameter (default: None).

  • mask_percentile (float or int) – The percentile of pixels to mask; if mask_percentile=95 and mask_side='lower', the lowest 95% of pixels will be masked following the band math operation (default: None; range: 0-100).

  • mask_side (str) – The side of the threshold or percentile for which to apply the mask. Must be either ‘lower’ or ‘upper’; if ‘lower’, everything below the threshold/percentile will be masked (default: ‘lower’).

  • geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.
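The thresholding logic can be sketched with stdlib Python. A nearest-rank percentile is shown for illustration; the library's exact percentile method may differ:

```python
import math

def percentile_nearest_rank(values, pct):
    """Nearest-rank percentile of ``values`` (``pct`` in 0-100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def apply_mask_lower(values, thresh):
    """Keep only values at or above ``thresh`` (mask_side='lower')."""
    return [v for v in values if v >= thresh]

mcari2_vals = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
thresh = percentile_nearest_rank(mcari2_vals, 90)   # -> 0.9
kept = apply_mask_lower(mcari2_vals, thresh)        # -> [0.9, 1.0]
```

With mask_side='upper' the comparison flips, keeping only values below the threshold/percentile.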

Note

The following batch example builds on the API example results of spatial_mod.crop_many_gdf and batch.segment_band_math. Please complete each of those API examples to be sure your directories (i.e., base_dir, and mask_dir) are populated with image files. The following example will be masking datacubes located in: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf based on MCARI2 images located in: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550

Example

Load and initialize the batch module, ensuring base_dir is a valid directory

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

There must be a single-band image that will be used to determine which datacube pixels are to be masked (determined via the mask_dir parameter). Point to the directory that contains the MCARI2 images.

>>> mask_dir = os.path.join(base_dir, 'band_math_mcari2-800-670-550')
>>> print(os.path.isdir(mask_dir))
True

Indicate how the MCARI2 images should be used to determine which hyperspectral pixels are to be masked. The available parameters for controlling this are mask_thresh, mask_percentile, and mask_side. We will mask out all pixels that fall below the MCARI2 90th percentile.

>>> mask_percentile = 90
>>> mask_side = 'lower'

Finally, indicate the folder to save the masked datacubes and perform the batch masking via batch.segment_create_mask

>>> folder_name = 'mask_mcari2_90th'
>>> hsbatch.segment_create_mask(base_dir=base_dir, mask_dir=mask_dir,
                                folder_name=folder_name,
                                name_append='mask-mcari2-90th', geotiff=True,
                                mask_percentile=mask_percentile,
                                mask_side=mask_side)
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-mask-mcari2-90th.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-mask-mcari2-90th-spec-mean.spec
...
../_images/segment_create_mask_inline.png

batch.segment_create_mask creates a new folder in base_dir named according to the folder_name parameter (in this case the new directory is F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) which contains several data products. The first is mask-stats.csv: a spreadsheet containing the band math threshold value for each image file. In this example, the MCARI2 value corresponding to the 90th percentile is listed.

fname    plot_id    lower-pctl-90
…        1011       0.83222
…        1012       0.81112
…        1013       0.74394
…etc.

Second is a geotiff file for each of the image cubes after the masking procedure. This can be opened in QGIS to visualize in a spatial reference system, or can be opened using any software that supports floating point .tif files. The masked pixels are saved as null values and should render transparently.

../_images/segment_create_mask_geotiff.png

Third is the full hyperspectral datacube, also with the masked pixels saved as null values. Note that the only pixels remaining are the 10% with the highest MCARI2 values.

../_images/segment_create_mask_datacube.png

Fourth is the mean spectra across the unmasked datacube pixels. This is illustrated above by the green line plot (the light green shadow represents the standard deviation for each band).

spatial_crop(fname_sheet=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spatial_crop', name_append='spatial-crop', geotiff=True, method='single', gdf=None, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Iterates through a spreadsheet that provides necessary information about how each image should be cropped and how it should be saved.

If gdf is passed (a geopandas.GeoDataFrame polygon file), the cropped images will be shifted to the center of the appropriate “plot” polygon.

Parameters
  • fname_sheet (fname, pandas.DataFrame, or None, optional) – The filename of the spreadsheet that provides the necessary information for fine-tuning the batch process cropping. See below for more information about the required and optional contents of fname_sheet and how to properly format it. Optionally, fname_sheet can be a pandas.DataFrame. If left to None, base_dir and gdf must be passed.

  • base_dir (str, optional) – directory path to search for files to spatially crop; if fname_sheet is not None, base_dir will be ignored (default: None).

  • base_dir_out (str, optional) – output directory of the cropped image (default: None).

  • folder_name (str, optional) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spatial_crop’).

  • name_append (str, optional) – name to append to the filename (default: ‘spatial-crop’).

  • geotiff (bool, optional) – whether to save an RGB image as a geotiff alongside the cropped datacube.

  • method (str, optional) – Must be one of “single” or “many_gdf”. Indicates whether a single plot should be cropped from the input datacube or if many/multiple plots should be cropped from the input datacube. The “single” method leverages spatial_mod.crop_single() and the “many_gdf” method leverages spatial_mod.crop_many_gdf(). Please see the spatial_mod documentation for more information (default: “single”).

  • gdf (geopandas.GeoDataFrame, optional) – the plot names and polygon geometry of each of the plots; ‘plot’ must be used as a column name to identify each of the plots, and it should be an integer.

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.

Tips and Tricks for fname_sheet when gdf is not passed

If gdf is not passed, fname_sheet may have the following required column headings that correspond to the relevant parameters in spatial_mod.crop_single() and spatial_mod.crop_many_gdf():

  1. “directory”

  2. “name_short”

  3. “name_long”

  4. “ext”

  5. “pix_e_ul”

  6. “pix_n_ul”

With this minimum input, batch.spatial_crop will read in each image, crop from the upper left pixel (determined as pix_e_ul/pix_n_ul) to the lower right pixel calculated based on crop_e_pix/crop_n_pix (which are the dimensions of the cropped area in units of pixels).

Note

crop_e_pix and crop_n_pix have default values (see defaults.crop_defaults()), but they can also be passed specifically for each datacube by including appropriate columns in fname_sheet (which takes precedence over defaults.crop_defaults).

fname_sheet may also have the following optional column headings:

  1. “crop_e_pix”

  2. “crop_n_pix”

  3. “crop_e_m”

  4. “crop_n_m”

  5. “buf_e_pix”

  6. “buf_n_pix”

  7. “buf_e_m”

  8. “buf_n_m”

  9. “plot_id”

More fname_sheet Tips and Tricks

  1. These optional inputs passed via fname_sheet allow more control over exactly how the images are to be cropped. For a more detailed explanation of the information that many of these columns are intended to contain, see the documentation for spatial_mod.crop_single() and spatial_mod.crop_many_gdf(). Those parameters not referenced should be apparent in the API examples and tutorials.

  2. If the column names are different in fname_sheet than described here, defaults.spat_crop_cols() can be modified to indicate which columns correspond to the relevant information.

  3. Any other columns can be added to fname_sheet, but batch.spatial_crop() does not use them in any way.
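A minimal fname_sheet with the required column headings (plus a subset of the optional ones) can be built with the csv module. The values below are hypothetical placeholders, not data from the demo directory:

```python
import csv
import io

required = ['directory', 'name_short', 'name_long', 'ext',
            'pix_e_ul', 'pix_n_ul']
optional = ['crop_e_pix', 'crop_n_pix', 'plot_id']  # subset of optional columns

rows = [{'directory': r'F:\my_data', 'name_short': 'plot_101',
         'name_long': '-cropped', 'ext': '.bip',
         'pix_e_ul': 100, 'pix_n_ul': 50,
         'crop_e_pix': 90, 'crop_n_pix': 120, 'plot_id': 101}]

buf = io.StringIO()  # write to a real .csv path in practice
writer = csv.DictWriter(buf, fieldnames=required + optional)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])
```

Because fname_sheet also accepts a pandas.DataFrame, the same columns can equally be assembled in-memory and passed directly without writing a file.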

Note

The following batch example only actually processes a single hyperspectral image. If more datacubes were present in base_dir, however, batch.spatial_crop would process all datacubes that were available.

Note

This example uses spatial_mod.crop_many_gdf to crop many plots from a datacube using a polygon geometry file describing the spatial extent of each plot.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> import geopandas as gpd
>>> import pandas as pd
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', dir_level=0)  # searches for all files in ``base_dir`` with a ".bip" file extension

Load the plot geometry as a geopandas.GeoDataFrame

>>> fname_gdf = r'F:\nigo0024\Documents\hs_process_demo\plot_bounds_small\plot_bounds.shp'
>>> gdf = gpd.read_file(fname_gdf)

Perform the spatial cropping using the “many_gdf” method. Note that nothing is passed to fname_sheet here, so batch.spatial_crop will simply attempt to crop all plots contained within gdf that overlap with any datacubes in base_dir. This option does not allow for minor adjustments to the cropping procedure (e.g., an offset of the plot location in the datacube relative to its location in the gdf), but it is the most straightforward way to run batch.spatial_crop because nothing has to be passed to fname_sheet. It does, however, allow you to adjust the plot buffer relative to gdf via hsbatch.io.defaults.crop_defaults

>>> hsbatch.io.defaults.crop_defaults.buf_e_m = 2
>>> hsbatch.io.defaults.crop_defaults.buf_n_m = 0.5
>>> hsbatch.io.set_io_defaults(force=True)
>>> hsbatch.spatial_crop(base_dir=base_dir, method='many_gdf',
                         gdf=gdf)
Spatially cropping: F:\nigo0024\Documents\hs_process_demo\Wells_rep2_20180628_16h56m_pika_gige_7-Convert Radiance Cube to Reflectance from Measured Reference Spectrum.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1018-spatial-crop.bip
Spatially cropping: F:\nigo0024\Documents\hs_process_demo\Wells_rep2_20180628_16h56m_pika_gige_7-Convert Radiance Cube to Reflectance from Measured Reference Spectrum.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_918-spatial-crop.bip
../_images/spatial_crop_inline.png

A new folder was created in base_dir - F:\nigo0024\Documents\hs_process_demo\spatial_crop - that contains the cropped datacubes and the cropped geotiff images. Each datacube is named according to its plot ID from the gdf. The geotiff images can be opened in QGIS to visualize the images after cropping.

../_images/spatial_crop_tifs.png

The cropped images were brightened in QGIS to emphasize the cropped boundaries. The plot boundaries are overlaid for reference (notice the 2.0 m buffer on the East/West ends and the 0.5 m buffer on the North/South sides).

spectra_combine(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to gather all pixels from every image in a directory, compute the mean and standard deviation, and save as a single spectra (i.e., a spectra file is equivalent to a single spectral pixel with no spatial information).

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir (default: None).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
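Conceptually, spectra_combine pools every pixel from every input image and reduces per band. The stdlib sketch below is illustrative only (toy two-band data, not the library's implementation):

```python
import statistics

# three "images", each a list of pixels; each pixel is per-band radiance
images = [
    [[0.10, 0.20], [0.12, 0.22]],
    [[0.11, 0.21]],
    [[0.09, 0.19], [0.10, 0.20], [0.11, 0.21]],
]

pixels = [pix for img in images for pix in img]  # pool all pixels together
n_bands = len(pixels[0])
mean_spec = [statistics.mean(p[b] for p in pixels) for b in range(n_bands)]
std_spec = [statistics.stdev(p[b] for p in pixels) for b in range(n_bands)]
print(len(pixels), [round(m, 3) for m in mean_spec])
```

The resulting mean_spec/std_spec pair corresponds to the single “.spec” output: one spectral pixel with no spatial information.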

Note

The following example will load in several small hyperspectral radiance datacubes (not reflectance) that were previously cropped manually (via Spectronon software). These datacubes represent the radiance values of grey reference panels that were placed in the field to provide data necessary for converting radiance imagery to reflectance. These particular datacubes were extracted from several different images captured within ~10 minutes of each other.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\cube_ref_panels'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Combine all the radiance datacubes in the directory via batch.spectra_combine.

>>> hsbatch.spectra_combine(base_dir=base_dir, search_ext='bip',
                            dir_level=0)
Combining datacubes/spectra into a single mean spectra.
Number of input datacubes/spectra: 7
Total number of pixels: 1516
Saving F:\nigo0024\Documents\hs_process_demo\cube_ref_panels\spec_mean_spy.spec

Visualize the combined spectra by opening in Spectronon. The solid line represents the mean radiance spectra across all pixels and images in base_dir, and the lighter, slightly transparent line represents the standard deviation of the radiance across all pixels and images in base_dir.

../_images/spectra_combine.png

Notice the lower signal at the oxygen absorption region (near 770 nm). After converting datacubes to reflectance, it may be desirable to spectrally clip this region (see spec_mod.spectral_clip()).

spectra_to_csv(fname_list=None, base_dir=None, search_ext='spec', dir_level=0, base_dir_out=None)[source]

Reads all the .spec files in a directory and saves their reflectance information to a .csv. batch.spectra_to_csv is identical to batch.spectra_to_df except that a .csv file is saved rather than returning a pandas.DataFrame.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘spec’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir

Note

The following example builds on the API example results of batch.segment_band_math() and batch.segment_create_mask(). Please complete each of those API examples to be sure your directory (i.e., F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) is populated with image files.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Read all the .spec files in base_dir and save them to a .csv file.

>>> hsbatch.spectra_to_csv(base_dir=base_dir, search_ext='spec',
                           dir_level=0)
Writing mean spectra to a .csv file.
Number of input datacubes/spectra: 40
Output file location: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\stats-spectra.csv

When stats-spectra.csv is opened in Microsoft Excel, we can see that each row is a .spec file from a different plot, and each column is a particular spectral band/wavelength.

../_images/spectra_to_csv.png
spectra_to_df(fname_list=None, base_dir=None, search_ext='spec', dir_level=0)[source]

Reads all the .spec files in a directory and returns their data as a pandas.DataFrame object. batch.spectra_to_df is identical to batch.spectra_to_csv except that a pandas.DataFrame is returned rather than saving a .csv file.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘spec’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

Note

The following example builds on the API example results of batch.segment_band_math() and batch.segment_create_mask(). Please complete each of those API examples to be sure your directory (i.e., F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) is populated with image files.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Read all the .spec files in base_dir and load them to df_spec, a pandas.DataFrame.

>>> df_spec = hsbatch.spectra_to_df(base_dir=base_dir, search_ext='spec',
                                    dir_level=0)
Writing mean spectra to a ``pandas.DataFrame``.
Number of input datacubes/spectra: 40

When visualizing df_spec in Spyder, we can see that each row is a .spec file from a different plot, and each column is a particular spectral band.

../_images/spectra_to_df.png

It is somewhat confusing to conceptualize spectral data by band number (as opposed to the wavelength it represents). hstools.get_band (accessible here as hsbatch.io.tools.get_band) can be used to retrieve spectral data for all plots via indexing by wavelength. Say we need to access reflectance at 710 nm for each plot.

>>> df_710nm = df_spec[['fname', 'plot_id', hsbatch.io.tools.get_band(710)]]
../_images/spectra_to_df_710nm.png
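Conceptually, a wavelength-to-band lookup like this can be sketched as a nearest-neighbor search over the band centers. The band centers below are hypothetical (evenly spaced at 2.3 nm), and the actual get_band implementation may use different band numbering or tie-breaking; this only shows the idea.

```python
# Hypothetical band centers; a real cube's centers come from its metadata
wavelengths = [400.0 + 2.3 * i for i in range(240)]

def nearest_band(target_nm, band_centers):
    """Return the index of the band center closest to ``target_nm``."""
    return min(range(len(band_centers)),
               key=lambda i: abs(band_centers[i] - target_nm))

idx = nearest_band(710, wavelengths)  # band whose center is nearest 710 nm
```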
spectral_clip(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_clip', name_append='spec-clip', wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]], out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally clip multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally clip; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_clip’).

  • name_append (str) – name to append to the filename (default: ‘spec-clip’).

  • wl_bands (list or list of lists) – minimum and maximum wavelengths to clip from the image; if multiple groups of wavelengths should be cut, this should be a list of lists. For example, wl_bands=[760, 776] will clip all bands greater than 760.0 nm and less than 776.0 nm; wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]] will clip all bands less than 420.0 nm, between 760.0 and 776.0 nm, between 813.0 and 827.0 nm, and greater than 880.0 nm (default).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
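The wl_bands logic above can be sketched with NumPy: mark every band whose center wavelength falls inside any [minimum, maximum] pair, then keep the rest. The band centers are hypothetical, and inclusive bounds are an assumption here; the actual spectral_clip implementation may treat boundary bands differently.

```python
import numpy as np

# Hypothetical band centers for a 400-1000 nm sensor
wavelengths = np.arange(400.0, 1000.0, 2.3)

wl_bands = [[0, 420], [760, 776], [813, 827], [880, 1000]]

# Mark a band for removal if its center falls inside any [min, max] pair
drop = np.zeros(wavelengths.size, dtype=bool)
for wl_min, wl_max in wl_bands:
    drop |= (wavelengths >= wl_min) & (wavelengths <= wl_max)

clipped_wavelengths = wavelengths[~drop]  # band centers that survive
```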

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_clip to clip all spectral bands below 420 nm and above 880 nm, as well as the bands near the oxygen absorption (i.e., 760-776 nm) and water absorption (i.e., 813-827 nm) regions.

>>> hsbatch.spectral_clip(base_dir=base_dir, folder_name='spec_clip',
                          wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]])
Processing 40 files. If this is not what is expected, please check if files have already undergone processing. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Spectrally clipping: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_clip\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-clip.bip
Spectrally clipping: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_clip\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spec-clip.bip
...

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before clipping
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_clip', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-clip.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_clip = hsbatch.io.spyfile.open_memmap()  # datacube after clipping
>>> meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral clipping', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral clipping', ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_clip`', weight='bold')
../_images/spectral_clip_plot.png

Notice the spectral areas that were clipped, namely the oxygen and water absorption regions (~770 and ~820 nm, respectively). These regions tend to have a lower signal-to-noise ratio, which is the rationale for clipping them out.

spectral_smooth(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_smooth', name_append='spec-smooth', window_size=11, order=2, stats=False, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally smooth multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally clip; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_smooth’).

  • name_append (str) – name to append to the filename (default: ‘spec-smooth’).

  • window_size (int) – the length of the window; must be an odd integer number (default: 11).

  • order (int) – the order of the polynomial used in the filtering; must be less than window_size - 1 (default: 2).

  • stats (bool) – whether to compute some basic descriptive statistics (mean, st. dev., and coefficient of variation) of the smoothed data array (default: False)

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
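The underlying operation is a Savitzky-Golay filter, which scipy.signal.savgol_filter implements; whether spectral_smooth uses scipy internally is an assumption, but the effect of window_size and order can be sketched on a synthetic noisy spectrum using the same defaults (window_size=11, order=2).

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "spectrum": a smooth reflectance curve plus random noise
rng = np.random.default_rng(0)
wavelengths = np.linspace(400.0, 900.0, 240)
truth = np.exp(-((wavelengths - 550.0) / 60.0) ** 2)
noisy = truth + rng.normal(0.0, 0.02, wavelengths.size)

# Savitzky-Golay smoothing: an 11-band window fit with a
# 2nd-order polynomial, matching the spectral_smooth defaults
smoothed = savgol_filter(noisy, window_length=11, polyorder=2)
```

Increasing window_length or decreasing polyorder smooths more aggressively, at the cost of flattening narrow spectral features.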

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_smooth to perform a Savitzky-Golay smoothing operation on each image/pixel in base_dir. The window_size and order can be adjusted to achieve desired smoothing results.

>>> hsbatch.spectral_smooth(base_dir=base_dir, folder_name='spec_smooth',
                            window_size=11, order=2)
Processing 40 files. If this is not what is expected, please check if files have already undergone processing. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Spectrally smoothing: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_smooth\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-smooth.bip
Spectrally smoothing: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_smooth\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spec-smooth.bip
...

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before smoothing
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_smooth', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-smooth.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_clip = hsbatch.io.spyfile.open_memmap()  # datacube after smoothing
>>> meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral smoothing', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral smoothing', ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_smooth`', weight='bold')
../_images/spectral_smooth_plot.png

Notice how the “choppiness” of the spectral curve is lessened after the smoothing operation. There are spectral regions that perhaps had a lower signal-to-noise ratio and did not smooth particularly well (i.e., < 410 nm, ~770 nm, and ~820 nm). It may be wise to perform batch.spectral_smooth after batch.spectral_clip.