8.1.1. batch

class hs_process.batch(base_dir=None, search_ext='.bip', dir_level=0, lock=None, progress_bar=False)[source]

Bases: object

Class for batch processing hyperspectral image data. Makes use of segment, spatial_mod, and spec_mod to batch process many datacubes in a given directory. Supports options to save full datacubes and geotiff renders, as well as summary statistics and/or reports for the various tools.

Note

It may be a good idea to review and understand the defaults, hsio, hstools, segment, spatial_mod, and spec_mod classes prior to using the batch module.

Methods Summary

cube_to_spectra([fname_list, base_dir, …])

Calculates the mean and standard deviation for each cube in fname_list and writes the result to a “.spec” file.

segment_composite_band([fname_list, …])

Batch processing tool to create a composite band on multiple datacubes in the same way.

segment_band_math([fname_list, base_dir, …])

Batch processing tool to perform band math on multiple datacubes in the same way.

segment_create_mask([fname_list, base_dir, …])

Batch processing tool to create a masked array on many datacubes.

spatial_crop([fname_sheet, base_dir, …])

Iterates through a spreadsheet that provides necessary information about how each image should be cropped and how it should be saved.

spectra_combine([fname_list, base_dir, …])

Batch processing tool to gather all pixels from every image in a directory, compute the mean and standard deviation, and save as a single spectra (i.e., a spectra file is equivalent to a single spectral pixel with no spatial information).

spectra_derivative([fname_list, base_dir, …])

Batch processing tool to calculate the numeric spectral derivative for multiple spectra.

spectra_to_csv([fname_list, base_dir, …])

Reads all the .spec files in a directory and saves their reflectance information to a .csv.

spectra_to_df([fname_list, base_dir, …])

Reads all the .spec files in a directory and returns their data as a pandas.DataFrame object.

spectral_clip([fname_list, base_dir, …])

Batch processing tool to spectrally clip multiple datacubes in the same way.

spectral_mimic([fname_list, base_dir, …])

Batch processing tool to spectrally mimic a multispectral sensor for multiple datacubes in the same way.

spectral_resample([fname_list, base_dir, …])

Batch processing tool to spectrally resample (a.k.a. bin) multiple datacubes in the same way.

spectral_smooth([fname_list, base_dir, …])

Batch processing tool to spectrally smooth multiple datacubes in the same way.

Methods Documentation

cube_to_spectra(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='cube_to_spec', name_append='cube-to-spec', write_geotiff=True, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Calculates the mean and standard deviation for each cube in fname_list and writes the result to a “.spec” file.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally clip; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed spectra; if set to None, a folder named according to the folder_name parameter is added to base_dir (default: None).

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘cube_to_spec’).

  • name_append (str) – name to append to the filename (default: ‘cube-to-spec’).

  • write_geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
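Conceptually, each datacube's spatial dimensions are collapsed and a per-band mean and standard deviation are taken across all pixels. A minimal numpy sketch of that reduction (the array shape and variable names are illustrative only, not the hs_process internals):

```python
import numpy as np

# Illustrative datacube: 10 x 12 pixels by 240 spectral bands (lines, samples, bands)
cube = np.random.default_rng(0).random((10, 12, 240))

# Collapse the two spatial dimensions so each row is one pixel's spectrum
pixels = cube.reshape(-1, cube.shape[2])  # shape: (120, 240)

# Per-band mean and standard deviation across all pixels -- the two
# vectors that end up in the ".spec" file
spec_mean = pixels.mean(axis=0)
spec_std = pixels.std(axis=0)
```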

Note

The following batch example builds on the API example results of the spatial_mod.crop_many_gdf function. Please complete the spatial_mod.crop_many_gdf example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> data_dir = r'F:\nigo0024\Documents\hs_process_demo'
>>> base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf')
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True)  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.cube_to_spectra to calculate the mean and standard deviation across all pixels for each of the datacubes in base_dir.

>>> hsbatch.cube_to_spectra(base_dir=base_dir, write_geotiff=False, out_force=True)
Processing file 39/40: 100%|██████████| 40/40 [00:03<00:00, 13.28it/s]

Use seaborn to visualize the spectra of plots 1011, 1012, and 1013. Notice how hsbatch.io.name_plot is utilized to retrieve the plot ID, and how the “history” tag is referenced from the metadata to determine the number of pixels whose reflectance was averaged to create the mean spectra. Also remember that pixels across the original input image likely represent a combination of soil, vegetation, and shadow.

>>> import seaborn as sns
>>> import re
>>> fname_list = [os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-cube-to-spec-mean.spec'),
                  os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-cube-to-spec-mean.spec'),
                  os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-cube-to-spec-mean.spec')]
>>> ax = None
>>> for fname in fname_list:
...     hsbatch.io.read_spec(fname)
...     meta_bands = list(hsbatch.io.tools.meta_bands.values())
...     data = hsbatch.io.spyfile_spec.load().flatten() * 100
...     hist = hsbatch.io.spyfile_spec.metadata['history']
...     pix_n = re.search('<pixel number: (.*)>', hist).group(1)
...     if ax is None:
...         ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')')
...     else:
...         ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')', ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.cube_to_spectra`', weight='bold')
api/img/batch/cube_to_spectra.png
segment_composite_band(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='composite_band', name_append='composite-band', write_geotiff=True, wl1=None, b1=None, list_range=True, plot_out=True, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to create a composite band on multiple datacubes in the same way. batch.segment_composite_band is typically used prior to batch.segment_create_mask to generate the images/directory required for the masking process.

Parameters
  • wl1 (int, float, or list) – the wavelength (or set of wavelengths) to be used as the first parameter of the band math index; if list, then consolidates all bands between two wavelength values by calculating the mean pixel value across all bands in that range (default: None).

  • b1 (int, float, or list) – the band (or set of bands) to be used as the first parameter of the band math index; if list, then consolidates all bands between two band values by calculating the mean pixel value across all bands in that range (default: None).

  • list_range (bool) – Whether bands/wavelengths passed as a list is interpreted as a range of bands (True) or as each individual band in the list (False). If list_range is True, b1/wl1 and b2/wl2 should be lists with two items, and all bands/wavelengths between the two values will be used (default: True).

  • plot_out (bool) – whether to save a histogram of the band math result (default: True).

  • write_geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.
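With list_range=True and a two-item wl1 (or b1), the composite band reduces to the per-pixel mean over the band slice spanning the two values. A minimal numpy sketch, with a hypothetical band range standing in for the wavelength-to-band lookup:

```python
import numpy as np

cube = np.random.default_rng(1).random((5, 5, 100))  # (lines, samples, bands)

# Suppose wl1=[760, 840] resolves to bands 60 through 80 for this sensor
b_lo, b_hi = 60, 80
composite = cube[:, :, b_lo:b_hi + 1].mean(axis=2)  # one value per pixel
```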

segment_band_math(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='band_math', name_append='band-math', write_geotiff=True, method='ndi', wl1=None, wl2=None, wl3=None, b1=None, b2=None, b3=None, list_range=True, plot_out=True, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to perform band math on multiple datacubes in the same way. batch.segment_band_math is typically used prior to batch.segment_create_mask to generate the images/directory required for the masking process.

Parameters
  • method (str) – Must be one of “ndi” (normalized difference index), “ratio” (simple ratio index), “derivative” (derivative-type index), or “mcari2” (Modified Chlorophyll Absorption Ratio Index Improved). Indicates what kind of band math should be performed on the input datacube. The “ndi” method leverages segment.band_math_ndi(), the “ratio” method leverages segment.band_math_ratio(), and the “derivative” method leverages segment.band_math_derivative(). Please see the segment documentation for more information (default: “ndi”).

  • wl1 (int, float, or list) – the wavelength (or set of wavelengths) to be used as the first parameter of the band math index; if list, then consolidates all bands between two wavelength values by calculating the mean pixel value across all bands in that range (default: None).

  • wl2 (int, float, or list) – the wavelength (or set of wavelengths) to be used as the second parameter of the band math index; if list, then consolidates all bands between two wavelength values by calculating the mean pixel value across all bands in that range (default: None).

  • b1 (int, float, or list) – the band (or set of bands) to be used as the first parameter of the band math index; if list, then consolidates all bands between two band values by calculating the mean pixel value across all bands in that range (default: None).

  • b2 (int, float, or list) – the band (or set of bands) to be used as the second parameter of the band math index; if list, then consolidates all bands between two band values by calculating the mean pixel value across all bands in that range (default: None).

  • list_range (bool) – Whether bands/wavelengths passed as a list is interpreted as a range of bands (True) or as each individual band in the list (False). If list_range is True, b1/wl1 and b2/wl2 should be lists with two items, and all bands/wavelengths between the two values will be used (default: True).

  • plot_out (bool) – whether to save a histogram of the band math result (default: True).

  • write_geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.
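For orientation, the arithmetic behind the "ndi" method on two selected bands looks like the sketch below; the band indices are hypothetical and this is not the segment.band_math_ndi() source:

```python
import numpy as np

cube = np.random.default_rng(2).random((4, 4, 240)) + 0.01  # keep values positive

# Pretend wl1=800 and wl2=670 resolve to bands 198 and 135 for this sensor
band1 = cube[:, :, 198]
band2 = cube[:, :, 135]

ndi = (band1 - band2) / (band1 + band2)  # normalized difference, in (-1, 1)
```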

Note

The following batch example builds on the API example results of the spatial_mod.crop_many_gdf function. Please complete the spatial_mod.crop_many_gdf example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.segment_band_math to compute the MCARI2 (Modified Chlorophyll Absorption Ratio Index Improved; Haboudane et al., 2004) spectral index for each of the datacubes in base_dir. See Harris Geospatial for more information about the MCARI2 spectral index and references to other spectral indices.
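MCARI2 is commonly given (Haboudane et al., 2004) as 1.5 * [2.5(R800 - R670) - 1.3(R800 - R550)] / sqrt((2*R800 + 1)^2 - (6*R800 - 5*sqrt(R670)) - 0.5). A small numpy sketch of that formula, separate from the segment implementation:

```python
import numpy as np

def mcari2(r800, r670, r550):
    """MCARI2 (Haboudane et al., 2004); inputs are reflectance on a 0-1 scale."""
    num = 1.5 * (2.5 * (r800 - r670) - 1.3 * (r800 - r550))
    den = np.sqrt((2 * r800 + 1) ** 2 - (6 * r800 - 5 * np.sqrt(r670)) - 0.5)
    return num / den

# Healthy-vegetation-like reflectance: high NIR, strong red absorption
val = mcari2(0.50, 0.05, 0.10)
```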

>>> folder_name = 'band_math_mcari2-800-670-550'  # folder name can be modified to be more descriptive in what type of band math is being performed
>>> method = 'mcari2'  # must be one of "ndi", "ratio", "derivative", or "mcari2"
>>> wl1 = 800
>>> wl2 = 670
>>> wl3 = 550
>>> hsbatch.segment_band_math(base_dir=base_dir, folder_name=folder_name,
                              name_append='band-math', write_geotiff=True,
                              method=method, wl1=wl1, wl2=wl2, wl3=wl3,
                              plot_out=True, out_force=True)
Bands used (``b1``): [198]
Bands used (``b2``): [135]
Bands used (``b3``): [77]
Wavelengths used (``b1``): [799.0016]
Wavelengths used (``b2``): [669.6752]
Wavelengths used (``b3``): [550.6128]
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-band-math-mcari2-800-670-550.bip
...

batch.segment_band_math creates a new folder in base_dir (in this case the new directory is F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550) which contains several data products. The first is band-math-stats.csv: a spreadsheet containing summary statistics for each of the image cubes that were processed via batch.segment_band_math; stats include pixel count, mean, standard deviation, median, and percentiles across all image pixels.

Second is a geotiff file for each of the image cubes after the band math processing. This can be opened in QGIS to visualize in a spatial reference system, or can be opened using any software that supports floating point .tif files.

api/img/batch/segment_band_math_plot_611-band-math-mcari2-800-670-550_tif.png

Third is the band math raster saved in the .hdr file format. Note that the data contained here should be the same as in the .tif file, so it’s a matter of preference as to which may be more useful. This single-band .hdr can also be opened in QGIS.

Fourth is a histogram of the band math data contained in the image. The histogram illustrates the 90th percentile value, which may be useful in the segmentation step (e.g., see batch.segment_create_mask).

api/img/batch/segment_band_math_plot_611-band-math-mcari2-800-670-550.png
segment_create_mask(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, mask_dir=None, base_dir_out=None, folder_name='mask', name_append='mask', write_datacube=True, write_spec=True, write_geotiff=True, mask_thresh=None, mask_percentile=None, mask_side='lower', out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to create a masked array on many datacubes. batch.segment_create_mask is typically used after batch.segment_band_math to mask all the datacubes in a directory based on the result of the band math process.

Parameters
  • mask_thresh (float or int) – The value for which to mask the array; should be used with side parameter (default: None).

  • mask_percentile (float or int) – The percentile of pixels to mask; if mask_percentile=95 and mask_side='lower', the lowest 95% of pixels will be masked following the band math operation (default: None; range: 0-100).

  • mask_side (str) – The side of the threshold for which to apply the mask. Must be either ‘lower’, ‘upper’, ‘outside’, or None; if ‘lower’, everything below the threshold will be masked; if ‘outside’, the thresh / percentile parameter must be list-like with two values indicating the lower and upper bounds - anything outside of these values will be masked out; if None, only the values that exactly match the threshold will be masked (default: ‘lower’).

  • write_geotiff (bool) – whether to save the masked RGB image as a geotiff alongside the masked datacube.
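The interplay of mask_percentile and mask_side can be pictured with a numpy masked array. The sketch below mirrors the 90th-percentile, 'lower'-side case, masking everything below the threshold; it is an illustration, not the segment internals:

```python
import numpy as np

rng = np.random.default_rng(3)
band_math = rng.random((20, 20))  # stand-in for an MCARI2 raster

thresh = np.percentile(band_math, 90)  # mask_percentile=90
mask = band_math < thresh              # mask_side='lower': mask below the threshold
masked = np.ma.masked_array(band_math, mask=mask)  # only the top 10% survive
```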

Note

The following batch example builds on the API example results of spatial_mod.crop_many_gdf and batch.segment_band_math. Please complete each of those API examples to be sure your directories (i.e., base_dir, and mask_dir) are populated with image files. The following example will be masking datacubes located in: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf based on MCARI2 images located in: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550

Example

Load and initialize the batch module, ensuring base_dir is a valid directory

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

There must be a single-band image that will be used to determine which datacube pixels are to be masked (determined via the mask_dir parameter). Point to the directory that contains the MCARI2 images.

>>> mask_dir = os.path.join(base_dir, 'band_math_mcari2-800-670-550')
>>> print(os.path.isdir(mask_dir))
True

Indicate how the MCARI2 images should be used to determine which hyperspectral pixels are to be masked. The available parameters for controlling this are mask_thresh, mask_percentile, and mask_side. We will mask out all pixels that fall below the MCARI2 90th percentile.

>>> mask_percentile = 90
>>> mask_side = 'lower'

Finally, indicate the folder to save the masked datacubes and perform the batch masking via batch.segment_create_mask

>>> folder_name = 'mask_mcari2_90th'
>>> hsbatch.segment_create_mask(base_dir=base_dir, mask_dir=mask_dir,
                                folder_name=folder_name,
                                name_append='mask-mcari2-90th', write_geotiff=True,
                                mask_percentile=mask_percentile,
                                mask_side=mask_side)
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-mask-mcari2-90th.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-mask-mcari2-90th-spec-mean.spec
...
api/img/batch/segment_create_mask_inline.png

batch.segment_create_mask creates a new folder in base_dir named according to the folder_name parameter (in this case the new directory is F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) which contains several data products. The first is mask-stats.csv: a spreadsheet containing the band math threshold value for each image file. In this example, the MCARI2 value corresponding to the 90th percentile is listed.

fname    plot_id    lower-pctl-90
…        1011       0.83222
…        1012       0.81112
…        1013       0.74394
…etc.

Second is a geotiff file for each of the image cubes after the masking procedure. This can be opened in QGIS to visualize in a spatial reference system, or can be opened using any software that supports floating point .tif files. The masked pixels are saved as null values and should render transparently.

api/img/batch/segment_create_mask_geotiff.png

Third is the full hyperspectral datacube, also with the masked pixels saved as null values. Note that the only pixels remaining are the 10% with the highest MCARI2 values.

api/img/batch/segment_create_mask_datacube.png

Fourth is the mean spectra across the unmasked datacube pixels. This is illustrated above by the green line plot (the light green shadow represents the standard deviation for each band).

spatial_crop(fname_sheet=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spatial_crop', name_append='spatial-crop', write_geotiff=True, method='single', gdf=None, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Iterates through a spreadsheet that provides necessary information about how each image should be cropped and how it should be saved.

If gdf is passed (a geopandas.GeoDataFrame polygon file), the cropped images will be shifted to the center of the appropriate ‘plot_id’ polygon.

Parameters
  • fname_sheet (fname, pandas.DataFrame, or None, optional) – The filename of the spreadsheet that provides the necessary information for fine-tuning the batch process cropping. See below for more information about the required and optional contents of fname_sheet and how to properly format it. Optionally, fname_sheet can be a pandas.DataFrame. If left to None, base_dir and gdf must be passed.

  • base_dir (str, optional) – directory path to search for files to spatially crop; if fname_sheet is not None, base_dir will be ignored (default: None).

  • base_dir_out (str, optional) – output directory of the cropped image (default: None).

  • folder_name (str, optional) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spatial_crop’).

  • name_append (str, optional) – name to append to the filename (default: ‘spatial-crop’).

  • write_geotiff (bool, optional) – whether to save an RGB image as a geotiff alongside the cropped datacube.

  • method (str, optional) – Must be one of “single” or “many_gdf”. Indicates whether a single plot should be cropped from the input datacube or if many/multiple plots should be cropped from the input datacube. The “single” method leverages spatial_mod.crop_single() and the “many_gdf” method leverages spatial_mod.crop_many_gdf(). Please see the spatial_mod documentation for more information (default: “single”).

  • gdf (geopandas.GeoDataFrame, optional) – the plot names and polygon geometry of each of the plots; ‘plot_id’ must be used as a column name to identify each of the plots, and should be an integer.

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.

Tips and Tricks for fname_sheet when gdf is not passed

If gdf is not passed, fname_sheet may have the following required column headings that correspond to the relevant parameters in spatial_mod.crop_single() and spatial_mod.crop_many_gdf():

  1. “directory”

  2. “name_short”

  3. “name_long”

  4. “ext”

  5. “pix_e_ul”

  6. “pix_n_ul”.

With this minimum input, batch.spatial_crop will read in each image and crop from the upper-left pixel (determined by pix_e_ul/pix_n_ul) to the lower-right pixel, which is calculated from crop_e_pix/crop_n_pix (the size of the cropped area, in pixels, in the easting and northing directions).
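In array terms, that minimal crop is a plain slice from the upper-left pixel. A hypothetical numpy sketch (names mirror the column headings; the real spatial_mod routines also carry the georeferencing through):

```python
import numpy as np

cube = np.random.default_rng(4).random((300, 600, 240))  # (lines, samples, bands)

pix_e_ul, pix_n_ul = 120, 50       # upper-left corner, in pixels
crop_e_pix, crop_n_pix = 90, 120   # crop size in the easting/northing directions

cropped = cube[pix_n_ul:pix_n_ul + crop_n_pix,
               pix_e_ul:pix_e_ul + crop_e_pix, :]
```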

Note

crop_e_pix and crop_n_pix have default values (see defaults.crop_defaults()), but they can also be passed specifically for each datacube by including appropriate columns in fname_sheet (which takes precedence over defaults.crop_defaults).

fname_sheet may also have the following optional column headings:

  1. “crop_e_pix”

  2. “crop_n_pix”

  3. “crop_e_m”

  4. “crop_n_m”

  5. “buf_e_pix”

  6. “buf_n_pix”

  7. “buf_e_m”

  8. “buf_n_m”

  9. “gdf_shft_e_m”

  10. “gdf_shft_n_m”

  11. “plot_id_ref”

  12. “study”

  13. “date”

More fname_sheet Tips and Tricks

  1. These optional inputs passed via fname_sheet allow more control over exactly how the images are to be cropped. For a more detailed explanation of the information that many of these columns are intended to contain, see the documentation for spatial_mod.crop_single() and spatial_mod.crop_many_gdf(). Those parameters not referenced should be apparent in the API examples and tutorials.

  2. If the column names are different in fname_sheet than described here, defaults.spat_crop_cols() can be modified to indicate which columns correspond to the relevant information.

  3. The date and study columns do not impact how the datacubes are to be cropped, but if this information exists, batch.spatial_crop adds it to the filename of the cropped datacube. This can be used to avoid overwriting datacubes with similar names, and is especially useful when processing imagery from many dates and/or studies/locations and saving them in the same directory. If “study”, “date”, and “plot_id” are all passed, this information is used to formulate the output file name; e.g., study_wells_date_20180628_plot_527-spatial-crop.bip. If either “study” or “date” is missing, the populated variables will be appended to the end of the hsio.name_short string; e.g., plot_9_3_pika_gige_1_plot_527-spatial-crop.bip.

  4. Any other columns can be added to fname_sheet, but batch.spatial_crop() does not use them in any way.

Note

The following batch example only actually processes a single hyperspectral image. If more datacubes were present in base_dir, however, batch.spatial_crop would process all datacubes that were available.

Note

This example uses spatial_mod.crop_many_gdf to crop many plots from a datacube using a polygon geometry file describing the spatial extent of each plot.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> import geopandas as gpd
>>> import pandas as pd
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', dir_level=0,
                    progress_bar=True)  # searches for all files in ``base_dir`` with a ".bip" file extension

Load the plot geometry as a geopandas.GeoDataFrame

>>> fname_gdf = r'F:\nigo0024\Documents\hs_process_demo\plot_bounds.geojson'
>>> gdf = gpd.read_file(fname_gdf)

Perform the spatial cropping using the “many_gdf” method. Note that nothing is being passed to fname_sheet here, so batch.spatial_crop is simply going to attempt to crop all plots contained within gdf that overlap with any datacubes in base_dir.

Passing fname_sheet directly is definitely more flexible for customization. However, some customization is possible while not passing fname_sheet. In the example below, we set an easting and northing buffer, as well as limit the number of plots to crop to 40. These defaults trickle through to spatial_mod.crop_many_gdf(), so by setting them on the batch object, they will be recognized when calculating crop boundaries from gdf.

>>> hsbatch.io.defaults.crop_defaults.buf_e_m = 2  # Sets buffer in the easting direction (units of meters)
>>> hsbatch.io.defaults.crop_defaults.buf_n_m = 0.5
>>> hsbatch.io.defaults.crop_defaults.n_plots = 40  # We can limit the number of plots to process from gdf
>>> hsbatch.spatial_crop(base_dir=base_dir, method='many_gdf',
                         gdf=gdf, out_force=True)

Because fname_list was passed instead of fname_sheet, there is not a way to infer the study name and date. Therefore, “study” and “date” will be omitted from the output file name. If you would like output file names to include “study” and “date”, please pass fname_sheet with “study” and “date” columns.

Processing file 39/40: 100%|██████████| 40/40 [00:02<00:00, 17.47it/s]

api/img/batch/spatial_crop_inline.png

A new folder was created in base_dir - F:\nigo0024\Documents\hs_process_demo\spatial_crop - that contains the cropped datacubes and the cropped geotiff images. Each datacube and geotiff is named according to its plot ID from gdf. The geotiff images can be opened in QGIS to visualize the images after cropping.

api/img/batch/spatial_crop_tifs.png

The cropped images were brightened in QGIS to emphasize the cropped boundaries. The plot boundaries are overlaid for reference (notice the 2.0 m buffer on the East/West ends and the 0.5 m buffer on the North/South sides).

spectra_combine(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to gather all pixels from every image in a directory, compute the mean and standard deviation, and save as a single spectra (i.e., a spectra file is equivalent to a single spectral pixel with no spatial information).

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally clip; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir (default: None).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
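Unlike batch.cube_to_spectra, which writes one spectrum per cube, the reduction here pools every pixel from every image before averaging. A numpy sketch of that pooling under illustrative shapes (not the hs_process internals):

```python
import numpy as np

rng = np.random.default_rng(5)
cubes = [rng.random((8, 8, 240)) for _ in range(3)]  # three small "images"

# Stack every pixel from every image into one long table of spectra
pixels = np.vstack([c.reshape(-1, c.shape[2]) for c in cubes])  # (192, 240)

# One mean/std spectrum for the whole directory
spec_mean = pixels.mean(axis=0)
spec_std = pixels.std(axis=0)
```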

Note

The following example will load in several small hyperspectral radiance datacubes (not reflectance) that were previously cropped manually (via Spectronon software). These datacubes represent the radiance values of grey reference panels that were placed in the field to provide data necessary for converting radiance imagery to reflectance. These particular datacubes were extracted from several different images captured within ~10 minutes of each other.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\cube_ref_panels'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Combine all the radiance datacubes in the directory via batch.spectra_combine.

>>> hsbatch.spectra_combine(base_dir=base_dir, search_ext='bip',
                            dir_level=0)
Combining datacubes/spectra into a single mean spectra.
Number of input datacubes/spectra: 7
Total number of pixels: 1516
Saving F:\nigo0024\Documents\hs_process_demo\cube_ref_panels\spec_mean_spy.spec

Visualize the combined spectra by opening in Spectronon. The solid line represents the mean radiance spectra across all pixels and images in base_dir, and the lighter, slightly transparent line represents the standard deviation of the radiance across all pixels and images in base_dir.

api/img/batch/spectra_combine.png

Notice the lower signal at the oxygen absorption region (near 770 nm). After converting datacubes to reflectance, it may be desirable to spectrally clip this region (see spec_mod.spectral_clip()).

spectra_derivative(fname_list=None, base_dir=None, search_ext='spec', dir_level=0, base_dir_out=None, folder_name='spec_derivative', name_append='spec-derivative', order=1, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to calculate the numeric spectral derivative for multiple spectra.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘spec’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed spectra; if set to None, a folder named according to the folder_name parameter is added to base_dir.

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_derivative’).

  • name_append (str) – name to append to the filename (default: ‘spec-derivative’).

  • order (int) – The order of the derivative (default: 1).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
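
For intuition, a first-order numeric spectral derivative can be sketched with numpy's central finite differences (a hypothetical illustration, not the hs_process implementation; the wavelengths and reflectance values are made up, and treating order=2 as repeated differentiation is an assumption):

```python
import numpy as np

# Synthetic spectrum: band-center wavelengths (nm) and reflectance (%).
wl = np.array([400.0, 500.0, 600.0, 700.0, 800.0, 900.0])
refl = np.array([2.0, 3.0, 5.0, 9.0, 20.0, 38.0])

# np.gradient uses central differences in the interior and one-sided
# differences at the edges, honoring the wavelength spacing.
d1 = np.gradient(refl, wl)   # first-order spectral derivative
d2 = np.gradient(d1, wl)     # order=2 read as differentiating again
```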

Note

The following batch example builds on the API example results of the batch.cube_to_spectra function. Please complete both the spatial_mod.crop_many_gdf and batch.cube_to_spectra examples to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral spectra. The following example will be using spectra located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> data_dir = r'F:\nigo0024\Documents\hs_process_demo'
>>> base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf', 'cube_to_spec')
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.spec', progress_bar=True)

Use batch.spectra_derivative to calculate the central finite difference (i.e., the numeric spectral derivative) for each of the .spec files in base_dir.

>>> order = 1
>>> hsbatch.spectra_derivative(base_dir=base_dir, order=order, out_force=True)

Use seaborn to visualize the derivative spectra of plots 1011, 1012, and 1013.

>>> import seaborn as sns
>>> import re
>>> fname_list = [os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spec-derivative-order-{0}.spec'.format(order)),
                  os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-spec-derivative-order-{0}.spec'.format(order)),
                  os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-spec-derivative-order-{0}.spec'.format(order))]
>>> ax = None
>>> for fname in fname_list:
>>>     hsbatch.io.read_spec(fname)
>>>     meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>>     data = hsbatch.io.spyfile_spec.open_memmap().flatten() * 100
>>>     hist = hsbatch.io.spyfile_spec.metadata['history']
>>>     pix_n = re.search('<pixel number: (?s)(.*)>] ->', hist).group(1)
>>>     if ax is None:
>>>         ax = sns.lineplot(x=meta_bands, y=[0] * len(meta_bands), color='gray')
>>>         ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')')
>>>     else:
>>>         ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')', ax=ax)
>>> ax.set(ylim=(-1, 1))
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Derivative reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectra_derivative`', weight='bold')
api/img/batch/spectra_derivative.png
spectra_to_csv(fname_list=None, base_dir=None, search_ext='spec', dir_level=0, base_dir_out=None, name='stats-spectra', multithread=False)[source]

Reads all the .spec files in a directory and saves their reflectance information to a .csv. batch.spectra_to_csv is identical to batch.spectra_to_df except that a .csv file is saved rather than returning a pandas.DataFrame.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘spec’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, the file is saved to base_dir.

  • name (str) – The output filename (default: “stats-spectra”).

  • multithread (bool) – Whether to leverage multi-thread processing when reading the .spec files. Setting to True should reduce the time it takes to read all .spec files.

Note

The following example builds on the API example results of batch.segment_band_math() and batch.segment_create_mask(). Please complete each of those API examples to be sure your directory (i.e., F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) is populated with image files.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Read all the .spec files in base_dir and save them to a .csv file.

>>> hsbatch.spectra_to_csv(base_dir=base_dir, search_ext='spec',
                           dir_level=0)
Writing mean spectra to a .csv file.
Number of input datacubes/spectra: 40
Output file location: F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th\stats-spectra.csv

When stats-spectra.csv is opened in Microsoft Excel, we can see that each row is a .spec file from a different plot, and each column is a particular spectral band/wavelength.

api/img/batch/spectra_to_csv.png
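
The wide layout described above can be sketched with pandas (a hypothetical miniature, not the actual output file; the filenames, plot IDs, and reflectance values are made up):

```python
import pandas as pd

# One row per .spec file (plot), one column per band/wavelength.
df = pd.DataFrame(
    {'fname': ['plot_1011.spec', 'plot_1012.spec'],
     'plot_id': [1011, 1012],
     'band_1': [0.041, 0.039],
     'band_2': [0.052, 0.050]})
csv_text = df.to_csv(index=False)
```
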
spectra_to_df(fname_list=None, base_dir=None, search_ext='spec', dir_level=0, multithread=False)[source]

Reads all the .spec files in a directory and returns their data as a pandas.DataFrame object. batch.spectra_to_df is identical to batch.spectra_to_csv except that a pandas.DataFrame is returned rather than saving a .csv file.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to process; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘spec’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • multithread (bool) – Whether to leverage multi-thread processing when reading the .spec files. Setting to True should reduce the time it takes to read all .spec files.

Note

The following example builds on the API example results of batch.segment_band_math() and batch.segment_create_mask(). Please complete each of those API examples to be sure your directory (i.e., F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th) is populated with image files.

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir)

Read all the .spec files in base_dir and load them to df_spec, a pandas.DataFrame.

>>> df_spec = hsbatch.spectra_to_df(base_dir=base_dir, search_ext='spec',
                                    dir_level=0)
Writing mean spectra to a ``pandas.DataFrame``.
Number of input datacubes/spectra: 40

When visualizing df_spec in Spyder, we can see that each row is a .spec file from a different plot, and each column is a particular spectral band.

api/img/batch/spectra_to_df.png

It is somewhat confusing to conceptualize spectral data by band number (as opposed to the wavelength it represents). hstools.get_band can be used to retrieve spectral data for all plots via indexing by wavelength. Say we need to access reflectance at 710 nm for each plot.

>>> df_710nm = df_spec[['fname', 'plot_id', hsbatch.io.tools.get_band(710)]]
api/img/batch/spectra_to_df_710nm.png
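
A nearest-wavelength lookup in the spirit of get_band can be sketched as follows (a hypothetical illustration, not the hs_process implementation; the band spacing and 1-based band numbering are assumptions):

```python
import numpy as np

# Band-center wavelengths for a hypothetical sensor (~2.3 nm spacing).
wl = 400.0 + 2.3 * np.arange(218)

# Nearest band to 710 nm, reported as a 1-based band number.
band_710 = int(np.argmin(np.abs(wl - 710.0))) + 1
```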
spectral_clip(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_clip', name_append='spec-clip', wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]], out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally clip multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally clip; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir.

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_clip’).

  • name_append (str) – name to append to the filename (default: ‘spec-clip’).

  • wl_bands (list or list of lists) – minimum and maximum wavelengths to clip from the image; if multiple groups of wavelengths should be cut, this should be a list of lists. For example, wl_bands=[760, 776] will clip all bands greater than 760.0 nm and less than 776.0 nm; wl_bands = [[0, 420], [760, 776], [813, 827], [880, 1000]] will clip all bands less than 420.0 nm, bands greater than 760.0 nm and less than 776.0 nm, bands greater than 813.0 nm and less than 827.0 nm, and bands greater than 880 nm (default).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
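
For intuition, the way wl_bands ranges select bands to drop can be sketched with a boolean mask (a hedged illustration, not the hs_process implementation; the wavelength axis is made up):

```python
import numpy as np

# Synthetic band-center wavelengths, ~2.3 nm apart (400.0 .. 998.0 nm).
wl = 400.0 + 2.3 * np.arange(261)
wl_bands = [[0, 420], [760, 776], [813, 827], [880, 1000]]

# Mark every band whose center falls inside any clip range.
drop = np.zeros(wl.size, dtype=bool)
for wl_min, wl_max in wl_bands:
    drop |= (wl > wl_min) & (wl < wl_max)
wl_kept = wl[~drop]   # band centers that survive clipping
```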

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True)  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_clip to clip all spectral bands below 420 nm and above 880 nm, as well as the bands near the oxygen absorption (i.e., 760-776 nm) and water absorption (i.e., 813-827 nm) regions.

>>> hsbatch.spectral_clip(base_dir=base_dir, folder_name='spec_clip',
                          wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]],
                          out_force=True)
Processing 40 files. If this is not what is expected, please check if files have already undergone processing. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Processing file 39/40: 100%|██████████| 40/40 [00:01<00:00, 26.68it/s]

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before clipping
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_clip', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-clip.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_clip = hsbatch.io.spyfile.open_memmap()  # datacube after clipping
>>> meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral clipping', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral clipping', ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_clip`', weight='bold')
api/img/batch/spectral_clip_plot.png

Notice the spectral areas that were clipped, namely the oxygen and water absorption regions (~770 and ~820 nm, respectively). There is perhaps a lower signal-to-noise ratio in these regions, which is the rationale for clipping those bands out.

spectral_mimic(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_mimic', name_append='spec-mimic', sensor='sentinel-2a', df_band_response=None, col_wl='wl_nm', center_wl='peak', out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally mimic a multispectral sensor for multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally resample; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir.

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_mimic’).

  • name_append (str) – name to append to the filename (default: ‘spec-mimic’).

  • sensor (str) – Should be one of [“sentera_6x”, “micasense_rededge_3”, “sentinel-2a”, “sentinel-2b”, “custom”]; if “custom”, df_band_response and col_wl must be passed.

  • df_band_response (pd.DataFrame) – A DataFrame that contains the transmissivity (%) for each sensor band (as columns) mapped to the continuous wavelength values (as rows). Required if sensor is “custom”, ignored otherwise.

  • col_wl (str) – The column of df_band_response denoting the wavelengths (default: ‘wl_nm’).

  • center_wl (str) – Indicates how the center wavelength of each band is determined. If center_wl is “peak”, the point at which transmissivity is at its maximum is used as the center wavelength. If center_wl is “weighted”, the weighted average is used to compute the center wavelength. Must be one of [“peak”, “weighted”] (default: "peak").

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
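
The two center_wl options can be sketched with numpy (a hypothetical illustration, not the hs_process implementation; the response curve values are made up):

```python
import numpy as np

# A single band's spectral response: wavelengths (nm) vs. transmissivity (%).
wl = np.array([650.0, 660.0, 670.0, 680.0, 690.0])
response = np.array([5.0, 60.0, 90.0, 80.0, 10.0])

# "peak": the wavelength where transmissivity is at its maximum.
center_peak = float(wl[np.argmax(response)])
# "weighted": the response-weighted mean wavelength.
center_weighted = float(np.average(wl, weights=response))
```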

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True)  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_mimic to spectrally mimic the Sentinel-2A multispectral satellite sensor.

>>> hsbatch.spectral_mimic(
    base_dir=base_dir, folder_name='spec_mimic',
    name_append='sentinel-2a',
    sensor='sentinel-2a', center_wl='weighted')
Processing 40 files. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Processing file 39/40: 100%|██████████| 40/40 [00:04<00:00,  8.85it/s]

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before mimicking
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_mimic', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-sentinel-2a.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_sen2a = hsbatch.io.spyfile.open_memmap()  # datacube after mimicking
>>> meta_bands_sen2a = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_sen2a, y=spy_mem_sen2a[26][29], label='Sentinel-2A "mimic"', marker='o', ms=6, ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_mimic`', weight='bold')
api/img/batch/spectral_mimic_sentinel-2a_plot.png
spectral_resample(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_bin', name_append='spec-bin', bandwidth=None, bins_n=None, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally resample (a.k.a. “bin”) multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally resample; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir.

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_bin’).

  • name_append (str) – name to append to the filename (default: ‘spec-bin’).

  • bandwidth (float or int) – The bandwidth of the bands after spectral resampling is complete (units should be consistent with those of the .hdr file). Setting bandwidth to 10 will consolidate bands that fall within every 10 nm interval.

  • bins_n (int) – The number of bins (i.e., “bands”) to achieve after spectral resampling is complete. Ignored if bandwidth is not None.

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
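
The bandwidth-based binning can be sketched with numpy (a hedged illustration, not the hs_process implementation; the band spacing and reflectance values are made up): assign each narrow band to a fixed-width bin, then average within each bin.

```python
import numpy as np

# Synthetic narrow-band spectrum: 40 bands at 2.5 nm spacing (400.0 .. 497.5 nm).
wl = 400.0 + 2.5 * np.arange(40)
refl = np.linspace(1.0, 40.0, 40)   # made-up reflectance per band
bandwidth = 20.0

# Which fixed-width bin each band's center falls into.
bin_idx = ((wl - wl[0]) // bandwidth).astype(int)
refl_bin = np.array([refl[bin_idx == i].mean() for i in np.unique(bin_idx)])
wl_bin = np.array([wl[bin_idx == i].mean() for i in np.unique(bin_idx)])
```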

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True)  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_resample to bin (“group”) all spectral bands into 20 nm bandwidth bands (from ~2.3 nm bandwidth originally) on a per-pixel basis.

>>> hsbatch.spectral_resample(
    base_dir=base_dir, folder_name='spec_bin',
    name_append='spec-bin-20', bandwidth=20)
Processing 40 files. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Processing file 39/40: 100%|██████████| 40/40 [00:00<00:00, 48.31it/s]
...

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before resampling
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_bin', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-bin-20.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_bin = hsbatch.io.spyfile.open_memmap()  # datacube after resampling
>>> meta_bands_bin = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_bin, y=spy_mem_bin[26][29], label='Spectral resample (20 nm)', marker='o', ms=6, ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_resample`', weight='bold')
api/img/batch/spectral_resample-20nm_plot.png
spectral_smooth(fname_list=None, base_dir=None, search_ext='bip', dir_level=0, base_dir_out=None, folder_name='spec_smooth', name_append='spec-smooth', window_size=11, order=2, stats=False, out_dtype=False, out_force=None, out_ext=False, out_interleave=False, out_byteorder=False)[source]

Batch processing tool to spectrally smooth multiple datacubes in the same way.

Parameters
  • fname_list (list, optional) – list of filenames to process; if left to None, will look at base_dir, search_ext, and dir_level parameters for files to process (default: None).

  • base_dir (str, optional) – directory path to search for files to spectrally smooth; if fname_list is not None, base_dir will be ignored (default: None).

  • search_ext (str) – file format/extension to search for in all directories and subdirectories to determine which files to process; if fname_list is not None, search_ext will be ignored (default: ‘bip’).

  • dir_level (int) – The number of directory levels to search; if None, searches all directory levels (default: 0).

  • base_dir_out (str) – directory path to save all processed datacubes; if set to None, a folder named according to the folder_name parameter is added to base_dir.

  • folder_name (str) – folder to add to base_dir_out to save all the processed datacubes (default: ‘spec_smooth’).

  • name_append (str) – name to append to the filename (default: ‘spec-smooth’).

  • window_size (int) – the length of the filter window; must be an odd integer (default: 11).

  • order (int) – the order of the polynomial used in the filtering; must be less than window_size - 1 (default: 2).

  • stats (bool) – whether to compute some basic descriptive statistics (mean, st. dev., and coefficient of variation) of the smoothed data array (default: False).

  • out_XXX – Settings for saving the output files can be adjusted here if desired. They are stored in batch.io.defaults, and are therefore accessible at a high level. See hsio.set_io_defaults() for more information on each of the settings.
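
window_size and order are the parameters of a Savitzky-Golay filter, which fits a polynomial of the given order within each sliding window. A minimal sketch on synthetic data with scipy.signal.savgol_filter (a hypothetical illustration, not the hs_process implementation):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "red edge" spectrum with added band-to-band noise.
rng = np.random.default_rng(0)
wl = np.linspace(400.0, 900.0, 240)
clean = 20.0 / (1.0 + np.exp(-(wl - 700.0) / 25.0))
noisy = clean + rng.normal(0.0, 0.5, wl.size)

# Savitzky-Golay smoothing with the same window/order meanings as above.
smooth = savgol_filter(noisy, window_length=11, polyorder=2)
```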

Note

The following batch example builds on the API example results of the batch.spatial_crop function. Please complete the batch.spatial_crop example to be sure your directory (i.e., base_dir) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: F:\nigo0024\Documents\hs_process_demo\spatial_crop

Example

Load and initialize the batch module, checking to be sure the directory exists.

>>> import os
>>> from hs_process import batch
>>> base_dir = r'F:\nigo0024\Documents\hs_process_demo\spatial_crop'
>>> print(os.path.isdir(base_dir))
True
>>> hsbatch = batch(base_dir, search_ext='.bip')  # searches for all files in ``base_dir`` with a ".bip" file extension

Use batch.spectral_smooth to perform a Savitzky-Golay smoothing operation on each image/pixel in base_dir. The window_size and order can be adjusted to achieve desired smoothing results.

>>> hsbatch.spectral_smooth(base_dir=base_dir, folder_name='spec_smooth',
                            window_size=11, order=2)
Processing 40 files. If this is not what is expected, please check if files have already undergone processing. If existing files should be overwritten, be sure to set the ``out_force`` parameter.
Spectrally smoothing: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_smooth\Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-smooth.bip
Spectrally smoothing: F:\nigo0024\Documents\hs_process_demo\spatial_crop\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spatial-crop.bip
Saving F:\nigo0024\Documents\hs_process_demo\spatial_crop\spec_smooth\Wells_rep2_20180628_16h56m_pika_gige_7_1012-spec-smooth.bip
...

Use seaborn to visualize the spectra of a single pixel in one of the processed images.

>>> import seaborn as sns
>>> fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spatial-crop.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem = hsbatch.io.spyfile.open_memmap()  # datacube before smoothing
>>> meta_bands = list(hsbatch.io.tools.meta_bands.values())
>>> fname = os.path.join(base_dir, 'spec_smooth', 'Wells_rep2_20180628_16h56m_pika_gige_7_1011-spec-smooth.bip')
>>> hsbatch.io.read_cube(fname)
>>> spy_mem_clip = hsbatch.io.spyfile.open_memmap()  # datacube after smoothing
>>> meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
>>> ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral smoothing', linewidth=3)
>>> ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral smoothing', ax=ax)
>>> ax.set_xlabel('Wavelength (nm)', weight='bold')
>>> ax.set_ylabel('Reflectance (%)', weight='bold')
>>> ax.set_title(r'API Example: `batch.spectral_smooth`', weight='bold')
api/img/batch/spectral_smooth_plot.png

Notice how the “choppiness” of the spectral curve is lessened after the smoothing operation. Some spectral regions perhaps had a lower signal-to-noise ratio and did not smooth particularly well (i.e., < 410 nm, ~770 nm, and ~820 nm). It may be wise to perform batch.spectral_smooth after batch.spectral_clip.