
correlate#

AcceleratedIndexationGenerator.correlate(n_largest=5, include_phases=None, **kwargs)[source]#

Correlates the library of simulated diffraction patterns with the electron diffraction signal.

Parameters:
  • n_largest (int, optional) – Number of best solutions to return, in order of descending match

  • include_phases (list, optional) – Names of phases in the library to do an indexation for. By default this is all phases in the library.

  • n_keep (int, optional) – Number of templates to do a full matching on in the second matching step

  • frac_keep (float, optional) – Fraction (between 0 and 1) of templates to do a full matching on. When set, n_keep is ignored

  • delta_r (float, optional) – The sampling interval of the radial coordinate in pixels

  • delta_theta (float, optional) – The sampling interval of the azimuthal coordinate in degrees

  • max_r (float, optional) – Maximum radius to consider in pixel units. By default it is the distance from the center of the image to a corner

  • intensity_transform_function (Callable, optional) – Function to apply to both image and template intensities on an element-by-element basis prior to comparison

  • find_direct_beam (bool, optional) – Whether to optimize the direct beam position; otherwise the center of the image is used

  • direct_beam_positions (2-tuple of floats or 3D numpy array of shape (scan_x, scan_y, 2), optional) – (x, y) coordinates of the direct beam in pixel units. Overrides other settings for finding the direct beam

  • normalize_images (bool, optional) – Whether to normalize the images when calculating the correlation coefficient

  • normalize_templates (bool, optional) – Whether to normalize the templates when calculating the correlation coefficient

  • parallelize_polar_conversion (bool, optional) – Use multiple workers for converting the dataset to polar coordinates. Overhead may make this slower on some hardware and for some datasets.

  • chunks (string or 4-tuple, optional) – Internally the work is done on dask datasets, and this parameter determines the chunking. If set to None, no re-chunking is performed and a lazily loaded dataset keeps its existing chunks. If set to “auto”, dask attempts to find the optimal chunk size.

  • parallel_workers (int or “auto”, optional) – The number of workers to use in parallel. If set to “auto”, the number is determined from os.cpu_count()
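The normalization and intensity-transform parameters amount to computing a normalized correlation score between a polar-unwrapped image and template. As a rough illustrative sketch in plain NumPy (this is not the library's compiled implementation; the function and array shapes here are invented for illustration):

```python
import numpy as np

def normalized_correlation(image, template, transform=None):
    """Correlation score between a polar image and a polar template.

    Illustrative only. `transform` mimics intensity_transform_function:
    it is applied element-wise to both arrays before comparison.
    Normalizing both arrays (as normalize_images / normalize_templates
    would) turns the raw inner product into a correlation coefficient.
    """
    if transform is not None:
        image = transform(image)
        template = transform(template)
    image = image / np.linalg.norm(image)
    template = template / np.linalg.norm(template)
    return float(np.sum(image * template))

rng = np.random.default_rng(0)
img = rng.random((90, 64))  # (azimuthal, radial) polar grid
# An image compared against itself gives the maximum score.
score_self = normalized_correlation(img, img, transform=np.sqrt)
```

A common choice for the transform is a square root or logarithm, which compresses the dynamic range so that weak reflections contribute to the score alongside the direct beam.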

Returns:

result

Return type:

dict

Notes

Internally, this code is compiled to LLVM machine code, so stack traces are often hard to follow on failure. Take care when selecting parameters.
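The n_keep and frac_keep parameters control a two-step search: a cheap score first ranks every template, and only the best-scoring subset receives a full matching. A minimal sketch of that pattern in plain NumPy (the actual compiled implementation differs; the cheap score used here, a comparison of azimuthally averaged radial profiles, is an invented stand-in):

```python
import numpy as np

def two_step_match(image, templates, frac_keep=0.1):
    """Rank templates cheaply, then fully score only the survivors.

    Illustrative sketch of the frac_keep mechanism. Step 1 compares
    azimuthally averaged radial profiles, which is cheap and ignores
    in-plane rotation; step 2 computes the full normalized
    correlation only for the kept fraction of templates.
    """
    n_keep = max(1, int(frac_keep * len(templates)))
    img_profile = image.mean(axis=0)
    cheap = np.array([np.dot(img_profile, t.mean(axis=0)) for t in templates])
    survivors = np.argsort(cheap)[-n_keep:]
    full = {
        int(i): float(
            np.sum(image * templates[i])
            / (np.linalg.norm(image) * np.linalg.norm(templates[i]))
        )
        for i in survivors
    }
    best = max(full, key=full.get)
    return best, full

# Synthetic library: each template is a diffraction ring at a
# different radius, constant in the azimuthal direction.
r = np.arange(64)
centers = np.linspace(5, 58, 50)
profiles = np.exp(-(r[None, :] - centers[:, None]) ** 2 / 8.0)  # (50, 64)
temps = np.repeat(profiles[:, None, :], 90, axis=1)             # (50, 90, 64)
image = temps[7]
best, scores = two_step_match(image, temps, frac_keep=0.2)      # best is 7
```

The trade-off frac_keep expresses is speed versus the risk that the cheap score discards the true best template before the full matching ever sees it.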