
phase_cross_correlation

pyxem.utils.diffraction.phase_cross_correlation(reference_image, moving_image, *, upsample_factor=1, space='real', disambiguate=False, reference_mask=None, moving_mask=None, overlap_ratio=0.3, normalization='phase')

Efficient subpixel image translation registration by cross-correlation.

This code gives the same precision as the FFT upsampled cross-correlation in a fraction of the computation time and with reduced memory requirements. It obtains an initial estimate of the cross-correlation peak by an FFT and then refines the shift estimation by upsampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT [1].
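
A minimal usage sketch (not part of the original reference; it assumes the import path shown in the signature above and uses numpy and scipy only to build synthetic test data):

    import numpy as np
    from scipy.ndimage import fourier_shift
    from pyxem.utils.diffraction import phase_cross_correlation

    rng = np.random.default_rng(0)
    reference = rng.random((128, 128))

    # Build a "moving" image by shifting the reference a known sub-pixel
    # amount in Fourier space.
    true_shift = (-3.6, 1.2)
    moving = np.fft.ifftn(fourier_shift(np.fft.fftn(reference), true_shift)).real

    # Register to within 1/50th of a pixel.
    shift, error, phasediff = phase_cross_correlation(
        reference, moving, upsample_factor=50
    )
    # shift should be approximately (3.6, -1.2): the translation that maps
    # `moving` back onto `reference`.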

Parameters:
  • reference_image (array) – Reference image.

  • moving_image (array) – Image to register. Must be same dimensionality as reference_image.

  • upsample_factor (int, optional) – Upsampling factor. Images will be registered to within 1 / upsample_factor of a pixel. For example, upsample_factor == 20 means the images will be registered to within 1/20th of a pixel. Default is 1 (no upsampling). Not used if reference_mask or moving_mask is not None.

  • space (string, one of “real” or “fourier”, optional) – Defines how the algorithm interprets the input data. “real” means the data will be FFT’d to compute the correlation, while “fourier” means the data are already in Fourier space and the FFT of the input is skipped. Case insensitive. Not used if reference_mask or moving_mask is not None.

  • disambiguate (bool) – The shift returned by this function is only accurate modulo the image shape, due to the periodic nature of the Fourier transform. If this parameter is set to True, the real space cross-correlation is computed for each possible shift, and the shift with the highest cross-correlation within the overlapping area is returned.

  • reference_mask (ndarray) – Boolean mask for reference_image. The mask should evaluate to True (or 1) on valid pixels. reference_mask should have the same shape as reference_image.

  • moving_mask (ndarray or None, optional) – Boolean mask for moving_image. The mask should evaluate to True (or 1) on valid pixels. moving_mask should have the same shape as moving_image. If None, reference_mask will be used.

  • overlap_ratio (float, optional) – Minimum allowed overlap ratio between images. Correlations for translations whose overlap ratio falls below this threshold are ignored. A lower overlap_ratio therefore permits larger translations but is more prone to spurious matches caused by small overlap between the masked images; a higher overlap_ratio restricts the maximum translation but gives greater robustness. Used only if one of reference_mask or moving_mask is not None (see the masked-registration sketch after this parameter list).

  • normalization ({“phase”, None}) – The type of normalization to apply to the cross-correlation. This parameter is unused when masks (reference_mask and moving_mask) are supplied.
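
Sketch of masked registration (illustrative only; the masks, shapes, and shift below are made up, and the expected values are approximate):

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from pyxem.utils.diffraction import phase_cross_correlation

    rng = np.random.default_rng(1)
    reference = rng.random((100, 100))
    moving = nd_shift(reference, (6, -4))  # whole-pixel shift in real space

    # Mark some pixels as invalid, e.g. a damaged detector region or an edge.
    reference_mask = np.ones_like(reference, dtype=bool)
    reference_mask[:10, :] = False
    moving_mask = np.ones_like(moving, dtype=bool)
    moving_mask[:, -10:] = False

    shift, error, phasediff = phase_cross_correlation(
        reference,
        moving,
        reference_mask=reference_mask,
        moving_mask=moving_mask,
        overlap_ratio=0.3,
    )
    # shift should be approximately (-6, 4); error and phasediff are NaN
    # because the masked algorithm does not provide them (see Returns below).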

Returns:

  • shift (ndarray) – Shift vector (in pixels) required to register moving_image with reference_image. Axis ordering is consistent with the axis order of the input array.

  • error (float) – Translation invariant normalized RMS error between reference_image and moving_image. For masked cross-correlation this error is not available and NaN is returned.

  • phasediff (float) – Global phase difference between the two images (should be zero if images are non-negative). For masked cross-correlation this phase difference is not available and NaN is returned.
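
To make the sign convention of the returned shift concrete, a short sketch (again with synthetic data and the import path from the signature above; the numbers are illustrative):

    import numpy as np
    from scipy.ndimage import fourier_shift
    from pyxem.utils.diffraction import phase_cross_correlation

    rng = np.random.default_rng(2)
    reference = rng.random((64, 64))
    moving = np.fft.ifftn(fourier_shift(np.fft.fftn(reference), (2.5, -1.5))).real

    shift, error, phasediff = phase_cross_correlation(
        reference, moving, upsample_factor=10
    )
    # shift is approximately (-2.5, 1.5); applying it to `moving` (same axis
    # order as the input arrays) brings it back into register with `reference`.
    registered = np.fft.ifftn(fourier_shift(np.fft.fftn(moving), shift)).real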

Notes

The use of cross-correlation to estimate image translation has a long history dating back to at least [2]. The “phase correlation” method (selected by normalization="phase") was first proposed in [3]. Publications [1] and [2] use an unnormalized cross-correlation (normalization=None). Which form of normalization is better is application-dependent. For example, the phase correlation method works well in registering images under different illumination, but is not very robust to noise. In a high noise scenario, the unnormalized method may be preferable.
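
A hedged illustration of the two normalization settings (the intensity scaling below is a contrived stand-in for an illumination change):

    import numpy as np
    from scipy.ndimage import fourier_shift
    from pyxem.utils.diffraction import phase_cross_correlation

    rng = np.random.default_rng(3)
    reference = rng.random((128, 128))
    shifted = np.fft.ifftn(fourier_shift(np.fft.fftn(reference), (5, -7))).real
    moving = 0.5 * shifted + 0.2  # crude illumination change

    shift_phase, _, _ = phase_cross_correlation(
        reference, moving, normalization="phase"
    )
    shift_none, _, _ = phase_cross_correlation(
        reference, moving, normalization=None
    )
    # Both should report approximately (-5, 7) here; in a high-noise scenario
    # the unnormalized form (normalization=None) may be the more robust choice.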

When masks are provided, a masked normalized cross-correlation algorithm is used [5], [6].

References