Motion Estimation and Correction

ScanImage® can continuously detect XYZ motion of the currently acquired image relative to a reference volume during an active acquisition.

The motion correction can be used to automatically compensate for this motion during the acquisition.

Note

For 3D motion correction, the MariusMotionEstimator is recommended. This estimator requires the MATLAB Parallel Computing Toolbox and an Nvidia CUDA-enabled GPU.

Setup

If not already done, perform a stage-scanner alignment.

Collect a reference stack. Right-click the volume in the channel view window and select ‘Set as Motion Correction Reference’.


Motion Estimators

ScanImage® comes with 4 Motion Estimators. All estimators use basic slice-wise phase correlation to find the best match between the acquired slice and the reference volume (a sketch of the idea follows the table).

SimpleMotionEstimator
  System Requirements: None
  Performance: Good
  Description: Requires no additional toolboxes. Not well suited for 3D motion correction due to performance issues.

MariusMotionEstimator
  System Requirements: Parallel Computing Toolbox, Nvidia CUDA-enabled GPU
  Performance: Best
  Description: Best suited for 3D motion correction.

GpuMotionEstimator
  System Requirements: Parallel Computing Toolbox, Nvidia CUDA-enabled GPU
  Performance: Better
  Description: Best suited for 2D motion correction.

Note

Processing data on the GPU is fast, but transferring data to the GPU is a bottleneck. When imaging at low resolution, the SimpleMotionEstimator might perform better.

ParallelMotionEstimator
  System Requirements: Parallel Computing Toolbox
  Performance: Better
  Description: Alternative to the GpuMotionEstimator if no GPU is present. This estimator uses parallel workers for processing and does not slow down the acquisition; tasks are queued for processing, and the queue size is a user-settable property.
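As a sketch of the slice-wise phase correlation idea mentioned above, the shift between two equally sized images can be estimated along the following lines. This is illustrative only, not ScanImage®’s implementation; the variable names are assumptions.

    % Phase correlation between an acquired slice and a reference slice
    F1 = fft2(double(referenceSlice));
    F2 = fft2(double(acquiredSlice));
    R  = F1 .* conj(F2);
    R  = R ./ max(abs(R), eps);        % normalized cross-power spectrum
    c  = abs(ifft2(R));                % correlation surface
    [~, peakIdx] = max(c(:));          % peak location encodes the shift
    [dy, dx] = ind2sub(size(c), peakIdx);
    shift = [dy - 1, dx - 1];          % raw shift; wraparound unwrapping omitted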


Motion Correctors

ScanImage® ships with 2 Motion Correctors.

SimpleMotionCorrector
  Description: Averages the motion estimates of the last N seconds. If the average motion vector exceeds the correction threshold, a correction event is triggered. The minimum time between correction events is set by the property correctionInterval_s. A summary sketch follows the table.

MariusMotionCorrector
  Description: This corrector should be paired with the MariusMotionEstimator.
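The SimpleMotionCorrector’s decision rule can be summarized roughly as follows. Field and variable names here are illustrative assumptions, not the actual implementation.

    % Average the motion estimates of the last N_s seconds
    recent    = motionHistory([motionHistory.timestamp] > nowSeconds - N_s);
    avgMotion = mean(vertcat(recent.motionVector), 1);
    % Correct only if the threshold is exceeded and the minimum interval
    % since the last correction has elapsed
    if norm(avgMotion) > correctionThreshold ...
            && (nowSeconds - lastCorrection_s) > correctionInterval_s
        triggerCorrectionEvent(avgMotion); % placeholder for raising correctNow
        lastCorrection_s = nowSeconds;
    end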


Match Current FOV with Previous Session

Motion estimation can be used to minimize the deviation of the sample between sessions down to the pixel level. As an optional first step, it may be helpful to reorient the sample-relative coordinate system to the orientation of the sample as it was acquired in the first session. ScanImage® comes with a session-session alignment tool which allows an affine transformation to be easily defined between sessions by:

  1. marking and saving fiducial points A, B, and C in the first session,

  2. re-finding and re-marking the fiducials matching A, B, and C in the subsequent session, and

  3. loading the points from the first session and applying them to reorient.

It’s recommended that the fiducial marks be made as stationary as possible relative to the regions of interest in the sample. For example, a metal bezel with custom A, B, and C engravings could be fixed permanently to the sample and double as the mating surface that holds the sample still under the objective.

After the sample coordinate system has been realigned to the previous session, sample coordinates used in the first session can be reused for coarse alignment to previous imaging sites - at which point pixel-wise precision can be obtained with motion estimation.

It is technically possible to use motion estimation to match the current FOV with a previous session with pixel-level accuracy. Current requirements for this are:

  1. No rotation of sample relative to previous session

  2. Minimal changes to image features between sessions

If these stipulations are met, one can simply load the pinned background images from a previous session and use them as the reference for the motion estimator.

Note

In the future, it may be possible to eliminate the zero-rotation stipulation by rotating the reference image data.

If this is of particular interest to your lab, then contact us at support@mbfbioscience.com.


Output Files

If data logging and motion correction are both enabled, a motion correction output file will be generated with the filename [File name stem] + “_Motion_” + [File counter].

This file will contain the following attributes: timestamp, frameNumber, success, quality, xyMotion, roiUuid, motionMatrix, z, and channel.
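The on-disk format is not specified here; assuming a comma-separated text file with a header row naming these attributes, the log could be inspected offline along these lines (the filename and parsing are assumptions to adapt to your output):

    % Hypothetical offline inspection of a motion correction output file
    T = readtable('experiment_Motion_00001.csv');
    good = T(logical(T.success), :);        % keep successful estimates only
    plot(good.frameNumber, good.quality);   % e.g. estimate quality over frames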


API

Motion Estimators

Motion estimators derive from the class

scanimage.interfaces.IMotionEstimator

The reference volume and the image data are handed to the Motion Estimator as instances of the class

scanimage.mroi.RoiData.

scanimage.mroi.RoiData contains information about the ROI geometry (hRoi), the channels (channels) and the currently imaged zs (zs). The image data is stored in the property imageData. imageData is a cell array, where the first index is the channelIdx, and the second index is the z index.
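For illustration, a minimal sketch of pulling one slice out of a RoiData instance is shown below. The index choices are assumptions, and the nested cell indexing follows the description above; verify against your ScanImage® version.

    % roiData: a scanimage.mroi.RoiData instance
    channelIdx = 1;   % position within roiData.channels
    zIdx = 1;         % position within roiData.zs
    slice = roiData.imageData{channelIdx}{zIdx};  % pixel data for that channel and z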

The function

motion_estimator_result = estimateMotion(obj,roiData)

does not return the motion estimate directly, but instead returns an object of type scanimage.interfaces.IMotionEstimatorResult. ScanImage® then polls this object to obtain the estimation results. The purpose of this class is to enable asynchronous processing.
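For orientation, a minimal skeleton of a custom estimator might look as follows. This is a sketch: MyMotionEstimatorResult is a hypothetical class deriving from scanimage.interfaces.IMotionEstimatorResult, and any further abstract members required by the interface are omitted.

    classdef MyMotionEstimator < scanimage.interfaces.IMotionEstimator
        methods
            function motion_estimator_result = estimateMotion(obj, roiData)
                % Start the (possibly asynchronous) estimation here and
                % return a result object that ScanImage polls until the
                % estimate is available.
                motion_estimator_result = MyMotionEstimatorResult(roiData); % hypothetical class
            end
        end
    end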

Motion Correctors

Motion correctors derive from the class

scanimage.interfaces.IMotionCorrector

When a new motion estimate is available, ScanImage® calls the function updateMotionHistory, which hands the entire motion history to the corrector. The corrector can then analyze the history and decide whether a correction is required. When the corrector wants to initiate a correction, it notifies its event correctNow. ScanImage® then queries the function getCorrection to obtain the correction value.
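A rough skeleton of a custom corrector, following the callback flow described above, is sketched below. Field names such as motionVector, the pendingCorrection property, and the threshold are illustrative assumptions, not the ScanImage® API.

    classdef MyMotionCorrector < scanimage.interfaces.IMotionCorrector
        properties (Hidden)
            pendingCorrection; % hypothetical storage for the next correction
        end
        methods
            function updateMotionHistory(obj, motionHistory)
                % Analyze the history; when a correction is warranted,
                % store it and raise the correctNow event.
                v = mean(vertcat(motionHistory.motionVector), 1); % illustrative field name
                if norm(v) > 1 % illustrative threshold, in pixels
                    obj.pendingCorrection = v;
                    obj.notify('correctNow');
                end
            end
            function correction = getCorrection(obj)
                correction = obj.pendingCorrection;
            end
            function correctedMotion(obj, varargin)
                % Called by ScanImage after a correction is performed.
            end
        end
    end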

Note

If the corrector returns an invalid value (e.g. values outside the allowable correction range), ScanImage® discards the correction event.

After a correction is performed, ScanImage® calls the function correctedMotion.

Using Averaged Frames

Motion Estimators feature two properties that allow averaged live frames to be compared against the reference (which can also be an averaged frame). All that is required is to set a Frames Averaged Factor > 1 on the channel matching the reference stack’s channel, and then set useAveragedData to true in the selected estimator’s settings table.

There is also a throttle checkbox. When checked, motion is detected every n frames, where n is the Frames Averaged Factor for the channel used.
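Conceptually, the throttle behaves like the sketch below. This is illustrative only; the variable names and the estimator handle are assumptions.

    % Run estimation only on every n-th frame, where n is the channel's
    % Frames Averaged Factor
    if mod(frameNumber, framesAveragedFactor) == 0
        result = hMotionEstimator.estimateMotion(roiData); % per the API above
    end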