Improve stability of velocity fits in template metrics #4342
Conversation
This looks great, I'll have a close look tomorrow :)
      distances_um_above = np.array([np.linalg.norm(cl - max_channel_location) for cl in channel_locations_above])
    - velocity_above, _, score = fit_velocity(peak_times_ms_above, distances_um_above)
    + inv_velocity_above, score = fit_line_robust(distances_um_above, peak_times_ms_above)
    + velocity_above = 1 / inv_velocity_above
Something to improve while we're at it: I think it would be nice to catch the division by zero before it happens, since we're expecting this to fail for numerous units. If we do this, the user doesn't get annoying / uninterruptible division by zero errors
Suggested change (replacing `velocity_above = 1 / inv_velocity_above`):

    if inv_velocity_above != 0:
        velocity_above = 1 / inv_velocity_above
    else:
        velocity_above = np.nan
      distances_um_below = np.array([np.linalg.norm(cl - max_channel_location) for cl in channel_locations_below])
    - velocity_below, _, score = fit_velocity(peak_times_ms_below, distances_um_below)
    + inv_velocity_below, score = fit_line_robust(distances_um_below, peak_times_ms_below)
    + velocity_below = 1 / inv_velocity_below
Suggested change (replacing `velocity_below = 1 / inv_velocity_below`):

    if inv_velocity_below != 0:
        velocity_below = 1 / inv_velocity_below
    else:
        velocity_below = np.nan
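
(Not part of the PR, just a sketch of how the two suggestions above could share a single guard; the `_safe_invert` helper name is hypothetical.)

```python
import numpy as np

def _safe_invert(inv_velocity):
    # Hypothetical helper: return 1 / inv_velocity, or NaN when the fitted
    # inverse velocity is zero (or the fit itself already returned NaN),
    # so no division-by-zero error is raised.
    if not np.isfinite(inv_velocity) or inv_velocity == 0:
        return np.nan
    return 1.0 / inv_velocity

# velocity_above = _safe_invert(inv_velocity_above)
# velocity_below = _safe_invert(inv_velocity_below)
```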
chrishalcrow left a comment:
Looks great!
Confirmed that it gives similar results on real data, is fast, and gives fewer NaN results. Win, win, win. I made a tiny comment which would marginally make the error messages less annoying.
@ecobost out of interest, since you've been looking at this score: I've only really got NP2 data. Do you know if this algorithm generalizes to large square array probes? Or is it designed only for long ones?
In template metrics, `velocity_above` and `velocity_below` estimate how fast a spike moves along the axon/dendrites by fitting a line where x is distance (template channel location minus the soma location) and y is time (position of the peak at each template channel relative to the soma, in ms). A small slope means the spike takes a long time to move through the probe, and a higher slope means the spike moves fast (the slope has units of mm/ms). Currently, when the peak position is the same (or very close) across template channels, `VelocityFits()` tries to fit a straight vertical line, which gives an infinite slope (in practice the fit fails with NaN because X is ill-conditioned). This is pretty common.

This PR switches the fitting to regress `peak_ms` (in y) onto `channel_distances` (in x) and then take 1/slope to obtain the same velocities as before. This is more stable and is also justified in the original source: "Because the time difference between the trough of adjacent sites could be 0, to avoid infinite numbers, we calculated the inverse of velocity (ms/mm) instead by fitting a regression line to the time of waveform trough at different sites against the distance of the sites relative to soma" (Jia et al., 2020). I also centered the data before fitting the regressor. Centering helps stability and avoids having to learn an intercept, which gives the model a bit more robustness.
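
For illustration only, here is a minimal sketch of the fitting strategy described above. It is not the PR's `fit_line_robust` implementation: the function name, the plain (non-robust) least-squares estimator, and the NaN guards are assumptions.

```python
import numpy as np

def fit_inverse_velocity(distances_um, peak_times_ms):
    """Regress peak time (y, ms) onto distance from the soma (x, um) and
    return the slope, i.e. the inverse velocity (here in ms/um, since the
    distances are in um). Centering both variables removes the intercept
    from the fit and keeps the problem well conditioned."""
    x = distances_um - distances_um.mean()
    y = peak_times_ms - peak_times_ms.mean()
    xx = np.dot(x, x)
    if xx == 0:  # all channels at the same distance: the slope is undefined
        return np.nan
    return np.dot(x, y) / xx  # ordinary least-squares slope through the origin

# Toy example: a peak that moves 40 um every 0.1 ms, i.e. 400 um/ms.
distances_um = np.array([0.0, 40.0, 80.0, 120.0])
peak_times_ms = np.array([0.0, 0.1, 0.2, 0.3])

inv_velocity = fit_inverse_velocity(distances_um, peak_times_ms)  # 0.0025 ms/um
velocity = 1 / inv_velocity if np.isfinite(inv_velocity) and inv_velocity != 0 else np.nan
print(velocity)  # 400.0 um/ms
```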
Here are the velocities calculated with the current method and the new method (proposed in this PR):

Predicted velocities are consistent for lower velocities (center of the plot) and more stable at higher velocities. The new method is also able to make predictions in a lot more cases (this is a 3 h Neuropixels recording, and the proportion of NaNs drops from 0.37 to 0.26). I also manually checked some results (where there were NaNs before) and they look sensible.
Overall, I think it's just a sensible change for added stability and better predictions.