feat: add number/float16/base/ulp-difference
#9321
base: develop
Conversation
Coverage Report
The above coverage report was generated for the changes in this PR.
lib/node_modules/@stdlib/number/float16/base/ulp-difference/examples/index.js (resolved)

lib/node_modules/@stdlib/number/float16/base/ulp-difference/benchmark/benchmark.js (resolved)
```js
if ( isnan( x ) || isnan( y ) ) {
	return NaN;
}
// Convert input values to unsigned 32-bit integers corresponding to the IEEE 754 binary representation of half-precision floating-point numbers:
```
I'm a little confused. Why 32-bit integers? IIUC, toWord returns "16-bit" integers?
Ahh right, I forgot to update this after changing the call. It should be:
```diff
- // Convert input values to unsigned 32-bit integers corresponding to the IEEE 754 binary representation of half-precision floating-point numbers:
+ // Convert input values to unsigned 16-bit integers corresponding to the IEEE 754 binary representation of half-precision floating-point numbers:
```
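For reference, here is a standalone sketch (mine, not code from this PR) decomposing a known binary16 word, showing that everything fits in 16 bits (1 sign, 5 exponent, 10 fraction bits):

```js
var word = 0x3C00; // IEEE 754 binary16 representation of 1.0

var sign = ( word >>> 15 ) & 0x1;  // 1 sign bit
var exp = ( word >>> 10 ) & 0x1F;  // 5 exponent bits (bias 15)
var frac = word & 0x3FF;           // 10 fraction bits

console.log( sign, exp, frac );
// => 0 15 0, i.e., (-1)^0 * 2^(15-15) * 1.0 = 1.0
```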
```js
// MODULES //

var isnan = require( '@stdlib/math/base/assert/is-nan' );
```
In JS, using this util is likely fine, given that numeric values in JS are all implicitly doubles.
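For example, given the NaN guard shown above, a call with a NaN argument should short-circuit (hypothetical usage; the package path assumes this PR's layout):

```js
var ulpdiff = require( '@stdlib/number/float16/base/ulp-difference' );

var d = ulpdiff( NaN, 1.0 );
// => NaN
```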
```js
*/
function monotoneKey( word ) {
	if ( word & SIGN_MASK ) { // x < 0
		return ( ~word + 1 ) & UINT16_MAX; // two's-complement negation
```
I am guessing that you are AND'ing the bits because JS only has 32-bit integer ops, so `~word` flips on all of the leading zeros during negation and we want them to remain off. Am I correct?
The question I have, however, is whether two's-complement negation actually works here, as the +1 no longer causes the word to overflow at bit 16 the way it would with native 16-bit integer ops. Have you actually verified that the negation logic works as you expect?
Just so we are clear, the negation and +1 are intended to convert values to allow lexicographic ordering. So it is important that we ensure this logic works as expected.
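For example, a quick sketch (reusing the masks from this file) of the keys near zero, where any ordering problem would show up first:

```js
var SIGN_MASK = 0x8000;
var UINT16_MAX = 0xFFFF;

function monotoneKey( word ) {
	if ( word & SIGN_MASK ) { // x < 0
		return ( ~word + 1 ) & UINT16_MAX; // two's-complement negation
	}
	// x >= 0
	return ( word | SIGN_MASK ) & UINT16_MAX;
}

console.log( monotoneKey( 0x8001 ).toString( 16 ) ); // smallest negative subnormal => '7fff'
console.log( monotoneKey( 0x8000 ).toString( 16 ) ); // -0 => '8000'
console.log( monotoneKey( 0x0000 ).toString( 16 ) ); // +0 => '8000'
console.log( monotoneKey( 0x0001 ).toString( 16 ) ); // smallest positive subnormal => '8001'
```

If I'm reading the branches right, -0 and +0 map to the same key (i.e., they are 0 ulps apart), which is worth double-checking against the "push +0 to just above -0" comment.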
Right, so far, testing specific range values gives the desired result, and what I had in mind is exactly what you guessed. The way to find out whether the two's-complement negation holds up is to test a few edge cases around the 16-bit boundaries.
I'll also take a look at a few references just to be on the safe side.
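Since the domain is only 16 bits, the negation can also be checked exhaustively against plain modular negation. A standalone sketch (not part of the PR):

```js
var UINT16_MAX = 0xFFFF;

var expected;
var actual;
var w;
for ( w = 0; w <= UINT16_MAX; w++ ) {
	actual = ( ~w + 1 ) & UINT16_MAX;        // negation as implemented
	expected = ( 0x10000 - w ) & UINT16_MAX; // reference: modular negation
	if ( actual !== expected ) {
		throw new Error( 'Negation mismatch for word: 0x' + w.toString( 16 ) );
	}
}
console.log( 'Negation matches modular arithmetic for all 16-bit words.' );
```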
```js
		return ( ~word + 1 ) & UINT16_MAX; // two's-complement negation
	}
	// x >= 0
	return ( word | SIGN_MASK ) & UINT16_MAX; // push +0 to just above -0
```
Why do we need to & UINT16_MAX here?
I used `& UINT16_MAX` to force the result to stay within 16 bits, so that the monotone key would not be corrupted by sign extension or overflow.
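To illustrate what happens without the mask (plain JS arithmetic, nothing from this PR):

```js
console.log( ~0x8000 + 1 );              // => -32768 (JS bitwise ops sign-extend to 32 bits)
console.log( ( ~0x8000 + 1 ) & 0xFFFF ); // => 32768 (masking restores the 16-bit value)
```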
```diff
- return ( word | SIGN_MASK ) & UINT16_MAX; // push +0 to just above -0
+ return ( word | SIGN_MASK ) >>> 0; // push +0 to just above -0
```
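For what it's worth, the mask seems redundant on this branch: assuming `word` is already a 16-bit value, OR'ing with `SIGN_MASK` (0x8000) cannot set any bit above bit 15, so the result stays within 16 bits, and `>>> 0` just guarantees an unsigned interpretation. A quick check:

```js
// Worst case on the non-negative branch: all low 15 bits set:
console.log( ( 0x7FFF | 0x8000 ) >>> 0 ); // => 65535 (0xFFFF, still within 16 bits)
```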
kgryte left a comment:
Overall, this looks good. My main question is around the monotone key conversion.
Tested the two's-complement negation logic with the following edge cases:

```js
var d = ulpdiff( -1.1920928955078125e-7, -5.960464477539063e-8 );
console.log( d );
// => 1.0

d = ulpdiff( -5.960464477539063e-8, 5.960464477539063e-8 );
console.log( d );
// => 2.0
```

Output log:

```text
[Running] node "c:\Users\HP\Stdlib\stdlib\lib\node_modules\@stdlib\number\float16\base\ulp-difference\test\edgeCase.js"
1
2
```

What I believe is that, if the logic were broken, it would have produced a huge ulp difference.
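One more boundary that may be worth adding to the test file is the subnormal-to-normal transition. A sketch (values per IEEE 754 binary16; the package path assumes this PR's layout):

```js
var ulpdiff = require( '@stdlib/number/float16/base/ulp-difference' );

// Largest subnormal (word 0x03FF) and smallest normal (word 0x0400) are adjacent:
var d = ulpdiff( 6.097555160522461e-5, 6.103515625e-5 );
console.log( d );
// expected: 1.0
```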
Resolves none.
Description
This pull request:
- adds `number/float16/base/ulp-difference`

Related Issues
This pull request has the following related issues:
Questions
No.
Other
No.
Checklist
AI Assistance
If you answered "yes" above, how did you use AI assistance?
Disclosure
{{TODO: add disclosure if applicable}}
@stdlib-js/reviewers