D3 Precision – Who says MV doesn’t have data types?
I got an email about an issue with high-precision data passed to NebulaXLite. That started several exchanges with TigerLogic Technical Support, and I’m summarizing what I’ve learned here.
D3 has a default numeric precision of 4 decimal places. Since we’re working with Excel here, that means numbers, often financial data, and potentially many significant digits. Consider the following code:
PROG1:
01 precision 9
(lines 02 through 04, which assign a high-precision value to c, are omitted)
05 call prog2(c)
06 crt c
PROG2:
01 sub prog2(c)
The output from this is 1.524138393 if both programs were Flash-compiled (with the ‘O’ option). So if you want high precision with NebulaXLite, add a Precision statement and flash-compile your code. NebulaXLite for D3 is delivered with Flashed object modules, but if you want to re-flash the code, do this:
SELECT DICT NEBULA.BP
COMPILE NEBULA.BP (OW
If the calling program is not flashed, you will see a runtime error like the following:
[B33] in program "PROG1", Line 5: Precision declared in subprogram '' is different from that declared in the mainline program.
(Note there are minor errors in the message: the subprogram isn’t identified, or is sometimes shown as a line number from the calling program, and the line breaks are a little odd. We can ignore that here.)
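To reproduce the mismatch at TCL, here is a sketch; it assumes both programs live in a file named BP, so adjust the names to your environment. Flash-compile the subroutine but compile the caller without the flash option, then run it:

COMPILE BP PROG2 (O
COMPILE BP PROG1
RUN BP PROG1

The non-flashed caller should then report the B33 error at the line containing the call.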
Now, here is an "interesting" anomaly. Note that the subroutine didn’t touch the data, so the value was generated with 9 decimals and returned with the same. Let’s do something with the data in the subroutine, as you would expect might happen when you pass data:
01 sub prog2(c)
02 c = c:""
The result is now 1.5241. The precision has been adjusted to match that of the called program that last operated on the data. So FlashBASIC gives us the freedom to use different precisions in calling and called programs, and doesn’t return an error when they differ, but the penalty is that a truncation can go unnoticed when the called program reduces the data to a lower precision. This could happen with any in-house software and any third-party product in our industry.
Is there a solution?
One possible solution is for the calling program to include a Precision statement of its own:
01 sub prog2(c)
02 precision 9
03 c = c:""
Thankfully, no matter what the precision of the calling program, the called program does not seem to adjust the precision of the data. Only extensive testing would reveal whether some permutation of calling precision (0 through 9) and data returns a different value than what was sent into the subroutine when Precision 9 is used.
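If you wanted to probe those permutations, here is a brute-force sketch. Since Precision is a compile-time statement, each caller variant has to be a separately compiled program; the program names and the test value here are hypothetical:

PROG1.P0
01 precision 0
02 c = 1.524138393
03 call prog2(c)
04 crt c

Repeat with precision 1 through 9 (PROG1.P1 through PROG1.P9) and compare each program’s output against the value it passed in.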
But do we all need to modify our subroutines with Precision 9 to avoid losing decimals? Maybe not, and that’s a good thing. Let’s say I add Precision 9 to all of our Nebula R&D subroutines, which are provided to our clients as object-only modules. If a site flash-compiles their code, then the solution works. But there are many sites that cannot or will not flash their code. So as soon as a new site calls a NEBULA.BP subroutine, they’re going to get a runtime abort. This means the object module needs to know ahead of time whether or not the target environment is going to be flashed, and that’s not a good solution. The "fix on a fix" for this situation is for vendors like Nebula R&D to provide two object modules for every subroutine: one with Precision 9 compiled into the code and one without. That’s not optimal either.
This could be handled from the calling program too, depending on what’s expected of the called program. Let’s add a single line to PROG1:
01 precision 9
05 c = c:""
06 call prog2(c)
07 crt c
Concatenating a string to the data in line 5 changes the data from numeric to string. From there, as long as the subroutine processes the data as a string, like wrapping it in XML or saving it to a log file, the data remains simply a stream of characters. However, if the data is handled in the subroutine as numeric data, then the precision is adjusted to that of the subroutine.

For example, the latest calling program, with the new line 5, passes the data to the 3-line subroutine above. As long as both sides treat the data as a string, the output from the subroutine remains the string with full precision. But consider the following subroutine modification:
01 sub prog2(c)
02 c<2> = c<1> * .123
The result is "1.524138393^0.1874". In the subroutine, the first attribute of the dynamic array is not modified, so that string remains unchanged. But we create a second attribute, using the source data as a numeric value, and we perform an arithmetic operation on it. The result saved to attribute 2 is processed using the local precision of the subroutine, which is the default 4. So the unchanged string in atb1 is unrelated to the processed atb2.
Outputting the data at any time before passing it to the subroutine also converts it from numeric to string. For example, replace c=c:"" with crt c in the above code, and you’ll see the high-precision number. However, this is not practical in many situations. It’s non-intuitive that simply printing data will somehow convert the data type, but when you consider that any output device is only interested in strings, not numbers, it makes a little more sense. It could be very confusing to a developer that simply looking at the data causes it to be treated as a high-precision value, while data that isn’t rendered before being sent to the subroutine is suddenly low-precision. For this reason, if you’re going to solve the problem from the calling side, use the concatenation trick instead of printing the data.
If a called program is going to operate numerically on data from a calling program, the called program should include Precision 9, and all programs must be flashed.
In a flashed environment, if called programs do not operate numerically on data from a calling program, then the subroutines do not need to include a Precision statement, but the calling program must "cast" the data from numeric to string before passing it to the subroutine. This is done by concatenating null to the value.
In a non-flashed environment, where called subroutines operate numerically on data, developers/vendors should be prepared to provide object modules for all possible precisions. This can be done simply by putting a single Precision statement in an Include item that is referenced by all programs. Change the one line and recompile everything when the need arises.
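As a sketch of that Include approach (the file and item names here are hypothetical): put the Precision statement by itself in one item, and reference it at the top of every program:

BP.INCLUDES PRECISION.SETTING
01 precision 9

PROG1
01 include bp.includes precision.setting
02 * ...rest of the program as before

Changing the site-wide precision then means editing that one Include item and recompiling (and, where applicable, re-flashing) everything that references it.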
Given all of this, I know I need to make some code changes because Nebula R&D supports clients all over the world in all kinds of businesses. NebulaXLite is the first target.
Developers/Vendors – if you’re not sure about the deployment environment, you really should ask more questions before deploying your software to a given environment. For example, do the calling programs already have a specific precision setting, and why? And is there a requirement or preference for flashing or not flashing code? This is good for customer/end-user relations and will help to avoid later reports of mysterious numeric truncation errors.
For ultra-high precision, have a look at the XP library, originally written over 20 years ago by Tony Speed. The source code is provided in the DM account, file XP. Here is a small code sample:
01 include dm,xp xpa.defs
02 result = ""
03 call xdiv(PI,EE,result) ; * divide PI by EE and return result
04 crt PI
05 crt EE
06 crt result
With ultra-high-precision values like these, you really wouldn’t want a called subroutine quietly adjusting the precision.
— Thanks to TigerLogic Tech Support and Engineering for working with me to understand the issue and for verifying this blog entry for others.