Re: OOC Core Library
Michael G. wrote:
I've been working on machine-independent implementations of Math &
MathL that don't depend on the host OS. They should work with any
machine that supports the IEEE floating point format. With an optimizing
compiler, these modules should be more efficient than most host-supplied
routines, especially for REAL numbers, since most host systems force the
use of LONGREALs, which are much slower than a true REAL implementation.
Host-independent sounds great. But on which operations are your
functions built?
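For example, if they are built on plain arithmetic only, I'd imagine
something along these lines (my own sketch, not your code; a real
implementation would add range reduction and a better polynomial):

PROCEDURE Exp (x: REAL): REAL;
  (* sum the Taylor series exp(x) = 1 + x + x^2/2! + ... until the
     terms become negligible; uses only +, *, / and comparisons *)
  VAR term, sum: REAL; k: INTEGER;
BEGIN
  sum := 1.0; term := 1.0; k := 1;
  WHILE ABS (term) > 1.0E-9 * ABS (sum) DO
    term := term * x / k;           (* term now holds x^k / k! *)
    sum := sum + term;
    INC (k)
  END;
  RETURN sum
END Exp;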
The modules are Oakwood-compliant with some additional routines. Here
is the definition of the Math module.
EXTENDED DEFINITION Math;
CONST
IllegalInvTrig = 7;
IllegalLog = 2;
IllegalLogBase = 5;
IllegalPower = 4;
IllegalRoot = 1;
IllegalTrig = 6;
NoError = 0;
Overflow = 3;
I do like the error handling. Nice touch. (A small usage sketch
follows the definition below.)
e = 2.71828175E+00;
pi = 3.14159274E+00;
VAR
epsilon-: REAL;
err-: INTEGER;
infinity-: REAL;
zero-: REAL;
Um, aren't there two values for infinity and zero in IEEE? I guess
those variables denote the positive ones. How do you define the values
of these variables? Do you use the precise bit pattern or some
approximation?
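For what it's worth, the exact positive values can be produced from
the IEEE 754 single-precision bit patterns, along these lines (just a
sketch of mine; it assumes SYSTEM.VAL is available and that LONGINT
and REAL are both 32 bits wide):

MODULE IEEEInit;
IMPORT SYSTEM;
VAR
  bits: LONGINT;
  infinity, negInfinity: REAL;
BEGIN
  bits := 7F800000H;                    (* sign 0, exponent all ones, mantissa 0 *)
  infinity := SYSTEM.VAL (REAL, bits);  (* reinterpret the bit pattern as REAL *)
  negInfinity := -infinity              (* negation just flips the sign bit *)
END IEEEInit.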
[functions deleted]
END Math.
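Just to illustrate how I read the interface, a caller would check the
error variable like this (a sketch: `ln' is one of the Oakwood
functions deleted above, and I'm assuming an illegal argument sets
err to IllegalLog):

MODULE ErrDemo;
IMPORT Math, Out;
VAR y: REAL;
BEGIN
  y := Math.ln (-1.0);                  (* illegal argument, assumed to set err *)
  IF Math.err = Math.IllegalLog THEN
    Out.String ("ln: illegal argument"); Out.Ln
  ELSIF Math.err # Math.NoError THEN
    Out.String ("ln: unexpected error code"); Out.Ln
  END
END ErrDemo.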
If anyone has any comments, let me know what you think. If a floating
point coprocessor is available with hardware support for some of these
functions, part of the porting effort to generate optimized code will
be to replace those functions that have hardware support with INLINE
code emitting the appropriate coprocessor instructions, although
initial ports should work with no changes to this module.
Inlining one of those functions would mean giving it a GSA opcode and
translating calls to the (external) function into a single GSA
instruction that denotes the operation. This could be accomplished by
declaring a function (e.g. exp) like
MODULE Math;
...
PROCEDURE [INLINE(234)] exp* (x: REAL): REAL;
...
END Math.
where 234 would be the number of the associated GSA opcode. Calls to
the function `Math.exp' would then be changed into an instruction with
opcode 234.
But this inlining would probably prohibit any error reporting. We
could add a compiler flag `fast math': when set, FPU (i.e. hardware)
instructions are used as much as possible, but errors aren't reported
in much detail.
I'll run some benchmarks later and let you know the results.
I'd be especially interested in the functions' precision. Can you make
any statements about that, and how it compares to the precision of
a math lib (e.g. the SUN one) or of an FPU?
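To put numbers on this, one could compare against the LONGREAL
versions, roughly like so (my own sketch; it assumes MathL exports the
same functions over LONGREAL and is accurate enough to serve as a
reference):

MODULE PrecCheck;
IMPORT Math, MathL, Out;
VAR
  i: INTEGER;
  x: REAL;
  relErr, maxErr: LONGREAL;
BEGIN
  maxErr := 0.0;
  FOR i := -50 TO 50 DO
    x := i / 10.0;                      (* sample points in [-5.0, 5.0] *)
    relErr := ABS ((Math.exp (x) - MathL.exp (x)) / MathL.exp (x));
    IF relErr > maxErr THEN maxErr := relErr END
  END;
  Out.String ("max relative error of Math.exp: ");
  Out.Real (SHORT (maxErr), 15); Out.Ln
END PrecCheck.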
-- mva