
Re: OOC Core Library



Michael V.A. wrote:

> Michael G. wrote:
>    I've been working on machine-independent implementations of Math &
>    MathL that don't depend on the host OS.  They should work with any
>    machine that supports the IEEE floating point format.  With an optimizing
>    compiler, these modules should be more efficient than most host-supplied
>    routines--especially with REAL numbers, since most host systems will
>    force the use of LONGREALs, which are much slower than a true REAL
>    implementation.
> 
> Host independent sounds great.  But on which operations are your
> functions built?

The functions require only the regular floating point operations (+, -, *, /).
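
For illustration, here is a minimal sketch of how such a function can be
built from those four operations alone.  The procedure name and the series
cutoff are mine; the actual module presumably uses range reduction and
better-conditioned polynomial approximations:

    PROCEDURE Exp* (x: REAL): REAL;
      (* illustrative only: exp(x) from +,-,*,/ via its Taylor series *)
      VAR sum, term: REAL; i: INTEGER;
    BEGIN
      sum := 1.0; term := 1.0; i := 1;
      WHILE ABS(term) > 1.0E-9 DO
        term := term*x/i;          (* next term: x^i / i! *)
        sum := sum + term;
        INC(i)
      END;
      RETURN sum
    END Exp;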
 
> I do like the error handling.  Nice touch.

Thanks.  In all cases, the functions will continue operating by returning
a default value like infinity, zero, or another logical result and setting
the appropriate error condition.  It will be up to the caller to check (or
ignore) the error state.
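
As a usage sketch (Math.err comes from the interface quoted below; the
`ln' procedure and the convention that 0 means "no error" are my
assumptions):

    MODULE ErrDemo;
      IMPORT Math, Out;

      PROCEDURE SafeLn* (x: REAL): REAL;
        VAR y: REAL;
      BEGIN
        y := Math.ln(x);        (* on bad input a default value comes back... *)
        IF Math.err # 0 THEN    (* ...and the condition is recorded in Math.err *)
          Out.String("Math.ln failed: "); Out.Int(Math.err, 0); Out.Ln
        END;
        RETURN y
      END SafeLn;

    END ErrDemo.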

>      e = 2.71828175E+00;
>      pi = 3.14159274E+00;
> 
>    VAR
>      epsilon-: REAL;
>      err-: INTEGER;
>      infinity-: REAL;
>      zero-: REAL;
> 
> Um, aren't there two values for infinity and zero in IEEE?  I guess
> those variables denote the positive ones.  How do you define the values
> of these variables?  Do you use the precise bit pattern or some
> approximation? 

Both zero and infinity are approximations.  These approximations were
stolen from some Math modules posted by Alan D. Freed from NASA.  Basically
infinity=MAX(REAL)/100 and zero=1/infinity.  This implementation thus
avoids hardware overflow/underflow traps.  To implement a check against
zero, this module does the following:

    IF ABS(x) < zero THEN (* the number is effectively zero *) END
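
A sketch of how those module variables could be initialized from the
definitions above (the module name is invented; MAX(REAL) is standard
Oberon-2):

    MODULE MathSketch;
    VAR
      infinity-, zero-: REAL;
    BEGIN  (* module body runs once at load time *)
      infinity := MAX(REAL)/100.0;  (* effective infinity: well below the overflow threshold *)
      zero := 1.0/infinity          (* effective zero: its reciprocal, well above underflow *)
    END MathSketch.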

 
> Inlining one of those functions would mean giving them a GSA opcode
> and translating calls to the (external) function into a single GSA
> instruction that denotes the operation.  This could be accomplished by
> declaring a function (eg exp) like
>   MODULE Math;
>   ...
>   PROCEDURE [INLINE(234)] exp* (x: REAL): REAL; 
>   ...
>   END Math.
> where 234 would be the number of the associated GSA opcode.  Calls to
> the function `Math.exp' would then be changed into an instruction with
> opcode 234.
>   But this inlining would probably prohibit any error reporting.  We
> could add a compiler flag `fast math'.  When set, FPU (=hardware)
> instructions are used as much as possible, but errors aren't reported
> in much detail.

It would probably help to define a standardized module which lists all
possible opcodes.  That way, a particular back-end would be able to
produce code for all opcodes supported by that target (co)processor.  
Any opcodes not supported would generate errors in the back end.
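
Such a module might look like this (the module name, constant names, and
numbers are all invented for illustration; only the single shared numbering
matters):

    MODULE OpCodes;  (* one agreed-upon home for every inline-able operation *)

    CONST
      exp*  = 234;
      ln*   = 235;
      sin*  = 236;
      cos*  = 237;
      sqrt* = 238;

    END OpCodes.

A back end would then map each constant it supports to a hardware
instruction and, as described above, flag the rest as errors at code
generation time.  Assuming the [INLINE(...)] attribute accepts constant
expressions, the declarations in Math could then read
`PROCEDURE [INLINE(OpCodes.exp)] exp* (x: REAL): REAL'.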

BTW, does this mean we wouldn't be supporting CODE procedures as
used by ETH and shown (1st recommendation) in the Oakwood report?  Code
procedures may still be useful when interfacing to other languages and
the OS.
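
For reference, the exact form of code procedures varies between ETH
compilers; roughly, the body is a sequence of literal machine code bytes
that the compiler inserts at each call site.  The bytes below are an
illustrative x87 encoding only, not checked against any particular
calling convention:

    PROCEDURE -Sin* (x: REAL): REAL;  (* "-" marks a code procedure *)
      0D9H, 0FEH;                     (* FSIN: assumes x is already on the FPU stack *)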

>    I'll run some benchmarks later and let you know the results.
> 
> I'd be especially interested in the functions' precision.  Can you make
> any statements about that, and how this compares to the precision of
> a math lib (eg the SUN one) or of an FPU?

All routines will be as accurate as possible within the given floating
point mantissa.  Their accuracy should be comparable to the SUN math
routines and to an FPU.  The only difference will be in how overflow and
underflow are handled: the software routines continue execution instead
of trapping.
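
Concretely, a software routine can test against the dangerous range before
the hardware does and substitute the module's effective infinity or zero.
A sketch, where the error codes and the `lnInfinity' threshold are
invented, and ExpCore stands in for the in-range approximation:

    CONST overflow = 1; underflow = 2;   (* illustrative error codes *)
    VAR err-: INTEGER;
      infinity-, zero-: REAL;
      lnInfinity: REAL;                  (* set at load time to ln(infinity) *)

    PROCEDURE ExpCore (x: REAL): REAL;
    BEGIN
      RETURN 1.0 + x   (* placeholder for the real series/polynomial code *)
    END ExpCore;

    PROCEDURE Exp* (x: REAL): REAL;
    BEGIN
      IF x > lnInfinity THEN               (* would overflow the hardware... *)
        err := overflow; RETURN infinity   (* ...so report and keep going *)
      ELSIF x < -lnInfinity THEN
        err := underflow; RETURN zero
      END;
      RETURN ExpCore(x)                    (* in range: normal evaluation *)
    END Exp;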
 
> -- mva

Michael Griebling