
Re: Complex numbers & Oakwood suggestions



Brian Hawley <bhawley@luc.edu> wrote:
>  I have a couple questions that relate to the efficiency of the
>  implementation of complex numbers in the library. 
>  [ lots of good arguments deleted ]

This is a very convincing text about the problems of handling complex
values efficiently.  At least it convinced _me_ that complex math is a
pain if you have to implement it with real numbers and without any way
to pass them as function results.  This is strange, because I have
never had to use complex numbers in my life and it doesn't make any
difference to me.  It probably means that I'm easily influenced by
long mails exhaustively listing solutions that are all proven to be
inadequate ;-).
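
For anyone who, like me, has never felt the pain: here is a minimal
sketch of what the "REALs plus VAR parameters" style looks like in
today's Oberon-2, where a function procedure may not return a record
(type and procedure names are mine, purely for illustration):

	TYPE
	   Complex = RECORD re, im: REAL END;

	PROCEDURE Mul (a, b: Complex; VAR res: Complex);
	BEGIN  (* res := a * b; a and b are value parameters, so copies *)
	   res.re := a.re * b.re - a.im * b.im;
	   res.im := a.re * b.im + a.im * b.re
	END Mul;

Every intermediate result of an expression like (a + b) * c then needs
its own temporary variable and an extra procedure call.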

Well, there is always the option to extend the language to include
COMPLEX and LONGCOMPLEX as basic types.  The Oakwood Guidelines deal
with most of the details, and it is fairly simple to integrate into
the front-end.  So, if there are no strong objections on this mailing
list against adding those types, I propose to extend the language as
sketched out in the Oakwood text.

------------------------------------------------------------------------

There are some points to add, though:

Oakwood has no constructors for complex values.  Adding two predefined
functions 
  CMPLX(REAL,REAL) --> COMPLEX 
and 
  LONGCMPLX(LONGREAL, LONGREAL) --> LONGCOMPLEX
would do the trick.
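
With such constructors, building a complex value would look like this
(illustrative only, assuming the extension):

	VAR z: COMPLEX;
	BEGIN
	   z := CMPLX(1.0, -2.5)   (* z = 1 - 2.5i *)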

It is a bit vague on the type hierarchy to be used.  Proposal:
LONGCOMPLEX--LONGREAL--REAL--LONGINT--INTEGER--SHORTINT
           \         /
             COMPLEX
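
Assuming this hierarchy, the usual inclusion rules would make
assignments like the following legal (just a sketch):

	VAR c: COMPLEX; lc: LONGCOMPLEX; r: REAL; i: INTEGER;
	BEGIN
	   c := r;     (* REAL is included in COMPLEX *)
	   c := i;     (* INTEGER is included in REAL, hence in COMPLEX *)
	   lc := c;    (* COMPLEX is included in LONGCOMPLEX *)
	   lc := r     (* reachable via either LONGREAL or COMPLEX *)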

Is there any reason at all to provide MIN/MAX for complex types?

Maybe toss in an operator for conjugation?  `~' is free and has the
right priority.  `~z' would calculate the conjugate complex value of
`z'.  While I'm at it, what about adding a predefined complex ABS
function?  Is anyone in favor of these additions?
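
If both were adopted, typical code might read (again just a sketch):

	VAR z: COMPLEX; r: REAL;
	BEGIN
	   z := ~z;        (* conjugate: a + bi becomes a - bi *)
	   r := ABS(z)     (* modulus: square root of RE(z)*RE(z) + IM(z)*IM(z) *)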

------------------------------------------------------------------------

From:  The Oakwood Guidelines for Oberon-2 Compiler Developers
Revision: 1A First Issue


Section 3.2: Additional Datatypes

Extending ETH Oberon-2 with new data types is a very contentious
issue.  At the Oakwood meeting the general feeling was that only the
complex number type should be considered.  Enumerations and unsigned
types have been specifically rejected by ETH, although they are still
found desirable by applications programmers.  Unsigned types are
particularly important when interfacing to existing external standard
libraries such as X Windows, 'C' or Windows, and had support from the
applications programmers.  Bit level types are considered to be
unnecessary as the SET type can be used.

Type inclusion Hierarchies

Adding data types which are additional to Oberon-2 should be done
sympathetically (if at all) and with due consideration to the
implications on the whole language.  Separate type inclusion
hierarchies should be used to separate families of types which are
intrinsically incompatible.  Explicit conversion procedures should be
used to convert values that can be represented in different type
inclusion hierarchies.  The predefined function procedures LONG and
SHORT should provide conversion within any extended type hierarchy.
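
Standard Oberon-2 already works this way for the types it has: LONG and
SHORT move within the numeric hierarchy, while ORD and CHR convert
explicitly between the character and integer hierarchies, for example:

	VAR i: INTEGER; li: LONGINT; ch: CHAR;
	BEGIN
	   li := LONG(i);    (* step up within the integer hierarchy *)
	   i := SHORT(li);   (* and back down *)
	   i := ORD(ch);     (* explicit conversion from the CHAR hierarchy *)
	   ch := CHR(i)      (* and back *)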

An example (please note this is NOT a proposal for general
implementation) 

LONGCOMPLEX	           REAL E LONGINT E INTEGER E SHORTINT
                        

		           LONGCARD E CARDINAL E SHORTCARD

			              LONGCHAR E CHAR

The intention of this scheme is to retain the benefits of type
inclusion whilst separating explicitly the system-dependent aspects of
value conversion between types in different type inclusion
hierarchies.  Such procedures should be included as built-in
procedures.  For any additional data type extension to the language it
is the implementor's responsibility to provide an updated version of
the Oberon-2 Language Report indicating all the relevant changes
required to it.

There is a known problem with this proposal.  It does not allow for
type inclusion of COMPLEX within LONGREAL.  However, it was felt to be
a better solution than having only one complex type selectable as LONG
by compiler switch, which could easily be set to select different
options in different modules (and library modules).

<< BK:  A proposal is needed for the conversion routine, see Section
3.3.1 >>


Section 3.3: Type COMPLEX and LONGCOMPLEX

Complex numbers are made up of two parts (real, imaginary).  The type
LONGCOMPLEX is defined as (LONGREAL, LONGREAL), and can be included at
the top end of the type inclusion hierarchy.  The type COMPLEX is
defined as (REAL, REAL) and is an extension of REAL within the
hierarchy.  See Section 3.4.1.  If a value of a type less than or
equal to REAL is interpreted as a COMPLEX value then it is considered
to be the real part; the corresponding imaginary part is 0.  All
expression and assignment compatibility rules can be applied to the
complex types, for example

	VAR
	   c: COMPLEX;
	   r: REAL;
	   i: INTEGER;
	BEGIN
	   c := i + r;   (* i and r are widened to COMPLEX *)
	   c := c * r;

New conversion functions

The following predeclared function procedures are defined; (z) stands
for an expression:
Name       Argument Type	Result Type	   Function       
RE(z)      COMPLEX		REAL		   Real part
RE(z)      LONGCOMPLEX		LONGREAL	   Real part
IM(z)      COMPLEX		REAL		   Imaginary part
IM(z)      LONGCOMPLEX		LONGREAL	   Imaginary part
<< BK/HM/AF:  Predeclared functions SHORT, LONG, MIN, MAX and SIZE
need to be defined for COMPLEX and LONGCOMPLEX. >> 
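
For instance, the squared modulus of a complex value could then be
written as follows (sketch):

	VAR c: COMPLEX; r: REAL;
	BEGIN
	   r := RE(c) * RE(c) + IM(c) * IM(c)   (* squared modulus of c *)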

Complex literal number syntax
A common notation is used for complex number literals:
number = integer | real | complex.
complex = real "i".
Examples

	Literal        RE     IM
	1.i             0.     1.
	2.+3.i          2.     3.
	4.              4.     0.
	5.3-6.2i        5.3   -6.2

Reasons against introducing COMPLEX

The omission of COMPLEX data types from Oberon was a deliberate ETH
design decision and not an oversight.  The following reasons are cited
by Josef Templ.

Internal Representation
Cartesian or polar?  Both have advantages; Cartesian is more common,
though.

Efficiency
Utmost efficiency can only be gained by coding COMPLEX operations as
REAL operations, because often the real or imaginary parts are zero,
one, or a value which allows algebraic simplifications.
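
The point can be made concrete with plain REALs (procedure names
invented for this sketch): a general complex product needs four
multiplications, while scaling by a value whose imaginary part is known
to be zero needs only two, a simplification that is invisible to a
compiler working on an opaque COMPLEX operand.

	PROCEDURE Mul (ar, ai, br, bi: REAL; VAR cr, ci: REAL);
	BEGIN  (* general product: 4 multiplications, 2 additions;
	          cr, ci assumed distinct from the inputs *)
	   cr := ar * br - ai * bi;
	   ci := ar * bi + ai * br
	END Mul;

	PROCEDURE Scale (ar, ai, b: REAL; VAR cr, ci: REAL);
	BEGIN  (* b purely real: 2 multiplications, no additions *)
	   cr := ar * b;
	   ci := ai * b
	END Scale;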

Accuracy
Not under full programmer control in case of COMPLEX.

Difficulties in the hierarchy of numeric types
The linear type inclusion would be changed to a directed acyclic graph
(DAG) if two types COMPLEX and LONGCOMPLEX are introduced with
compatibility rules as naturally expected.  A simplification would be
to set COMPLEX = LONGCOMPLEX, but is it sufficient?  Another
simplification would be to form a separate hierarchy consisting of
COMPLEX and LONGCOMPLEX, but is this convenient?  One should also
observe the effect on the rest of the language definition.  For
example, what about the comparison operators for numeric types?

Implementation
Not the most important point, but COMPLEX also makes the compiler more
complex, especially when good code should be generated.  The reason is
that two separate operand descriptors must be maintained in the
compiler to represent the two parts of one complex value.

Hardware
Unlike INTEGER and REAL, no hardware support for COMPLEX is available.

Syntax
Additional  syntax is necessary  for denoting complex constants and/or
additional predeclared functions are necessary.

Structured function returns
A common misbelief is that introducing structured function returns
would eliminate the discussion about COMPLEX, because then one could
define complex operations as functions.  It should be noted that this
is only half the way, since the mathematicians still want to have
infix notation, which would require the introduction of a more general
overloading concept including infix operators.  This in turn would
break the idea of always qualifying imported objects by the module
name.
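
To see why: even with record-returning functions, an expression that a
mathematician writes as z := (a + b) * c would still have to appear in
prefix form, something like (function names invented):

	z := Mul(Add(a, b), c)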

Unused
For the reasons outlined above (efficiency, accuracy), many Fortran
programmers don't use complex operations although they are supported
by the language.

Not sufficient
For the purpose of scientific computing, COMPLEX is only a small step.
What is still missing are vector operations and subarrays.



-- mva