
Re: 64-bit extensions



> 
> > A byte is possibly _the_ most portable construct around.  Every micro
> > and computer that I know of has a byte data type.  So putting it into
> > SYSTEM really has no basis in reality -- it's just a whim of the language
> > designer, since no operations were defined on a byte (well, they were on
> > SHORTINTs, which is essentially what BYTE would become).
> 
> It's not its size that makes it a "near hardware" type, it's its name.
> If you want 8 bits, use SHORTINT; if you do a memcopy and need a data
> type that stands for void instead of numerical data, use BYTE.

Well, that's why I suggested using BYTE as an 8-bit integer AND as a
generic parameter.  Besides, SHORTINTs would fill the 16-bit integer
requirement.  It has never made much sense to have no operations
defined on BYTEs, since they are a basic data type all computers support.
I don't see any advantage to defining an arbitrary SYSTEM entity the same
size as a BYTE.  This is the same situation as having an open ARRAY of
some parameter where the most basic parameter in a computer is the BYTE,
which is the smallest addressable and computable quantity.  Obviously, an
open array of these elements would be compatible with any data type -- just
as in a computer's memory, where all variables are simply streams of bytes.
Why separate this single abstraction into two distinct entities?

Of course, the argument might be made that this would break the type
safety of Oberon-2.  But it's already broken.  My response would be to
close this loophole and not allow the concept of any type being compatible
with an ARRAY OF BYTE.  I don't know of any function which couldn't be
implemented in a type-safe way without this loophole.
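The "all variables are streams of bytes" view above can be sketched in C for concreteness (the discussion is about Oberon-2's open ARRAY OF BYTE, but the memory model is the same; `byte_sum` is a hypothetical helper invented for this illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustration of the byte-stream abstraction: a routine that takes a
   raw byte view (like a formal ARRAY OF BYTE parameter in Oberon-2)
   is compatible with a value of any type, because in memory every
   variable is just a run of bytes. */
static unsigned byte_sum(const void *p, size_t n) {
    const unsigned char *b = (const unsigned char *)p;
    unsigned s = 0;
    for (size_t i = 0; i < n; i++)
        s += b[i];          /* works the same for INTEGER, REAL, records... */
    return s;
}
```

The sum is independent of byte order, which is why such a routine ports cleanly; the type information is simply gone at this level, which is exactly the loophole the paragraph above proposes to close.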
 
> > As far as interface to existing OSes, that would still be possible
> > with the final conversion consisting of a simple SHORT() on the
> > UNICODE string.  Eventually, OSes will start using Unicode and the
> > interface will be a non-issue.
> 
> That's not that good; I don't like putting SHORT around every string, and I
> don't like implicit conversions either.  However, at least under Linux,
> implementing Unicode is not that easy (see "man unicode" and "man
> utf-8"), and much internal performance will be lost.

Not every string needs this.  Just the lowest-level interfaces to the
OS.
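A minimal sketch in C of what applying SHORT() only at the OS boundary could look like (the `narrow` routine and its replacement-character policy are assumptions for illustration, not part of any proposal in this thread):

```c
#include <stddef.h>
#include <string.h>
#include <wchar.h>

/* Narrow a wide (Unicode) string to a byte string only at the lowest
   OS interface, the way a single SHORT() would be applied to a
   UNICODE string just before the OS call.  Characters that don't fit
   in one byte are replaced with '?'. */
static void narrow(const wchar_t *src, char *dst, size_t cap) {
    size_t i = 0;
    for (; src[i] != L'\0' && i + 1 < cap; i++)
        dst[i] = (src[i] <= 0xFF) ? (char)src[i] : '?';
    dst[i] = '\0';          /* always NUL-terminated */
}
```

The point is that only this one boundary routine pays the conversion cost; the rest of the program keeps working with wide strings.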

> > Of course, I don't see mva changing the current definition.  He has
> > already indicated that a HUGEINT 64-bit type is the way he would go
> > in the existing compiler.  I don't think a 16-bit character type would
> > be implemented either.
> 
> I don't see any problems with implementing UNICHAR like he will implement
> HUGEINT.

No problem.  Just not as convenient.
 
> > I believe these are stop-gap measures.  Eventually OSes will speak
> > Unicode and that will drive the languages of the future.
> 
> There are not that many new OSes around, and I doubt the existing ones
> (Windows, diverse Unixes, Mac OS) will fully switch to Unicode.  They
> simply cannot, for portability reasons.  They will support Unicode by
> implementing a string library for wchar_t, add functionality for the
> GUI to print 16-bit strings, and leave everything else to the programmer.

There haven't been many new OSes.  The BeOS is the only recent one I
know of, and it supports UTF for GUI text drawing while maintaining
POSIX compatibility at a lower level with ASCII strings.  For characters
< 7FX, the two are equivalent, so the conversion overhead is minimal.
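The reason characters below 7FX convert for free is visible in the UTF-8 encoding rules themselves, sketched here in C (a standard encoder, not anything specific to BeOS or OOC):

```c
#include <stddef.h>
#include <stdint.h>

/* UTF-8 encoder for one code point; returns the number of bytes
   written.  Code points below 0x80 encode as themselves, which is why
   ASCII text is already valid UTF-8 and needs no conversion at all. */
static size_t utf8_encode(uint32_t cp, unsigned char out[4]) {
    if (cp < 0x80) {                    /* ASCII: identical in UTF-8 */
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {            /* two-byte sequence */
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {          /* three-byte sequence */
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else {                            /* four-byte sequence */
        out[0] = (unsigned char)(0xF0 | (cp >> 18));
        out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
}
```

Only text containing code points at 80X and above ever expands, so a POSIX layer restricted to ASCII and a UTF-8 GUI layer can share string data unchanged.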
 
> > Juice runs on two targets (Mac OS and Windows 95).  That leaves a lot of
> > operating systems which aren't supported.  Java byte code VMs can be found
> > almost everywhere or will be very soon -- even the BeOS has a Java VM.
> > If we want Oberon-2 to have a place in the future, eventually there will
> 
> Yes, if you can't kill your enemy, join him ;-)  However, I see support
> for the Java engine on the horizon :-|  And there is not only the bytecode

Do you know of something interesting that is ongoing?

> interpreter, but the whole Java engine, and I doubt we can totally map
> everything to that (GUI, IO, etc.), so this can't really be done...?

Well, I think we could simulate the Java libs in OOC and then put native
calls to those libs in place when targeting Java output.

> > Of course, they have some silly extensions too such as keywords added
> > just to use as pragmas.  OOC's method is better IMHO.
> 
> There may be some good points, some of which I'd like to have in OOC, too,
> but they made changes that make their language incompatible with Oberon.
> You can't compile an Oberon program under CP, as far as I understood.
> That's mainly because of silly keywords that *must* be added in most cases,
> which make me type as much as I do in C++ and which, as I stated, break
> compatibility.  These changes have nothing to do with the intended
> simplicity of Oberon.

I agree.
 
> > No need to use the 64-bit compiler on such systems.  There still would
> > be the 32-bit compiler.  Eventually everyone will have 64-bit processors
> > so then the switch is painless.  By that time memory will be even cheaper
> > than today so that will be a non-issue as well.  Besides, after removing
> > Windows 2000, you'll have an extra couple of Gigabytes of RAM to play
> > with. :-)
> 
> But wouldn't that break compatibility between the 32-bit and the 64-bit
> version?  Can files (ASCII, binary) written by the first be loaded by the
> second, and the other way round?

It was never intended that the two be binary compatible.  ASCII files
should be compatible by definition.  Of course, the 64-bit compiler could
also produce Unicode files, which would require special readers/editors.
If an application takes care, it would be possible to produce binary files
which are compatible on both systems, but I don't see any advantage
to that.  Of course, the C object/library files would be usable on both
systems.
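Sketched in C, "an application takes care" could mean writing integers with a fixed width and byte order instead of the native LONGINT size (the `_i32le` helpers and little-endian choice are assumptions for this example, not a format anyone in the thread specified):

```c
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit integer in a fixed (little-endian) byte order, so a
   binary file written by a 32-bit build loads in a 64-bit build and
   the other way round, regardless of either machine's word size. */
static void write_i32le(FILE *f, int32_t v) {
    uint32_t u = (uint32_t)v;           /* well-defined bit pattern */
    unsigned char b[4] = {
        (unsigned char)(u & 0xFF),
        (unsigned char)((u >> 8) & 0xFF),
        (unsigned char)((u >> 16) & 0xFF),
        (unsigned char)((u >> 24) & 0xFF),
    };
    fwrite(b, 1, 4, f);
}

static int32_t read_i32le(FILE *f) {
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4)
        return 0;                       /* short read: caller checks feof/ferror */
    return (int32_t)((uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                     ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24));
}
```

Data written this way round-trips between the two compiler variants because the on-disk width never follows the in-memory integer size.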
 
> > 64-bit compiler, interfacing to a 32-bit OS can be done optimally by
> > selecting the appropriate data type for the interface.  IMHO this
> > interface has nothing to do with the integer representation.
> 
> But the types used in those interfaces may then differ between the 32-bit
> and the 64-bit version, making work harder for the portable programmer.
> But if I think about it again... wouldn't mapping LONGINT to 64 bits be
> exactly what happens with C.long if you port a C compiler to a 64-bit OS
> (not processor!)?  And aren't our two opinions regarding LONGINT as a
> result the same?

Yes, I would agree.  But it is possible to encapsulate the C type definitions,
as OOC has done, and then changing to a different OS is a fairly simple
matter of adjusting these type definitions.  As long as all C interfaces
refer to these types, the mapping should be fairly simple.
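The idea can be shown with a C sketch (the `c_*` typedef names and the `PORT_LP64` switch are hypothetical, invented for this illustration; they stand in for whatever names OOC's C-type module actually uses):

```c
#include <stdint.h>

/* One central place maps the OS interface types for the current
   platform.  Porting to a system where C "long" is 64-bit means
   adjusting only these definitions, not every interface module that
   refers to them. */
#if defined(PORT_LP64)
typedef int64_t c_long;     /* C "long" on an LP64 system  */
#else
typedef int32_t c_long;     /* C "long" on ILP32 and LLP64 */
#endif
typedef int32_t c_int;      /* C "int" is 32-bit on both   */
```

All foreign interfaces then declare their parameters as `c_long`, `c_int`, and so on, which is the encapsulation mentioned above.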

Michael