
Re: Current state of the `ooc' project?



> >  Maybe you could explain how your loading would
> > work with an example.
> 
> Linux supports ELF 'shared libraries'. This support allows one, for
> example, to dynamically extend a C++ program with new classes.
> So, once a process is running, Linux allows it to dynamically call routines
> in other shared libraries. I don't see any problems in mapping the
> 'shared-library' concept to the Oberon 'MODULE' concept.
> For example:
> Let's assume an Oberon program is running and somehow prompts the user for the
> name of a 'command' (i.e. an exported parameterless procedure). The running
> program can map the 'modulename.commandname' to the name of a Linux shared
> library and a so-called 'global function' within it. It can then use standard
> Linux system calls to:
> a. load the shared library
> b. ask for the address of a function in the newly loaded library.

In that case, this sort of facility should be available in the BeOS as well, since
it supports dynamic libraries.
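For what it's worth, steps (a) and (b) map directly onto `dlopen`/`dlsym` on Linux (BeOS has `load_add_on`/`get_image_symbol` as analogues). Here is a minimal sketch; the naming convention that a command `Out.Open` becomes `Out_Open` in `libOut.so` is my assumption, not anything the project has settled on:

```c
#include <stdio.h>
#include <string.h>
#include <dlfcn.h>

typedef void (*Command)(void);  /* an exported parameterless procedure */

/* Map "Module.Command" to assumed library/symbol names,
   e.g. "Out.Open" -> "libOut.so" / "Out_Open". */
static int map_command(const char *name, char *lib, char *sym)
{
    const char *dot = strchr(name, '.');
    if (!dot)
        return -1;
    sprintf(lib, "lib%.*s.so", (int)(dot - name), name);
    sprintf(sym, "%.*s_%s", (int)(dot - name), name, dot + 1);
    return 0;
}

/* Steps (a) and (b), then the call itself. */
static int call_command(const char *name)
{
    char lib[256], sym[256];
    if (map_command(name, lib, sym) != 0)
        return -1;

    void *handle = dlopen(lib, RTLD_NOW);        /* (a) load the library   */
    if (!handle)
        return -1;
    Command cmd = (Command)dlsym(handle, sym);   /* (b) look up the symbol */
    if (!cmd)
        return -1;
    cmd();                                       /* execute the command    */
    return 0;
}
```

Note that `dlopen` also runs the library's initialization code, which is exactly where the module body and the consistency check discussed below would hook in (link with `-ldl` on older glibc).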

> One can then call the function, thereby executing the Oberon 'command'.
> 
> One other point is that each newly loaded module needs to perform a consistency
> check with all the modules that it imports. Linux supports a 'module body',
> i.e. each shared library can have code that is executed when the lib is loaded.
> The way I see it, the runtime support package (GC etc..) will need
> to be called to perform this consistency check.
> Each loaded module will register itself with the runtime support package
> so that the consistency check can be performed.

Wouldn't a module higher up in the hierarchy also have to explicitly (or implicitly)
load all the modules (not already loaded) on which it depends?  This implies some
knowledge of what has been loaded and what still needs to be loaded.  I assume
your OS allows queries of whether a 'library' has been loaded.  The knowledge of
dependencies could be hard-coded into the module initialization code.
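One way that hard-coded knowledge could look: the compiler emits, into each library's init code, a call that registers the module and its compilation key (a fingerprint of its exported interface) with the runtime, after first checking that every import is present with the key it was compiled against. This is only a sketch of the idea; the names and the key scheme are invented:

```c
#include <string.h>

#define MAX_MODULES 64

/* One entry per loaded module: its name and compilation key. */
struct Module { const char *name; unsigned key; };

static struct Module loaded[MAX_MODULES];
static int nloaded;

static struct Module *find_module(const char *name)
{
    for (int i = 0; i < nloaded; i++)
        if (strcmp(loaded[i].name, name) == 0)
            return &loaded[i];
    return NULL;
}

/* Called from each library's init code.  Before registering itself the
   module verifies each import's key; a mismatch means the import was
   recompiled since this module was built. */
static int register_module(const char *name, unsigned key,
                           const char *imports[], const unsigned ikeys[],
                           int nimports)
{
    for (int i = 0; i < nimports; i++) {
        struct Module *m = find_module(imports[i]);
        if (!m)
            return -1;    /* not yet loaded: dlopen("lib<name>.so") here */
        if (m->key != ikeys[i])
            return -2;    /* inconsistent: refuse to load */
    }
    loaded[nloaded].name = name;
    loaded[nloaded].key  = key;
    nloaded++;
    return 0;
}
```

The "not yet loaded" branch is where the runtime would recursively load missing imports, so the registry itself answers the query of what has already been loaded.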
 
> > This was understood from the start.  I think that's why we envisioned a
> > 'peep-hole' optimizer substituting target-level instructions for sequences
> > of GSA stuff.  (e.g., the PowerPC has a multiply and add instruction so
> > this peep-hole optimizer would scan the GSA tree for sequences of adds
> > and multiplies and replace them with the add-multiply instruction).  I
> > guess this GSA tree pruning operation will have to insert back-end
> > specific instructions.  
> ...
> > If that's the case, there will have to be a GSA-instruction for every
> > back-end machine instruction.
> 
> I guess that's right if one wants to do more than a single traversal of
> the intermediate code. I certainly would like to have the backend perform
> only a single pass over the intermediate representation, but I was thinking
> that until I know how/what exactly needs to be done, it'd be easier to handle
> each backend task separately.
> Since we'll need a target-specific intermediate representation (IR) for
> performing peephole optimization, doing more than one traversal of the
> IR for successive refinement into machine code doesn't require any
> overhead and will probably make those passes simpler.
> I guess it's the same tradeoff one makes when choosing between writing a
> one-pass recursive-descent compiler or first compiling into an
> IR. Since I personally have zero experience with compilers, the 'mixing'
> of conceptually separate tasks within the single-pass code could
> make things more complicated for me.
> I guess I'll need to see how simple/complex it becomes as I go along.
> In the Brandis thesis a very close mapping existed between the GSA
> and the target machine. Even so, I don't think (I don't have the thesis
> with me) he combined register allocation and instruction scheduling.

I looked through his thesis.  He combines both steps (I think) in his code
generator but doesn't explain how they interact.  It seems to me that you
can't really schedule instructions without first knowing what registers they
will use, so these concepts are related.  Possibly you allocate registers
first and then schedule the instructions.  Of course, it may be possible
to schedule instructions more efficiently if the register assignments
remain changeable by the scheduler -- so I don't really know which should
come first.  My best guess is that the registers are assigned first, then
the instructions are scheduled (although Brandis reverses this order in his
thesis description).
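The multiply-then-add substitution we keep using as the example could be a single pass over a linear IR. The instruction set below is a toy stand-in for GSA, and a real matcher would also have to check the other operand position and that the temporary is dead afterwards:

```c
#include <stddef.h>

enum Op { MUL, ADD, MADD, OTHER };

/* dst = a <op> b  (c is the addend of a fused MADD: dst = a*b + c). */
struct Instr { enum Op op; int dst, a, b, c; };

/* Replace  t = a*b; d = t+c  by  d = a*b + c  when the product is
   consumed by the immediately following ADD; rewrites the code
   in place and returns the new instruction count. */
static size_t fuse_madd(struct Instr *code, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 1 < n && code[i].op == MUL && code[i + 1].op == ADD
            && code[i + 1].a == code[i].dst) {
            struct Instr f = { MADD, code[i + 1].dst,
                               code[i].a, code[i].b, code[i + 1].b };
            code[out++] = f;
            i++;                  /* skip the consumed ADD */
        } else {
            code[out++] = code[i];
        }
    }
    return out;
}
```

Doing this on a target-specific IR, as Guy suggests, is what lets the pass know MADD exists at all; on a target without it the pass is simply not run.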
 
> > I'm still not sure how to handle the aliasing problem which has been
> > mentioned.
> I'm assuming you're referring to the problems Frank posted:
> various interactions between the optimizer and the GC.
> I think there's more about this in David Chase's thesis, available at:
> http://www.centerline.com:80/people/chase/
> Anyway, I gather these are hard problems which probably aren't solvable in
> the backend. At the moment I'm not giving them much thought.
> 
> Guy.

Michael G.