Migration Comparison Part 2

How do portability techniques come into play in such an approach to distributed computing? Firstly, because fine-grained objects are constantly moved across the network, there must be some unified way to transmit them. Unified means that any computer can ultimately talk to any other computer with which it may want to exchange objects; it does not mean that a single global low-level encoding must be used by all computers. Actually, wherever in the network there is a subnetwork (which may reduce to a single computer) where objects tend to stay at least as much as they tend to leave, it is useful to translate these objects into some encoding better adapted to that local subnetwork.
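To make this two-level scheme concrete, here is a minimal sketch in Python; all names (wire_encode, LocalCache, and so on) are hypothetical, invented for this example. A single portable wire encoding is used whenever objects travel, while a node may re-encode objects that settle locally into a faster native form, translating back only when they move on.

    import json, pickle

    # Portable wire encoding: every node understands it (here: JSON).
    def wire_encode(obj):
        return json.dumps(obj).encode("utf-8")

    def wire_decode(data):
        return json.loads(data.decode("utf-8"))

    # Local encoding: chosen per subnetwork for speed (here: pickle),
    # used only for objects that tend to stay on this node.
    class LocalCache:
        def __init__(self):
            self._store = {}

        def settle(self, key, obj):
            # Translate into the locally preferred encoding.
            self._store[key] = pickle.dumps(obj)

        def fetch(self, key):
            return pickle.loads(self._store[key])

        def emigrate(self, key):
            # Back to the universal encoding when the object moves on.
            return wire_encode(self.fetch(key))

    cache = LocalCache()
    cache.settle("point", {"x": 1, "y": 2})
    print(cache.emigrate("point"))  # b'{"x": 1, "y": 2}'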

Well-known techniques for a global encoding, such as bytecode interpreters or virtual machines, as have been widely used since early Lisp systems, are a good way to exchange code as well as data. Dynamic compilation (which has recently proven an efficient technique, as people developed emulators for old hardware standards) can give performance to such systems; and nothing prevents architecture-dependent binaries from being provided in a fine-grained way, as is already widely done in migration-less coarse-grained contexts.
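As a rough illustration of code exchanged through a global encoding, here is a toy stack-machine interpreter in Python; the instruction set is invented for this sketch and does not correspond to any actual virtual machine.

    # Toy stack-machine bytecode: a portable global encoding for code.
    def run(bytecode, args):
        stack = list(args)
        for op, arg in bytecode:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # (x, y) -> x * y + 1, expressed in the portable encoding:
    program = [("mul", None), ("push", 1), ("add", None)]
    print(run(program, (6, 7)))  # 43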

However, such low-level techniques tend to tie computations to particular computing models, and make migration difficult or inefficient across significantly different architectures, where standard word sizes (8/16/20/32/64/128/however-many bits, notwithstanding software encoding conventions), memory geometry (flat, segmented, banked), CPU design (register-based, stack-based, or whatever), breadth of CPU memory (number of registers, depth of stacks), or instruction-set richness (CISC, RISC, VLIW, ZISC, or MISC) differ.

And even if some architecture family may be considered "fairly standard" at a given time (e.g. RISCish 32-bit architectures in the 1990s), we know that any technology is bound to be made obsolete by newer ones; so unless we want to be tied to some architecture, or to lose the possibility of gradually upgrading hardware while keeping portability, we must not base portability on low-level standards. That is, portability should extend not only across space, but also across time.

This is why, when portability is really what is meant, and not just efficient portability within some particular computing model, high-level semantic-based object encodings should be used, rather than low-level execution-based ones.

This means that portable objects must be much nearer to parsed source code than to compiled binary code. They can be interpreted at run time, or dynamically compiled, and all combinations of the previous techniques still apply, as long as they do not hamper this enhanced kind of portability.
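To illustrate what an encoding near parsed source code might look like, the following hypothetical sketch keeps a tiny arithmetic AST as the portable form, and shows the same form being either interpreted directly or dynamically compiled for the local host.

    # A portable object as (near-)parsed source: a tiny arithmetic AST.
    # Node shapes are invented for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def interpret(node):
        # Option 1: evaluate the semantic form directly at run time.
        if isinstance(node, Num):
            return node.value
        if isinstance(node, Add):
            return interpret(node.left) + interpret(node.right)

    def compile_to_python(node):
        # Option 2: dynamically compile the same form for the local host.
        if isinstance(node, Num):
            return repr(node.value)
        if isinstance(node, Add):
            return f"({compile_to_python(node.left)} + {compile_to_python(node.right)})"

    expr = Add(Num(1), Add(Num(2), Num(3)))
    print(interpret(expr))                # 6
    print(eval(compile_to_python(expr)))  # 6

Either path preserves the object's meaning; the choice between them is a local performance decision, not something baked into the encoding.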

Actually, all these techniques are particular cases of meta-programmed dynamic evaluation: an object is evaluated consistently with its high-level semantics, but taking into account hints and tactics provided along with the object, which help optimize it in commonly expected contexts. These hints provide enough information to skip the analysis and trial processes that make traditional coarse-grained optimization ill-suited to fine-grained distributed computing.
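As a hedged sketch of what such hints and tactics might look like, the hypothetical object below bundles its semantic form with advisory annotations that a receiving node may consult or ignore; all field names are invented for this example.

    # Meta-programmed evaluation, sketched: an object travels with its
    # high-level semantic form plus optional optimization hints.
    portable_object = {
        "semantics": ("map", "square", [1, 2, 3, 4]),  # what it means
        "hints": {
            "pure": True,          # safe to memoize or reorder
            "expected_size": 4,    # skip re-analysis on arrival
            "vectorizable": True,  # tactic: use SIMD if the host has it
        },
    }

    def evaluate(obj):
        tag, fn, data = obj["semantics"]
        if tag == "map" and fn == "square":
            # A host honoring the "vectorizable" hint could dispatch to
            # a SIMD routine here; this fallback keeps the semantics.
            return [x * x for x in data]

    print(evaluate(portable_object))  # [1, 4, 9, 16]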


