Migration Comparison Part 3

Now, what additional complexity does Migration involve, as compared to previously considered techniques?

Among distribution techniques, Migration poses an implementation problem quite comparable to the one garbage collection (GC) posed among memory allocators. Indeed, GC appears to be a particular case of Migration, in which only the memory resource is considered.

Because some objects may have to be migrated again later, the system must be aware of any particular encoding used for these objects, so that it can re-encode them into a form suited to their new location after migration.

And the solutions to this additional problem are also quite similar to those used for garbage collection: the system keeps track of a migration-wise "type" for each object, so that it can traverse the structure of objects and do whatever pruning and grafting it needs to do.
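To make this concrete, here is a minimal sketch in Python; all names (MigrationType, Obj, migrate, recode) are invented for illustration and taken from no existing system. Each object carries a migration-wise type telling which fields reference other objects and which are encoded in a location-dependent way, so that the migrator can traverse the object graph much as a tracing garbage collector does:

```python
from dataclasses import dataclass

@dataclass
class MigrationType:
    ref_fields: list     # fields holding references to other objects
    local_fields: list   # fields whose encoding depends on the current node

@dataclass
class Obj:
    mtype: MigrationType
    fields: dict

def migrate(obj, recode, seen=None):
    """Walk the object graph, re-encoding location-dependent fields.

    `recode` maps an encoding valid at the old site to one valid at the
    new site; `seen` avoids re-visiting shared substructure, exactly as
    a tracing garbage collector avoids re-scanning objects."""
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return
    seen.add(id(obj))
    for f in obj.mtype.local_fields:
        obj.fields[f] = recode(obj.fields[f])   # graft onto the new site
    for f in obj.mtype.ref_fields:
        migrate(obj.fields[f], recode, seen)    # trace, GC-style
```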

Once migration is made possible, there comes the problem of scheduling such fine-grained objects: techniques that apply to coarse-grained processes, with coarse-grained static tables of process statistics, do not apply to fine-grained objects; they would induce too large an overhead. Actually, they already cause considerable direct or indirect overhead in the underlying traditional coarse-grained centralized computing model.

Actually, the traditional low-level, migration-less, coarse-grained, centralized computing model is a degenerate case of high-level, migrating, fine-grained, distributed computing. So we may reasonably expect the latter computing model to perform at least as well, since it can achieve exactly the same performance by falling back to this degenerate case as a last resort.

So as not to spend more resources on scheduling than we gain by distributing, relative to the previous model, we can simply do resource-based scheduling: only the availability of a resource (e.g. some provider going idle), or an intended use of resources that is large enough, will trigger distributed scheduling. No overhead is felt when no resources are available, while if resources are available, the overhead is more than compensated by the time gained by activating or managing resources that would have been seriously under-used in a non-migrating environment.
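As a rough illustration of such a policy, consider the following sketch, in which the threshold, the data layout, and the function names are all assumptions: distributed scheduling runs only when a provider goes idle or a large enough request arrives, so a fully busy system never pays for it.

```python
BIG_REQUEST = 1_000_000   # assumed threshold (in work units) above which
                          # distribution is worth its scheduling cost

def handle_request(request, idle_providers):
    """Dispatch one request; default to local, migration-less execution."""
    if request["size"] >= BIG_REQUEST and idle_providers:
        provider = idle_providers.pop()     # activate an under-used node
        provider["queue"].append(request)   # ship the work there
        return "migrated"
    return "local"                          # the degenerate, centralized case

def handle_provider_idle(provider, idle_providers, backlog):
    """A provider going idle is itself the event that triggers scheduling;
    a system with no available resources never enters this code."""
    idle_providers.append(provider)
    while backlog and idle_providers:
        handle_request(backlog.pop(0), idle_providers)
```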

Of course, transmission time is not free and must itself be considered a resource. Consequently, objects should not be migrated without the scheduler considering the consequent effects on network traffic. To ease this work, objects must come, when created and published and/or after some profiling period, with information about how they behave with respect to inter-object communication, as well as about their usage of traditional resources, which traditional schedulers already track.
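The published information might resemble the following sketch; the Profile structure and the cost estimate are hypothetical, meant only to show what such a scheduler would weigh:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    cpu_per_sec: float    # traditional resource usage, tracked today
    memory_bytes: int
    size_bytes: int       # cost of shipping the object itself
    msgs_per_sec: dict    # peer object id -> observed message rate

def migration_cost(profile, colocated_peers, bandwidth):
    """Estimate what a migration costs the network: the one-time
    transfer, plus the ongoing traffic created by separating the
    object from the peers it talks to most."""
    transfer_time = profile.size_bytes / bandwidth
    separated_rate = sum(rate for peer, rate in profile.msgs_per_sec.items()
                         if peer in colocated_peers)
    return transfer_time, separated_rate
```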

Because of the induced overhead, objects will come to the scheduler packaged in medium-grained modules that include all this information. This won't prevent objects from being distributed at a fine grain, though, for there is no reason why scheduling should be done at the same grain as execution.
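For instance, packaging might look like the sketch below (Module and package are invented names): the scheduler consults only module-level totals, while the objects inside remain individually migratable.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    objects: list = field(default_factory=list)  # the fine-grained objects
    total_cpu: float = 0.0    # aggregate statistics: the only figures
    total_memory: int = 0     # the scheduler ever needs to consult

def package(objects):
    """Bundle fine-grained objects so that per-object statistics never
    burden the scheduler; placement is decided per module."""
    m = Module()
    for o in objects:
        m.objects.append(o)
        m.total_cpu += o["cpu"]
        m.total_memory += o["memory"]
    return m
```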

Now, if objects are to migrate in a way that is not statically controlled, there comes the problem of how safety, consistency, and security constraints will be enforced.

Again, a low-level standard for Migration would make objects completely uncontrollable: no proof system can be both reliable and usable for proving high-level properties of low-level objects, while the constraints we need to fulfill are quite high-level. The argument holds in any computing system, which already makes low-level programming standards a bad choice for any kind of computing; but the danger grows as objects move through a distributed system, having more chances to find a security leak, and being able to do harm to more people, should the above constraints fail to be enforced.

The obvious solution is again to choose high-level objects as the standard for migrating objects. Proof techniques already exist that fulfill any imaginable security criterion. As safety constraints can often be made quite simple, automated proof techniques could enforce them in the common case. Consistency and security can involve much more complicated properties, though, and there is no reasonable way for the object receiver to build a proof for an arbitrary incoming object; so the natural way to do things is for the receiver to require the sender to provide such a proof. Finally, once safety is guaranteed, and because any proof relying on the behavior of physical objects has hypotheses that can never be completely verified, it may be quite acceptable for security techniques to include cryptographic procedures.
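The receive-side discipline could be sketched as follows; the wire format, the pre-shared key, and the trivial check_proof stand-in are assumptions for illustration, where a real system would plug in a genuine proof checker and key infrastructure:

```python
import hashlib
import hmac

SHARED_KEY = b"example-key"   # assumed pre-shared key, for illustration only

def check_proof(payload: bytes, proof: bytes) -> bool:
    """Stand-in for a real proof checker; here the 'proof' merely has to
    commit to this exact payload. Checking is cheap; building is not,
    which is why the sender must supply it."""
    return proof == hashlib.sha256(payload).digest()

def accept_object(payload: bytes, proof: bytes, signature: bytes) -> bool:
    """Admit a migrating object only if the sender-supplied proof checks
    and a cryptographic signature covers the physical-transport
    hypotheses that no proof can discharge."""
    expected = hmac.new(SHARED_KEY, payload + proof, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False               # tampered with in transit
    return check_proof(payload, proof)
```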

Of course, any user of the system could tune his own strengthened or weakened security criteria, which would alter the distributed or secure character of his subsystem; but at least it is feasible to have a system that is both distributed and secure, where no one can be the victim of failures for which he is not responsible.


