This should consist of instances of meta-level architecture components, used to provide a convenient environment for expressing enough present-day idioms (such as shell syntax) to retain the conveniences they serve. Gradually, these should be incorporated into a larger, regularized framework.
Reserving an annotation list for every object would be far too costly (and bloated); annotation lists should be reserved for objects that are more often manipulated by humans than used in actual computations (i.e. the objects nearest to the human user).
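As a rough sketch (in Haskell, with names that are purely illustrative and not part of any Tunes design), annotations could live in a side table keyed by object identity, so that only the few human-facing objects pay for them:

```haskell
import qualified Data.Map.Strict as Map

-- Hypothetical identifiers and annotations; the names are illustrative only.
newtype ObjectId = ObjectId Int deriving (Eq, Ord, Show)
type Annotation  = String

-- Annotations live in a side table, not in every object, so ordinary
-- computational objects carry no per-object annotation slot.
type AnnotationTable = Map.Map ObjectId [Annotation]

annotate :: ObjectId -> Annotation -> AnnotationTable -> AnnotationTable
annotate oid note = Map.insertWith (++) oid [note]

annotationsOf :: ObjectId -> AnnotationTable -> [Annotation]
annotationsOf oid = Map.findWithDefault [] oid
```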
Now, we want to be able to say things about the constructors an object was made with: to determine a superlist of the constructors used to make it, and/or to assert that the object can be expressed in a theory that does not include some constructor.
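A hedged sketch of what this could look like, using a toy term type whose constructor names are made up for illustration:

```haskell
import qualified Data.Set as Set

-- A toy object language; constructor names are purely illustrative.
data Term
  = Zero
  | Succ Term
  | Pair Term Term
  | App Term Term
  deriving Show

-- The set of constructors an object was built with.
constructorsUsed :: Term -> Set.Set String
constructorsUsed t = case t of
  Zero     -> Set.singleton "Zero"
  Succ a   -> Set.insert "Succ" (constructorsUsed a)
  Pair a b -> Set.insert "Pair" (constructorsUsed a `Set.union` constructorsUsed b)
  App a b  -> Set.insert "App"  (constructorsUsed a `Set.union` constructorsUsed b)

-- Can the object be expressed in a theory that lacks a given constructor?
expressibleWithout :: String -> Term -> Bool
expressibleWithout c t = not (c `Set.member` constructorsUsed t)
```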
The problem arises when we use meta-constructors that could be eliminated (e.g. by beta-conversion, inference, or more generally higher-order term rewriting), but that we do not want to actually be eliminated. Such meta-constructors are part of the implicitly instantiated, migratable meta-constructors for the object. Typical example: though we define the "true" natural integers by Peano's axioms, we want to manipulate them using the usual logarithmic-size representation. So even if we reason about a type Nat = 0 | S : Nat -> Nat, we can manipulate ordinary system integers, and other such types.
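For illustration (a Haskell rendering with conversion helpers of my own naming, not the author's notation), the Peano type can remain the definitional theory while computation uses logarithmic-size machine integers:

```haskell
-- The "true" naturals, defined by Peano's axioms.
data Nat = Z | S Nat

-- The representation we actually want to compute with: ordinary
-- logarithmic-size integers. The conversions witness that the views agree.
toInteger' :: Nat -> Integer
toInteger' Z     = 0
toInteger' (S n) = 1 + toInteger' n

fromInteger' :: Integer -> Nat
fromInteger' 0 = Z
fromInteger' n = S (fromInteger' (n - 1))

-- Addition stated on Nat, but evaluated on the efficient representation;
-- the Peano constructors are eliminable in principle, yet kept as the
-- reference semantics.
addNat :: Nat -> Nat -> Nat
addNat a b = fromInteger' (toInteger' a + toInteger' b)
```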
The system must provide, as part of its language semantics, access to a choice-axiom concept similar to the epsilon calculus. We explain this choice axiom below:
One way to understand programming with epsilons (epsilon being Hilbert's epsilon symbol, which denotes a witness for an existential predicate) is to view them as goals in the classical paradigm of higher-order logic programming (see languages like Prolog, Mercury, Lambda Prolog, Gödel, Maude): an epsilon is a hole in the well-founded construction of a computable object; the system uses tactics to fill the hole; and these tactics are made of rules that may themselves contain holes, so the filling of these subholes is generated as subgoals.
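A non-authoritative sketch of this reading, with made-up names: an epsilon is a hole carrying the predicate a witness must satisfy, and solving it is goal-directed search over tactics:

```haskell
-- A hole (epsilon) is an unfilled place in a construction, together with
-- the specification any witness must satisfy.
data Hole a = Hole { spec :: a -> Bool }

-- A tactic tries to produce a witness for a hole; in a fuller model it
-- could pose further holes (subgoals) and combine their solutions.
type Tactic a = Hole a -> Maybe a

-- A tactic that proposes a fixed candidate and checks it against the spec.
tryCandidate :: a -> Tactic a
tryCandidate x h = if spec h x then Just x else Nothing

-- Solving a hole is goal-directed search over a list of tactics,
-- much as a logic-programming system tries its clauses in turn.
solve :: [Tactic a] -> Hole a -> Maybe a
solve tactics h = foldr (\t acc -> maybe acc Just (t h)) Nothing tactics
```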
The main differences of the Tunes epsilon calculus from classical logic programming are that tactics can be dynamically defined from existing tactic constructs and libraries, and that they are first-class objects, themselves subject to manipulation, etc. There is also a guaranteed constraint that, should they succeed, they must return an object that rigorously matches the requested specification; hence, whereas Prolog programs only have a global semantics deducible from the totality of their rules, Tunes tactics can have a nice local semantics with local definitions.
If O is an object with a free variable e, then O[E/e], where E is a compliant tactic filling the 'hole' e, is a valid object and a specialization of O. E may itself dynamically generate holes, etc.
Tactics are constrained to return a result that respects the specification of the requested object, instead of being purely arbitrary tactics in a program with only global semantics.
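Continuing the same illustrative sketch (same made-up names, restated here so the snippet stands alone), the constraint can be made concrete: a tactic's result is only accepted as the substitution O[E/e] when it actually satisfies the hole's specification:

```haskell
-- Restating the shape of the earlier sketch: a hole carries its
-- specification, and a tactic tries to produce a witness for it.
data Hole a   = Hole { spec :: a -> Bool }
type Tactic a = Hole a -> Maybe a

-- An object O with one free hole 'e' is a function awaiting its witness.
type Open a o = a -> o

-- Fill the hole only if the tactic's result rigorously matches the
-- specification; the substitution O[E/e] is accepted, otherwise we fail
-- locally rather than return an arbitrary object.
fillChecked :: Hole a -> Tactic a -> Open a o -> Maybe o
fillChecked h tac open = case tac h of
  Just w | spec h w -> Just (open w)   -- the specialization O[E/e]
  _                 -> Nothing
```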
If the addition function is used in a context where the fact that the numbers are integers rather than floats does not intervene, then it is valid to abstract the number type into something more general than integers. You can then reapply this abstracted object to the floating-point number type; the result is not the abstraction itself, but if the overall abstract semantics of the new object are the same as those of the original one (i.e. you were doing computations on integers, and are now using the FPU to speed them up), then it is what I called a migration of the original object.
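A hedged Haskell illustration of this (my own example, not the text's): addition written for integers, abstracted over the number type, then re-applied at floating point so the FPU can do the work while the abstract semantics stay the same:

```haskell
-- Concrete object: addition on integers.
addInt :: Integer -> Integer -> Integer
addInt = (+)

-- Abstraction: the same object with the number type abstracted away.
addAbstract :: Num a => a -> a -> a
addAbstract = (+)

-- Migration: re-applying the abstraction to a different concrete type.
-- The abstract semantics (summing numbers) are unchanged; only the
-- representation and the hardware that executes it differ.
addFloat :: Double -> Double -> Double
addFloat = addAbstract
```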
It seems to me that references are not a good concept in a distributed system. Communication Channels, Linear objects, and projects seem to do more than references, in more precise ways:
If the minimal cost of emulating one "reflective" operation on top of a non-reflective language is a*l+b (where l is the length of the history), then the minimal cost of n reflective operations is on the order of a^n + ... + n*b, since each emulated operation must re-process a history lengthened by the previous ones. And that is best-case behavior for the emulation; even in the worst case, the reflective approach wins.
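Purely for illustration, under the assumption just sketched (each emulated step also lengthens the history by the factor a; an illustrative model, not a measurement), a small calculation shows the compounding:

```haskell
-- Cost of the k-th emulated reflective operation, assuming the history has
-- grown to a^(k-1) * l by the time it runs.
emulatedStepCost :: Double -> Double -> Double -> Int -> Double
emulatedStepCost a b l k = a * (a ^^ (k - 1) * l) + b

-- Total cost of n emulated operations: roughly a^n * l + ... + n * b,
-- i.e. exponential in n rather than linear.
emulatedTotalCost :: Double -> Double -> Double -> Int -> Double
emulatedTotalCost a b l n = sum [emulatedStepCost a b l k | k <- [1 .. n]]
```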
Plus there is the human cost of dealing with separate meta-levels that cannot be automatically unified and differentiated.