François-René Rideau <fare@tunes.org>
and David Manifold <dem@tunes.org>
Copyright © 1998-2002 by François-René Rideau and David Manifold.
This document is free software; you can redistribute it and/or modify it under the terms of the bugroff license as originally published by John Carter.
Source code is available at http://tunes.org/Tunes-FAQ.sgml.
This is an interactively evolving document: you are especially invited to ask questions, to answer questions, to correct given answers, to add new FAQ answers, to give pointers to other software, and to point the current maintainer to bugs or deficiencies in the pages. If you're motivated, you could even take over the maintenance of this FAQ. In a word, contribute!
To contribute, please ask questions or give answers on the tunes@tunes.org mailing-list, or even better, use our CVS system to submit actual modifications.
This document will be filled with answers when questions come. If it doesn't answer your question, then ask. If its answers are confusing then ask again.
All the TUNES documentation can be accessed on our web server and its mirrors: http://tunes.org/ (North America), http://uk.tunes.org/ (United Kingdom)
Of particular interest are our interactive CLiki and its Learning Lounge.
The TUNES Project is a project to develop a free software computing system based on a reflective design; the system as a whole is also to be named "TUNES".
We want to get rid of the obsolete or plain misdesigned idiosyncrasies of current computing systems that standardize things from the low-level on, as the legacy of systems designed first for resource-poor computers, then under the constraints of proprietary software.
For instance, we want to replace filesystems with orthogonal persistence of objects, compilation and administration of program binaries with automatic consistency management of dynamically optimized code, user-interface-driven programming with structure-driven user-interfacing, explicit manual networking with implicit automatic distribution. See below if you like feature lists.
The common point among these features is that they do not consist in something more that the user/programmer can do with some more work (implementing such "do" features is a "simple matter of programming", and requires no a priori system support besides the basic device drivers); they consist in what the user/programmer is able not to do, in what he does not have to care about while still achieving everything he does care about; they consist in relieving the human from work by letting the machine handle it, which is the essence of progress.
The ability to let the machine automatically do things that could previously be done only by manual interaction (which includes typing programs), and the ability to specify what one cares about while trusting the computer to handle all by itself what one doesn't care about: these are what the reflective design is all about.
Metaprogramming is the activity of manipulating programs that in turn manipulate programs. It is the most general technique whereby the programming activity can be automated, enhanced, and made "to go where no programming has gone before" (sic).
Reflection is the ability of systems to know enough about themselves so as to dynamically metaprogram their own behavior, so as to adapt themselves to changing circumstances, so as to relieve programmers and administrators from so many tasks that currently need to be done manually.
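As a rough, hypothetical illustration of behavioral reflection (in Python, which offers just enough run-time introspection; every name here is invented for the example and this is in no way TUNES code), a reflective facility lets a running system inspect and rewrite its own behavior without anyone editing its source:

```python
# Hypothetical sketch: a running system inspects and rewrites its own
# behavior -- the machine metaprograms itself instead of a human doing it.
import functools

class Service:
    def handle(self, x):
        return x * 2

def reflectively_instrument(cls, advice):
    """Wrap every public method of cls with `advice`, at run time,
    without touching its source code."""
    for name, meth in list(vars(cls).items()):
        if callable(meth) and not name.startswith("_"):
            @functools.wraps(meth)
            def wrapped(self, *args, _m=meth, **kw):
                return advice(_m, self, *args, **kw)
            setattr(cls, name, wrapped)

calls = []
# advice: record which method ran, then run it unchanged
reflectively_instrument(
    Service,
    lambda m, *a, **kw: (calls.append(m.__name__), m(*a, **kw))[1])

s = Service()
assert s.handle(21) == 42      # behavior is preserved...
assert calls == ["handle"]     # ...but the system observed itself
```

The point of the sketch is only the shape of the idea: the adaptation is decided and applied by code, dynamically, rather than by a programmer recompiling.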
These notions are explained in my article, Metaprogramming and Free Availability of Sources, that also explains why a reflective system must be Free Software. You may also consult the Reflection section of the TUNES Review subproject.
Reflection is important because it is the essential feature that allows for dynamic extension of the system infrastructure. Without reflection, you have to recompile a new system and reboot every time you change your infrastructure, you must manually convert data when you extend the infrastructure, you cannot mix and match programs developed using different infrastructures, and you cannot communicate and collaborate with people with different backgrounds. At the technical level, all this means interruption of service, unreliability of service, denial of service, and unawareness of progress; but at the psycho-social level, lack of reflection also means that people have to make irreversible static infrastructural choices, close their minds to infrastructural change, and not communicate with people who have made different infrastructural choices.
Reflection is a deep technical benefit, but also a deep psycho-social requirement, that allows individual progress despite historical choices by evolution of the conceptual infrastructure, and community progress despite variety of individual and unshared backgrounds by unification of infrastructures.
We feel that the distinction between programming language and operating system is most unwelcome, and mostly a lie:
one never programs in a programming language independently of an operating system, or in an operating system independently of a language (be it universal or limited to end-user interaction).
Language-independence and system-independence are myths: whatever the language, to communicate with other people and programs, it will have to go through the only services provided by the system, with the sole mutual invariants that are expressible in the standard system-interfacing language.
TUNES is a project for an integrated computing system. We must implement a new language and new concepts, because we want a system that is more expressive than what can be achieved with what exists today. We need to implement a new OS infrastructure, because the language must control the whole system so as to be able to reliably enforce its invariants. Moreover, we expect a tight integration of language and OS infrastructure to ultimately yield much enhanced performance, too. Of course, we may end up with a programming language and a low-level OS framework that could be used separately, but we feel that much of the added value of the project resides in their integration into one reflective architecture.
For a rationale behind the TUNES approach, why we feel it is a necessary step in the progress of computing, read the article Why a New OS?.
The functionality that we want TUNES to provide could be considered a challenge in itself. However, every single feature that we propose has been successfully implemented somewhere by somebody, although in complete separation from the other features. So the real challenge of TUNES is not to implement difficult or impossible features; it is to build a system that can consistently coordinate separately developed features that each tackle very different aspects of computing. And this is precisely what a High-Level Language capable of both logical reasoning (including quotients) and computational reflection is all about: being able to specify, implement, and verify arbitrarily complex a posteriori relationships between separate software components. The TUNES challenge thus lies in achieving the core reflective system above which currently impossible (because over-complex) combinations of features can be built. Because reflection hasn't been satisfyingly formalized in the past, our challenge includes academic research on the theoretical foundations of reflection as well as an implementation effort toward a full-featured system.
Yes and no, depending on what is implied by ``normal''.
If you mean that TUNES be implemented as a user-level process that uses GNU/Linux (or whatever existing OS you prefer) for its I/O, taking over some resources (disk space and CPU) that it manages on its own, then yes, this is conceivable, and this is what the LLL/OTOP subproject is all about. Now, if you mean that TUNES be expressed as a user-level process in a way that other programs could benefit from the features of TUNES through normal Linux means, then no, this isn't possible.
Even if implemented on top of GNU/Linux, the functionality that TUNES provides can only be available to applications written in TUNES and for TUNES, and only if no other Linux process tampers with TUNES data without going through a normal TUNES request. These restrictions greatly limit both the applicability and the reliability of such a TUNES implementation. This is why TUNES can unleash its full potential only by taking over the whole machine (or a private part of it), either as a well isolated set of daemons and file hierarchies, or as a standalone kernel; it cannot be merely a ``normal'' GNU/Linux application.
Sometimes people wonder how TUNES relates to work in Artificial Intelligence, since TUNES promotes having machines develop programs, which is a typically "intelligent" human undertaking. But there is not much more than that to say about TUNES and AI. TUNES does not pretend or strive to be AI technology, and does not depend on the existence of such technology. We do claim that we will automate some of the programming work currently done by humans, but precisely insofar as the work we automate isn't as intelligent as it is too often presumed to be. Of course, some AI researchers will say that the field of "AI" is precisely about pushing back the limits of what can be automated and no longer considered "intelligent"; however, such is actually the goal of any and all technology, so we are not specifically AI-like in this respect.
Actually, if TUNES has any specific tie to AI, as compared to technologies of the same kind, it is that our reflective infrastructure enables many previously impossible new developments, including "AI" kinds of developments, but far from exclusively so. All in all, TUNES is upstream of AI developments; we will use expert system technology, declarative programming style, etc., but we do not consider that AI or AI research in itself, rather the reuse of useful paradigms previously developed by AI researchers. Now, we're also convinced that a system such as TUNES will be a great tool for AI researchers. If AI is possible, we hope that TUNES will help toward reaching it; if it isn't, well, we hope that TUNES will help make the most of what is possible.
See the collaboration page on our web site. If you're interested, you may become a member.
Note that although the design and implementation of a computing system is a complex task, you don't need to have all the knowledge before you join: you may begin by participating in the Review subproject, which strives to detect and isolate the best of what Computer Science has to offer on the subject.
You can start by browsing and enhancing the Tunes CLiki, with the Learning Lounge being a good place to learn, and to help other people learn by contributing your feedback.
Other places to learn (not part of our project) include LAMBDA, the Ultimate weblog and the Open Directory Project.
We don't currently have anything like management, or a development process, or anything. Everyone is basically free to code what one wants, with the basic constraints that
We've succeeded in avoiding wasted time on internal disagreements, thanks to a charter that defines the mutual relations between members. The basic rule is that in case of a disagreement, the one who writes the code ultimately decides (if you want things your way and the coder disagrees, just write your own version). The result is that people who still have strong disagreements simply leave the project as freely as they joined, possibly splitting a new project off the same free software base. Since everything published by the project is freely available, no one loses anything by joining and leaving.
No, we're not incorporated in any country. We are just an informal free association of persons over the Internet. We do not have any official existence, as recognized by any government. On the other hand, we do not officially recognize any government, either, so we're mostly even with them.
Now, we're currently trying to start up a company that would be a haven for TUNES development. If you feel you can help, contact Faré by e-mail. We also intend to set up a not-for-profit organization (foundation or institute) that would foster development of TUNES and reflective free software systems in general; contact Tril by e-mail if you want to help.
As a short answer, you should consult our Activities page for up-to-date information.
On the administrative side, we're always looking for people and tools that can help us collaborate better and be more productive. The mailing-list and CVS tree are well-integrated into the web server, but we'd like to further integrate the web site into an interactive database of version-structured documents (especially for the Review subproject).
As far as theory goes, I (Faré) am working on the theoretical foundations of reflection. We're also following progress from people like Brian Rice. Otherwise, we're following general progress in CS research through our Review subproject.
Implementation of the system goes in two parallel ways:
On the one hand, Tril is building a prototype for the HLL, and Faré ought to be writing an open compiler for a Lisp dialect.
On the other hand, tcn is implementing retro, a first sketch of the low-level bricks needed for an OS, in i386 assembly and FORTH.
We don't have a well-defined language specification, even less a compiler, so don't hold your breath(TM).
Still, we hope all the good ideas that we have accumulated already make interesting reading for those who want to implement a new system. And of course, feel free to help TUNES become a reality!
To sum up the main features in technical terms, TUNES is a project to replace existing Operating Systems, Languages, and User Interfaces by a completely rethought Computing System, based on a fully reflective architecture.
Prominent features built around this reflective architecture will include unification of system abstractions, security based on formal proofs from explicitly negotiated axioms as controlled by capabilities, higher-order functions, self-extensible syntax, fine-grained composition, distributed networking, orthogonally persistent storage, fault-tolerant computation, version-aware identification, decentralized (no-kernel) communication, dynamic code (re)generation, high-level models of encapsulation, hardware-independent exchange of code, migratable actors, yet (eventually) a highly performant set of dynamic compilation tools (phew).
You may find precise definitions of these features in the TUNES Glossary (if it lacks a definition you need, please tell us, so we can add it; contributions welcome).
These are NOT buzzwords. These are technical terms, and, again, you may find precise definitions in the TUNES Glossary.
We did not choose the above terms because they are flashy or anything; at the time this description was written, none of them had any particular hype value. We chose these terms because they concisely describe the features we want for TUNES; you surely wouldn't like them to be replaced by fully expanded definitions in layman's terms!
If you're not convinced, you may compare to the pitiful list of real buzzwords below, that could sadly be applied to many an existing proprietary commercial operating system:
"A proven 32-bit cutting-edge state-of-the-art industrial-strength Y2K-compliant zero-administration plug-and-play industry-standard Java-enabled internet-ready multimedia professional personal-computer Operating System that is even newer and faster yet compatible, with a user-friendly object-oriented 3D graphical user interface, amazing inter-application communication and plug-in capability, an enhanced filesystem, full integration into Enterprise networks, an exclusive way to deploy distributed components, seamless network sharing of printers and files." (yuck)
The TUNES project being at a very early stage of development, it would be ridiculous to give a time-scale for the availability of some functionality or another.
However, TUNES being a Free Software (aka Open Source(TM)) project, people who feel the need for some functionality whatsoever are free to go ahead and write a TUNES module implementing this functionality, or otherwise wrap an existing implementation for use inside TUNES.
This is a generic answer about the compatibility of TUNES with existing systems and standards in general.
We are committed to embracing established standards when they exist. When we deem these standards unfit, which will no doubt often be the case, we will strive to provide a replacement, as well as an upgrade path, through dynamic emulation and/or static code conversion.
However, compatibility with a given system being but some particular kind of functionality, see the above question about when some functionality will be available in TUNES.
Now, TUNES is so different from existing low-level systems that the generic answer should be: no, TUNES won't be compatible with any given system. However, TUNES will nonetheless strive to run programs designed for existing systems through a combination of binary emulation and source conversion packages. Both binary emulation and source conversion are widely known and used techniques, and there are lots of programs running on e.g. GNU/Linux to emulate other architectures or operating systems, or to convert programs written for different languages/environments.
Alan Perlis once said: "A programming language is low level when its programs require attention to the irrelevant." We think the Unix API in particular, and actually the whole of current computing systems, are much too low-level, and unfit for general programming of high-level communicating agents.
However, we do acknowledge that an enormous mass of useful free (and unfree) software has been written (and is still being written) on top of this API, so that, until all this software is reengineered to work with better APIs, and even afterwards for the sake of retro-computing, we will provide compatibility with as much of this API as possible.
This compatibility will be achieved in two complementary ways, as described in the generic answer. The first, shorter-term way is to support binary emulation of Linux programs inside paranoid isolation boxes (whereas native TUNES code runs without such boxes), by developing our PIG subsystem (PIG IS GNU). The second way is to develop a source analyzer for C code that will allow decompiling C code into higher-level code that integrates with the TUNES system more smoothly, as well as detecting and removing bugs such as buffer overruns.
Now, "native" TUNES code won't use anything remotely resembling the Unix API. For instance, a large part of Unix deals with meddling with files and otherwise manipulating raw sequences of bytes, as a way to explicitly handle persistence and exchange of data in explicitly defined low-level formats; in contrast, native TUNES code will have orthogonal persistence of high-level data structures, and hence no need for such a low-level concept as a "file", no ubiquity of sequences of bytes.
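For contrast, here is a hedged toy sketch of that difference (in Python, purely illustrative; TUNES is not written in Python, and the `Persistent` class and its backing store are invented for the example). Under orthogonal persistence, the programmer writes ordinary assignments and never names a file or calls save(); the checkpointing is the system's business:

```python
# Toy "orthogonal persistence": every attribute write is transparently
# checkpointed by the system; the programmer never manipulates a file.
import os
import pickle
import tempfile

class Persistent:
    _store = os.path.join(tempfile.gettempdir(), "toy_image.pickle")

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        with open(self._store, "wb") as f:   # hidden by the system
            pickle.dump(self.__dict__, f)

    @classmethod
    def resume(cls):
        """Simulate a 'reboot': rebuild the object from the image."""
        obj = cls.__new__(cls)
        with open(cls._store, "rb") as f:
            object.__setattr__(obj, "__dict__", pickle.load(f))
        return obj

counter = Persistent()
counter.value = 41
counter.value += 1            # ordinary assignment; no explicit save

revived = Persistent.resume() # after a "reboot", state is simply there
assert revived.value == 42
```

A real orthogonally persistent system would of course do this efficiently and for the whole object graph, not one object at a time; the sketch only shows the absence of explicit file handling from the programmer's point of view.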
Finally, we do intend to use free Unices (Linux, *BSD) as both cross-development and underlying run-time environments until TUNES is mature enough to fully replace them, so that we will rather have a TUNES-over-Unix compatibility layer than a Unix-over-TUNES compatibility adapter.
We do hate Windows and similar misdesigned, bad-quality, low-level proprietary software; we particularly hate the constraint of binary and hardware compatibility induced by their business model, which is perhaps the main brake on progress in the computer industry. But even then, we have hopes that, in the long run, TUNES will support compatibility with even the lowliest existing systems, including Windows, if there is any use for it (which might not be the case anymore by the time we're ready to handle it).
It seems that the shortest path to such support would be "simply" to run WINE inside our PIG subsystem (GNU/Linux wrapper); further integration into TUNES would be achieved by patching (or otherwise metaprogramming) WINE and/or the subsystem.
We have no desire to write such a compatibility layer ourselves from scratch, and considering the huge size of the bloated legacy Win32 API, we see no interest in wasting resources on efforts redundant with WINE. If what you want is free software Windows compatibility, do have a look at WINE. If there is something in the WINE project you dislike, you may improve it by contributing to WINE, which is a free software project. And if the WINE people ignore your contributions (why would they?), you may split their project, since their software is free! In any case, people who claim to reimplement Win32 compatibility without even citing the WINE project and giving reasons not to just collaborate with it are being ridiculous.
We know of no system that has the general feel of what we want TUNES to be. However, we can get a rough idea from former or surviving integrated development systems where the source code is dynamically available. Such systems include those used on the Lisp Machines of old (some of them also Smalltalk machines), Squeak (a free software implementation of Smalltalk), and maybe some Self, Forth, or Oberon systems, or Pliant. In these systems, program code is "live"; incremental modifications take immediate effect. Software is not "dead" in files that must be statically compiled and executed from scratch every time.
Now, the goals for TUNES differ notably from the above systems: firstly, all these systems were mostly centered around one global language, one image, one environment, etc. This means that as far as metaprogramming goes, these systems are mostly autistic; they have no support for multiple nested computational systems, or for computing or reasoning on well-defined closed subsystems. Yet such well-defined closed subsystems are necessary for managing multiple users, for security concerns, for mixing real-time and ordinary computations, for proving meaningful whole-system program properties, etc. Also, none of these systems (except experimental ones that were never used, or the good old deceased Eumel) are or were orthogonally persistent - you can have variables persist, but you must dump images explicitly, which is a very costly operation (however, the Lisp Machine did have a nicely integrated transactional object database).
See the above question and answer for a general response. More specifically, key features in which TUNES will differ from Lisp Machine systems of old are:
Also, there are lots of nifty features and applications that Lisp Machine systems had that we will not dare try to implement for quite some time, from the windowing interface to MACSYMA, from the 3D software to the documentation tools. We do hope that we will eventually have some of these, and that someone somewhere will implement such things on top of TUNES; we might even develop skeletons of such software, for our own purpose, or interface to third party such software; but they do not constitute the heart of the project.
You'll have to be more specific about your question. What do you mean "Lisp-based"? If you mean built on top of Lisp, then yes, the one Tril is writing now is on top of Common Lisp. Faré also wants to write TUNES starting with a dialect of Lisp. The finished TUNES won't necessarily be like Lisp, but you will be able to run Lisp, as well as many other languages, in TUNES.
If you're asking about what I should be writing first as part of TUNES, then I guess the answer, to me, is some open infrastructure for code transformation based on rewrite logic. This infrastructure would get bootstrapped from Common Lisp: a Lisp processor would transform rewrite rules into some Lisp, and then, this same transformation would be rewritten using rewrite rules; macros can help factorize things a lot to minimize this bootstrap. Once the basic infrastructure is bootstrapped, we'll use the rewrite rules to transform a suitable Lisp dialect into some intermediate representation then into some annotated assembly and finally into binary. The Lisp dialect would have some kind of linear Lisp subdialect which would allow for explicit memory management, which in turn would be used to implement memory management (including GC and persistence) for the rest of the system.
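To make the rewrite-logic idea concrete, here is a hedged toy sketch (in Python rather than Lisp, and in no way the actual TUNES infrastructure): terms are nested tuples, rules pair a pattern (with `?`-prefixed variables) with a template, and rewriting proceeds bottom-up to a fixpoint.

```python
# Toy term rewriting: terms are nested tuples; variables start with "?".

def match(pat, term, env):
    """Return an extended binding environment if pat matches term,
    else None."""
    if isinstance(pat, str) and pat.startswith("?"):
        env2 = dict(env)
        env2[pat] = term
        return env2
    if isinstance(pat, tuple) and isinstance(term, tuple) \
            and len(pat) == len(term):
        for p, t in zip(pat, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pat == term else None

def subst(tpl, env):
    """Instantiate a template under a binding environment."""
    if isinstance(tpl, str) and tpl.startswith("?"):
        return env[tpl]
    if isinstance(tpl, tuple):
        return tuple(subst(x, env) for x in tpl)
    return tpl

def rewrite(term, rules):
    """Apply rules bottom-up until a fixpoint is reached."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    for pat, tpl in rules:
        env = match(pat, term, {})
        if env is not None:
            return rewrite(subst(tpl, env), rules)
    return term

rules = [
    (("+", "?x", 0), "?x"),   # x + 0 -> x
    (("*", "?x", 1), "?x"),   # x * 1 -> x
    (("*", "?x", 0), 0),      # x * 0 -> 0
]
assert rewrite(("+", ("*", "a", 1), 0), rules) == "a"
```

The bootstrap described above would do exactly this kind of thing in Lisp, except that the rewrite engine itself would eventually be expressed as rewrite rules.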
So at the end of the bootstrap phase, we'd mostly have some Lisp dialect with rewrite rules; but more than that, we'd also have a code transformation infrastructure that we can work toward making generic, instead of specializing it toward compiling just one Lisp dialect. Actually, we WOULD be specializing it, but not manually; rather, we would try to systematically develop metaprogramming tools that would automate the specialization out of declarative descriptions and specialization hints.
Our mid-term goals would be to work out the initial bootstrapped system into such a generic metaprogramming platform that can be used not just to dynamically compile itself into binary, but to do arbitrary meta-level manipulations of arbitrary structures into arbitrary other structures. On the syntactic front, we'd begin manipulating lowly HTML and XML, then jump on to more arbitrary grammars, including subsets of other programming languages, as well as any ad-hoc stuff we need to interoperate with the external world.
This could be the basis for semi-automatically interfacing with code from other systems, or even semi-automatically stealing it, starting with the nicer languages and moving toward more complexity (Scheme, Haskell, CAML, Mercury, Common Lisp, Java, etc., culminating with C; C++ is too horrid to ever salvage its code into TUNES).
On the semantic front, we'd develop (or port) graph manipulation libraries, with systems and metasystems for giving semantics to graphs: computed vs inherited attributes, type systems (abstract interpreters), combination of higher-order modules or rewrite systems; then we can investigate type-directed transformation techniques, deforestation, partial-evaluation, and other well-known algorithmic optimization techniques.
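As one tiny, hypothetical illustration of such transformation techniques, here is the flavor of partial evaluation in Python (all names invented for the example): a metaprogram that, given a statically known exponent, emits and compiles a specialized residual program with the loop unrolled away.

```python
# Partial evaluation in miniature: specialize a generic power function
# when the exponent is known statically.
def specialize_pow(n):
    """Metaprogram: given a static exponent n, emit source for a
    residual program with the multiplication loop unrolled."""
    body = "1"
    for _ in range(n):
        body = f"({body}) * x"
    src = f"def pow_{n}(x): return {body}"
    ns = {}
    exec(src, ns)              # "compile" the residual program
    return ns[f"pow_{n}"]

pow3 = specialize_pow(3)       # residual: x * x * x (no loop left)
assert pow3(2) == 8
```

A real partial evaluator works on the program representation rather than on strings, but the division of work is the same: the static input is consumed at specialization time, leaving a faster residual program for the dynamic input.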
On the heuristic front, we'd develop expertises for declaring multiple implementation tactics for given concepts, then specifying combinations of them as a way to compile code; on the one hand, this would lead to declarative compiler traces that can be used by many metaprograms (invariant checkers, interface extractors, automatic code instrumentators for GC or debugging, etc.). Then there would remain to develop expertises to automatically select the best implementation techniques for a given structure, according to cost models and usage constraints (e.g. even for the "same" abstract concept of an array, it would choose different algorithms to handle 16KB chunks of real-time data, 1MB of interactive gaming stuff, or 1GB worth of symbol-crunching databases).
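The tactic-selection idea can be sketched in a few lines (a hedged Python toy; the tactic names, size thresholds, and constraints are all invented for the example, and a real expertise would use measured cost models rather than hard-coded cutoffs):

```python
# One abstract concept ("a sequence of samples"), several implementation
# tactics, chosen by a crude cost model over declared usage constraints.
TACTICS = [
    # (name, fits(size_bytes, realtime))
    ("fixed_ring_buffer", lambda sz, rt: rt and sz <= 16 * 1024),
    ("growable_array",    lambda sz, rt: not rt and sz <= 1 << 20),
    ("disk_backed_btree", lambda sz, rt: not rt),
]

def choose_tactic(size_bytes, realtime):
    """Pick the first tactic whose declared constraints are satisfied."""
    for name, fits in TACTICS:
        if fits(size_bytes, realtime):
            return name
    raise ValueError("no tactic satisfies the constraints")

assert choose_tactic(16 * 1024, realtime=True)  == "fixed_ring_buffer"
assert choose_tactic(1 << 20,   realtime=False) == "growable_array"
assert choose_tactic(1 << 30,   realtime=False) == "disk_backed_btree"
```

The point is that the choice is data driven: the program declares the abstract concept and its usage constraints, and a metaprogram, not the programmer, commits to a representation.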
We'd then develop some performance introspection technique, so that decisions (manual and automatic) can be based on actual measurements instead of the semi-educated guesses of a model. Actually, models are of course both useful and unavoidable, but we should eventually extract better cost models from actual dynamic measurements and rough meta-models. Due to resource limitations, in the mid-term we'd only develop proofs-of-concept and/or kludges for all these, as we need them; but keeping in mind that the goal is to clean them up eventually, at the cost of time-compatibility (who cares about time-compatibility when you have the whole-system source to metaprogram into a new coherent version and recompile?).
At the same time in the mid-term, we'd also have to make the system an expedient choice for everyday programming, by interfacing with legacy code: we'd provide outside programmers with a robust persistent dynamic object system that they can use over the network as an "intelligent database", while we'd develop wrappers around existing libraries, to be able to reuse the wealth of existing code (from picture/sound/animation/PostScript/TeX/HTML/whatever viewers to bignum/linear algebra/cryptography/X-window interface libraries to games or foreign platform emulators).
The result of such an effort would be to have a system that could integrate the functionality of Linux in a coherent way, where functionality is wrapped into subprocesses with well-declared semantics, so that the reflective TUNES infrastructure can do whole-system invariant enforcement, automatic robustification of libraries (putting them into isolated subprocesses; intercepting system interaction and doing additional safety checks; detecting failure and restarting them; etc.). This can bring a great incremental improvement to Unices, and lead Unicians to move to TUNES first as a way to better manage their whole systems or just subsystems (instead of having the horrid mess in /etc/* and ~/.*).
In the long term, well, we'd have to fully develop and seamlessly integrate all the ideas and techniques that have been proposed as mid-term goals. Continuing on that trend, we'd get the system to be based on generic declarative descriptions, and to be self-compiled through a series of keener and keener expertises, based on dynamically obtained measures automatically taken according to profiling expertises, using heuristics such as genetic algorithms, constraint propagation (a la constraint-based logic programming), dynamic monitoring to detect deadlocks or optimization opportunities, etc.
Actually, by properly developing the meta-level expertises, we could have a seed proto-AI that would be expert in developing programs, and could be used to incrementally improve on itself until it is able to fully rewrite itself and maybe move toward more AI. Other directions for improvement would be to engineer into the system things as various as checkable formal program proofs, integration of the whole toolchain into both a proof system and a symbolic mathematical manipulation system, use of formal knowledge about programs in program transformations, decompilation of human-maintained low-level C programs into high-level TUNES programs, making the system self-hosted and stand-alone (no more dependency on the C legacy), running on bare hardware by stealing or converting C drivers from Unices, compiling to (D)FPGA, automatically extracting man-machine interfaces (GUI-style as well as voice-based, text-based, or anything that suits the needs of the user) from a dynamic combination of program structure, interface declarations, and measurement of actual use, etc.
There are just so many things I'd like to experiment with in TUNES! As you see, the ambition of TUNES is not per se to do things that have not been done in the past. Its ambition is to do them in a new way, one that hopefully allows them to seamlessly integrate different points of view. My general wish is that it becomes true in computer science, as it is true in mathematics, that abstract theorems proved once by someone somewhere can thereafter be used by anyone anywhere at a finite one-time cost. This is not currently true, due to programs being tied to low-level considerations, and there being no metaprogram to automatically adapt programs from one low-level context to another, even less ones to reason about programs and manipulate them with any kind of dynamically enriched sense of intelligence and obviousness. In other words, there will always be more than one way to do things, but you should be able to prove, and the system should be able to understand, that they are about the same thing, and that one way can be replaced by another, depending on what suits the context best.
The name "TUNES" is a recursive acronym, which stands for "TUNES is a Useful, Nevertheless Expedient, System". The name demonstrates our commitment to build a computing system according to long-term concerns about how computing should be, as well as practical requirements and opportunities of today's computing.
Actually, the "N" used to stand for "Not", meaning that we conspicuously do not strive toward expediency in the restricted meaning of "being good in the short term with overwhelming bad side-effects in the long run". It was changed to "Nevertheless" to insist on the fact that, despite our attachment to the long-term implications of our system, we are conscious that to live and survive, we must be strong on short-term issues; so we aim not only at long-term utility, but also at short-term expediency, in its extended meaning that does not preclude said long-term utility.
Currently, we have the 100% pure HTML virtual reflective logo that you can see on our page. People have suggested or submitted eye-candy logo pictures, and we have been queueing these suggestions and submissions; but we won't even consider changing the logo until we have an actual serious code base we can be proud of, one that could honestly bear a logo. Otherwise, the project would be all fluff and no stuff.