Part I: Operating Systems and Utility

  1. Utility
    We herein call useful anything that saves time, effort, money, strength, or anything else valuable, in the long run and for many people. Utility is strictly opposed to Harmfulness, but we also oppose it to mere Expediency: something is called expedient if it saves such valuable things, but usually only in the short term, for special, personal, temporary purposes, and not necessarily for general, universal, or permanent purposes.

    Utility and Expediency are relative, not absolute concepts: how much you save depends on a reference, so you always compare the utility of two actions, even though one of them might be implicit. The utility of an isolated, unmodifiable action is therefore meaningless. In particular, from the point of view of present action, the utility of past actions is a meaningless concept; however, the study of the utility that such actions may have had when they were taken can be quite meaningful. More generally, Utility is meaningful only for projects, never for objects. Projects here must not be understood in the restricted meaning of conscious projects, but in the more general one of a consistent, lasting, dynamic behavior.

    Note that projects can in turn be considered as objects of a more abstract "meta-" system; but the utility of the objectized project then becomes an object of study for (an extension of) the metasystem, and should not be confused with the utility of the studying metasystem. The sciences of man and nature (history, biology, etc.) lead to the careful study of terrifying events and dangerous phenomena, but the utility of such study is proportional to some kind of amplitude of the studied projects, rather than to their utility from the point of view of their various actors.

    Utility is a moral concept, that is, a concept that allows pertinent discourse on its subject. More precisely, it is an ethical concept, that is, a concept colored with the ideas of Good and Duty. It directly depends on the goal you have defined for the general interest. Actually, Utility is defined by the moral concept of Good just as much as Good is defined by Utility; to maximize Good is to maximize Utility.

    Like Good, Utility need not be a totally ordered concept, where you could always compare two actions and say that one is "globally" better than the other. Utility can be made of many distinct, sometimes conflicting criteria. Partial concepts of Utility can be refined in many ways to obtain compatible concepts that would solve more conflicts, gaining separation power, but losing precision, as the formal sketch below illustrates.
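
    To make the last point concrete, here is a minimal formal sketch of ours (the notation is not the author's): suppose utility is measured by several independent criteria at once, and actions are compared componentwise.

        % Our illustration, not the author's: componentwise (Pareto)
        % comparison of k criteria yields only a partial order.
        \[
          (a_1,\dots,a_k) \preceq (b_1,\dots,b_k)
          \iff a_i \le b_i \ \text{for all } i \in \{1,\dots,k\}
        \]
        % Two actions with conflicting criteria, e.g. (cheap, slow)
        % versus (expensive, fast), are simply incomparable.  Any
        % refinement into a total order (say, a weighted sum of the
        % criteria) decides more such conflicts, but forgets the
        % distinctions between the individual criteria: more
        % separation power, less precision.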

    However, a general theory of Utility is beyond the scope of this article (for those interested, a first sketch can be found in J.S. Mill's Utilitarianism). Therefore, we'll herein admit that all the objects previously described as valuable (time, effort, etc.) are indeed valuable as far as the general interest is concerned.


  2. Information
    Judgements of Utility deeply depend on the knowledge of the project being judged, of its starting point, of its approach. Now, humans are no gods who have universal knowledge to base their opinions upon; they are no angels who, by supernatural ways, receive infused moral knowledge. Surely, many people believed it, and some still do. But everyone's personal experience, and mankind's collective experience, History, show how highly improbable such things are. Without any further discussion, we will admit an even stronger result: that, by the finiteness of the structure of the human brain, any human being, at any moment, can only handle a finite amount of information.

    This concept of information should be clarified. The judicial term from the Middle Ages slowly took on the informal meaning of the abstract idea of elements of knowledge; only with seventeenth-century mathematicians could a formal meaning timidly appear, which surprisingly found its greatest confirmation in the nineteenth century with thermodynamics, a branch of physics that particularly studied large dynamical systems. Information could thus be formalized as the opposite of the measurable concept of entropy. The information we have irreversibly decreases as we look forward or backward in time beyond the limits of our knowledge, the present lying on this side of those limits. That is, information is a temporal notion, valid only in dynamical systems. And such is Life, a dynamical system.
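
    As a sketch of the standard formalization (our illustration; the text itself stays informal), the entropy of a probability distribution p over the possible states of a system measures the information we are missing about it:

        % Our illustration: entropy of a distribution p, in bits.
        \[
          H(p) = - \sum_{x} p(x) \log_2 p(x)
        \]
        % Learning the actual state removes H(p) bits of uncertainty;
        % as a dynamical system evolves beyond what we have observed,
        % the entropy of our description of it grows, i.e. our
        % information decreases.  This is the sense in which
        % information is the opposite of entropy.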

    As with Utility before, there need not be a universal total ordering on Information; what we most often have is partial orderings, and each of us has to arbitrarily choose finer orderings when basing a decision upon equivocal information. For information is also a moral concept, though it was not until the late twentieth century, with cybernetics, that the deep relationship between information and morals explicitly appeared. Few people remember cybernetics as anything other than a buzzword associated with the appearance of information technology, but we invite the reader to consult the original works of Norbert Wiener on the subject [Wie]. However, this relationship had been implicitly discovered by liberal economists of the eighteenth and nineteenth centuries, then rediscovered by biologists studying evolution, and surely, many have always intuitively felt this relationship. What allowed it to be made explicit might be the relativization of ethics as something that was not to be taken as known and granted, but first as unknown and more recently as incomplete.

    Moral judgements depend on the information we have, so that in order to make a better judgement, we must gather more information. Of course, even though we might have rough ways to quantify information, this doesn't make elements of information of the same quantity interchangeable; what information is interesting depends on the information we already have, and on the information we can expect to have. Now, gathering information itself has a cost, which physicists may associate with free energy, which is never zero, and which must be taken into account when gathering information.

    Because the total information that we can handle is limited, any inadequate information actually gathered is gathered to the prejudice of more adequate potential information. Such inadequate information is then called noise; noise is worse than lack of information, because it costs resources that won't be available for adequate information. Thus, in our dynamic world, the quest for information is itself subject to criteria of utility, and the utility of information is its pertinency, its propensity to favorably influence moral judgements. As an example, the exact quantization of information, when it is possible at all, itself requires so much information that it is seldom worth seeking. Of course, pertinency in particular is no more an absolute concept than utility in general. When a criterion for pertinency is implicitly available, we might use the term "Information" for raw information, and "Knowledge" for pertinent information.

    So to gather information in better ways, one must scan the widest possible space of elements of knowledge, which is the essence of Liberty; but the width of this knowledge must be measured not in terms of its cost or of its interest in case it were true, but in terms of its pertinency and of its solidity, which is the essence of Security. These are dual, inseparable aspects of Knowledge, which get their meaning from each other. Any attempt to privilege one over the other is pointless, and in the past and present, such attempts have led to many a disaster: trying to promote some liberty without corresponding security leads to chaos, whereas promoting security without associated liberty leads to totalitarianism.

    Reality and potentiality, finiteness of knowledge, the world as a dynamic system, the relation between information and decision, the duality of liberty and security: all these are parts of a consistent (we hope) approach to the world, which we will take for granted in the rest of this paper, at least on the considered subjects. A more detailed study of these moral issues per se would certainly be quite interesting, but the authors feel that such a study ought to be postponed to another paper, and invite readers to refer to the bibliography (and contribute to it), so as to focus on the goal of this article, the discussion of Computing Systems, to which these moral concepts are but a means.


  3. Computers
    Computers are machines that quickly handle large amounts of exact, discrete information, and interact with the external world, according to sets of exact, discrete directives called "programs". This makes them most suited to applying concepts from the above-mentioned information theory; everywhere else in life, information is approximate, continuous, and difficult to quantize. Actually, the histories of information theory and of computers, which together are information technology, are deeply interrelated; but these histories escape the subject of this article. Just note that being a computer is an abstract concept independent of the actual implementation of a computer: if most current computers are made of silicon transistors, their ancestors were made of metallic gears, and no one knows what their remote successors will be made of.

    Computers are built and programmed by men and for men, with earthly materials and purposes. Hence the utility of computers, among their other characteristics, is to be measured like the utility of any object and project in the human-reachable world, relatively to men. And, because what computers deal with is information, their utility lies in what will allow humans to access more information in quicker and safer ways, that is, to communicate better, through computers, with other humans, with nature, with the universe.

    Again, Utility criteria should not only compare the visible value of objects, but also their cost, often invisible, in terms of what objects were discarded for them. The cost of computer information includes the time and effort spent manufacturing or earning the money to buy computer hardware and software, but also the time and effort spent in front of the computer explaining to it the work you want done, and the time and effort spent verifying, correcting, and trying again the obtained computer programs, or just worrying about the programmed computer crashing, preparing for possible crashes, and recovering from actual ones. All this valuable free energy might have been spent much more profitably doing other things, and is part of the actual cost of computerware, even when not taken into account by explicit financial contracts. We will stress this point later.

    So to see whether computers in general are a useful tool, we can take the lack of computers as the implicit reference for computer utility, and see how computers benefit mankind or not, comparing the result and the cost. Once properly programmed, computers can do quickly and cheaply large amounts of simple calculations that would otherwise have required a large number of expensive human beings to manage (which is called "number crunching"); and they can repeat their calculations relentlessly without committing any of those mistakes that humans would undoubtedly have made. When connected to "robot" devices, those calculations can replace the automatic parts of work, notably in industry, and relieve humans from the degrading tasks of assembly-line work, but also control machines that work in environments where no human would survive, and do all that much more regularly and reliably than humans would. Only computers made possible the current state of industry and technology, with automated high-precision mass production, and sciences of the very small, the very large, and the very complex, which no human senses or intelligence could ever have approached otherwise.

    Thus, computers save a great amount of human work, and allow things that no amount of human work could ever bring without them; not only are they useful, but they are necessary to the current state of human civilization. We let the reader meditate on the impact of technology on her everyday life, and compare it to what her grandmother's life was. That this technology may sometimes be misused, and that the savings and benefits of this technology may be misdistributed, is a completely independent topic, one that holds for virtually any technology, and one that will not be further commented upon in this article.


  4. Limits of Computers
    Some see in computers' utility only a matter of raw performance, a quantitative progress, but not a qualitative one; at least, nothing qualitatively better than what other tools bring about. However, we already saw that beyond their performance, beyond the volume of information handled and the speed at which it is handled, which already suffice to make computers a highly desirable tool, computers bring something fundamentally more important than raw information or raw energy, something that is seldom explicitly acknowledged: a new kind of reliability that no human effort can achieve.

    Not only can computers perform tasks that would require enormous amounts of human work without them, and do things with more precision than humans, but they do it with a reliability that no human can provide. This may not appear as very important, nor even as obvious, when the tasks undertaken are independent of one another, when erroneous results can be discarded or will somehow be compensated by the mass of good results, or when on the contrary the task is unique and completely controlled by one man. But when the failure of just one operation involves the failure of the whole effort, when a single man cannot guarantee the validity of the task, then computers prove to be inestimable tools by their reliability.

    Of course, computers are always subject to failures of some kind, to catastrophes and accidents; but computers are not worse than anything else with respect to such events, and can be arbitrarily enhanced in this respect, because their technology is completely controlled. Not only is this not a problem particular to computers, but computers are most suited to fight this problem: unpredictable failures are the doom of the world we live in, where we always know but a tiny, finite piece of information, so that even if we can sometimes be fairly sure of many things, we can never be completely sure about anything, as we can never totally discard the possibility of some unexpected external force significantly perturbing our experiments. The phenomenon is most pronounced with humans, where every individual is such a complex system by himself that one can never control all the parameters that affect him, nor ever perfectly reproduce them; so there are always difficulties in trusting a man's work, even when his sincerity is not in doubt.

    On the contrary, by the very mechanical nature of their implementation, and by the exactitude of their computations, which derives from their very abstract design principle, computing is both a predictable and a reproducible experiment; it can be both mathematically formalized and studied with the tools of physicists and engineers; computer behavior is both producible and reproducible at will; and this is what founds computer reliability: you can always check and counter-check a computer's calculations, and experiment with them under any condition you require before you trust them. We see that computers allow reliability to be accumulated like nothing else in the human-reachable world, though this reliability must be earned the hard way, by ever-repeated proofs, checks, and tests. In fact, this reliability is one of the two faces of information, which is what information technology is all about, of which computers as we know them are the current cutting edge.

    The problem with computers, the absolute limit to their utility, is that by the same mechanical virtues that make them so reliable, they can replicate and manipulate actual information, realize potential information into actual information, but they can't create information. Any information that lies in a computer derives from the work of men who built and programmed the computers, and from the external world with which the computer interacts by means of sensors and various devices.

    Hence the limits of computers are men. If a man misdesigns or misprograms a computer, if he feeds it improper data, if he puts it in an environment not suitable for correct computer behavior, then the computer cannot be expected to yield any correct result. One can fully trust everything one sees and tests about a computer; but as computers grow in utility and complexity, there are more and more things one cannot personally see and test about them, so one must rely on one's fellow human beings to have checked them. Again, this is not worse than anything else in the human world; but for computers as well as for anything else, these are hard limits of which we should all be conscious.


  5. Computing as a Project
    Man is surely a limit to the power of computers, in that computers are made by man, and can be no better than man makes them. But this is not to be understood individually, as computers are not each the work of one man, but are collectively the work of mankind. Computing is a global project.

    Like any product of civilization, computers depend on a very rich technological, economical, and social context before they can even exist, not to speak of their being useful. They are not the work of a single man, who would be born naked in a desert world and would build every bit of them from scratch. Instead, they are the fruit of the slow accumulation of human work, in which the foundations of civilization participate at least as much as the discoveries of modern scientists. The context is so necessary that most often it is implicit; but one shouldn't be misled by this implicitness into forgetting or denying the necessity of the context. Actually, this very context, the result of this accumulation process, is what Civilization is.

    But again, the characteristic of information technology is that the information you manage to put into it can be made to decay extremely slowly, as compared to objects of the same energy: we can expect data entered today into a computer, if it is interesting enough to be copied at least once every ten years, to last as long as information technology exists, that is, as long as human civilization persists. Of course, huge monuments like the Egyptian pyramids are works of man that decay more slowly, need less care, and resist harsher environments, so they may last longer; but their informative yield is very weak compared to their enormous cost. If only slowness to decay were meant, and not informational yield, then nuclear waste would be the greatest human achievement!

    Now computing has the particularity, among the many human collective projects, and as part of mankind being its own collective project, that it can be contributed to in a cumulative way over the years. For this reason, we can have the greatest hope in it, as a work of the human race, as a tool to make masterpieces last longer, or as a masterpiece of its own. Computing has already gained its position among the great inventions of Man, together with electricity, writing, and currency.

    This whole paper tackles the problems of software as an evolving domain. If software ever settles and stabilizes, or comes to a very slow evolution, then the phenomena described in this paper may cease to be dominant in the considered domain. However, because life is movement, as long as there is life, there will be a domain where these phenomena are of importance. Besides, the authors are confident that computer software, whatever it will be like, will be a lively domain until it possibly reaches AI.


  6. Dream of Artificial Intelligence
    Let us justify the persistence of Computing as a Project, even when faced with this alleged doom of it: Artificial Intelligence.

    Many dream, hope, predict, or expect that some day, the cumulated work invested in computing will allow humans to create some computer-based being, which they call "artificial intelligence", or more simply, AI. Such AIs would rival their human creators in "intelligence", that is, in their creativity, their ability to undertake useful projects independently and voluntarily; they dream (or some of them have the nightmare) that mankind will engender a next step in evolution by non-genetic means. According to some people, such AI would be the End of Computing as a Project, since humans wouldn't need to program anymore, leaving the task to AIs.

    Now, should this dream come true (the eventuality of which won't be discussed in this article), then by Information Theory's version of Cantor's diagonal argument, the workings of AIs must globally surpass the understanding of the AIs themselves, and hence of humans, if the AIs are endowed similarly to humans: no finite mind can contain a complete model of itself, much less of a greater mind. This holds even though the general principles behind the functioning of AIs might be understood: as with physics, the knowledge of elementary laws doesn't imply understanding of whole phenomena, for the formidable context involved in anything but the simplest applications (and the most useless, as far as "intelligence" is meant) would be impossible for even the most developed human or artificial brain to apprehend.

    The latter argument does not question the possibility of AI as a human work: there is ample evidence that systems governed by a few human-understandable, human-enforceable rules can generate chaotic behavior beyond any understanding -- if anyone "fully" understands a human work like the stock market, please contact me, so that I may learn how to become a billionaire. Rather, the argument means that if we replace "human" by "sentient" throughout the current discussion, with AIs being a new kind of different (superior or not) sentient beings, the situation of computing would remain essentially the same.

    Indeed, Computing is an activity characterized by exact formalizability and by as complete an understanding as desired of running programs, with the choice and evolution of the programs being directed by human (sentient) beings. AI, if it ever appears, will not quite be computing as we know it anymore, yet will need Computing even more than we do now. Maybe this AI will use a computer as an underlying structure, and will need the most advanced computing techniques to be deployed; but the AI itself will not be a computer as we defined it, and querying the AI will not be computing anymore, though some may think that the ultimate goal of computing is to transcend computing in such a way.

    Anyway, the current design of computing systems, as we will show, greatly limits the potential of computer software to what a few programmers can fully understand; hence, until this design is replaced, AI will stay a remote dream. And if and when this dream comes true, the problems we describe may be food for thought for the AIs that would replace current human readers. Computing is and will always be a Project for sentient beings, be they AIs instead of humans.


  7. Computing Systems
    We herein call Computing System any dynamic system where what interests us is the exact information contained in part of it. Note that a computing system is not quite the same as a computer system. In a computer system, the computer is a static tool used in the project, but not part of it. In a computing system, the computer (or more probably only its program) is the very dynamic project being considered. Computer systems have been the subject of study of many very proficient people, who have published a great number of most interesting books on them. Computing systems are the subject of this article, on which we'll try to shed new light.

    As an example, a given modern washing machine is often a very useful computer system, where a static program manages the operations; but its utility lies entirely in the washing of clothes, so as a computing system it is not excessively thrilling. The development of washing machines, however, contains a computing subsystem of its own, which is the development of better washing programs; this computing system might not be the most exciting one, but it is nevertheless an interesting one.

    Similarly, while a desktop computer alone might be a perfect computer system, it won't be a very interesting computing system until you consider a man, perhaps one among many, sitting in front of it and using it. And conversely, a man alone without a computer might have lots of ideas, but he won't constitute a very effective computing system until you give him the ability to express them in computer hardware or software. Note that desktop publishing in a business office is considered as involving some kind of software, but that as long as this information is not much spread, copied, and manipulated by computers, as long as the writing is very redundant yet not automated, it is a computing system of very little interest. Developing tools to automate desktop publishing, on the other hand, is an interesting computing system; even desktop publishing itself, if it allowed one to take any tiny active part in the development of such tools, would be an interesting computing system; unhappily, there is a quasi-monopoly of large corporations on such development, which we'll investigate in the following chapters.

    A most interesting Computing System, and the one that particularly interests us, is made of all interacting men, computers, and particles in the Universe, where the information being considered is all that encoded by all known computers; we may call it the Universal Computing System (UCS). Actually, as the only computers we know of in the Universe are on Earth, or not far from it in space, it is the same as what we might call the Global Computing System (GCS); however, the two might diverge from each other in some future, so let's keep the notions separate.

    Now, the question that this article tries to answer is "how to maximize the utility of the Universal Computing System?". That is, we take the current utility of computers for granted, and ask not how they can be useful, but how their utility can be improved, maximized. We already saw that this utility depends on the amount of pertinent information such systems yield, as well as on the free energy they cost. But to answer this question more precisely requires both a general study of Computing Systems, of the way in which they are or should be organized, and a particular study of current, past, and expected future computing systems; that is, of where the Universal Computing System is going if we don't do anything.


  8. Subsystems
    When studying a dynamic system, one must always place oneself in an external, "meta" system, and choose some "representation" of the studied system. What kind of meta-system(s) and representation(s) to choose is a difficult question; again, those representations are better that allow more information to be extracted from the study of the system, and this need not be a total ordering among representations.

    In particular, one could try to formalize the UCS with the set of the physical equations of its constituent particles. While such a thing might be "theoretically possible", the complexity of the obtained equations would be such that any direct treatment of them would be impossible, while the exact knowledge of these equations, and of the parameters that appear in them, is altogether unreachable. Thus, this formalization is not very good according to the above criterion.

    A fundamental tool in the study of any system, dynamic or not, called analysis, consists in dividing the system into subsystems, such that those subsystems, and the interactions between them, are as a whole as simple as possible to formalize. Note that these subsystems need not (and often should not) form a homogeneous mass of isomorphic systems; on the contrary, the richness of information in the total system will depend on every subsystem being specialized in its own way, and not wasting its resources by merely being redundant with its neighbours.

    For computing systems, the basic and obvious, though not the sole possible, analysis is to consider individual computers and human computer users as the subsystems. Because information flows quickly and broadly inside each of these subsystems, but comparatively slowly and narrowly between them, they can be considered as decision centers, each of which takes actions depending mostly on its internal information, slowly interacting with the others "on purpose" (that is, according to this internal information).

    Humans interact with other humans and computers; computers interact with other computers and humans. But while the stable, exact, objectized information lies in the computers, the dynamic nature of the project can be traced down to the humans; thus, even though only the computerized information might be ultimately valuable to the computing system, the information flow among humans is a non-negligible aspect of the computing system as a project.

    Surely, this is not the only possible way to analyze computing systems; but it is a very informative one, and any "better" analysis should take all of this into account.

    Anyway, the point is that what counts when analyzing a system is the ability of the analysis to yield relevant information at a competitive cost. A "canonical representation" in terms of atoms and waves, while possibly a valid analysis of a system, need not be the most interesting one. Computers may be made, from the physical, hardware point of view, of electronic semiconductor circuitry and other substrata; from the information point of view, this is just transient technological data: tomorrow's computers may be made of a completely different technology, yet they will still be computers. Similarly, living creatures, humans among them, are, as far as we know, made of organic molecules; but perhaps in other places in the universe, or in other times, things can live that are not made of the same chemical agents.

    What makes a creature living is not the matter it is made of (or else, the soup you would obtain after reducing a man to bits would be as living as the man). What makes the living creature is the structure of internal and external interaction that the layout of matter implements. A chair is not a chair because it's made of wood or plastic, but because it has such a shape that a human can sit on it. What makes a thing what you think it is are the abstract patterns that make you recognize it as such, which constitute the idea of the thing. And as for computing systems, the idea lies in the flow of information, not in the media on which the information flows or is stored.


  9. Operating Systems
    Now, we can define what an Operating System is (for which we use the acronym OS), which is what the project of this article is all about.

    Given a collection of subsystems of a cybernetical system, we call "Common Background" the information that we can expect every one of these subsystems to have. For instance, if we can expect most Europeans to wear socks, then this expectation is part of the Common Background of Europeans. If we can expect all the computers we consider to use binary logic, then this fact is part of the Common Background of those computers. This Common Background can thus contain both established facts and probabilistic expectations. The Common Background of a collection of human beings is called their collective culture, or even their Civilization, if a large, (mostly) independent collection of human beings is considered. The Common Background of a collection of computers is called their Operating System.

    The concept of Common Background appears in any cybernetical system where a large enough number of similar subsystems exist. Common Backgrounds grow in complexity only if those subsystems get more complex too, and the large number of such systems means that these should be self-replicating, or more precisely, correlated to self quasi-replication. To sum it up, an interesting concept of Common Background is most likely to appear only when some kind of "life" has developed in the cybernetical system, or when we're examining a large number of specifically considered similar systems.

    Note that the "similarity" between the subsystems tightly corresponds to the existence of information common to the subsystems, that constitute the Common Background. In no way does this similarity necessarily correspond to any kind of "equality", among the subsystems: how could two subsystems be exactly the same, when they were specifically considered as disjoint subsystems, made of different atoms ? The similarity is an abstract, moral, concept, which must be relative to the frame of comparison that makes the considered information pertinent; a moral frame of Utility can do, but actually, any moral system in the widest acception can, not only those where an order of "Good" was distinguished. On the other hand, finding a lot of similarities in somehow (or completely) impertinent subjects (such as gory "implementation details") doesn't imply an interesting common background; finding a few similarities on pertinent subjects might not be sufficient to imply an interesting common background either. (technical remark: given a digital encoding of things, quantifying the level of interest of a common background might be expressed in terms of conditional Kolmogorov complexity.)

    If we consider humans in the World, can we find cells that are "exactly the same" on distinct humans? No; and even if we could find exactly the same cell on two humans, it wouldn't be meaningful, just boring. You can nonetheless say that two humans share the same ideas, the same languages and idioms or colloquialisms, the same manners, the same Cultural Background. And this is meaningful, because these are used to communicate, and greatly affect the flow of information, etc. Genetic strangers who were raised together share more background, as regards society, than genetic clones (twins) who were separated at birth.

    It's the same with computers: computers of the same hardware model, having large portions of common implementation code, but running completely different "applications" that have nothing conceptually in common to the human user, might be considered as sharing little common information; on the contrary, computers of completely different models, of completely different hardware and software technologies, thus sharing no actual hardware or software implementation, may still share a common background that enables them to communicate and understand each other, and to react similarly to similar environments, so that to the human users, they behave similarly and manipulate the same abstractions. This is what we called the Operating System.


  10. Controversy about the Definition for an OS
    There have always been many lengthy arguments every time someone proposed a definition for what an Operating System is. Many will object to the above definition of an OS, because it doesn't fit the idea they believe they had of what an OS is. Now, let's see the requirements for such a definition: as a definition, it should statically reference a consistent concept, independently enough from space, time, and the current state of society and technology, so as to enable discourse about OSes in general; as a definition applying to an existing term, it should formalize the intuitive idea generally conveyed by this term, and as much as possible coincide with its common usage, while staying consistent (of course, where common usage is inconsistent, the definition cannot stick to the whole of it).

    The definition from the previous chapter does fulfill these requirements, and it is the only one known to date by the author that fulfills them. This definition correctly identifies all the programs and user interfaces of Unix, DOS, Windows*, or Macintosh machines as their respective OSes, the class of similar machines being considered in each case, because they are what the user/programmer can expect to have when encountering such machines. It supports both the point of view that a given piece of software or feature is an OS or part of the OS, and the point of view that it is not, depending on the set of machines being considered.

    By "Operating System", people intuitively mean the "basic" software available on a computer, upon which the rest is built.

    The first naive definition for an OS would thus be "whatever software is available with the computer when you purchase it". Now, while this surely defines an OS unambiguously, the resulting pertinency is very poor, because, by being purely factual, the definition induces no possible moral statement upon OSes: anything that's delivered is an OS, whatever it is. You could equivalently write some very large and sophisticated software that works, or some tiny bit of crappy software that doesn't; it would still be an OS, by the mere fact that it is sold with the computer. One could buy a computer, remove whatever was given with it, or bundle completely different packages with it, then resell it, and whatever he resells it with would be an OS. Surely, this definition is poor.

    Then, one could decide that because this question of knowing what an OS is is so difficult, it should be left to the high-priests of OSdom, and that whatever is labelled "OS" by their acknowledged authority should be accepted as such, while whatever isn't should be regarded with the utmost defiance. While this pushes the problem back, it is still basically the same attitude of accepting fact for reason, with the little enhancement that the rule of force applies to settle the fact, instead of raw facts being blindly accepted. This is abdicating reason in favor of religion. Now, the high-priests of computing who are to give a definition for an OS are no more endowed than the common computer user to give a good one. Either they merely abuse their authority to give unbacked, arbitrary definitions, or they have some reasonable argument to back their definition. Since we're studying computer science, not computer religion, we can but contemptuously ignore them in the first case, and focus on their arguments in the second case. In any case, such a definition by authority is useless to us.

    Those who escaped the above traps, or the high-priests of the second trap, will need other criteria to define an OS. They might most obviously try to define an OS as a piece of software that provides such and such services, to the exclusion of any other services, each taking the list of provided services from their favorite OS or OS-to-be. Unhappily, because different people and groups of people have different needs and histories, they would favor differently featured OSes. Hence, they would all define an OS differently, and every such definition would disqualify all past, present, and future systems but the few being considered from being "OSes". Hence, this conception leads to endless fights about what should or should not be included in a piece of software for it to be an OS. When human civilization rather than just computer background was concerned, these were wars and killings, crusades, colonizations and missions, in the hope of settling the one civilization over barbarism. Even without fights, we see that completely different sets of services equally qualify as OSes, much like the ancient Greek and ancient Chinese civilizations, while being completely different, both qualify as civilizations, not to speak of other more or less primitive or sophisticated civilizations. Such a definition for an OS cannot be universal in time and space, and only the use of force can make one prevail, so it becomes a new religion. Again, this is a poor definition for an OS.

    The final step, as presented in the preceding chapter, is to define an OS as the container, instead of defining it as the contents, of the commonly available computer services; in other words, we give an intensional definition for an OS, instead of looking for an extensional definition. We saw that the OS is to Computing Systems what Civilization is to Mankind; actually, Computing Systems being a part of the Human system, their OSes are the mark of Human Civilization upon Computers. The language, habits, customs, and scriptures of a people, their eating with bare hands, with fork and knife, or with chopsticks, don't define whether these people have a civilization or not; they define what their civilization is. Similarly, the services uniformly provided by a collection of computers, the fact that a mouse or a microphone is used as an input device, that a video screen or a braille display is used as an output device, that there is a built-in real-time clock or not, those features don't define whether those computers have an OS or not; rather, they define what the OS they have is.

    Our definition allows us to acquire knowledge, while refusing to endorse any dogma about what we can't know; this is the very principle of information against noise, of science against metaphysics. It separates the contingencies of life from the universal concept of an OS. An OS is the common background between the computers of a considered collection. This moves the question of knowing what should or should not be in an OS from a brute fight between OS religions, from the mutual destruction of dogmas, to a moral competition between OSes, to the collective construction of information. That's why we claim that our definition is more pertinent than the other ones, hence more useful, by the argument previously explained.


  11. Operating System Utility
    Let it be clear that the concept of Operating System does not apply pertinently to machines that do not evolve, that do not communicate with other machines, that do not interact with humans. Such machines need complete, perfect, highly-optimized stand-alone software, adapted just to the specific task they are meant to accomplish. Whatever can be found in common among many such machines isn't meaningful to running those machines, as it does not influence the way information flows in the system.

    However, as soon as we consider further possible versions of a "same" piece of software, as soon as we consider its incomplete development and maintenance process, or the way it interacts with other pieces of software, whether in a direct or remote fashion, as soon as it has any influence on other software, be it through the medium of humans who examine it before building (or while building) those other pieces of software, then we are indeed talking about flow of information, and the concept of OS does become meaningful.

    See that communication between machines does not always mean that some kind of cable or radio wave is used to transmit exact messages; rather, the most used medium for machines to communicate pertinent information has always been humans, those same humans who talk to each other, read each other's books, articles, and messages, then try to express some of their resulting ideas on machines.

    Particularly, in the above case of lots of similar perfect machines, the concept of an OS on those machines might have been meaningless, or strictly limited to a vague or limited common interface that they may offer to customers; but the concept of an OS was quite meaningful on the development platforms for these machines, where a lot of common information is potentially shared by many developers working more or less independently.

    As we saw that the pertinency of a concept is related to the utility of the described object, we find that the utility of an OS lies in its dynamic aspects. An obvious dynamic aspect of the OS is how it itself evolves; but from the point of view of arbitrary user subsystems, the fundamental dynamic aspect of the OS, the one that dictates its Utility, is its propensity to ease communication of knowledge between the considered subsystems. These two aspects, of course, interact with each other.

    An OS eases communication of knowledge in that it allows more Information to be passed, by providing fast, broad communication lanes and information stores, but also in that it gives pertinency to this Information, thus transforming it into Knowledge, by providing a context in which to interpret received information as unambiguously as possible, and in which to synthesize new information that represents as accurately as possible the ideas originally meant. Note that both Quantity and Quality of Information are being considered here, and that interaction goes both ways.

    An OS will usefully evolve when modifications to the same OS project allow improvements in the above communication of knowledge. For obvious reasons of information stability, the OS can only evolve more slowly than its user base; and its design, which is the essence of the OS and what manages the pertinency of Information, must change more slowly than its implementation (which drives raw performance).


  12. Operating System Expressiveness
    An Operating System is the common context in which information is passed across the Computing System. It is the one reference used by arbitrary subsystems to communicate information. Hence, the OS dictates not so much the amount of information that can be passed, which is mostly limited by the laws of physics and by technological hardware achievements, as the kind of information that can be passed, which is a question of OS design proper. All OSes are more or less equal before technology, which is an external limitation; not all are equal before design, which is an internal limitation.

    For instance, given equivalent post office administrations, two countries can ensure similarly fast and reliable shipping of goods. However, the actual use of the post office for exchanging goods will greatly depend on what warranties the state gives to both the people who send and the people who receive goods: how agreements happen, how contracts are signed, how signed contracts bind the contractors, how payment happens, how disagreements are settled, how well the sent goods are guaranteed to match the advertisements, how much information people have on competing solutions, how likely a failure is, what support is available in case of failure, etc.

    Depending on the rules followed by the system, which are part of the OS design (according to our definition of an OS), the same underlying hardware can be either an efficient way to market goods, or an inefficient, risky gadget. The Internet is a perfect example of a medium with a great hardware potential for information passing, but a (currently) poor software infrastructure, which needs lots of enhancements before it can safely be used for large-scale everyday transactions. This will surely happen, but if things go as can be predicted, there is a wide margin for improvement.

    The key concept here is the expressiveness of the OS, which determines what kind of information can be expressed in it.

    The common misconception about expressiveness is that Turing-equivalence is all there is to it. The theory says that "all (good enough) computing systems are Turing-equivalent", in that a good enough system can simulate any other system through an interpreter or simulator, so it suffices to provide a Turing-equivalent system to be able to simulate any other system. But a simulation of something is not the original thing. Just like the idea of money is not actual money, the mere idea that someone may have signed a contract is not a binding contract in itself. Even the fact of someone actually signing a contract is not binding, in the absence of any (explicit or implicit) legal context. If the system won't enforce your contracts, no one will. In the absence of system support, the only enforceable contracts you can build are those where it suffices to dynamically check compliance from cooperative third parties, and it always remains possible for a party to defect.

    The catch is that a simulator gives no warranty: the meaningfulness of the result depends on the objects being manipulated respecting conventions for the validity of simulated representations. If the associated warranties can be interned by the original system itself, then indeed that system can express, rather than merely simulate, the other system. If these warranties cannot be interned, then an external agent (most likely, the programmer) will have to enforce them, and you have to trust him not to fail, without recourse.
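
    As a minimal sketch of what "interning a warranty" can mean (our illustration in OCaml; the names are hypothetical and not from the original text), consider a system whose type discipline itself enforces a contract, so that no trust in the programmer is needed at the use sites:

        (* Our sketch, not the author's: an abstract type whose
           invariant (non-negativity) is enforced by the system itself.
           Clients cannot forge a value of type Nat.t; they can only go
           through the checked constructor. *)
        module Nat : sig
          type t
          val of_int : int -> t option   (* the only way in *)
          val to_int : t -> int
        end = struct
          type t = int
          let of_int n = if n >= 0 then Some n else None
          let to_int n = n
        end

        (* Any [Nat.t] received here is guaranteed non-negative: the
           warranty is interned, and needs no further checking.
           Simulating naturals with plain [int] instead would leave
           every user to re-check, or merely trust, the invariant at
           each use site. *)
        let double (n : Nat.t) : int = 2 * Nat.to_int n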

    Some will suggest paranoidly testing for dynamic compliance of every single operation for which contracts were passed; but such an approach is not only very expensive when at all feasible, it is also no solution (though it might be better than nothing): run-time checking can detect failure to comply, but it cannot enforce compliance. In everyday life, it might mean that whenever you provide a service to a stranger, this stranger may run away and not pay you back, and you have strictly no recourse, no possibility to sue or anything. At times, run-time checking may allow appropriate counter-measures to be taken, but it might be too late. In the case of a space rocket (e.g. Ariane V), a runtime failure means billions of dollars that explode. In the case of a runtime failure in the control software for a nuclear device (civilian reactor or military missile), I just don't dare imagine what it might mean! In any case, having some paranoid test code that will terminate the program with the message "ERROR 10043: the nuclear plant is just going to unexpectedly disintegrate." won't quite help. Finally, the ability to (counter-)strike, in the absence of any system control, brings new dangers that in turn lack any solution, as malevolent agents may strike at will.
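
    To illustrate the difference (again a sketch of ours, with hypothetical names): a dynamic check can only report a violated contract after the offending value has been produced.

        (* Our illustration: a run-time check detects non-compliance;
           it does not prevent the caller from producing the bad value
           in the first place. *)
        exception Contract_violated of string

        let checked_divide (x : int) (y : int) : int =
          if y = 0 then raise (Contract_violated "divisor is zero")
          else x / y

        (* By the time the exception is raised, the invalid input has
           already propagated through the system; the check reports the
           failure, it does not enforce compliance.  Compare with the
           [Nat.t] sketch above, where invalid values cannot even be
           constructed. *)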

    All in all, the expressiveness of an operating system is its ability to require and provide trust, to enable the exchange of trusted services, beyond what can be built from zero by iterated interaction between agents.


    EVERYTHING BELOW IS A DRAFT, AND MUST BE COMPLETED OR REWRITTEN.....


  13. Computing System Structure
    Up to now, we've seen and discussed the external constraints of an OS, what its goal is, its "why", within the implicit larger Computing System. Now that this goal is clarified, and keeping it in mind, it's time to focus on the internal constraints of an OS, its structure, its "how".

    The structure of an OS is the data of its characteristic components, their interrelationships, and their relationships with the rest of the computing system. We must thus study once again the structure of the whole Computing System, of which the OS is but an aspect. For this, we will once again find inspiration in considering cybernetical systems in general, and in comparing the situation with that of another kind of well-known cybernetical system, human societies. The latter analogy is more than a mere metaphor, since one aspect of computing systems is as actual human societies, with users and programmers being the humans, and the running programs being activities of these humans. Indeed, until AIs come into existence, all programs are human activities (and if AIs ever exist, they won't change much of the current discourse, if we understand "human" as "sentient being"). However, the analogy is certainly not an exact correspondence (an "isomorphism"), and it breaks down (as far as specific properties not true of every cybernetical system go) when we consider the computerized part of computing systems: computing systems comprise as basic identifiable agents not only complex, unformalizable humans, but also running programs, which are quite different, simple and formalizable, and are the center of interest and of information processing in the system.

    Cybernetical systems of which computing systems are a projection. Human societies are made of lots of people, each with their own needs and capabilities, desires and will. Computer societies are similarly made of these same people, considered through the limited scope of the way they interact with computers, through computers, about computers, and of the computers themselves. People communicate with each other, and are dynamically organized in families, friendships, associations, companies, countries, confederations; every group is more or less stable. User services vs kernel services. Privatization vs nationalization of services. Rule of Law vs State Management. A computing system IS a human society! The programmers are the humans; the programs are their extensions.

  14. Users are Programmers
    To program: to influence the future behavior of the system.

    Intensional vs extensional definitions.

    A continuum from "beginner programmer" through "advanced programmer" to "programmer demi-god". Some will never program. Some will stay rookies forever. Some will develop good programming skills in very specific domains. Etc.

    Artificial barriers due to proprietary software. See other article MPFAS. Historical barriers due to low resource availability at the time systems were designed: low-level systems.

    Eager stratification: universal system vs glue languages. Managing complexity vs multiplying services. More than one way to do things? Ultimately the same.
    ..........

    (see the current draft for more ideas).





Faré -- fare@tunes.org
$Id: Part_1.phtml,v 1.3 2000/02/01 01:05:30 fare Exp $