By Jon: First published in Online Currents 2004 – 19(2): 22-24
Operating systems
The basic function of an operating system (OS) is to allow a computer user to run programs. In order to do this the operating system must be able to: a) keep track of and modify the contents of the computer’s storage media; and b) start up automatically when the computer is turned on – i.e. ‘boot up’.
These two requirements mean that it is quite difficult – and can be dangerous – to have two or more operating systems installed on the same computer. Minor failures or idiosyncrasies in the operating system that has booted up and is currently in control can have catastrophic results when another operating system takes over. Nearly all users opt for single-OS systems, and – because the OS controls which programs are allowed to run – this in turn restricts them to a particular subset of programs. What we think of as ‘IBM-compatible software’ is in fact ‘DOS/Windows-compatible software’; an IBM-compatible PC running a different OS has a completely different range of programs available to it.
Operating systems as such are not glamorous, and one way to persuade customers to buy or upgrade them has been to include extras in the package. Over time operating systems have ‘bolted on’ a large number of external applications in this way, sometimes resulting in an embarrassingly poor fit. Thus Windows now ships as part of a huge package which includes a set of Internet programs, basic word processor and graphics applications, and even games, as well as more relevant utilities like a file manager and a backup program, all originating from many different sources. For many people these ‘accessories’ are unnecessary, and some find them obtrusive.
Microsoft has also captured the current OS market so successfully that they find themselves selling essentially the same product to users of widely differing systems, from five-year-old home PCs to cutting-edge global networks handling top secret files. Their response has been to over-engineer, building in capabilities that most users don’t want or need. The cost of this is not only financial; adding to the complexity of the OS also slows it down and increases the chance of failure. And when failure occurs, it is also much harder to identify and fix. The never-ending series of security problems detected in Microsoft Internet products demonstrates how hard it is for even its creators to understand and maintain the system they have produced.
One major alternative to Windows has always been the Apple Macintosh system, but since this has grown and developed in much the same way there is now relatively little to choose between them in terms of complexity. Apple has also acknowledged the dominance of Windows by rushing to promote and adopt its own versions of Windows software; the days when Macintosh programs were unique and different are a thing of the past. And with Microsoft now a major investor in Apple, those days are unlikely to return.
Linux: enter the penguin
Unlike the Macintosh OS, Linux is an operating system that provides a genuine alternative to Windows. It has four things in its favour: it is modular; it is consistent; it has a cute penguin for its logo; and it is free. It has reached its current level of popularity through a combination of brilliance and hard work – often unpaid – by an army of skilled fans all over the world. Regardless of where it goes from here, it stands as a monument to the power of enthusiasm.
Linux is an offshoot of Unix, a popular operating system from the 1970s which was (and still is) widely used in tertiary education environments. Unix in turn is closely related to C, the programming language used to construct nearly all the modules out of which Unix (and Linux) is made. At the heart of Unix is its ability to run, chain and combine these small programs, so that simple routines can be built up into elaborate operations which will work on any Unix system.
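To give the flavour of this, here is a typical one-line ‘pipeline’ of the kind Unix and Linux users build by chaining small programs together, each one passing its output on to the next:

    # List the first ten user accounts on the system, in alphabetical
    # order, by chaining three small programs with ‘pipes’ (|): cut
    # extracts the account name from each line of the password file,
    # sort puts the names in order, and head keeps only the first ten.
    cut -d: -f1 /etc/passwd | sort | head

Each program does one simple job; the pipeline combines them into something more useful, and the same line will run on virtually any Unix or Linux system.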
Unix went through its own growth and diversification period in the 1980s, which resulted in its becoming a large and complex system with several different, incompatible ‘flavours’. Some of these were put forward as candidates for the growing PC market, but none were successful. In 1991, however, a second-year computing student at the University of Helsinki wrote his own version of the central core or ‘kernel’ of a teaching version of Unix and put this forward as the foundation of Linux, which he named after himself – Linus Torvalds.
Linux began as a hobby activity, and Torvalds was happy to farm out its development among his friends and colleagues provided that he kept control of the kernel. The Linux development network has since grown to the point where it includes thousands of people and several corporations, but it remains based around the notion of an open system. Nearly every module written for Linux is written in the same language – C, or some standard variant of it. A module is only accepted into Linux if it is released to the public; that is, to the global community of Linux C programmers, who can then test it under different circumstances, critique it and suggest improvements. Competing modules are evaluated on their merits rather than on who wrote them or where they came from. The result is a rapidly developing system which is modular and internally consistent, and in which faults are quickly identified and fixed. The public nature of Linux also means that it is normally free to distribute, download and use, although some commercial versions are also beginning to appear. (A detailed history of Linux can be found at http://ragib.hypermart.net/linux.)
Unfortunately the circumstances that promote the development of Linux also encourage a fanatical belief in its virtues and advantages on the part of its creators. Like the Macintosh in earlier days, Linux developers and users have a personal commitment to the system and have difficulty in seeing it objectively: there are textbooks, for instance, which start by describing the simplicity of Linux and follow this with a fifty-page description of how to install it and all the many things that can go wrong. And Linux suffers from its own portability: unlike Windows it cannot ‘tweak’ its configuration to take advantage of specific hardware.
Linux in operation is disturbingly reminiscent of DOS, with a command-line interface where the user types in cryptic, tersely-named instructions like ‘gzip -d test.pdg.gz’. Luckily for new users there are free GUI desktop programs available which provide a more intuitive, graphical, mouse-based approach; the most popular of these are Gnome and KDE. Both provide a very Windows-like desktop on which to begin exploring the system, complete with utility programs, multimedia players and games.
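Still, the underlying commands are never far away, and a few typical examples give their flavour (the file and directory names here are invented for illustration):

    ls -l                  # list the files in the current directory, with details
    cp notes.txt /backup   # copy a file into the /backup directory
    gzip -d manual.ps.gz   # decompress a ‘gzipped’ file
    man gzip               # display the built-in manual page for gzip

Every command is terse, but each comes with its own manual page, and the desktop programs mentioned above offer the same operations through menus and mouse clicks.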
Linux distributions
Although nearly all the components are available separately, Linux normally comes in packages called ‘distributions’. These will include the Linux kernel, one or more desktop programs and many standard modules, but may also add their own extras to the bundle. Caldera OpenLinux, a commercial package, includes WordPerfect for Linux, and SuSE Linux includes the KDE desktop user interface. The most popular free distribution is Red Hat Linux, currently in Version 9, which includes a user-friendly module management system.
The best way to obtain a Linux distribution is by purchasing a manual which includes Linux on a CD-ROM. New versions of Linux distributions are available for $15-$100 depending on the associated text; older ones can be bought for less. Users with broadband connections can download several Linux distributions off the Web; see for instance http://www.caldera.com and http://www.conectiva.com.
Before installation, would-be Linux users should carefully record the details of all their peripheral devices – monitors, mice, scanners, printers and so on – in order to find and identify suitable drivers. They may either dedicate a PC entirely to Linux or set up a ‘dual boot’ system which can run either Linux or Windows; changing between them requires rebooting the PC. A dual-boot system requires either two hard disks or a single hard disk split into (at least) two partitions. Curious users can also trial Linux in a virtual ‘box’ running under Windows, although this negates most of the advantages of installing the system.
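To illustrate, on a Red Hat Linux 9 dual-boot system the choice at startup is typically handled by the GRUB boot manager, driven by a short configuration file. The sketch below is illustrative only – the partition names and kernel version are invented, and the details will vary from machine to machine:

    # /boot/grub/grub.conf – an illustrative dual-boot menu
    default=0
    timeout=10

    # Boot Red Hat Linux 9 from the second partition of the first disk
    title Red Hat Linux 9
            root (hd0,1)
            kernel /boot/vmlinuz-2.4.20-8 ro root=/dev/hda2
            initrd /boot/initrd-2.4.20-8.img

    # Hand control over to Windows on the first partition
    title Windows
            rootnoverify (hd0,0)
            chainloader +1

Whichever entry the user picks at the menu is the system that boots; switching to the other means rebooting, as noted above.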
I attempted to install Red Hat Linux 9 on both a 486 laptop and a Pentium 2 desktop PC. On the laptop the installation process was fairly straightforward, although the terminology was sometimes confusing. The default installation included a boot manager, which I could use to select either the Windows or the Linux system on startup, and OpenOffice, an Office-compatible suite of applications including a word processor, spreadsheet and presentation management programs. Smaller programs ran satisfactorily on the laptop but attempts to run the larger programs resulted in delays of several minutes for even trivial actions. I was unable to successfully install Linux on the desktop machine, each of several attempts resulting in failure for a different reason.
When to use Linux
Linux is reportedly more stable and reliable than Windows, but it is not the infallible system that users occasionally make it out to be. Linux users also cut themselves off, to some extent, from the large community of Windows users with whom they might communicate and share resources, although a dedicated and growing community of Linux users helps to take its place. Desktop Linux users tend to be either Microsoftophobes, programming experts who are familiar with Linux from other contexts, or – rarely – people who need a specific program which is only available under Linux. Desktop use of Linux requires a fast PC, and access to considerable help and advice to supplement what comes with the distribution.
On the positive side, the growth of the Linux market is attracting several commercial software producers hoping to steal a march on Microsoft. And over 15% of desktop PC users are reported to be ‘considering’ Linux, substantially more than for any other alternative OS. Linux is also making its way onto handheld PDAs.
Where Linux shines, however, is in ‘back room’ operations – managing networks, storing and accessing large collections of diverse files, providing access via the Internet to websites and other resources, handling vast quantities of e-mail, etc. These operations, once carried out by mainframe computers, are increasingly being relegated to off-the-shelf PCs equipped with the appropriate OS: either Windows NT or – increasingly – Linux.
The future of Linux
It may turn out in the long run that the greatest contribution of Linux has been to provide competition for Microsoft, which is no small thing in itself. If it does continue to develop and expand, though – and the signs are that it will – we can expect to see more paid commercial distributions appearing and increasing amounts of user-friendly software coming with them. Recently, in a depressing sign of commercial maturity, Unix spawned its very own copyright infringement lawsuit (http://www.commentwire.com/commwire_story.asp?commentwire_ID=4634). And the Linux community in Australia is starting to lobby for more government use of open-source software.
Can a commercialised Linux retain the special character which gives it its unique advantages? Who can tell? But as we come to depend more and more on complex, widely distributed networks for data storage, retrieval and communications, everything that helps to make them more reliable and fault-resistant surely deserves support.