Reviving your old PC with Linux, Part III: Understanding Linux distributions

Now that your hardware is reasonably in order, and you understand the potential issues involved there, it’s time to look at the software side of things. You want to run some kind of Linux distribution on your system, but you don’t know which one to pick.

This is the point at which a lot of people would just lob a bunch of funny-sounding distro names at you and expect you to go off and blindly try them all. Well, I’ll eventually get to lobbing those names out; but first, let’s try to understand “lightweight Linux” — and Linux distributions in general — in a theoretical way.

How to make a Linux distro lighter

There are basically two approaches to making a lightweight Linux distribution.

The first is to take a general-purpose distribution (Ubuntu, for example), strip it down to the studs, then flesh it out with smaller, simpler, less resource-demanding components. I call this the “Remix” approach.

The other approach is to start completely from scratch, customizing each component to be as lightweight as possible and including only software which is small, fast, and absolutely necessary. I call this the “fully lightweight” approach.

The advantages of the remix method are that you get a solid core OS which is known to work with a lot of hardware, a solid core toolset, access to repositories brimming with software, and complete compatibility with a major Linux distribution. Best of all, it’s relatively simple for someone with a modicum of Linux knowledge to create and maintain a remix distro. The biggest disadvantages are that these core components were designed to support a non-lightweight distribution, and can themselves be a bit heavy; the software dependencies in these distributions tend to err on the side of functionality over lightness; and the software available from the repositories has not been compiled or patched with older computers in mind. Depending on which distribution they are based on, remix distros tend to be “medium light” rather than “super light”.

For the “fully lightweight” distributions, the pros and cons are switched. Since everything from top to bottom is being handled by the distribution’s (typically small or one-person) development team, a lot more work has to be done by a lot fewer people. Security concerns are not always rigorously tracked, bugs are not consistently addressed, there are no huge repositories of software to install, and features are often lacking. Still, the end result tends to be a distribution that will truly run fast on older systems, and doesn’t bog things down with bulky dependencies or needlessly bloated tools.

The anatomy of a Linux distribution

To understand what “lightweight” or “heavy” components go into a Linux distribution, and what trade-offs are involved in the selection, we need to understand what the major functional components of a Linux distribution are, and what options exist for them.

The Kernel

The kernel is the lowest layer of software in an operating system. It handles communication with the hardware and most of the other low-level functions of the system. If the OS is Linux, then of course the kernel involved is the Linux kernel. That is, of course, what makes it “Linux” in the first place.

But not all Linux kernels are the same. If you’ve ever compiled a Linux kernel (and if you haven’t — boy are you missing out!), you know that there is a vast array of configuration options you can tweak at compile time: features that can be toggled, settings that can be adjusted, subsystems that can be removed, etc. — and that’s just the “vanilla” kernel! Hundreds of patches are also available to alter the behavior of the kernel or optimize it for certain workloads.

The vast majority of the kernel is made up of hardware drivers, each of which can be turned off or left out to produce a smaller, lighter kernel. This tweak is common in many fully lightweight distros, and the trade-off for a small kernel is often spotty hardware support, or being stuck with generic drivers that don’t enable all of the hardware’s features. A smaller kernel takes up less disk space, of course, and loads faster.
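If you’re curious what trimming a kernel actually involves, here is a minimal sketch of the workflow, assuming the kernel source is unpacked under /usr/src/linux and you have a compiler toolchain installed (paths, targets, and options vary by kernel version and distribution):

    cd /usr/src/linux
    make menuconfig                 # toggle off the features and drivers you don't need
    make bzImage modules            # compile the kernel image and the modules you kept
    make modules_install install    # install the modules and the new kernel, then update your bootloader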

The init system

The init system is the first process launched by the kernel at boot time. It’s responsible for starting up all the services and subsystems and provides a mechanism for stopping, reloading, and checking the status of these services while running. The SysV-style init system has been the most widely used in mainstream distributions for many years, though some (for example, ArchLinux) use the simpler BSD-style init; others are moving towards more complex next-generation init systems like Ubuntu’s Upstart.

The functional differences between init systems come down to the trade-off between simplicity, small size, and speed on one hand, and functionality, power, and extensibility on the other. BSD-style init, for example, is basically a single script, whereas SysV is a comparatively complex system of directories and symbolic links with multiple “runlevels”.
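To make that a little more concrete, here is roughly what the SysV-style layout looks like on a Debian-flavored system. This is just an illustrative sketch; the exact paths, runlevel numbers, and service names vary between distributions:

    ls /etc/init.d/                           # each service is a shell script in this directory
    ls /etc/rc2.d/                            # runlevel 2: S* symlinks start services, K* symlinks stop them
    /etc/init.d/cups restart                  # control a service by hand
    ln -s ../init.d/cups /etc/rc2.d/S50cups   # enable the cups service at boot in runlevel 2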

Core userland

Userland is a catch-all term for the software that runs outside the kernel: the part of the operating system that the user sees and interacts with. The core userland in any Linux distribution (which includes the command shell, basic command-line tools, compiler, C libraries, etc.) comes from the GNU project. This is why you sometimes see the term “GNU/Linux” (and there’s a flame war there I won’t venture near).

While it’s probably impossible to put together a Linux distribution without some GNU software, there are some alternatives to parts of GNU that can bring down the footprint of a distribution’s userland. Notably, Busybox — a tiny program that combines a Unix shell with stripped-down versions of many common tools — is often used in lightweight distros to replace Bash and many of the core terminal commands. Busybox’s shell and commands lack a lot of the features of the full-blown versions, of course, so for those doing a lot of terminal work it may not be ideal.
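If you’ve never played with it, here is a quick taste of how Busybox works; which applets are actually available depends entirely on how your copy was compiled:

    busybox                                 # run with no arguments to list the applets built into this binary
    busybox ls -l /tmp                      # invoke the built-in ls applet directly
    ln -s /bin/busybox /usr/local/bin/vi    # a symlink named "vi" makes busybox act as its vi applet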

Various Subsystems

Much of the functionality on a Linux system, such as printing, scanning, wireless networking, and power management, is broken down into individual subsystems. Sometimes there are alternatives to the standard software for these systems, but in many cases there is only one real choice. Fully lightweight distributions often leave some of these subsystems out of the default install, recognizing that not everyone will want to print, scan, use wireless, and so on (though such capabilities can usually be installed if desired). Of course, installing support for these activities means knowing the names of the software that handles them, so here are some commonly used examples:

  • CUPS: The “Common Unix Printing System”, which handles (naturally) printing. If you’re an OS X user you might already be familiar with CUPS, since OS X (like most other modern Unix-like operating systems) uses it. The older alternative is LPR printing, which is comparatively primitive.
  • SANE: “Scanner Access Now Easy” is the Linux scanning subsystem.
  • NetworkManager: Many modern distributions use NetworkManager to help users connect to wired, wireless, VPN, or other networks. Other options exist, such as WICD or Wifi-radar for wireless, as well as good old static configuration files.
  • ALSA: The “Advanced Linux Sound Architecture” is a commonly used sound subsystem for Linux. Sound is a complex, multilayered beast on many Linux distros, so ALSA is sometimes accompanied by systems like gstreamer and pulseaudio. The biggest alternative is OSS (Open Sound System), which ALSA largely superseded.
  • ACPID: The ACPI daemon handles power management support (suspend/hibernate/power-off/etc.) for computers with ACPI-compatible hardware. Some hardware uses the older APM interface for power management, so depending on the age of your computer you might need one or the other.
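On a Debian- or Ubuntu-based remix, pulling these subsystems in typically looks something like the sketch below; the package names are the Debian/Ubuntu ones and may differ on your distribution:

    apt-get install cups              # printing
    apt-get install sane-utils xsane  # scanner support plus a graphical front-end
    apt-get install network-manager   # wired/wireless/VPN connection management
    apt-get install alsa-utils        # ALSA mixer and sound utilities
    apt-get install acpid             # ACPI power management event handling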

The Windowing System (X11)

One subsystem in particular stands out above the others in importance: X11, the graphics/windowing subsystem. The X11 system (sometimes just called X) provides the ability to have a graphical desktop and applications. Technically, X11 is more of a protocol than a system, and there are multiple implementations of it. On Linux, by far the most widely used is the implementation from X.org, commonly known as Xorg. Xorg is where all the bleeding edge advancements in Linux graphics happen, and it supports fancy things like compositing, 2D and 3D acceleration, kernel-mode-setting, and so forth.

It’s also one of the bulkier subsystems in the Linux software stack. Xorg has a tendency to make otherwise usable old hardware obsolete, unfortunately, as its development is focused on making modern equipment run at full steam. There are, fortunately, alternatives.

The one most generally used in lightweight situations is XVesa, an X11 server that works with only the generic capabilities of the graphics hardware, and thus does not need any special drivers. XVesa is small and uses little memory, but also has limited resolutions, sometimes renders poorly, and sometimes does not work with certain pieces of hardware at all. When it works, it is quite useful on older hardware.

An even lighter option is to go without X11 entirely and use the framebuffer, a special graphical mode for the text terminals. Enabling the framebuffer allows you to display pictures, video, higher resolutions, and nicer fonts on the text terminals. I won’t pretend it’s anything short of a trip straight back to the early 1990s, but we are talking about retro equipment here. It might enable a few key pieces of functionality on an otherwise useless machine.
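As a small example of what a framebuffer console can do (assuming a VESA-capable video card and that these particular tools are installed; the file names and the root= device below are placeholders), you can ask the bootloader for a framebuffer mode and then view images or video right on the console:

    # add a mode parameter to the kernel line in your bootloader, for example:
    #   kernel /vmlinuz root=/dev/sda1 vga=791   # vesafb: 1024x768 at 16-bit color
    fbi photo.jpg                     # view an image on the framebuffer console
    mplayer -vo fbdev video.avi       # play a video straight to the framebuffer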

The Window Manager / Desktop Environment

The X11 server provides an interface to the graphics hardware, but it doesn’t actually give us a desktop with menus and panels, or let us move windows around the screen. This capability is actually provided by other software that we run on top of X11 (we call them X11 clients).

If all you want to do is draw the title bar at the top of the windows and be able to move them around the screen, resize them, or minimize them, then what you need is a window manager. There are dozens — perhaps hundreds? — of window managers for X. Some of them strictly manage windows (move/resize/minimize and window decorations), but many window managers also add features like panels, menus, configuration utilities, or docks that qualify them for “desktop environment lite”.

A true desktop environment implies much more than just window management: panels, desktop widgets, configuration utilities, basic tools (text editor, browser, etc), a menu system, and a file manager (which is typically responsible for the desktop icons). These components are often bound together with some underlying services to make a cohesive desktop experience.

The choice of desktop environment is probably the single most defining aspect of most lightweight distributions. Many a medium-weight remix is nothing more than a mainstream distribution with a lighter desktop environment. XFCE and LXDE are the standouts here, though you may see Enlightenment used as well.

Some really light distributions eschew the desktop environment completely and give you only a window manager — albeit one with enough features to be usable by itself, or augmented with standalone panels/menus/file browsing to create a makeshift desktop environment. Popular choices here are Openbox, Fluxbox, JWM, and IceWM.
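For instance, a bare-bones “desktop” built from a window manager plus a standalone panel can be as simple as a two-line ~/.xinitrc, assuming Openbox and the tint2 panel are installed:

    tint2 &          # start a lightweight standalone panel (taskbar, clock, system tray)
    exec openbox     # hand the session over to the Openbox window manager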

Since they’re so important to the lightweight experience, desktop environments will be discussed in more detail later in this series.

Package Management

Package management is another vital subsystem. This is the software that automates the installation, removal, and maintenance of software on the system. Most (but not all!) Linux distributions feature a package management system, and they range from simple scripts which download archived programs from an FTP server to complex, multilayered systems with sophisticated dependency handling, package searching, configuration wizards, package ratings and reviews, and more.

In general, remix distributions rely on the package management system of their parent distribution, giving them access to the (usually considerable) package selection offered by the larger distro. The trade-off is that the package management software can be big and slow on an older system, and that the huge database of package data can take forever to parse through on old equipment.

Fully lightweight systems usually roll their own simple package manager. This not only allows them to make the PM software itself smaller, but gives them the ability to control the packages available, their dependencies, and the default configuration. The downside is that these systems are not always robust when it comes to removing software or dealing with dependency conflicts.
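To give a flavor of the difference, compare a mainstream package manager with the home-grown tools a couple of well-known lightweight distros use; the commands below are illustrative, and their exact syntax may differ between versions:

    apt-get update && apt-get install abiword   # Debian/Ubuntu-style remix: big repositories, heavier tooling
    tce-load -wi nano                            # Tiny Core Linux: download and install an extension
    tazpkg get-install nano                      # SliTaz: fetch and install a package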

Installer

Finally, the installer is the software used to install the distro onto your computer. The traditional method of installing Linux involves booting directly into an installation program (either text-mode, graphical, or a little of both) which guides you through partitioning the hard drives, selecting the software to include, and specifying your configuration settings. Then, after chugging away for a bit, the installer reboots the computer into a newly installed system.

Over the last several years, the Live Media method (sometimes called “Live CD”, though now we can use DVDs or USB drives too) has become popular, especially among Ubuntu-based distros. With this method, you boot the install media to an actual live desktop, where you can run the system just as if it were actually installed. When you’re satisfied that everything seems to work and you like the look of things, you run through a quick installation wizard and reboot. The live media can also be used as a rescue tool, or as a portable desktop (just install on a USB device and any computer becomes your favorite working environment).
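Writing a live image to a USB stick is usually a one-liner; this sketch assumes the ISO is a “hybrid” image that can boot from USB, and /dev/sdX is a placeholder for your USB device (double-check it, since dd will happily overwrite the wrong disk):

    dd if=distro.iso of=/dev/sdX bs=4M   # copy the live image onto the USB device
    sync                                 # make sure all data is flushed before unplugging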

The significant disadvantage of the live media approach is its resource requirements. In a typical live boot, the filesystem is read from compressed files on the media and unpacked into RAM, using the RAM as if it were a small hard drive. Obviously, this is taxing or simply impossible on computers with little RAM or slow access to the media (e.g., a slow optical drive or older USB ports). It also requires the video system to be working properly before the install.

A traditional, text-mode installer is a safer bet on older equipment where compatibility is uncertain and resources are low.

Other things that make Distros different

A Linux distribution is more than the sum of its parts. As long as we’re on the topic of what makes one different from the other, there are a few more things to consider.

Release Cycle

In the world of open source software, projects are always developing and evolving. Each software component is developed by a separate, independent developer or community of developers, and is released on its own schedule. For the projects building a distribution out of all this software, figuring out how to keep users up to date while keeping the system reasonably stable can be a problem. There are a few different ways to solve it.

The most common is the big freeze method, in which the distro “freezes” the versions of all its software on a regular basis, then spends some time (either a fixed amount, or until the distro meets stated quality goals) testing everything together and making sure it all works. Most mainstream distributions (Ubuntu, Debian, Fedora, Suse) take this approach in one variation or another.

The next approach is the rolling release method. In this approach, new software is brought in as soon as it’s released, either with or without some testing. There is never a “version” of these distros, just a snapshot at a point in time.

Some distributions take a hybrid approach, keeping (for example) a frozen core with rolling release applications.

The downsides to the big freeze are that you have to wait until the next release (which could be a few weeks or a few years away, depending on the distribution) to get new versions of your software, and that each release brings a big, potentially catastrophic, system-wide upgrade. The upside is that you get stability for the duration of the release cycle.

With rolling release distributions, it’s pretty much the opposite. You get the latest versions of everything when it comes out, but you’re constantly downloading updates and subjecting yourself to potential breakage.

Preinstalled Applications

Apart from the core system and desktop environment, most Linux distributions come with some basic applications installed. Assuming you have a network connection and the system has a decent package manager, you aren’t stuck with them, of course; but knowing that there is a core set of applications which are well-tested with the distro and well-suited to its target audience can be helpful, especially for ultra-lightweight setups.

Usually you can expect at least a web browser, email client, text editor, and some maintenance utilities to be installed; a word processor, spreadsheet, image editor, media player, games, and an instant messenger are also commonly included.

Later in this series, I’ll go over some lightweight options for common applications.

Community

Every distribution of Linux has some kind of community. It usually centers around the official forum, mailing list, or IRC channel, and is made up of people who work on, or work with, the distribution. Every community has a different tone and focus; some are very end-user focused, and are mainly good at answering questions for new users of Linux. Others are hacker-focused, filled with people who like to tinker with computers and Linux, and who expect one another to take a certain amount of initiative and possess a certain amount of knowledge.

Finding a good community that lines up with your needs, interests, and experience level is nearly as important as finding the right distribution for your computer. Inevitably, you’re going to need some help to get something working correctly, and that’s where the community comes in.

Ready to choose

By now you should have a good feel for what makes one distribution different from another, and what sort of considerations go into making a lightweight distro versus a medium-weight distro. Armed with this knowledge, we’re going to start looking at some of those distribution options in the next installment of this series.

 
