February 1999 Column

Business as usual

It's all business as usual in the Linux world -- too much business to keep track of, in fact. Several more corporations have seen the light in the past month or so. Oracle just announced Oracle 8i -- their whizzy network-enabled server -- on Linux. Meanwhile, Corel have confirmed that they're putting serious muscle into the WINE project, in the form of full-time programmers; expect to see WINE running most Windows 95 applications straight out of the box within a few months. Speaking of Corel, they've announced that Corel Office 2000 (Quattro Pro, Paradox, and various other products) will be free for non-commercial use on Linux by mid-99, with commercial support available. Free WordPerfect 8.0 should already be sitting on linux.corel.com by the time you read this, so if you need a professional-grade word processor you're in luck. Meanwhile, Star Division in Germany just shipped version 5.0 of StarOffice, an uncannily authentic-looking Microsoft Office 97 clone; as with Corel's product it's free for non-commercial use from Star Division, and commercial support packages are available. Finally, IBM are hyping an announcement connected with their DB2 database, due some time in December -- one of the last heavyweight servers to show up on Linux.

It's business as usual at Microsoft, too, where the motto seems to be "what market would you like to take over today?" Microsoft has finally noticed Linux and open source software; a very interesting memo, written by Microsoft engineer Vinod Valloppillil and leaked to Eric Raymond, gave everyone the shivers on Halloween. While the documents aren't any kind of official policy (they appear to be position papers outlining possible strategies), lines like these don't fill those of us who stand outside the Microsoft camp with seasonal cheer: "OSS [Open Source Software] projects have been able to gain a foothold in many server applications because of the wide utility of highly commoditized, simple protocols. By extending these protocols and developing new protocols, we can deny OSS projects entry into the market." "The effect of patents and copyright in combatting Linux remains to be investigated." And, "OSS is long-term credible ... FUD tactics can not be used to combat it."

The buzz-word to watch in all this is 'commoditization'. Commodities are cheap, interchangeable items, bought in bulk; the internet runs on commodity standards that everyone uses (and which are free). Microsoft have carefully fashioned Windows as a premium product, which they make money by selling. They're now talking about trying to 'de-commoditize' the various communications protocols that networked systems use to talk to one another, adding incompatible features which lock Linux (and relatives like FreeBSD) out of the market.

But they'll have an uphill struggle if they try it. The adaptability of the open source movement means that Linux acquires hooks into new standards with almost frightening speed. Linux currently supports more networking protocols than just about any other operating system: by the time you read this, Linux 2.2 should (touch wood) be available, and that's even more of a networking monster.

Going by the 2.1.129 (development) kernel, the biggest single feature of Linux 2.2 is IPv6, the next generation of the venerable Internet Protocol (the IP in TCP/IP). Linux is one of the first operating systems to support it fully, and alongside it comes a whole slew of other somewhat arcane networking facilities -- some of which have broad implications for the commercial UNIX world.

Among other things, Linux 2.2 will have network block devices and some fairly advanced filesystem options such as full RAID support. Network block devices let you build a filesystem on top of a device node which, instead of being a driver for a hard disk, is actually a very fast internet connection to another computer. Make it a RAID-1 mirrored filesystem with one partition on your local box and another on a network block device and every time you save a file on that filesystem, it is saved on two different machines. This capability is absolutely vital to anyone trying to run a fault-tolerant cluster of servers; it means if one of the machines crashes, the other is ready and waiting to take over. And clustering is important, for reasons we shall see ...
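If the mirroring idea sounds abstract, here's a toy sketch of the logic in Python -- nothing to do with the actual kernel code, just the principle: every write lands on both backing stores, so a read can be satisfied by whichever copy survives.

```python
# Toy model of RAID-1 mirroring. Two dicts stand in for block devices;
# in the real kernel one would be a local partition and the other a
# network block device pointing at a second machine.
class MirrorSet:
    def __init__(self, local, remote):
        self.devices = [local, remote]

    def write(self, block, data):
        for dev in self.devices:      # mirror every write to all copies
            dev[block] = data

    def read(self, block):
        for dev in self.devices:      # any surviving copy will do
            if block in dev:
                return dev[block]
        raise IOError("block %d lost on all mirrors" % block)

local, remote = {}, {}
md = MirrorSet(local, remote)
md.write(0, b"important data")
local.clear()                         # simulate the local disk dying
print(md.read(0))                     # prints b'important data'
```

The point, of course, is that the application above the filesystem never notices which copy answered.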

Although I spend a lot of time talking about desktop applications, it's worth bearing in mind that this is not what Linux is best at. Just as the Apple Mac had a particular strength in the DTP and publishing field, so Linux is gaining a real position of authority in the server world. The ability to provide file and printer sharing services to Macs, Windows boxes, UNIX workstations, and other more arcane machines (DECnet, anybody?) means that Linux makes excellent network glue. And some of the applications aren't at all obvious to someone coming from the button-down corporate world.

Take IP masquerading as an example. IP masquerading is a trick whereby a suitably configured Linux system can act as a router. It is visible to hosts on the internet (via a dial-up or leased-line connection) on one side, and to other (local) machines on the other side; and it magically keeps track of packets going from machines on the LAN to the outside world and replies coming back in. There's an old 486 sitting under my desk, gathering dust. Trying to run Windows 95 on it is torture, but it's quite efficient at running faceless server programs that make some facility available over a network: email, web, DNS, file services, and so on. I could plug that old 486 in, give it a modem, put it on the local network, and configure it as a masquerading firewall. The internet at large only sees the one computer dialled in via an ISP, but as far as the machines on my LAN are concerned there's a router connecting them to the internet: they can see out.
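The bookkeeping involved is easy to sketch. The following Python toy (emphatically not the kernel implementation, and with made-up addresses) shows the essence of it: outgoing packets are rewritten to look as if they came from the router itself, and a translation table routes the replies back to the right LAN machine.

```python
# Toy model of IP masquerading: rewrite outgoing connections to use the
# router's own address, remember the mapping, and reverse it for replies.
class Masquerader:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}            # external port -> (LAN ip, LAN port)
        self.next_port = 61000     # 2.0-era kernels used high ports like this

    def outgoing(self, src_ip, src_port, dst):
        ext_port = self.next_port
        self.next_port += 1
        self.table[ext_port] = (src_ip, src_port)
        # the packet leaves looking like it came from the router
        return (self.public_ip, ext_port, dst)

    def incoming(self, ext_port):
        # a reply arrives; look up which LAN machine it really belongs to
        return self.table[ext_port]

nat = Masquerader("194.112.32.1")                    # made-up public address
pkt = nat.outgoing("192.168.1.10", 1025, "www.example.com")
print(pkt)                   # ('194.112.32.1', 61000, 'www.example.com')
print(nat.incoming(61000))   # ('192.168.1.10', 1025)
```

The kernel does all this per-packet, in C, at wire speed -- but the principle really is that simple.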

Masquerading is built into the Linux kernel. If you have a stock out-of-the-box copy of Red Hat 5.1 or later, or a copy of SuSE 5.3 (or most other recent distributions), you have masquerading already installed; otherwise you'll need to rebuild the kernel with masquerading switched on. (This is not as hairy a job as it sounds; just look in /usr/src/linux and read the README file for instructions, or failing that look on your nearest sunsite mirror -- for example, ftp://src.doc.ic.ac.uk/packages/linux/sunsite-unc.edu/ -- for the kernel sources. Hint: if you've got X running, make xconfig gives you a graphical kernel configuration tool with online help.)

Once you've switched masquerading on, you control it using the cryptically-named ipfwadm utility: there's a HOWTO document giving detailed instructions in /usr/doc/HOWTO/mini/IP-Masquerade. (Note that I'm assuming you're still on a 2.0 kernel -- 2.2 will use a different utility, ipchains. I'm also assuming you're using a Linux distribution which comes with the full Linux documentation kit and sticks it in the usual place -- /usr/doc. There's the equivalent of a three-thousand page manual sitting there; so much for the nay-sayers who insist Linux comes with no support or documentation!) You tell the kernel that connections from other machines on your network are to be forwarded via your dial-up network connection, while incoming packets from the rest of the net can quietly not be forwarded. Then, whenever your cheap old 486 goes on line, so does the rest of your firewalled LAN.

This kind of cheap and cheerful router would cost quite a lot of money if you were to go to Cisco or Shiva and order one -- but you can build it out of an old PC in an hour or two at home. A small, slow box sitting in a corner without a monitor, chugging away for weeks on end without human intervention, doing something boring but critically important, is something that the industry at large has only just noticed and assigned a name to: the thin server. Traditionally, servers are big, expensive systems; thin servers, like the 486 sitting in a corner doing masquerading, are cheap. They do one thing, and do it well. This is one of the reasons Linux has a natural advantage in the server market: you can recycle fully depreciated kit and make use of it, for the cost of a two pound CD-ROM.

Oddly enough, the fact that Linux is the ideal operating system for a thin server (it's cheap, reliable and has networking facilities coming out of its ears) has not gone unnoticed. A few leading-edge companies have already jumped on the thin server bandwagon, and if you need a dedicated server that's a bit more gutsy than last year's old workhorse they deserve a look.

Probably the first to show a machine in public were Cobalt Networks Inc., whose Qube and Raq systems look rather promising. Based on the same MIPS processor architecture as the Silicon Graphics workstation family (and, less obviously, the Nintendo 64), the Qube (and the ISP-targeted 19" rack-mounted Raq) is a turnkey Linux server. While you can run one as a general-purpose Linux box, they're designed to be installed as shrink-wrapped web servers and proxy-servers. You can administer them remotely via a web-based interface; hopefully I'll be in a position to describe them in more detail in a future issue.

A second machine in this field is the Corel Netwinder. Corel have really jumped on the Linux bandwagon, and the Netwinder is their first venture into hardware. Based on the StrongARM RISC processor and running a derivative of the Red Hat distribution, the Netwinder has yet to hit these shores, although the specification is somewhat mouth-watering. About the size of an external modem, it comes in a couple of forms -- of which the one most likely to sell is (surprise!) the dedicated web server box.

As if that isn't enough, Linux is creeping into shrink-wrapped thin servers based on conventional PC hardware. For example, not many people know that McAfee WebShield (in its first, standalone incarnation, two years ago) was based on the little-known Mastodon distribution of Linux. WebShield was unusual in that it was a virus-scanning firewall. You stick two ethernet cards in a PC: one goes to your internal network, one goes to your internet connection, and every stream of packets that goes through it is decoded, scanned for viruses, and re-encoded on the fly. The box can be administered via a web-based front end, of course.

Thin servers are definitely surfacing as a major market for Linux. Why throw away old 486s when you can recycle them as printer servers, mail servers, web servers, and even virus-scanning firewalls? And why buy full-featured PCs or workstations as web servers when a shrink-wrapped box designed to fit in an equipment rack will happily run Apache or Netscape Commerce Server for months on end without crashes? Add the reminder that Sun is now supporting Linux as a leading-edge Java porting platform, and a horrible suspicion begins to dawn: Linux is the Network Computer platform everyone was talking about a year or two ago.

Crashes -- or the lack thereof -- are a sore point, and likely to be one of the issues that ultimately lets Linux gain a niche in the corporate market. A linux system running as a thin server is usually pretty reliable, with uptimes measured in months (barring power cuts). However, when you really need high availability, fault tolerant systems are where the smart money goes. Fault tolerant computers have redundant systems and can switch out components when they develop problems; they're also fiendishly expensive.

One attempt to deal with the price problem is to use a cluster of cheap workstations (cheap is relative, here!) that monitor each other, share files, and have hot backup servers waiting to take over whenever one of the front-line machines falters.

Note that the emphasis here is not on parallel processing -- as with the Beowulf system, which lets a gang of Linux boxes divide complex programs up between themselves and work on them in parallel -- but on availability. If a single-user PC goes down, one user is left tapping her fingers while it reboots: but if a big corporate server goes down, potentially thousands of people lose the ability to work until it's back again. So big servers can't afford to crash; and because all hardware sooner or later lets out a big puff of magic smoke, the solution is to have some spare hardware waiting in turn to take over the job. NT supports a degree of clustering via Wolfpack; so do all the commercial UNIXes. So, as of a few weeks ago, does Linux.

The Eddie system, set up by Ericsson Telecom, is an infrastructure for fault-tolerant clustering on Linux. There's a load-balancing DNS (domain name) server, so that people can always find you. There's an IP migration system, so that if one machine (or cluster of machines) goes off-line, another cluster can grab their network addresses and take over. There's a smart web gateway that routes requests to whichever machine is best able to handle them; and work is in progress on a replicated file system, where data written to a disk on one machine is replicated to all the others in the cluster. (If you need the latter right now, you can bodge up something based on RAID 1 mirroring and the network block device in the Linux 2.1 development kernels; but a properly designed one is likely to be a wee bit better.)
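The failover half of the idea is easy to model. Here's a Python sketch -- my own toy illustration, not Eddie's actual code: a dispatcher hands requests round-robin to whichever machines are still alive, so when one drops out the survivors quietly absorb its share of the work.

```python
# Sketch of high-availability dispatch: round-robin over the machines
# that a (hypothetical) monitoring process currently believes are alive.
class Cluster:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.alive = set(hosts)
        self.i = 0

    def mark_down(self, host):
        self.alive.discard(host)   # monitoring noticed a dead machine

    def mark_up(self, host):
        self.alive.add(host)       # ... and later noticed it come back

    def dispatch(self):
        # skip over dead machines; fail only if nothing is left
        for _ in range(len(self.hosts)):
            host = self.hosts[self.i % len(self.hosts)]
            self.i += 1
            if host in self.alive:
                return host
        raise RuntimeError("no servers left alive")

c = Cluster(["web1", "web2", "web3"])
print([c.dispatch() for _ in range(3)])   # ['web1', 'web2', 'web3']
c.mark_down("web2")                       # web2 crashes ...
print([c.dispatch() for _ in range(3)])   # ... and the others cover for it
```

The real thing has to handle DNS, IP address takeover, and shared state as well, which is exactly what the Eddie components listed above are for.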

Eddie basically makes it possible to take a gang of cheap PCs and turn them into a robust high-availability server cluster. Better still, it's free software; they intend to make their money by selling support (and if you're running a commercial high-availability cluster you probably want to pay for support anyway: if you're not commercial, though, this is the only game in town that won't charge you a thousand smackers per box).

So. Put it all together: thin servers, free high-availability clustering, ports of LDAP and Novell's NDS, Network Computers, Java, and the world's most flexible network operating system -- all for free. Pile on the applications that are being ported to it, and add a decent desktop like KDE. The big picture is total world domination in the server market: 1999 will be characterised by Linux moving into the corporate IT department in a big way and beginning to eat into Solaris, HP-UX, and (especially) SCO territory. And, as Microsoft are all too aware, if you control the servers you can dictate to the desktop; it's no good writing clients that use fancy proprietary protocols if the servers refuse to support them.

Sounds to me like the guys from Redmond have good reason to be worried.

Linux hardware

One question that comes up a lot is "what do I need to look for when buying a PC to run Linux on?"

A quick answer is, "if you already run Windows 95 on a PC, you've got everything you need to run Linux". It's not always accurate, but it describes the CPU, disk, and memory requirements fairly well: Linux needs no more of any of these than Windows, and can often get away with less (although the more the merrier). A sensible minimum spec these days is a P200, 64Mb of RAM, and at least 1Gb of disk space -- mind you, you'll be hard-pressed to buy a new machine with less.

But that's not where the story ends. Not all PC hardware is Linux-friendly. The first and biggest gotcha is equipment that relies on Windows software to make up for some sort of deficiency. Winmodems are a case in point. Most modems have a built-in microprocessor and ROM of their own; you can plug them into just about any computer and they'll work. Not so with Winmodems; much of their processing work is farmed out to a set of Windows drivers that run on the PC. No Windows? No modem.

Similar caveats apply to some printers; indeed, it's prudent to be suspicious of any hardware gadget with "Win" as a prefix to its name. (It's not necessarily incompatible, but it's a sign that you should double-check before spending money on it.)

Probably the most important item you can worry about is your hard disk subsystem: the drives and controllers. At this point it's worth mentioning a simple law -- SCSI rules. SCSI (Small Computer Systems Interface) is a standard bus that Macintoshes and other platforms have used for donkey's years in preference to such PC-specific items as IDE, EIDE, or the venerable ST-506 interface. Most PC motherboards don't have built-in SCSI controllers; you usually buy them as separate boards. Furthermore, SCSI drives tend to cost a bit more than EIDE drives; in general, prices are about a year behind in terms of the price/capacity ratio. However, SCSI kit is generally of higher performance. In particular, the SCSI bus permits asynchronous commands; your computer can send a stream of read/write requests to the drive without waiting for each one to complete in turn. While modern UDMA drives probably outclass older SCSI kit (certainly pre-SCSI-3), a machine equipped with a SCSI drive thrashes equivalent EIDE-dependent kit silly. Linux doesn't treat its storage in the way Windows does; it's very filesystem-intensive, and it's under just these conditions that SCSI hardware outshines the opposition.

Another advantage of SCSI is that it's possible to daisy-chain devices in a SCSI chain. On the PC I'm typing this on, there are two hard disks, a CD writer, a CDROM jukebox, a DAT drive, and -- sometimes -- a scanner; and it's all dangling off a single controller card.

Next most important is the display system. As a general rule of thumb, expect to run the X11 windowing system and some sort of desktop like KDE. X11 was designed for high-resolution imaging in the eighties, and consequently it's a bit of a pixel hog; it's almost unusable at resolutions of less than 800x600, and prefers lots of real estate: 1600x1200 resolution isn't unusual. Realistically, you will want a fairly fast video card with at least 4Mb of video RAM -- however, you will need to ensure it's supported by an X server (display driver) available to you: see http://www.xfree86.org/ for lists of supported cards. A general rule of thumb is that the latest and greatest new item probably won't be supported for three to six months after it comes out -- most hardware companies don't yet release Linux drivers with their new hardware. A secondary rule of thumb is to look for OpenGL compatibility; this is the most common 3D system used on Linux, and a few cards that come with hardware 3D support and have appropriate X11 servers will run OpenGL applications far faster than unaccelerated cards. Another point of note is that SuSE are supporting leading-edge graphics boards; the XSuSE servers can be found at http://www.suse.de/XSuSE/.

It goes without saying that a good multisync monitor is its own reward. After all, you've only got one pair of eyeballs!

Other news

The biggest late-breaking news of November was the change in licensing of Qt, the graphics library that the KDE desktop relies on. Some didn't consider Qt to be free software: although it was free for non-commercial use, the license was somewhat encumbered. Troll Tech have announced that as of Release 2.0 of Free Qt, a new license is going to be adopted. While the dust is still settling, it looks as if this will meet the open source requirements, making KDE unequivocally free software. This caused, understandably, some despondency in the GNOME desktop camp; with KDE 1.1 out by the time you read this and work in hand on the new, all-singing object-oriented KDE 2.0 desktop, GNOME seems to be further behind than ever, and the change in the Qt license has just removed one of the two main reasons for developing it in the first place.

Red Hat released version 5.2 of their distribution. A welcome improvement in stability, this release also comes with an expanded and improved manual, a CD full of demo versions of commercial software (plus some very welcome non-demos, like WordPerfect 8), and some really cool stickers. In not-entirely-unrelated news, Red Hat just moved to new premises -- seems they expect to be up to 120 staff by February. Meanwhile, Caldera say that shipments of their OpenLinux 1.3 distribution have taken off, SuSE seem to be doing well with the very polished and professional SuSE 5.3 -- and even Slackware have come out with a new (3.6) release for the first time in ages. Indeed, reporting on the doings of Linux distributors is getting boring -- the market is growing so fast that you'd have to be truly incompetent not to grow with it. Maybe I should look into Debian -- the one true GNU distribution with no commercial involvement -- for a change?

Linux 2.2 is very nearly out; it'll probably be released a day after the deadline on this column. Expect lots of words, as soon as I get it compiled and figure out what the most interesting angles are.

Bill Gates sold almost a million Microsoft shares in November, valued at something more than a hundred dollars each. I suppose he's decided to invest in Red Hat?

Meanwhile, Linux moves onto more platforms. It now runs on the Psion Series 5, although CF card support is still in development; meanwhile, rumour has it that someone inside IBM is porting Linux to run on a System 390 mainframe ...
