The Masters of Reverse Engineering meet the Lawyers from Hell
DVD, we are told, is the killer format that will replace the humble video cassette. It's the size of a CD, much higher quality than a VHS tape, and it's, um, it's digital. Yeah, that must be why it's such a huge improvement! It's digital and it's therefore somehow cool. Oh, and you can play it on your computer.
On such a mess of misconceptions hangs a story.
DVD movies fit on a four-point-something gigabyte disk because they use MPEG2 compression, a highly efficient tool for squeezing video data down to size. In addition to MPEG2, DVD tracks are scrambled using a content security scheme, CSS (the Content Scramble System). This underpins regional coding -- so you can't play a European-market DVD-ROM in an American player, for example -- and content encryption: for example, in the late (and not at all lamented) DIVX scheme, a pay-per-use variant on DVD, a payment made via an online service was used to download a key that would allow your player to run a movie just once. For normal DVD, each player needs a key in order to decrypt DVDs.
The DVD specifications are, interestingly, a guarded secret; if you want to build a DVD player you will need to go to the DVD industry association, pay them US $5000 for the documentation, and sign a Non-Disclosure Agreement (promising not to tell anyone else what they've told you). This has some undesirable side-effects. Distribution of open-source software based on this documentation would of course be illegal, because it would violate the NDA. So it should come as no surprise to learn that although a number of software DVD-decoders exist on Windows and MacOS, nobody has written one for Linux.
Note that this doesn't benefit the media combines who produce and sell movies; it merely shrinks their potential market a bit. The people who really benefit from this silly state of affairs are the manufacturers of DVD players -- who are effectively using the closed standard to maintain a cartel, keeping the price of entry high enough to deter amateurs.
In late 1999, some talented Linux hackers finally got pissed off with the lack of a DVD player for their favourite operating system. Working under the name of the Masters of Reverse Engineering, they cracked the CSS encoding scheme. Much to their surprise they discovered that it was basically secured by a weak 40-bit encryption scheme; worse, a commercial product -- Xing's XingDVD player -- not-so-cleverly shipped with an unencrypted player key. Using the Xing key as a starting point, the Masters of Reverse Engineering fed test tracks to the decoder until they had recovered the entire keyspace -- which took a matter of hours. The piece of software they released as a result, DeCSS, lets you pull tracks off a DVD-ROM and feed them to an MPEG2 movie player. Which means there's now an open source way to watch movies on Linux.
Unfortunately, some corporate types with more money than sense decided that this was a bad thing; if just anybody could write their own DVD player, their neat little cartel was dead. The hastily-formed DVD Control Association promptly filed for court injunctions against the developers of DeCSS -- and against seventy-odd web sites that held links to the software or the source code itself.
I have to abandon journalistic objectivity at this point and state that I think these people are scoundrels and fools. The argument they presented before a superior court judge in California in December, at the preliminary hearing, can best be characterised as a big lie: that DeCSS promotes piracy and will cause massive damage to the film and TV industry.
It is possible to copy a DVD-ROM bit-for-bit without decrypting it; large-scale pirates therefore have no reason to decode a disk at all. The only reason for decoding one is to watch it; thus, DeCSS doesn't promote piracy. It just serves to expand the range of platforms that DVDs can be played on. A secondary point is that if you understand the encoding scheme you can create your own DVD-ROMs. Without a tool like DeCSS, this requires very expensive mastering software or hardware. With an open source solution, DVD becomes an accessible medium for the general public.
It follows that the DVDCCA are lying. I believe that they're lying to protect a de-facto manufacturers' cartel, against the public interest: and also to assert their control over the DVD format. They're lying to keep small content providers out of the medium. They're lying because they don't know how to build an honest relationship with the public, whom they see as a resource to be exploited rather than as customers whose goodwill is worth keeping. And they're lying because they want to effectively extend their legal rights over the copyrighted material, to include the ability to say exactly what you may or may not do with a DVD-ROM you've bought -- what equipment you may use to play it (and, ultimately, how often you play it).
Luckily, the DVDCCA screwed up royally when they served the injunction against the web sites. Their initial fusillade bracketed Slashdot, a portal site that's the jewel in the crown of Andover.net, a public corporation. Unlike most of the enthusiast sites they targeted, Andover can afford to pay their own lawyers, and do the job properly. Moreover, the Electronic Frontier Foundation, a pressure group set up to defend civil liberties in cyberspace, is treating the case as a free speech issue; trying to shut people up by force of law goes down badly in America, and there are grounds to believe that the case will be dismissed under California's SLAPP legislation (which explicitly bans lawsuits intended to shut up members of the public who take exception to corporate actions and protest against them).
At the preliminary hearing, the whole audience in the public gallery burst out laughing when the DVDCCA lawyers asserted that DeCSS was a piracy tool. Whether the judge paid attention to this is a moot point, but he refused to sustain the preliminary injunction. A full hearing was scheduled for mid-January, and the firestorm should be over by the time you read this piece.
What we're seeing here is an example of a secretive trade organisation discovering, the hard way, that open source is like the internet; it treats information blockages as a malfunction and routes (or in this case, reverse-engineers) around them. By next year, this sort of thing should be unthinkable; no new data standard will be able to ignore a platform that has more than ten percent of the PC market. But for the rest of 2000, expect to see some sparks fly, as the lawyers from hell discover the Linux phenomenon.
Rebuild that kernel!
Kernel 2.4 is, if not out, then in pre-release. The timing is interesting; it's not a deliberate attempt to upstage Windows 2000, but it's by no means accidental. The gap between the 2.0 and 2.2 kernels was widely agreed to be too long; for that reason, the 2.4 kernel is a lot closer to its predecessor, and has come out a lot faster. (I wouldn't be surprised to see a 2.6 release before 2001, but I'm not betting on it. It all depends on how much pressure there is for a complete re-write of some chunks of the kernel that will be needed before 3.0.)
Before we talk about how to rebuild the kernel it's probably worth asking three questions: what is the kernel, why might we want to rebuild it, and what are the pitfalls of doing so?
The kernel is the software heart of Linux; it sits between the hardware of your computer and the applications you run. At one end it manages the actual hardware, pumping data onto hard disks via their controller, providing access to serial ports and modems, possibly letting applications like X talk to the video card via a frame buffer, and so on. (Note that the graphics system, X, is not part of the kernel -- unlike in MacOS or Windows.) At the other end, it provides a library of system calls that Linux applications can invoke to do various things -- open a file, apply a line discipline to a character device such as a serial line, change effective user ID, and so on. In general, if you add hardware to your Linux system you need to make sure that your kernel can recognise it: otherwise the kernel won't be able to make it accessible to applications via its standard interfaces.
The kernel is modular; that is, there's a central core that must be loaded at all times (to do memory allocation, schedule tasks, and so on), but much of it consists of separate modules that need only be loaded when some application wants to use them. For example you can build a kernel with parallel printer port support built into it, but it's usually more convenient to compile the parallel port drivers as separate modules, so they need only be loaded if you have plugged a printer or a zip drive or something into the computer; ditto support for things like modems, the PPP or SLIP dialup networking protocols, non-standard mice, and so on. One advantage of having the separate modules is that you can use the insmod command to load them -- and pass special directives to the modules (for example, memory addresses and interrupts if your devices are configured in a non-standard way). Lest this sound a bit too complicated, the modprobe utility normally loads modules automatically, on demand -- you can find out more about this in the Modules mini-HOWTO (see /usr/doc/HOWTO/mini on Red Hat or similar distributions).
(Note that you need to be root in order to load or unload modules manually.)
Most Linux distributions (such as SuSE and Red Hat) ship with fully modular kernels. They should automatically load modules to cope with whatever hardware they detect on your machine -- the exception is if you have a device that isn't configured as standard. You can look at your currently loaded modules using the lsmod command; you can also remove them using rmmod -- although this is not recommended unless you know exactly what you're doing!
You might want to rebuild your kernel for several reasons. Firstly, the kernel is not 100% bug-free. Sometimes, network denial-of-service attacks show up that can only be fixed by a major upgrade. (If you're running an internet server you really need to keep up to date -- the consequences of not being immune to all the known attacks are likely to be a bad hack, just when you least need it.) Or someone may have just discovered an obscure memory leak (a condition that stealthily grabs memory and doesn't release it afterwards, leading to your system running out of memory over a period of days or weeks). Secondly, you might have installed a new piece of hardware that your old kernel doesn't understand -- for example, a FireWire controller or a video board. In general, newer kernels contain more device drivers, and better support for new hardware. Finally, you might want to rebuild your kernel in order to experiment with a new development release, such as the 2.3 kernel (which will be Linux 2.4 when it's finished).
Rebuilding your kernel is a fairly straightforward process, but there is a sting in the tail: you can easily render your machine unbootable, or unable to access its hard disks! Because of this, it is a very good idea to plan out what you intend to do ahead of time -- and to make sure you plan to keep your existing kernel around as an emergency fallback. You can set up LILO, the Linux boot loader, to boot several different kernels depending on what you type at boot time. You can also keep a spare kernel on floppy disk, boot off the floppy, and tell the kernel to load the system on your hard disk, by typing a command like:

linux root=/dev/hda2

at the LILO: prompt (this tells LILO to boot the kernel called "linux" and tell the kernel that its root filesystem is on /dev/hda -- your first hard disk drive -- in partition 2, where you have presumably installed Linux). (For details of using LILO see the mini-HOWTOs, Multiboot-with-LILO and LILO; for details of the format of the /etc/lilo.conf file, which controls LILO, see the man page for lilo.conf.)
In a nutshell: the program /sbin/lilo reads the /etc/lilo.conf file, and scribbles a boot sector onto your hard disk. /etc/lilo.conf defines some names for kernels, each of which corresponds to a file containing a binary kernel -- which you compiled earlier. It also includes directives that tell the kernels about special hardware, which partition to look in for the root filesystem, whether or not to set up some ramdisks, and so on. You should copy your compiled kernels into /boot, then add a new entry to /etc/lilo.conf that assigns a name to the new kernel image file. Then you run /sbin/lilo, which writes the boot sector. Next time you boot, when you see the LILO: prompt you can type the name you assigned to your new kernel and it will (hopefully!) load. If it panics or doesn't work for some reason, you can reboot and type the name of your old kernel (you did remember to keep it around and assign it a name like "old" or "safe", didn't you?), then try again.
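To make that concrete, a minimal /etc/lilo.conf might look something like this (the kernel filenames, labels and partition numbers here are examples only; adjust them to your own system):

```
boot=/dev/hda                # write the boot sector to the first disk
prompt                       # show the LILO: prompt at boot time
timeout=50                   # wait 5 seconds, then boot the default
default=safe                 # entry to boot if you just hit return

image=/boot/vmlinuz-2.2.14   # the freshly built kernel
    label=new
    root=/dev/hda2           # partition holding the root filesystem
    read-only

image=/boot/vmlinuz-2.2.11   # the known-good fallback kernel
    label=safe
    root=/dev/hda2
    read-only
```

Remember that editing this file does nothing by itself; the boot sector is only updated when you re-run /sbin/lilo.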
(Be warned that there is no such thing as a guaranteed clean kernel build! Never, ever, delete your working kernel until you've proven that your new one boots happily and performs as required -- and always keep an entry in /etc/lilo.conf that points to it, so that if your new experiment fails you can still boot your system. Otherwise you will be sorry!)
Rebuilding the kernel itself is usually straightforward. First, you must ensure that you have the kernel source code and header files installed on your system -- the headers live in /usr/include, and the kernel source lives in /usr/src. If you are upgrading to a new kernel you should download the kernel source code (or just the upgrade patches, if you are intimate with the patch(1) utility) and put it in /usr/src. Normally, you will already have kernel sources in a directory called something like /usr/src/linux-2.2.11, with a symbolic link called /usr/src/linux pointing to this. Delete the link (/usr/src/linux) but keep your existing source directory. Then unpack the new source archive in /usr/src (which will probably create a new directory called linux). Finally, rename this to linux-2.x.y (where x.y is your version number), and create a new symbolic link to this directory:
ln -s /usr/src/linux-2.x.y /usr/src/linux
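If you're nervous about the symlink shuffle, you can rehearse it first in a scratch directory; this sketch uses made-up version numbers and touches nothing outside a temporary directory:

```shell
cd "$(mktemp -d)"            # scratch directory standing in for /usr/src
mkdir linux-2.2.11           # pretend this is the old source tree
ln -s linux-2.2.11 linux     # the usual "linux" symlink
rm linux                     # delete the link, keeping the old tree
mkdir linux-2.2.14           # pretend we've just unpacked the new source
ln -s linux-2.2.14 linux     # repoint the link at the new tree
readlink linux               # prints: linux-2.2.14
```

Note that rm removes only the symbolic link itself; the directory it pointed at is untouched.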
Next, you need to ensure that the rest of your system is compatible with this kernel version. If you're upgrading from, say, 2.2.5 to 2.2.14, you probably won't have any problems. However, if you're upgrading from a 2.0 series kernel, or testing a 2.3 series development kernel, other bits and pieces of your system may be obsolescent and incompatible with the new kernel. The file linux/Documentation/Changes is critically important at this point; it lists the minimum version numbers of other packages that must be present for this kernel to function properly. Look for "Current Minimal Requirements" and check the version numbers of the packages listed -- the file also tells you how to check them. If your system is too old, you will have to download and compile a lot of upgrades -- and if you don't, you may render your system inoperable!
I'm going to sidestep the question of how to get your system ready for a new kernel version. Suffice to say, books could be written about all the possibilities. You will need to learn how to read the documentation and follow instructions to the letter; you'll also need to learn about make and Makefiles (make(1) is the standard UNIX tool for automatically driving the C compiler, equivalent to a project in an integrated development environment, but less friendly and more powerful).
Now your system's ready to start building a new kernel; the first stage is to read the file README in /usr/src/linux. This tells you how to install the kernel sources and rebuild your kernel, oddly enough. In general, you will need to follow the directions under "INSTALLING the kernel" (including typing various magical incantations, such as "make mrproper", while logged in as root or in a root terminal window), then run either "make menuconfig" or, if you've got a working graphical desktop, "make xconfig".
The options menuconfig and xconfig both give you hierarchical, menu- driven interfaces that allow you to choose what features to build into your new kernel and which to compile as modules. In general, there are about thirty different categories on a 2.2 kernel; running through all the options is an exhausting process, even when you know what they all are! If you don't have some variety of hardware you can safely switch a subsystem off altogether -- for example, there's no point compiling ISDN support if you don't have an ISDN terminal adapter, no point using Video4Linux unless you have a video or radio card or a related device like a camera or frame grabber, and so on. Some of the other options you may never have heard of or may not understand; it's generally sensible to accept the default settings in these cases.
Both menu-driven interfaces give you some rather terse online help that describes specific drivers; nevertheless, they tend to assume that you understand what you're doing. There's more documentation in the directory /usr/src/linux/Documentation; it's a very good idea to make a note of what hardware is plugged into your computer, then read all the documentation relating to it before you start configuring your kernel. (Some bits are extremely esoteric; for example, the Linux kernel supports obscure WAN hardware, X.25 PADs, and strange network interfaces that are of interest only to mainframe gurus and ISP tech support staff. Don't worry about it. Linux's power stems from its eclectic nature, but that doesn't mean you have to use it all at once, any more than desktop publishing means you have to use every font on your hard disk in any document you produce.)
Having selected a bunch of options, you exit and save your configuration (in a file called .config, stored in /usr/src/linux). You then type another magical incantation from the README file:
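The saved file is plain text, and worth a glance. A fragment of a 2.2-era .config looks something like this ("=y" means built into the kernel, "=m" means compiled as a module, and the commented "is not set" line is how a disabled option is recorded; which options you see depends on your kernel version):

```
CONFIG_MODULES=y
CONFIG_PARPORT=m
CONFIG_PRINTER=m
# CONFIG_ISDN is not set
```

You can keep a copy of a known-good .config and reuse it when you upgrade, rather than answering every question from scratch each time.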
make dep && make bzImage && make modules && make modules_install
(The && is the shell logical-AND operator; if the clause on the left is successful, execution proceeds to the next clause on the right, otherwise the whole sequence fails.)
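You can see the short-circuit behaviour for yourself with a trivial example:

```shell
true  && echo "this line is printed"   # left side succeeded, so echo runs
false && echo "this line is not"       # left side failed, echo is skipped
echo "and the script itself carries on regardless"
```

This is exactly why the build incantation is chained with &&: if "make dep" fails, there's no point wasting an hour compiling a kernel image from a broken dependency tree.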
Note that modules are installed under /lib/modules/2.x.y (one directory per kernel version), so that you can have different sets of modules for different kernel versions coexisting on the same system.
Once you've compiled and installed your modules, you need to take your new kernel -- sitting in /usr/src/linux/arch/i386/boot as bzImage, or under another appropriate subdirectory of "arch" if you're on a non-Intel platform -- and copy it into /boot, with an appropriate name. Then edit your lilo.conf file, run /sbin/lilo to tell the boot loader about the new kernel, shut down in good order and reboot (e.g. by typing /sbin/telinit 6).
Then see your new kernel get as far as mounting the root filesystem, panic, and lock the computer so solid that you need to power cycle it before you can boot back into your original kernel to see what went wrong.
Don't worry! This is all part of the learning experience. You'll get it right on the second or seventh attempt, and then you'll have the joy of knowing you're running a system that's tailored precisely to your requirements. And who knows? Maybe you can even get that weird eight-port serial card your uncle left you in his will working ...