
The Mosix HOWTO

Kris Buytaert

buytaert@be.stone-it.com

Revision History:
Revision v0.15, 13 March 2002
Revision v0.13, 18 Feb 2002
Revision ALPHA 0.03, 09 October 2001

Table of Contents
1. Introduction
   1.1. Introduction
   1.2. Disclaimer
   1.3. Distribution policy
   1.4. New versions of this document
   1.5. Feedback
2. So what is Mosix Anyway?
   2.1. A very, very brief introduction to clustering
   2.2. The story so far
   2.3. Mosix in action: An example
   2.4. Components
   2.5. Work in Progress
3. Features of Mosix
   3.1. Pros of Mosix
   3.2. Cons of Mosix
   3.3. Extra Features in openMosix
4. Requirements and Planning
   4.1. Hardware requirements
   4.2. Hardware Setup Guidelines
   4.3. Software requirements
   4.4. Planning your cluster
5. Distribution specific installations
   5.1. Installing Mosix
   5.2. Getting Mosix
   5.3. Getting openMosix
   5.4. openMosix General Instructions
   5.5. RedHat
   5.6. Suse 7.1 and Mosix
   5.7. Debian and Mosix
   5.8. Other distributions
6. Cluster Installation
   6.1. Cluster Installations
   6.2. Installation scripts [LUI, ALINKA]
   6.3. The easy way: Automatic installation
   6.4. The hard way: When scripts don't work
   6.5. Kick Start Installations
   6.6. DSH, Distributed Shell
7. ClumpOS
   7.1. What is Clump/OS
   7.2. How does it work
   7.3. Requirements
   7.4. Getting Started
   7.5. Problems?
   7.6. Expert Mode
8. Administrating openMosix
   8.1. Basic Administration
   8.2. Configuration
   8.3. Information about the other nodes
   8.4. Additional information about processes
   8.5. The userspace tools
9. Tuning Mosix
   9.1. Optimising Mosix
   9.2. Where to place your files
10. Special Cases
   10.1. Laptops and PCMCIA Cards
   10.2. Diskless nodes
   10.3. Very large clusters
11. Common Problems
   11.1. My processes won't migrate
   11.2. setpe reports
   11.3. I don't see all my nodes
12. Other Programs
   12.1. mexec
   12.2. mosixview
   12.3. mpi
   12.4. mps
   12.5. pmake
   12.6. pvm
   12.7. qps
13. Hints and Tips
   13.1. Locked Processes
   13.2. Choosing your processes
A. More Info
   A.1. Further Reading
   A.2. Links
   A.3. Supporting Mosix
B. Credits
C. GNU Free Documentation License
   0. PREAMBLE
   1. APPLICABILITY AND DEFINITIONS
   2. VERBATIM COPYING
   3. COPYING IN QUANTITY
   4. MODIFICATIONS
   5. COMBINING DOCUMENTS
   6. COLLECTIONS OF DOCUMENTS
   7. AGGREGATION WITH INDEPENDENT WORKS
   8. TRANSLATION
   9. TERMINATION
   10. FUTURE REVISIONS OF THIS LICENSE
   How to use this License for your documents

Chapter 1. Introduction

1.1. Introduction

 This document gives a brief description of Mosix, a software package that turns a network of GNU/Linux computers into a computer cluster. Along the way, some background on parallel processing is given, as well as a brief introduction to programs that make special use of Mosix's capabilities. The HOWTO expands on the existing documentation by providing more background information and discussing the quirks of various distributions.

 Kris Buytaert got involved in this piece of work when Scot Stevenson was looking for somebody to take over the job; this was during February 2002. The first new versions of this HOWTO are rewrites of the Mosix HOWTO draft and the SuSE Mosix HOWTO.

 ("FEHLT", in case you are wondering, is German for "missing"). You will notice that some of the headings are not as serious as they could be. Scot had planned to write the HOWTO in a slightly lighter style, as the world (and even the part of the world with a burping penguin as a mascot) is full of technical literature that is deadly. Therefore some parts still have these comments

 Initially this was a draft version of a text intended to help Linux users with SuSE distributions install the Mosix cluster computer package - in other words, to turn networked computers running SuSE Linux into a Mosix cluster. This HOWTO is written on the basis of a monkey-see, monkey-do knowledge of Mosix, not with any deep insight into the workings of the system. 

 The original text did not cover Mosix installations based on the 2.4.* kernel. Note that SuSE 7.1 does not ship with the vanilla sources to that kernel series. 


1.2. Disclaimer

Use the information in this document at your own risk. I disavow any potential liability for the contents of this document. Use of the concepts, examples, and/or other content of this document is entirely at your own risk.

All copyrights are owned by their owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark.

Naming of particular products or brands should not be seen as endorsements.

You are strongly advised to make a backup of your system before a major installation, and to make backups at regular intervals.


1.3. Distribution policy

 Copyright (c) 2002 by Kris Buytaert and Scot W. Stevenson. This document may be distributed under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the appendix entitled "GNU Free Documentation License". 


1.4. New versions of this document

 New versions of this document can be found on the web pages of the Linux Documentation Project at http://www.linuxdoc.org in the appropriate subfolder. Changes to this document will usually be discussed on the Mosix Mailing List. See the Mosix Homepage http://www.mosix.org for details.


1.5. Feedback

Currently this HOWTO is maintained by Kris Buytaert; please send questions about Mosix to the mailing list.

 Please send comments, questions, bugfixes, suggestions, and of course praise about this document to the author.

 If you have a technical question about Mosix itself, please post it on the Mosix mailing list. Do not, repeat, do not send it to Scot, who doesn't know squat about the internals, finds anything written in C++ terribly confusing, and learned Python mainly because the rat on the book cover was so cute.


Chapter 2. So what is Mosix Anyway?

2.1. A very, very brief introduction to clustering

 Most of the time, your computer is bored. Start a program like xload or top that monitors your system use, and you will probably find that your processor load is not even hitting the 1.0 mark. If you have two or more computers, chances are that at any given time, at least one of them is doing nothing. Unfortunately, when you really do need CPU power - during a C++ compile, or while encoding Ogg Vorbis music files - you need a lot of it at once. The idea behind clustering is to spread these loads among all available computers, using the resources that are free on other machines.
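 For instance, a quick way to check how busy a machine is from the shell (these are standard commands on any GNU/Linux system):

uptime                 # the three numbers at the end are the 1, 5 and 15 minute load averages
cat /proc/loadavg      # the same load averages, straight from the kernel

 A load average well below 1.0 means the processor is mostly idle.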

 The basic unit of a cluster is a single computer, also called a "node". Clusters can grow in size - they "scale" - by adding more machines. A cluster as a whole will be more powerful the faster the individual computers and the faster their connection speeds are. In addition, the operating system of the cluster must make the best use of the available hardware in response to changing conditions. This becomes more of a challenge if the cluster is composed of different hardware types (a "heterogeneous" cluster), if the configuration of the cluster changes unpredictably (machines joining and leaving the cluster), and if the loads cannot be predicted ahead of time.


2.1.1. A very, very brief introduction to clustering

2.1.1.1. HPC vs Failover vs Loadbalancing

Basically there are three types of clusters: Failover, Loadbalancing and High Performance Computing. The most deployed ones are probably the Failover cluster and the Loadbalancing cluster.

Failover clusters consist of two or more network-connected computers with a separate heartbeat connection between the hosts. The heartbeat connection is used to monitor whether all the services are still available; as soon as a service on one machine breaks down, the other machine tries to take over.

With loadbalancing clusters the concept is that when a request for, say, a webserver comes in, the cluster checks which machine is the least busy and then sends the request to that machine. Most of the time a loadbalancing cluster is also a failover cluster, but with the extra load-balancing functionality and often with more nodes.

The last variation of clustering is the High Performance Computing cluster, which is configured specifically to give data centers that require extreme performance the performance they need. Beowulfs have been developed especially to give research facilities the computing speed they need. These kinds of clusters also have some loadbalancing features: they try to spread different processes over more machines in order to gain performance. But what it mainly comes down to in this situation is that a process is parallelised, and routines that can run separately are spread over different machines instead of having to wait until they are done one after another.


2.1.1.2. Mainframes and supercomputers vs. clusters

Traditionally mainframes and supercomputers have only been built by a select number of vendors; a company or organisation that required the performance of such a machine had to have a huge budget available for its supercomputer. Many universities could not afford the cost of a supercomputer, so other alternatives were researched. The concept of a cluster was born when people first tried to spread different jobs over more computers and then gather back the data those jobs produced. With cheaper and more common hardware available to everybody, results similar to real supercomputers were only to be dreamt of during the first years, but as the PC platform developed further, the performance gap between a supercomputer and a cluster of multiple personal computers became smaller.


2.1.1.3. Cluster models [(N)UMA, DSM, PVM/MPI]

There are different ways of doing parallel processing: (N)UMA, DSM, PVM and MPI are all different kinds of parallel processing schemes.

(N)UMA, (Non-)Uniform Memory Access machines, for example, have shared access to the memory where they can execute their code. In the Linux kernel there is a NUMA implementation that varies the memory access times for different regions of memory. It is then the kernel's task to use the memory that is closest to the CPU it is using.

DSM 

PVM and MPI are the tools most commonly used when people talk about GNU/Linux based Beowulfs. MPI stands for Message Passing Interface; it is the open standard specification for message-passing libraries. MPICH is one of the most used implementations of MPI; next to MPICH you can also use LAM, another implementation of MPI based on the free reference implementation of the libraries.

PVM, or Parallel Virtual Machine, is another cousin of MPI that is also quite often used as a tool to create a Beowulf. PVM lives in userspace, so no special kernel modifications are required; basically any user with enough rights can run PVM.
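To give an idea of what using such a library looks like in practice (this is independent of Mosix), compiling and launching a small MPI program with MPICH typically goes along these lines; the program name and the machine file are made-up examples:

mpicc -o hello hello.c                           # compile against the MPI library
mpirun -np 4 -machinefile ~/machines ./hello     # start 4 processes on the hosts listed in ~/machines

PVM works in a similar spirit, but jobs are usually started from within the PVM console or through the pvm daemons.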


2.1.1.4. Mosix's role

 The Mosix software package turns networked computers running GNU/Linux into a cluster. It automatically balances the load between different nodes of the cluster, and nodes can join or leave the running cluster without disruption. The load is spread out among nodes according to their connection and CPU speeds.

 Since Mosix is part of the kernel and maintains full compatibility with normal Linux, a user's programs, files, and other resources will all work as before with no changes necessary. The casual user will not notice the difference between Linux and Mosix. To him, the whole cluster will function as one (fast) GNU/Linux system.


2.2. The story so far

2.2.1. Historical Development

 The name "Mosix" comes from FEHLT. The 6th incarnation of Mosix was developed for BSD/OS. GNU/Linux was chosen as a development platform for the 7th incarnation in DATE_FEHLT because of 


2.2.2. Current state

 Like most active Open Source programs, Mosix's rate of change tends to outstrip the followers' ability to keep the documentation up to date. See the Mosix Home Page for current news. The following relates to Mosix VERSION FEHLT for the Linux kernel FEHLT as of DATUM FEHLT:


2.2.3. openMosix

openMosix exists in addition to whatever you find at mosix.org, and in full appreciation of and respect for Prof. Barak's leadership in the outstanding Mosix project.

Moshe Bar has been involved for a number of years with the Mosix project (www.mosix.com) and was co-project manager of the Mosix project and general manager of the commercial Mosix company.

After a difference of opinions on the commercial future of Mosix, he started a new clustering company - Qlusters, Inc. - and Prof. Barak decided not to participate for the moment in this venture (although he did seriously consider joining) and has held long-running negotiations with investors. It appears that Mosix is no longer openly supported as a GPL project. Because there is a significant user base out there (about 1000 installations world-wide), Moshe Bar has decided to continue the development and support of the Mosix project under a new name, openMosix, under the full GPL2 license. Whatever code in openMosix comes from the old Mosix project is Copyright 2002 by Amnon Barak. All the new code is Copyright 2002 by Moshe Bar.

openMosix is a Linux-kernel patch which provides full compatibility with standard Linux for IA32-compatible platforms. The internal load-balancing algorithm transparently migrates processes to other cluster members. The advantage is better load-sharing between the nodes. The cluster itself tries to optimize utilization at any time (of course the sysadmin can affect this automatic load-balancing by manual configuration at runtime).

This transparent process-migration feature makes the whole cluster look like a BIG SMP system with as many processors as available cluster nodes (multiplied by two for dual-processor systems, of course). openMosix also provides a powerful cluster filesystem optimized for HPC applications, which, unlike NFS, provides cache consistency, time-stamp consistency and link consistency.

There could (and will) be significant changes in the architecture of future openMosix versions. New concepts about auto-configuration, node discovery and new user-land tools are discussed on the openMosix mailing list.

To approach standardization and future compatibility, the proc interface changed from /proc/mosix to /proc/hpc, and /etc/mosix.map was replaced by /etc/hpc.map. Adapted command-line user-space tools for openMosix are already available on the web page of the project, and as of its current version (1.1) Mosixview supports openMosix as well.
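A quick way to check which interface your kernel provides; the exact contents of these directories differ between versions, so treat the listing itself as version-dependent:

ls /proc/mosix      # present on a Mosix kernel
ls /proc/hpc        # present on an openMosix kernel
cat /etc/hpc.map    # the node map read by the openMosix user-space tools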

The hpc.map will be replaced in the future by a node auto-discovery system.

openMosix is supported by various competent people (see www.openMosix.org) working together around the world. The goal of the project is to create a standardized clustering environment for all kinds of HPC applications.

openMosix also has a project web page at http://openMosix.sourceforge.net with a CVS tree and mailing lists for developers and users.


2.3. Mosix in action: An example

 Mosix clusters can take various forms. To demonstrate, let's assume you are a student and share a dorm room with a rich computer science guy, with whom you have linked computers to form a Mosix cluster. Let's also assume you are currently converting music files from your CDs to Ogg Vorbis for your private use, which is legal in your country. Your roommate is working on a project in C++ that he says will bring World Peace. However, at just this moment he is in the bathroom doing unspeakable things, and his computer is idle.

 So when you start a program called FEHLT to convert Bach's .... from .wav to .ogg format, the Mosix routines on your machine compare the load on both nodes and decide that things will go faster if that process is sent from your Pentium-233 to his Athlon XP. This happens automatically - you just type or click your commands as you would if you were on a standalone machine. All you notice is that when you start two more coding runs, things go a lot faster, and the response time doesn't suffer.

 Now while you're still typing ...., your roommate comes back, mumbling something about red chile peppers in cafeteria food. He resumes his tests, using a program called 'pmake', a version of 'make' optimized for parallel execution. Whatever he's doing, it uses up so much CPU time that Mosix even starts to send subprocesses to your machine to balance the load. 

 This setup is called single-pool: all computers are used as a single cluster. The advantage/disadvantage of this is that your computer is part of the pool: your stuff will run on other computers, but their stuff will run on yours, too.


2.4. Components

2.4.1. Process migration


2.4.2. The Mosix File System (MFS)


2.4.3. Direct File System Access (DFSA)

Both Mosix and openMosix provide a cluster-wide filesystem (MFS) with the DFSA option (Direct File System Access). It provides access to all local and remote filesystems of the nodes in a Mosix or openMosix cluster.
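As a small illustration of what that means in practice (the node IDs here are just examples; the /mfs layout is described in the MFS section later in this HOWTO):

ls /mfs/2/etc                                 # list /etc on the node with openMosix ID 2
cp /mfs/3/var/log/messages /tmp/messages.n3   # pull a log file from node 3 to the local disk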


2.5. Work in Progress

2.5.1. Network RAM


2.5.2. Migratable sockets


2.5.3. High availability


Chapter 3. Features of Mosix

3.1. Pros of Mosix

No extra packages required

No Code changes required


3.2. Cons of Mosix

Kernel Dependent

Not everything works this way
Shared memory issues

Issues with Multiple Threads not gaining performance.

You won't gain performance when running one single process, such as your browser, on a Mosix cluster: the process won't spread itself over the cluster. Except, of course, that your process may migrate to a faster machine.
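A simple way to see the difference yourself: start several independent CPU-bound processes instead of one, and the load gets spread. Any CPU burner will do; the awk one-liner below is just an example:

for i in 1 2 3 4; do
    awk 'BEGIN { for (i = 0; i < 10000000; i++) x += i }' &
done

Watch the processes with top on the different nodes: a single process stays on one machine (or migrates as a whole), while several independent processes can be balanced across the cluster.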


3.3. Extra Features in openMosix


Chapter 4. Requirements and Planning

4.1. Hardware requirements

Installing a basic cluster requires at least two machines connected by a network, either using a cross cable between the two network cards, or a switch or hub. Of course, the faster your network cards, the better performance you will get from your cluster. These days Fast Ethernet is standard; putting multiple ports in a machine isn't that difficult, but make sure to connect them through different physical networks in order to gain the speed you want. Gigabit Ethernet is getting cheaper every day, but I suggest that you don't rush to the shop to spend your money before you have actually tested your setup with multiple 100Mbit cards and noticed that you really do need the extra network capacity.


4.2. Hardware Setup Guidelines

Setting up a big cluster requires some thinking: where are you going to put the machines? Not under a table somewhere or in the middle of your office. It's fine if you just want to do some small tests, but if you are planning to deploy an N-node cluster you will have to make sure that the environment that will hold these machines is capable of doing so. I'm talking about preparing one or more 19" racks to host the machines and configuring the appropriate network topology, either straight, single connected, or even a one-to-one cross-connected network between all your nodes. You will also need to make sure that there is enough power to support such a range of machines, that your air conditioning system supports the load, and that in case of power failure your UPS can cleanly shut down all the required systems. You might want to invest in a KVM switch to facilitate access to the machines' consoles. But even if you don't have the number of nodes that justifies such investments, make sure that you can always easily access the different nodes; you never know when you have to replace a fan or a hard disk of a machine in trouble. If that means that you have to unload a stack of machines to reach the bottom one, thereby shutting down your cluster, you are in trouble.


4.3. Software requirements

The systems we plan to use will need a basic Linux installation of your choice: RedHat, SuSE, Debian or another distribution, it doesn't really matter which one. What does matter is that the kernel is at least at the 2.4 level, that your network cards are configured correctly, and that you have a healthy amount of swap space.
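A few quick checks before you go on (standard commands; the node name in the ping line is a placeholder):

uname -r              # kernel version, should be 2.4.x
cat /proc/swaps       # is there a healthy amount of swap configured?
/sbin/ifconfig        # are the network cards up with the right addresses?
ping -c 3 node2       # can this machine reach the other nodes?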


4.4. Planning your cluster

How to configure MOSIX clusters with a pool of servers and a set of (personal) workstations:

* Single-pool: all the servers and workstations are used as a single cluster. Install the same "mosix.map" in all the computers, with the IP addresses of all the computers. Advantage/disadvantage: your workstation is part of the pool.

* Server-pool: servers are shared while workstations are not part of the cluster. Install the same "mosix.map" in all the servers, with the IP addresses of only the servers. Advantage/disadvantage: remote processes will not move to your workstation; you need to log in to one of the servers to use the cluster.

* Adaptive-pool: servers are shared while workstations join or leave the cluster, e.g. from 5PM to 8AM. Install the same "mosix.map" in all the computers, with the IP addresses of all the servers and workstations, then use a simple script (a sketch follows this list) to decide whether MOSIX should be activated or deactivated. Advantage/disadvantage: remote processes can use your workstation when you are not using it.
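The "simple script" for the adaptive pool is not part of Mosix; the following is only a minimal sketch of the idea, assuming the init script is installed as /etc/init.d/mosix (as described later in this HOWTO) and driven by two cron entries. The file name and the times are examples only:

# /etc/cron.d/mosix-workstation  (hypothetical file)
# join the cluster at 17:00, leave it again at 08:00
0 17 * * *  root  /etc/init.d/mosix start
0 8  * * *  root  /etc/init.d/mosix stop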


Chapter 5. Distribution specific installations

5.1. Installing Mosix

This chapter deals with installing Mosix and openMosix on different distributions. It won't be an exhaustive list of all the possible combinations; however, throughout the chapter you should find enough information to install Mosix in your environment.

Techniques for installing multiple machines with Mosix will be discussed in the next chapter.


5.2. Getting Mosix


5.3. Getting openMosix


5.4. openMosix General Instructions

5.4.1. Kernel Compilation

 Always use pure vanilla kernel sources from e.g. www.kernel.org to compile an openMosix kernel! Be sure to use the right openMosix version for your kernel version. Do not use the kernel that comes with any Linux distribution; it won't work.

Download the current version of openMosix and untar it in your kernel source directory (e.g. /usr/src/linux-2.4.16). If your kernel source directory is anything other than "/usr/src/linux-[version_number]", at least the creation of a symbolic link to "/usr/src/linux-[version_number]" is required. Now apply the patch using the patch utility:

patch -Np1 < openMosix1.5.2moshe

This command now displays a list of patched files from the kernel sources. Enable the openMosix options in the kernel configuration, e.g.:

...
CONFIG_MOSIX=y
# CONFIG_MOSIX_TOPOLOGY is not set
CONFIG_MOSIX_UDB=y
# CONFIG_MOSIX_DEBUG is not set
# CONFIG_MOSIX_CHEAT_MIGSELF is not set
CONFIG_MOSIX_WEEEEEEEEE=y
CONFIG_MOSIX_DIAG=y
CONFIG_MOSIX_SECUREPORTS=y
CONFIG_MOSIX_DISCLOSURE=3
CONFIG_QKERNEL_EXT=y
CONFIG_MOSIX_DFSA=y
CONFIG_MOSIX_FS=y
CONFIG_MOSIX_PIPE_EXCEPTIONS=y
CONFIG_QOS_JID=y
...

and compile it with:

make dep bzImage modules modules_install

After compilation, install the new kernel with the openMosix options within your boot loader, e.g. insert an entry for the new kernel in /etc/lilo.conf and run lilo after that.
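Putting the steps together, a build against a 2.4.16 tree could look roughly like this; the patch file name is hypothetical and will differ depending on the openMosix release you downloaded:

cd /usr/src/linux-2.4.16
gunzip -c /tmp/openMosix-2.4.16.gz | patch -Np1     # apply the matching openMosix patch
make menuconfig                                     # enable the CONFIG_MOSIX_* options shown above
make dep bzImage modules modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.16-openmosix
vi /etc/lilo.conf                                   # add an entry for the new kernel image
lilo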

Reboot, and your openMosix cluster node is up!


5.4.2. hpc.map

 Syntax of the /etc/hpc.map file: before starting openMosix there has to be an /etc/hpc.map configuration file on each node, and it must be the same on every node. The hpc.map contains three space-separated fields:

openMosix-Node_ID   IP-Address (or hostname)   Range-size

An example hpc.map could look like this:

1 node1 1
2 node2 1
3 node3 1
4 node4 1

or

1 192.168.1.1 1
2 192.168.1.2 1
3 192.168.1.3 1
4 192.168.1.4 1

or, with the help of the range-size, both of these examples are equivalent to:

1 192.168.1.1 4

openMosix "counts up" the last byte of the IP address of the node according to its openMosix ID. (If you use a range-size greater than 1 you have to use IP addresses instead of hostnames.)

If a node has more than one network interface, it can be configured with the ALIAS option in the range-size field (which is equivalent to setting the range-size to 0), e.g.:

1 192.168.1.1 1
2 192.168.1.2 1
3 192.168.1.3 1
4 192.168.1.4 1
4 192.168.10.10 ALIAS

Here the node with openMosix ID 4 has two network interfaces (192.168.1.4 and 192.168.10.10) which are both visible to openMosix.

Always be sure to run the same openMosix version AND configuration on each of your Cluster nodes!

Start openMosix with the "setpe" utility on each node:

setpe -w -f /etc/hpc.map

Execute this command (which will be described later on in this HOWTO) on every node in your openMosix cluster. The installation is now finished: the cluster is up and running :)
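If the nodes already trust each other over SSH, a small loop saves logging in to every console by hand; the host names here are placeholders:

for node in node1 node2 node3 node4; do
    ssh root@$node "setpe -w -f /etc/hpc.map"
done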


5.4.3. MFS

 First, the CONFIG_MOSIX_FS option in the kernel configuration has to be enabled. If the current kernel was compiled without this option, recompilation with this option enabled is required. Also, the UIDs and GIDs must be the same on all nodes in the cluster. The CONFIG_MOSIX_DFSA option in the kernel is optional, but of course required if DFSA should be used. To mount MFS on the cluster there has to be an additional fstab entry in each node's /etc/fstab.

For DFSA enabled:

mfs_mnt /mfs mfs dfsa=1 0 0

For DFSA disabled:

mfs_mnt /mfs mfs dfsa=0 0 0

The syntax of this fstab entry is:

[device_name] [mount_point] mfs defaults 0 0

After mounting the /mfs mount point on each node, each node's filesystem is accessible through the /mfs/[openMosix_ID]/ directories.

With the help of some symbolic links, all cluster nodes can access the same data, e.g. /work on node1:

on node2 : ln -s /mfs/1/work /work
on node3 : ln -s /mfs/1/work /work
on node4 : ln -s /mfs/1/work /work
...

Now every node can read from and write to /work!

The following special files are excluded from the MFS:

the /proc directory
special files which are not regular files, directories or symbolic links, e.g. /dev/hda1

Creating links like:

ln -s /mfs/1/mfs/1/usr

or

ln -s /mfs/1/mfs/3/usr

is invalid.

The following system calls are supported without sending the migrated process (which executes the call on its remote node) back to its home node:

 read, readv, write, writev, readahead, lseek, llseek, open, creat, close, dup, dup2, fcntl/fcntl64, getdents, getdents64, old_readdir, fsync, fdatasync, chdir, fchdir, getcwd, stat, stat64, newstat, lstat, lstat64, newlstat, fstat, fstat64, newfstat, access, truncate, truncate64, ftruncate, ftruncate64, chmod, chown, chown16, lchown, lchown16, fchmod, fchown, fchown16, utime, utimes, symlink, readlink, mkdir, rmdir, link, unlink, rename

Here are situations when system calls on DFSA mounted filesystems may not work:

different mfs/dfsa configuration on the cluster nodes
dup2 if the second file-pointer is non-DFSA
chdir/fchdir if the parent dir is non-DFSA
pathnames that leave the DFSA filesystem
when the process which executes the system call is being traced
if there are pending requests for the process which executes the system call




5.5. RedHat


5.6. Suse 7.1 and Mosix

5.6.1. Versions Required

The following is based on using SuSE 7.1 (German Version), Linux Kernel 2.2.19, and Mosix 0.98.0. 

The Linux kernel 2.2.18 sources are part of the SuSE distribution. Do not use the default SuSE 2.2.18 kernel, as it is heavily patched with SuSE stuff. Get the patch for 2.2.19 from your favorite mirror (MISSING: URL HERE). If there are further patches for the 2.2.* kernels by the time you read this text, get those, too.

 If one of your machines is a laptop with a network connection via pcmcia, you will need the pcmcia sources, too. They are included in the SuSE distribution as MISSING: RPM HERE.

 Mosix 0.98.0 for the 2.2.19 kernel can be found on http://www.mosix.org as MOSIX-0.98.0.tar.gz . While you are there, you might want to get some of the contributed software like qps or mtop. Again, if there is a version more current than 0.98.0 by the time you read this, get it instead.

 SuSE 7.1 ships with a Mosix package as an RPM (MISSING: RPM HERE). Ignore this package. It is based on kernel 2.2.18 and seems to have been modified by SuSE (see /usr/share/doc/packages/mosix/README.SUSE). You are better off getting the Mosix sources and installing from scratch.


5.6.2. Installation

We're assuming your hardware and basic Linux system are all set up correctly and that you can at least telnet (or ssh) between the different machines. The procedure is described for one machine. Log in as root. Install the sources for the 2.2.18 Kernel in /usr/src. SuSE will place them there automatically as /usr/src/linux-2.2.18 if you install the RPM RPM NAME. Rename the directory to /usr/src/linux-2.2.19. Remove the existing link /usr/src/linux and create a new one to this directory with

ln -s /usr/src/linux-2.2.19 linux

(assuming you are in /usr/src). Patch the kernel to 2.2.19 (or whatever the current version is). If you do not know how to do this, check the Linux Kernel HOWTO. Make a directory /usr/src/linux-2.2.19-mosix and copy the contents of the vanilla kernel /usr/src/linux-2.2.19 there with the command

cp -rp linux-2.2.19/* linux-2.2.19-mosix/

This gives you a clean backup kernel to fall back on if something goes wrong. Remove the /usr/src/linux link (again). Create a link /usr/src/linux to /usr/src/linux-2.2.19-mosix with

ln -s /usr/src/linux-2.2.19-mosix linux

to make life easier. Change to /tmp, copy the Mosix sources there and unpack them with the command

tar xfz MOSIX-0.98.0.tar.gz

Do not unpack the resulting tar archives such as /tmp/user.tar that appear.
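For reference, the sequence described above boils down to roughly the following commands, run as root; the location of the 2.2.19 patch (/tmp/patch-2.2.19.gz) is only an example:

cd /usr/src
rm linux                                                  # drop the old symlink
mv linux-2.2.18 linux-2.2.19                              # rename the SuSE kernel source tree
ln -s /usr/src/linux-2.2.19 linux
cd linux && gunzip -c /tmp/patch-2.2.19.gz | patch -p1    # bring the tree up to 2.2.19
cd /usr/src
mkdir linux-2.2.19-mosix
cp -rp linux-2.2.19/* linux-2.2.19-mosix/                 # clean copy to build Mosix in
rm linux && ln -s /usr/src/linux-2.2.19-mosix linux
cd /tmp && tar xfz MOSIX-0.98.0.tar.gz                    # unpack the Mosix sources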


5.6.3. Setup

* Run the install script /tmp/mosix.install and follow the instructions.

  Mosix should be enabled for run levels 3 (full multiuser with network, no xdm) and 5 (full multiuser with network and xdm). There is no run level 4 in SuSE 7.1.

  The Mosix install script does not give you the option of creating a boot floppy instead of an image. If you want a boot floppy, you will have to run "make bzdisk" after the install script is through.

  Do not, repeat, do /not/ reboot.

* The install script in Mosix 0.98.0 is made for RedHat distributions and therefore fails to set up some SuSE files correctly. It tries to put stuff in /sbin/init.d/, which in fact is /etc/init.d/ (or /etc/rc.d/) with SuSE. Also, there is no /etc/rc.d/init.d/ in SuSE. So:

  • Copy /tmp/mosix.init to /etc/init.d/mosix and make it executable with the command

    chmod 754 /etc/init.d/mosix

  • MISSING - MODIFY ATD stuff "/etc/rc.d/init.d/ATD" BY HAND

  • MISSING - MODIFY THE "/etc/cron.daily/slocate.cron" FILE

  The other files - /etc/inittab, /etc/inetd.conf, /etc/lilo.conf - are modified correctly.

* Edit the file /etc/inittab to prevent some processes from migrating to other nodes by inserting the command "/bin/mosrun -h" in the following lines.

  Run levels:

  l0:0:wait:/bin/mosrun -h /etc/init.d/rc 0
  l1:1:wait:/bin/mosrun -h /etc/init.d/rc 1
  l2:2:wait:/bin/mosrun -h /etc/init.d/rc 2
  l3:3:wait:/bin/mosrun -h /etc/init.d/rc 3
  l5:5:wait:/bin/mosrun -h /etc/init.d/rc 5
  l6:6:wait:/bin/mosrun -h /etc/init.d/rc 6

  (Remember, there is no run level 4 in SuSE 7.1.)

  Shutdown and sulogin:

  ~:S:respawn:/bin/mosrun -h /sbin/sulogin
  ca::ctrlaltdel:/bin/mosrun -h /sbin/shutdown -r -t 4 now
  sh:12345:powerfail:/bin/mosrun -h /sbin/shutdown -h now THE POWER IS FAILING

  It is not necessary to prevent the /sbin/mingetty processes from migrating - in fact, if you do, all of the child processes started from your login shell will be locked, too.
