Differences between version 3 and revision by previous author of LinuxFudDispelled.
Newer page: | version 3 | Last edited on Friday, July 25, 2003 10:14:32 pm | by JohnMcPherson |
Older page: | version 2 | Last edited on Friday, July 25, 2003 4:39:55 pm | by AristotlePagaltzis |
@@ -3,13 +3,13 @@
----
!!! Free Software
-This phrase always makes people think of software that is given away at no cost, has no owner, and thus no support or warranty of continuation. In reality the 'free' in free software means that the source code has been liberated and anybody may copy and compile it, but that does not exclude payment. As the Free Software Foundation (FSF, originators of the GNU license) puts it: think free speech not free beer. Most people using Linux for real applications are using commercial distributions which they have paid for and they expect (and get) the same level of quality and support as any other commercial product. As so much confusion has been caused by use of the term free, many developers now prefer to use the term 'Open Source'.
+This phrase always makes people think of software that is given away at no cost, has no owner, and thus no support or warranty of continuation. In reality the 'free' in free software means that the source code has been liberated and anybody may copy and compile it, but that does not exclude payment. As the Free Software Foundation (FSF, originators of the [GNU] license) puts it: think free speech not free beer. Most people using [Linux] for real applications are using commercial distributions which they have paid for and they expect (and get) the same level of quality and support as any other commercial product. As so much confusion has been caused by use of the term free, many developers now prefer to use the term 'Open Source'.
!!! Linux is a unix clone cut down to run on a PC
-Linux is not cut down. It is fully fledged and continues to evolve with the latest developments in the software industry. Nor is it just for the PC. Stable, commercially used versions exist for x86, Sparc, DEC Alpha, power Macintosh and Strong ARM platforms. Beta versions exist, or are in various stages of development for 68xxx (old MAC, Amiga, Atari), PowerPC (IBM/Motorola type platforms), MIPS, HP, SGI, APX100, The PalmPilot, Merced, and probably more!
+Linux is not cut down. It is fully fledged and continues to evolve with the latest developments in the software industry. Nor is it just for the PC. Stable, commercially used versions exist for [x86], [SPARC], [DEC] [Alpha], Power [Macintosh] and Strong [ARM] platforms. Beta versions exist, or are in various stages of development for [680x0] (old MAC, Amiga, Atari), PowerPC ([IBM]/Motorola type platforms), MIPS, [HP], [SGI], APX100, The PalmPilot, Merced, and probably more!
!!! Why compromise just to save a few bucks?
Ask Linux users why they use Linux, and words like flexibility, openness, reliability and efficient use of hardware are commonly cited. Few people use it to save money, although most acknowledge that a lot of money can be saved. At the outset many Linux users were people who wanted to play with UNIX at home but could not afford the 'Workstation' costs. But Linux soon grew beyond that. SCO now make their UNIX version freely available for home/private use, but few take up the offer, preferring the advantages of Linux.
@@ -28,20 +28,19 @@
market, and they are making healthy profits.
!!! The various Unices are fragmenting into a plethora of incompatible versions.
-They DID fragment, around 15 to 20 years ago, and for the last 10 years they have been converging. Using one unix-a-like system is very much like using another. There are fewer differences between the various unices than there are between e.g. Windows 3.1 -> Windows 95 -> Windows NT; moreover, Unix systems broadly adhere to ANSI and POSIX standards that allow software to be source compatible across hardware platforms ranging from embedded microcontrollers to supercomputers. The Open Software Foundation (OSF), of which all major OS vendors are members, takes standards further with their X/Open standard. This allows complete source code compatibility AND a common desktop environment across all platforms. The X/Open standard was mostly formed by a conglomeration of existing standards and is now well established and functional (see below for Linux X/Open issues). X/Open does not exclude the use of alternative standards on any particular system, but it does allow software developers to write a single
-source code which will compile and run on all compliant platforms, while at the same time offering the use of a standard window manager so end users get the same interface regardless of which system they are using. True binary compatibility can only be offered on compatible hardware platforms, and even here progress is being made. On the x86 platform, for example, Linux will run SCO binaries, whilst FreeBSD runs Linux binaries, and there is an umbrella group comprising most PC unix implementors which is aiming to achieve complete binary compatibility across their platforms. Other Linux ports are working on binary issues; for example, the Sun Linux port will run most SunOS binaries and the DEC ALPHA Linux port supports Digital Unix binaries.
+They DID fragment, around 15 to 20 years ago, and for the last 10 years they have been converging. Using one unix-a-like system is very much like using another. There are fewer differences between the various unices than there are between e.g. Windows 3.1 -> Windows 95 -> Windows NT; moreover, Unix systems broadly adhere to [ANSI] and [POSIX] standards that allow software to be source compatible across hardware platforms ranging from embedded microcontrollers to supercomputers. The Open Software Foundation (OSF), of which all major OS vendors are members, takes standards further with their X/Open standard. This allows complete source code compatibility AND a common desktop environment across all platforms. The X/Open standard was mostly formed by a conglomeration of existing standards and is now well established and functional (see below for Linux X/Open issues). X/Open does not exclude the use of alternative standards on any particular system, but it does allow software developers to write a single
+source code which will compile and run on all compliant platforms, while at the same time offering the use of a standard window manager so end users get the same interface regardless of which system they are using. True binary compatibility can only be offered on compatible hardware platforms, and even here progress is being made. On the [x86] platform, for example, Linux will run [SCO] binaries, whilst [FreeBSD] runs Linux binaries, and there is an umbrella group comprising most PC unix implementors which is aiming to achieve complete binary compatibility across their platforms. Other Linux ports are working on binary issues; for example, the Sun Linux port will run most SunOS binaries and the [DEC] [Alpha] Linux port supports Digital Unix binaries.
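To make the idea of source-level compatibility concrete, here is a small illustrative C program (not part of the original article) that uses nothing beyond ANSI C and the POSIX uname() call, and so should compile and run unchanged on any of the unix-like systems mentioned above:

 #include <stdio.h>
 #include <sys/utsname.h>   /* uname() is specified by POSIX */

 int main(void)
 {
     struct utsname u;
     /* The same source builds on Linux, the BSDs, Solaris, HP-UX and so on;
        only the strings it reports differ from platform to platform. */
     if (uname(&u) != 0) {
         perror("uname");
         return 1;
     }
     printf("%s %s on %s hardware\n", u.sysname, u.release, u.machine);
     return 0;
 }

The differences show up only in the output (the kernel name, release and machine type), never in the source.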
!!! Linux is fragmenting.
-Difficult to answer this one, as there is no credible evidence that this is happening. For the record we may point out that although the commercial Linux distributors differentiate their products, they are compatible, and the various companies work together in a friendly manner that is uncommon in the software world. They adhere to a common Filesystem Hierarchy Standard (which determines the layout of the system), and they use kernels and libraries from the same series. The misconception that a standalone package has to be distributed in a different version for each Linux distribution is pure FUD.
+Difficult to answer this one, as there is no credible evidence that this is happening. For the record we may point out that although the commercial Linux distributors differentiate their products, they are compatible, and the various companies work together in a friendly manner that is uncommon in the software world. They adhere to a common Filesystem Hierarchy Standard (which determines the layout of the system), and they use kernels and libraries from the same series. The misconception that a standalone package has to be distributed in a different version for each Linux distribution is pure [FUD].
!!! Linux does not conform to the X/Open standard.
-The short answer is that Linux does conform, but is not allowed to say so! Let us clarify that. X/Open essentially requires a POSIX system with the OSF Motif GUI libraries and the CDE (Common Desktop Environment) window manager (see above, 'Unix is fragmenting', for more info on X/Open). The Motif libraries and the CDE are not open source and so cannot be included with a free distribution, but they are available. Several commercial Linux vendors and third party companies sell Motif/CDE packs for Linux with an OSF license (which is small, around $100-$200), which renders Linux compatible with X/Open (these 'packs' are not 'ports' but runtimes, compiled from original OSF source code). But there is no certification (yet). This does not bother most people, as it is simple to verify that Linux with a Motif/CDE pack does compile unabridged OSF test suites, and that it does have an identical look and feel. Many developers who are targeting X/Open platforms use Linux as their development platform. The problem is
-bureaucratic, as the OSF structure and certification program was not designed to take account of open source systems. This is not to say OSF is hostile to Linux; they have ported Linux to their own microkernel (and this is used for running Linux on Power Mac and HP platforms). A recent UniForum conference demonstrated a unanimous desire on the part of X/Open members to find a way of getting X/Open (and hence UNIX) branding to the Linux OS, and it was resolved that a way through the red tape should be found. It is perhaps important to make another point clear, as we may have given the impression that Linux cannot run software developed for X/Open unless a Motif license is purchased. In reality, Motif software may be distributed in statically linked form without any license requirements, and many common Linux apps (such as Netscape and Acrobat) are freely distributed for Linux in this manner. Vendors of large Motif packages generally assume that someone paying several thousand dollars for their package will not
-
baulk at the thought of a $100 OSF license fee.
+The short answer is that Linux does conform, but is not allowed to say so! Let us clarify that. X/Open essentially requires a [POSIX] system with the OSF Motif GUI libraries and the CDE (Common Desktop Environment) window manager (see above, 'Unix is fragmenting', for more info on X/Open). The Motif libraries and the CDE are not open source and so cannot be included with a free distribution, but they are available. Several commercial Linux vendors and third party companies sell Motif/CDE packs for Linux with an OSF license (which is small, around $100-$200), which renders Linux compatible with X/Open (these 'packs' are not 'ports' but runtimes, compiled from original OSF source code). But there is no certification (yet). This does not bother most people, as it is simple to verify that Linux with a Motif/CDE pack does compile unabridged OSF test suites, and that it does have an identical look and feel. Many developers who are targeting X/Open platforms use Linux as their development platform. The problem is
+bureaucratic, as the OSF structure and certification program was not designed to take account of open source systems. This is not to say OSF is hostile to Linux; they have ported Linux to their own microkernel (and this is used for running Linux on Power Mac and HP platforms). A recent UniForum conference demonstrated a unanimous desire on the part of X/Open members to find a way of getting X/Open (and hence UNIX) branding to the Linux OS, and it was resolved that a way through the red tape should be found. It is perhaps important to make another point clear, as we may have given the impression that Linux cannot run software developed for X/Open unless a Motif license is purchased. In reality, Motif software may be distributed in statically linked form without any license requirements, and many common Linux apps (such as Netscape and Acrobat) are freely distributed for Linux in this manner. Vendors of large Motif packages generally assume that someone paying several thousand dollars for their package will not baulk at the thought of a $100 OSF license fee.
!!! Linux has no direction.
Often said without specifying whether they mean 'directors' or long-term goals. Let's refute both. Linus Torvalds, the honorary 'President' of the Linux movement, has clearly stated the long-term goal of Linux: world domination. Yes, one man who wants to dominate the world by means of his software. So Linux is no different to other major OS's. Enough said.
@@ -66,13 +65,13 @@
There is some truth in this. In an NT domain, the primary server must be an NT, or at least something with MS licensed domain code in it (this was true at the time of writing, but things are changing, as we will discuss later). Secondary servers need not be NT, but life is complicated without MS licensed code. This is not, however, a technical issue. You can do the things that an NT domain does without using NT domains, but if you do go down the NT path, it is very difficult to turn back or integrate with other solutions. Also, W9x (which is what most servers have to serve) is highly NT domain orientated. The reason it must be MS licensed code is that MS have put a lot of effort into making NT domains into a complex labyrinth of message passing which adheres to no standards and the details of which are a closely guarded secret. The mechanisms are such that they can be 're-complicated' with service packs, so that as eventual third parties decode the labyrinth, they can make 'service-pack' upgrades move the goal posts. The scope is obvious: as companies expand, they will forever be purchasing NT licences. If a company wants to add a new departmental workgroup, they COULD look to several solutions which could cost less and/or be technically superior, but the problems of integrating with the 'secret' domain make NT the simple plug-and-play option. It is ironic that many companies who have happily waved goodbye to the proprietary computing solutions of the 70's and 80's are now going down the same road with software, and yet already they are wasting more money by being locked into software than they would lose by buying completely non-standard hardware. Some have likened NT domains to a 'virus', but perhaps a better simile would be cocaine addiction: at first everything seems great, but in the long term...
-Since originally writing this, the SAMBA team (who make SMB-compatible network code freely available to UNIX systems) have been unravelling the spaghetti of an NT primary domain controller, and are starting to offer support for this in their software. Many net administrators are overjoyed by this unlocking of the NT domain, but many others are indifferent, as they have no desire to implement such methods irrespective of who is supplying them, as the protocol is still proprietary and bloated. Many administrators point to open solutions, particularly Kerberos, and note that NT5 uses Kerberos for domain authentication. It seems increasingly likely that the use of proprietary protocols in domain controllers is destined to die out.
+Since originally writing this, the [Samba] team (who make [SMB]-compatible network code freely available to [UNIX] systems) have been unravelling the spaghetti of an NT primary domain controller, and are starting to offer support for this in their software. Many net administrators are overjoyed by this unlocking of the NT domain, but many others are indifferent, as they have no desire to implement such methods irrespective of who is supplying them, as the protocol is still proprietary and bloated. Many administrators point to open solutions, particularly Kerberos, and note that NT5 uses Kerberos for domain authentication. It seems increasingly likely that the use of proprietary protocols in domain controllers is destined to die out.
!!! Linux is insecure.
-This is a FUD manglers' dream phrase, as there is no direct means of refuting it (as system administrators are reluctant to admit break-ins, hard statistics of one OS vs another are unavailable). Linux is, strictly speaking, a kernel, and an OS kernel is inherently secure as it has no means on its own of communicating with the outside world. Break-ins occur via support programs which offer specific network services, and it is in these programs that weaknesses normally occur. That may sound like a pedantic comment, but it is important, as virtually all the network support code (ftp, web servers, email etc.) used in Linux is not Linux-specific but generally available UNIX software, and in this sense Linux security may be considered no different to UNIX security in general. There is a caveat to this: Linux is available in many distributions, including types that are aimed at home users and hackers which put ease of use and flexibility before security. As such they may be without shadow passwords, and may
+This is a [FUD] manglers' dream phrase, as there is no direct means of refuting it (as system administrators are reluctant to admit break-ins, hard statistics of one OS vs another are unavailable). Linux is, strictly speaking, a kernel, and an [OS] kernel is inherently secure as it has no means on its own of communicating with the outside world. Break-ins occur via support programs which offer specific network services, and it is in these programs that weaknesses normally occur. That may sound like a pedantic comment, but it is important, as virtually all the network support code (ftp, web servers, email etc.) used in Linux is not Linux-specific but generally available UNIX software, and in this sense Linux security may be considered no different to UNIX security in general. There is a caveat to this: Linux is available in many distributions, including types that are aimed at home users and hackers which put ease of use and flexibility before security. As such they may be without shadow passwords, and may
enable by default esoteric protocols which are known to be risky. But, as one quip put it, you do not cancel your vacation in Florida because of hurricane warnings in Japan. Mainstream Linux distributions are more careful, and offer orthodox levels of security by default. By contrast, some distributions are specifically designed for robustness, such as the Linux Router Project, which specialises in dedicated routing/firewalling systems, or Red Hat's secure server, which is designed for e-commerce with encrypted credit card authorisation etc. As always, common sense is required in setting up a secure system, but fundamental weaknesses on the part of Linux are unknown.
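As an aside on the shadow password mechanism mentioned above, here is a minimal illustrative C sketch (not part of the original article, and assuming a glibc-based Linux system with shadow passwords enabled). With shadow passwords, the hashes live in /etc/shadow, readable only by root, while the world-readable /etc/passwd holds just a placeholder:

 #include <stdio.h>
 #include <shadow.h>   /* getspnam(): glibc's interface to /etc/shadow */

 int main(void)
 {
     /* Look up root's shadow entry. For an ordinary (non-root) user this
        fails, because /etc/shadow is not world-readable; that is precisely
        the protection the mechanism provides. */
     struct spwd *sp = getspnam("root");
     if (sp == NULL) {
         fprintf(stderr, "cannot read shadow entry (not root, or no shadow passwords)\n");
         return 1;
     }
     printf("shadow entry found for %s; the hash is hidden from other users\n", sp->sp_namp);
     return 0;
 }

On a system without shadow passwords the hashes sit in /etc/passwd itself, where any local user can read them and attack them offline.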
Some argue that UNIX itself is vulnerable, but the basis for this argument is the high number of security alerts issued for UNIX network service software. But at the same time it should be remembered that 70% of internet traffic is destined for UNIX-like servers, with the rest being spread over a variety of proprietary systems. Unix is also the predominant system for academic servers (where highly skilled would-be attackers abound). Most other systems sit behind firewalls or on closed networks, or are just not noteworthy enough to warrant any attack. It is impossible to compare UNIX systems with other systems in terms of vulnerability because no other system comes near it in terms of exposure or the range of services offered.
@@ -82,7 +81,7 @@
system.
!!! Linux is not year 2000 compliant.
-The basic unit of time in Linux (and most Unix-like systems) is time_t. This format expresses the time as the number of seconds since midnight, 1 Jan 1970. It has no concept of the year 2000; it will pass as just another tick. time_t forms the basis of timekeeping within the kernel, date/time stamps on files, up-times, down-times etc., so as far as the kernel is concerned, no problem. Of course their is nothing that excludes an application from misinterpreting the data, but that is true of any OS. Having said that, the libraries, both dynamic and static, that come as standard with Linux (and thus used to compile virtually all Linux apps) are also Y2KOK, as are the alternative libraries people use for e.g. SMP. Given that Linux has evolved recently, when developers have been Y2K-aware, it is unlikely that even minor 'contributed' apps are not Y2KOK. Linux and Y2K is no different from any other major OS in this period in terms of compliance; nobody can guarantee that no app is available for their system
-that is not Y2KOK, but of course if you have managed to base your mission-critical system on an obscure non-compliant utility, if you have the source you can at least guarantee that you can fix it! It is worth noting that, in general, Y2K problems are more relevant to electronic appliances than to computers per se. Many 'real-time clocks' used in embedded devices have the year as 2 digits and the firmware uses this value directly, without taking account of possible rollovers. Unix uses time_t because it is part of a set of standard time calls in the C language (C and unix were born and bred together), but use of C goes far wider than Unix. C is used to write most major OS's, and is the predominant language in commercial software applications for all platforms. It is the most likely language to be used for runtime libraries of interpreted languages; it gets everywhere. Of course there is no obligation for programmers to use the standard time_t-based functions, but generally speaking they do. So the real
-'date with destiny' of the software world is not Y2K, but when time_t rolls over. As, by convention, time_t is a signed 32-bit value, this will happen in the year 2038. Of course, if programmers have stuck to the rules, then it will only require a system-level redefinition of time_t and a recompile to put things right. And of course with computers working towards 64-bit goals, it is quite likely that time_t gets defined as 64-bit on new systems anyway (as has already happened on the DEC Alpha version of Linux). A 64-bit time_t will roll over sometime around the end of the universe.
+The basic unit of time in Linux (and most Unix-like systems) is time_t. This format expresses the time as the number of seconds since midnight, 1 Jan 1970. It has no concept of the year 2000; it will pass as just another tick. time_t forms the basis of timekeeping within the kernel, date/time stamps on files, up-times, down-times etc., so as far as the kernel is concerned, no problem. Of course there is nothing that excludes an application from misinterpreting the data, but that is true of any OS. Having said that, the libraries, both dynamic and static, that come as standard with Linux (and thus used to compile virtually all Linux apps) are also Y2KOK, as are the alternative libraries people use for e.g. [SMP]. Given that Linux has evolved recently, when developers have been Y2K-aware, it is unlikely that even minor 'contributed' apps are not Y2K OK. Linux and Y2K is no different from any other major OS in this period in terms of compliance; nobody can guarantee that no app is available for their system
+that is not Y2KOK, but of course if you have managed to base your mission-critical system on an obscure non-compliant utility, if you have the source you can at least guarantee that you can fix it! It is worth noting that, in general, Y2K problems are more relevant to electronic appliances than to computers per se. Many 'real-time clocks' used in embedded devices have the year as 2 digits and the firmware uses this value directly, without taking account of possible rollovers. Unix uses time_t because it is part of a set of standard time calls in the [C] language ([C] and [Unix] were born and bred together), but use of C goes far wider than Unix. C is used to write most major OS's, and is the predominant language in commercial software applications for all platforms. It is the most likely language to be used for runtime libraries of interpreted languages; it gets everywhere. Of course there is no obligation for programmers to use the standard time_t-based functions, but generally speaking they do. So the real
+'date with destiny' of the software world is not Y2K, but when time_t rolls over. As, by convention, time_t is a signed 32-bit value, this will happen in the year 2038. Of course, if programmers have stuck to the rules, then it will only require a system-level redefinition of time_t and a recompile to put things right. And of course with computers working towards 64-bit goals, it is quite likely that time_t gets defined as 64-bit on new systems anyway (as has already happened on the [DEC] [Alpha] version of Linux). A 64-bit time_t will roll over sometime around the end of the universe.
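A minimal C sketch (not part of the original article) showing both points: time_t is simply a count of seconds since the 1970 epoch, and the largest value a signed 32-bit time_t can hold runs out at 03:14:07 UTC on 19 January 2038:

 #include <stdio.h>
 #include <time.h>

 int main(void)
 {
     /* time() returns the number of seconds since midnight, 1 Jan 1970 (UTC);
        the year 2000 is just another value in that count. */
     time_t now = time(NULL);
     printf("seconds since the epoch: %ld\n", (long)now);

     /* A signed 32-bit time_t cannot count past 2^31 - 1 seconds, which is
        reached at 03:14:07 UTC on 19 January 2038. */
     time_t limit = 2147483647;
     printf("last moment a 32-bit time_t can represent (local time): %s",
            ctime(&limit));
     return 0;
 }

On a platform where time_t is already 64-bit (such as the Alpha port mentioned above), the same source compiles unchanged; only the point of rollover moves.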