
http://www.geocities.com/SiliconValley/Hills/9267/fud2.html


Free Software

This phrase always makes people think of software that is given away at no cost, has no owner, and thus no support or warranty of continuation. In reality the 'free' in free software means that the source code has been liberated and anybody may copy and compile it, but that does not exclude payment. As the Free Software Foundation (FSF, originators of the GNU license) puts it: think free speech, not free beer. Most people using Linux for real applications are using commercial distributions which they have paid for, and they expect (and get) the same level of quality and support as any other commercial product. Because so much confusion has been caused by the use of the term 'free', many developers now prefer the term 'Open Source'.

Linux is a unix clone cut down to run on a PC

Linux is not cut down. It is fully fledged and continues to evolve with the latest developments in the software industry. Nor is it just for the PC. Stable, commercially used versions exist for x86, SPARC, DEC Alpha, Power Macintosh and StrongARM platforms. Beta versions exist, or are in various stages of development, for 680x0 (old Macintosh, Amiga, Atari), PowerPC (IBM/Motorola-type platforms), MIPS, HP, SGI, APX100, the PalmPilot, Merced and probably more!

Why compromise just to save a few bucks?

Ask Linux users why they use Linux and words like flexibility, openness, reliability and efficient use of hardware are commonly cited. Few people use it to save money, although most acknowledge that a lot of money can be saved. At the outset many Linux users were people who wanted to play with UNIX at home but could not afford the 'workstation' costs. But Linux soon grew beyond that. SCO now make their UNIX version freely available for home/private use, but few take up the offer, preferring the advantages of Linux.

But money issues are mostly about corporate use. If corporate staff are familiar with one OS and not Linux, setting up a server with Linux will probably cost more, as savings on the OS price get eaten up by time on the learning curve. But once installed and set up, Linux systems require little maintenance, and subsequent systems will require less time and thus save money. Beyond saving a few bucks, the license issue goes further. Many system administrators prefer the thin server approach, where services are spread across multiple low-cost machines rather than centred on big central boxes, with the load being split by service rather than by user. This approach is more easily scalable and limits downtime. It also makes it easier to upgrade and maintain individual system elements. But if you need per-user licenses on each box, it can become an economic non-starter. Not only does Linux make the thin server model financially viable, the high efficiency of the OS also means that desktop machines no longer considered good enough for the latest desktop OS can be recycled as non-critical servers and routers.

Linux is neither warranted nor supported.

Who does warrant software products? Most software (including major OS's) is sold on an 'as is' agreement, and it is up to the user to check that the software is suitable for the application. That said, as we noted in the first point, commercial distributions of Linux are widely available and include support options. They offer the same kind of deal as other commercial packages, and there is no reason to 'differentiate' these products just because they have an open source base. On the other hand, users of open source software do have the source, so they may (and often do) put in that little bug fix or enhancement that they need to do their work. If they think their enhancement may be useful to others, they release it to be folded back into the main Linux release. No software can be 'all things to all men', and many people who swear by open source software do so because in the past they have found themselves locked into a system that is not quite doing all they want, but the necessary 'fix' has not been forthcoming, or, as is more often the case, it finally arrives bundled into a pay-for upgrade which includes many other new but undesired elements with a new set of bugs. Open source software allows users to tie the system down to do what they want it to.

Software developers are reluctant to develop for a platform that requires them to release the source code.

That would be very understandable, but there is NO obligation for software developers to release the source of their software, nor is there any obligation to allow the software to be freely copied. Selling software for Linux is no different from selling it for any other platform. Open source software developers recognise the need for commercial elements in the system; they believe it is the core code, the application interfaces and the system utilities that must be open in order to get a truly non-proprietary system, extensible by all. At the same time, all would agree that commercial distributors are necessary in order to package the software into end-user form with sexy logos, manuals, phone support, etc. As such, the open source software community welcomes and encourages companies to work alongside them selling services, packages and support, as well as interfacing and integrating 'licensed' software with freely available packages. Many companies have risen to the challenge of this new market, and they are making healthy profits.

The various Unices are fragmenting into a plethora of incompatible versions.

They DID fragment, around 15 to 20 years ago, and for the last 10 years they have been converging. Using one unix-like system is very much like using another. There are fewer differences between the various unices than there are between, say, Windows 3.1, Windows 95 and Windows NT. Moreover, Unix systems broadly adhere to the ANSI and POSIX standards, which allow software to be source compatible across hardware platforms ranging from embedded microcontrollers to supercomputers. The Open Software Foundation (OSF), of which all major OS vendors are members, takes standards further with the X/Open standard. This allows complete source code compatibility AND a common desktop environment across all platforms. The X/Open standard was mostly formed by a conglomeration of existing standards and is now well established and functional (see below for Linux X/Open issues). X/Open does not exclude the use of alternative standards on any particular system, but it does allow software developers to write a single source base which will compile and run on all compliant platforms, while at the same time offering a standard window manager so end users get the same interface regardless of which system they are using. True binary compatibility can only be offered on compatible hardware platforms, and even here progress is being made. On the x86 platform, for example, Linux will run SCO binaries, whilst FreeBSD runs Linux binaries, and there is an umbrella group comprising most PC unix implementors which is aiming to achieve complete binary compatibility across their platforms. Other Linux ports are working on binary issues: for example, the Sun Linux port will run most SunOS binaries and the DEC Alpha Linux port supports Digital Unix binaries.
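
As a small illustration of what source compatibility means in practice, the sketch below (purely illustrative, not taken from any particular package) restricts itself to ANSI C and two POSIX.1 calls, uname() and getpid(), and so should compile unchanged with the native compiler on any of the systems mentioned above:

    /* portable.c - sticks to ANSI C and POSIX.1, so nothing here is
     * specific to any one unix; "cc portable.c" should do on all of them. */
    #include <stdio.h>
    #include <unistd.h>        /* POSIX.1: getpid() */
    #include <sys/utsname.h>   /* POSIX.1: uname() */

    int main(void)
    {
        struct utsname un;

        if (uname(&un) < 0) {
            perror("uname");
            return 1;
        }
        printf("%s %s on %s hardware, process id %ld\n",
               un.sysname, un.release, un.machine, (long)getpid());
        return 0;
    }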

Linux is fragmenting.

This one is difficult to answer, as there is no credible evidence that it is happening. For the record, we may point out that although the commercial Linux distributors differentiate their products, they are compatible, and the various companies work together in a friendly manner that is uncommon in the software world. They adhere to a common Filesystem Hierarchy Standard (which determines the layout of the system), and they use kernels and libraries from the same series. The misconception that a standalone package has to be distributed in a different version for each Linux distribution is pure FUD.

Linux does not conform to the X/Open standard.

The short answer is that Linux does conform, but is not allowed to say so! Let us clarify that. X/Open essentially requires a POSIX system with the OSF Motif GUI libraries and the CDE (Common Desktop Environment) window manager (see 'The various Unices are fragmenting' above for more on X/Open). The Motif libraries and the CDE are not open source and so cannot be included with a free distribution, but they are available. Several commercial Linux vendors and third-party companies sell Motif/CDE packs for Linux with an OSF license (which is cheap, at around $100-$200), which renders Linux compatible with X/Open (these 'packs' are not 'ports' but runtimes, compiled from original OSF source code). But there is no certification (yet). This does not bother most people, as it is simple to verify that Linux with a Motif/CDE pack does compile unabridged OSF test suites, and that it does have an identical look and feel. Many developers who are targeting X/Open platforms use Linux as their development platform. The problem is bureaucratic, as the OSF structure and certification program were not designed to take account of open source systems. This is not to say OSF is hostile to Linux: they have ported Linux to their own microkernel (and this is used for running Linux on Power Mac and HP platforms). A recent Uniforum conference demonstrated a unanimous desire on the part of X/Open members to find a way of getting X/Open (and hence UNIX) branding onto the Linux OS, and it was resolved that a way through the red tape should be found. It is perhaps important to make another point clear, as we may have given the impression that Linux cannot run software developed for X/Open unless a Motif license is purchased. In reality, Motif software may be distributed in statically linked form without any license requirements, and many common Linux apps (such as Netscape and Acrobat) are freely distributed for Linux in this manner. Vendors of large Motif packages generally assume that someone paying several thousand dollars for their package will not baulk at the thought of a $100 OSF license fee.
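
To give an idea of how the statically linked route works in practice, a developer holding a Motif development license can link the Motif library archive directly into the binary while leaving the freely available X libraries dynamic. A link line along the following lines (the file names and the library path are examples only, and some Motif versions want one or two extra X libraries) produces a binary that runs on a Linux box with no Motif runtime installed:

    cc -o viewer viewer.c -L/usr/X11R6/lib /usr/X11R6/lib/libXm.a -lXt -lX11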

Linux has no direction.

This is often said without specifying whether 'direction' means directors or long-term goals, so let's refute both. Linus Torvalds, the honorary 'President' of the Linux movement, has clearly stated the long-term goal of Linux: world domination. Yes, one man who wants to dominate the world by means of his software. So Linux is no different from other major OS's. Enough said.

Linux is made up of a lot of little groups running in different directions.

Linux is made up of lots of little groups running in the same direction. A number of mechanisms ensure this. The most important is perhaps the internet: people working on Linux projects are constantly in discussion with each other by means of mailing lists, newsgroups and websites, so everybody knows what everybody else is up to. Somebody starting a new project can look to the main Linux websites to see if anybody is already doing something related, and they can post their ideas to related newsgroups to get feedback and input from the developers of the mechanisms to which the project will interface. This informal approach, for the most part, works fine. Two formal mechanisms exist to resolve conflict. All patches to the Linux kernel pass through the 'President' of the Linux kernel, Linus Torvalds, and so he has the final say on what does and does not go in. He also holds the trademark on the Linux name, so if it does not go through him, it is not Linux. I use the term 'President' because Linux is not a wholly owned product of Linus. The copyright of each piece of software remains with the authors, but they have to release the software with a license that permits free copying and updating for it to become a standard part of the kernel (not, of course, for a Linux application). Nobody can 'own' Linux, and in the event of Linus becoming unavailable there are a large number of other developers sufficiently engaged in kernel development to be able to fill the presidential post. The other mechanism is Linux International, a non-profit umbrella group for Linux-related organisations that is supported by its commercial members. Members of Linux International include commercial Linux distributors such as Red Hat and SuSE, applications software companies such as SunSoft and Netscape, and hardware companies, both at the systems level (such as Digital) and peripheral manufacturers (such as Adaptec). Like other industrial umbrella groups, Linux International derives its authority from the fact that member organisations bind themselves to its conclusions on Linux-related matters. It is controlled by a board elected by the members.

Linux is not a technology leader, it is just playing 'catch-up'.

This is a bit like saying NASA is just another aerospace company. Certainly the Linux community does not make splash announcements about what it WILL be doing, nor does it publish roadmaps so far into the future that when the dates finally arrive everybody has forgotten what they were promised. Linux is all about people doing what they want to do. The openness of Linux makes it very suitable as a 'lab bench' for testing out new ideas, and the kudos of the Linux kernel hackers makes introducing new techniques a personal challenge. Linux was running in 64-bit on the DEC Alpha from day one. It also runs in 64-bit on Sun platforms. It supports SMP, and can be clustered: you can connect a room full of cheap dual Pentium Pros together to make a low-cost supercomputer. The rendering of the film Titanic was done by 100 DEC Alpha computers running 64-bit Linux round the clock for several months. When people started using Linux on portables, PCMCIA and power saving soon found their way into stable kernels. Generally speaking, new technologies and techniques find their way into the development kernels very quickly, often before they are available for other OS's. New technologies that are much sought after by end users are likely to be made available as 'unofficial kernel patches', so that end users may try out new (beta) toys with otherwise stable kernels. If demand is high, the new toy will get a lot of use and hence be quickly debugged sufficiently to become a standard element in the stable kernel releases. If, as sometimes happens, the new idea is unwanted, it just disappears into the internals of the tarball and may never appear as a feature in a stable release. There are many elements, such as USB support, which have been in development for some time and are just waiting for popular demand to call for them before hitting the headlines.
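
As a sketch of how painless the 'unofficial kernel patch' route is (the patch file name here is purely illustrative), trying a new toy against an otherwise stable kernel is typically a matter of:

    cd /usr/src/linux
    patch -p1 < ../new-feature.diff    # apply the unofficial patch
    make menuconfig                    # switch the new option on
    make dep && make bzImage           # rebuild the kernel as usual

and if the toy disappoints, 'patch -R -p1 < ../new-feature.diff' backs it out again.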

The kernel may be advanced, but the apps are old 'second-hand' ports.

Other OS vendors only wish this were true. Certainly, Linux users make a good deal of use of old, tried and trusted Unix apps that have been ported to Linux; they are well known and extremely reliable. But Linux's excellence as a software development platform has made it first choice amongst programmers cooking up new ideas, and a flick through the Linux archives will show reams of innovative ideas. Most of these are not yet developed enough for practical use, and of course most 'innovation' does not bear fruit. But the next 'Multiplan' is far more likely to come off a Linux platform than any other OS. An important 'feature' of Linux is its modularity, which makes it easier to experiment with app-level developments. For example, at the time of writing, Microsoft has its Windows 98 software in beta test. To try this you must load up the whole new OS and use it as it is, or boot up the old Windows 95 software with no enhancements. By contrast, an experimental GUI, the 'K' desktop, which is also an HTML-based desktop similar to Windows 98's, is in beta, but you can try it out while running your normal Linux distribution. You can run individual K apps with your old GUI (e.g. run the K file manager under the 'classic' fvwm2 window manager), and you may even start the K desktop session on a different virtual console, so you can flick between the two with a simple keystroke. Because the experimental software runs as a normal user, your system is protected from possible damage.
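
To make that concrete, the two ways of sampling the K desktop without disturbing an existing setup look roughly like this (program names are those of the current K beta, and install paths will vary between distributions):

    # Run a single K application, e.g. the K file manager, under your
    # existing window manager (fvwm2 or whatever you normally use):
    kfm &

    # Or start a complete K session on a second X server, leaving your
    # normal session untouched; on most Linux setups the two then sit on
    # adjacent virtual consoles (Ctrl+Alt+F7 / Ctrl+Alt+F8):
    startx /opt/kde/bin/startkde -- :1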

Only NT can be a domain server.

There is some truth in this. In an NT domain, the primary server must be an NT machine, or at least something with MS-licensed domain code in it (this was true at the time of writing, but things are changing, as we will discuss later). Secondary servers need not be NT, but life is complicated without MS-licensed code. This is not, however, a technical issue. You can do the things that an NT domain does without using NT domains, but if you do go down the NT path, it is very difficult to turn back or to integrate with other solutions. Also, W9x (which is what most servers have to serve) is highly NT-domain orientated. The reason it must be MS-licensed code is that MS have put a lot of effort into making NT domains a complex labyrinth of message passing which adheres to no standards and the details of which are a closely guarded secret. The mechanisms are such that they can be 're-complicated' with service packs, so that as third parties eventually decode the labyrinth, 'service pack' upgrades can move the goalposts. The aim is obvious: as companies expand they will forever be purchasing NT licences. If a company wants to add a new departmental workgroup, it COULD look to several solutions which could cost less and/or be technically superior, but the problem of integrating with the 'secret' domain makes NT the simple plug-and-play option. It is ironic that many companies who have happily waved goodbye to the proprietary computing solutions of the 70's and 80's are now going down the same road with software, and yet already they are wasting more money by being locked into software than they would lose by buying completely non-standard hardware. Some have likened NT domains to a 'virus', but perhaps a better simile would be cocaine addiction: at first everything seems great, but in the long term...

Since originally writing this, the Samba team (who make SMB-compatible network code freely available to UNIX systems) have been unravelling the spaghetti of an NT primary domain controller, and are starting to offer support for this in their software. Many net administrators are overjoyed by this unlocking of the NT domain, but many others are indifferent, as they have no desire to implement such methods irrespective of who is supplying them: the protocol is still proprietary and bloated. Many administrators point to open solutions, particularly Kerberos, and note that NT5 uses Kerberos for domain authentication. It seems increasingly likely that the use of proprietary protocols in domain controllers is destined to die out.
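
For the curious, the relevant corner of a Samba configuration is small. With a sufficiently recent Samba release, an smb.conf sketch along these lines (the names are examples only and the exact options depend on the Samba version) lets a Linux box take on the domain logon role for Windows clients:

    [global]
       ; NT-style domain name (example only)
       workgroup = EXAMPLEDOM
       security = user
       ; act as the domain logon server for Windows clients
       domain logons = yes
       os level = 64
       domain master = yes
       preferred master = yes

    ; the share Windows clients look for at logon time
    [netlogon]
       path = /home/netlogon
       read only = yes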

Linux is insecure.

This is a FUD-monger's dream phrase, as there is no direct means of refuting it (system administrators are reluctant to admit break-ins, so hard statistics of one OS vs another are unavailable). Linux is, strictly speaking, a kernel, and an OS kernel is inherently secure as it has no means on its own of communicating with the outside world. Break-ins occur via support programs which offer specific network services, and it is in these programs that weaknesses normally occur. That may sound like a pedantic comment, but it is important, as virtually all the network support code (ftp, web servers, email, etc.) used in Linux is not Linux-specific but generally available UNIX software, and in this sense Linux security may be considered no different from UNIX security in general. There is a caveat to this: Linux is available in many distributions, including types aimed at home users and hackers which put ease of use and flexibility before security. As such they may ship without shadow passwords, and may enable by default esoteric protocols which are known to be risky. But, as one quip put it, you do not cancel your vacation in Florida because of hurricane warnings in Japan. Mainstream Linux distributions are more careful, and offer orthodox levels of security by default. By contrast, some distributions are specifically designed for robustness, such as the Linux Router Project, which specialises in dedicated routing/fire-walling systems, or Red Hat's secure server, which is designed for e-commerce with encrypted credit card authorisation, etc. As always, common sense is required in setting up a secure system, but fundamental weaknesses on the part of Linux are unknown.

Some argue that UNIX itself is vulnerable, the basis for this argument being the high number of security alerts issued for UNIX network service software. But it should be remembered that 70% of internet traffic is destined for UNIX-like servers, with the rest being spread over a variety of proprietary systems. Unix is also the predominant system for academic servers (where highly skilled would-be attackers abound). Most other systems sit behind fire-walls or on closed networks, or are simply too unremarkable to warrant any attack. It is impossible to compare UNIX systems with other systems in terms of vulnerability because no other system comes near it in terms of exposure or the range of services offered.

Certificates such as C2 certification are an aid in setting up a secure system, but they do not state that a system is secure, nor does the lack of a certificate (and Linux does not have one) make a system insecure. A C2 certificate is issued when a company submits a system for testing, and states not "This system is secure" but "In order to reach a C2 security level with this system, the following configuration was used" (the interpretation is the author's, not the actual certificate wording). The fact is you cannot globally state that an OS is secure, because security is so dependent on what protocols are being offered and how the system is configured. By the same logic, you cannot say a particular OS is insecure.

Security is taken seriously by the Linux community. It must be: many Linux systems are in the front line and would not last two minutes if problems were not properly tackled. When security alerts are issued, fixes arrive very quickly (if not at the same time). Linux distributors, consultants and large sites where Linux is deployed have people dedicated to security, and as ever in the Linux world, these people collaborate. At the same time, the basic motto of Linux is flexibility and user choice. You can make (or buy) a secure system with Linux. But as security is inversely proportional to flexibility and ease of use, you may decide to forgo some security and enable lots of network 'gadgetry'. On a closed network or behind a firewall where the users are known (actually the case for most servers), there is a lot to be said in favour of using less trusted protocols. The choice is in the hands of the users, secure or flexible, and Linux offers probably the largest range of options to the end user of any system.
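
As a small illustration of the 'secure or flexible' choice, most of the network gadgetry on a Linux box is started from /etc/inetd.conf, and tightening a machine down is largely a matter of commenting out what it does not need to offer (the exact set of lines varies between distributions):

    # /etc/inetd.conf (excerpt) - a hash at the start of a line disables
    # the service; leave enabled only what the machine has to offer.
    ftp      stream  tcp  nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
    #telnet  stream  tcp  nowait  root    /usr/sbin/tcpd  in.telnetd
    #shell   stream  tcp  nowait  root    /usr/sbin/tcpd  in.rshd
    #login   stream  tcp  nowait  root    /usr/sbin/tcpd  in.rlogind
    #finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd

After editing, 'killall -HUP inetd' makes inetd reread the file.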

Linux is not year 2000 compliant.

The basic unit of time in Linux (and most Unix-like systems) is time_t. This format expresses the time as the number of seconds since midnight, 1 Jan 1970. It has no concept of the year 2000; it will pass as just another tick. time_t forms the basis of timekeeping within the kernel, date/time stamps on files, up-times, down-times, etc., so as far as the kernel is concerned, there is no problem. Of course there is nothing that prevents an application from misinterpreting the data, but that is true of any OS. Having said that, the libraries, both dynamic and static, that come as standard with Linux (and are thus used to compile virtually all Linux apps) are also Y2K OK, as are the alternative libraries people use for, e.g., SMP. Given that Linux has evolved recently, when developers have been Y2K aware, it is unlikely that even minor 'contributed' apps are not Y2K OK. In terms of compliance, Linux is no different from any other major OS of this period: nobody can warrant that no non-compliant app exists for their system, but if you have managed to base your mission-critical system on an obscure non-compliant utility, at least when you have the source you can fix it!

It is worth noting that in general Y2K problems are more relevant to electronic appliances than to computers per se. Many 'real-time clocks' used in embedded devices hold the year as two digits, and the firmware uses this value directly, without taking account of possible rollovers. Unix uses time_t because it is part of a set of standard time calls in the C language (C and Unix were born and bred together), but use of C goes far wider than Unix. C is used to write most major OS's and is the predominant language in commercial software applications for all platforms. It is the most likely language to be used for the runtime libraries of interpreted languages; it gets everywhere. Of course there is no obligation for programmers to use the standard time_t based functions, but generally speaking they do. So the real 'date with destiny' of the software world is not Y2K, but the moment when time_t rolls over. As, by convention, time_t is a signed 32-bit value, this will happen in the year 2038. If programmers have stuck to the rules, then it will only require a system-level redefinition of time_t and a recompile to put things right. And with computers working towards 64-bit goals, it is quite likely that time_t will be defined as 64-bit on new systems anyway (as has already happened on the DEC Alpha version of Linux). A 64-bit time_t will roll over sometime around the end of the universe.
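
A few lines of C make both points, the non-event of the year 2000 and the real roll-over of 2038, easy to verify on any machine with a compiler to hand (the printed limit assumes the conventional signed 32-bit time_t):

    /* t2038.c - the year 2000 is just another tick of time_t; the real
     * limit of a signed 32-bit time_t falls in January 2038. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t y2k   = 946684800;    /* 2000-01-01 00:00:00 UTC, in seconds since the epoch */
        time_t limit = 2147483647;   /* 2^31 - 1, the largest signed 32-bit value */

        printf("time_t on this system is %lu bytes wide\n",
               (unsigned long)sizeof(time_t));
        printf("the 'millennium' tick  : %s", ctime(&y2k));
        printf("32-bit time_t runs out : %s", ctime(&limit));
        return 0;
    }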