
Operating Systems


Introduction

The Linux OS was first created by a student at the University of Helsinki in Finland. The creator’s name was Linus Torvalds, and what began as an interest in Minix, a small Unix-like system, turned into a passion and eventually into a system that surpassed the Minix standards. He started working on it in 1991 and worked heavily until 1994, when version 1.0 of the Linux kernel was released. This Linux kernel sets the foundation on which the Linux OS is built. Hundreds of organizations and companies today have hired individuals to release versions of operating systems using the Linux kernel.

Linux’s functioning, features, and adaptability have made Linux and Windows excellent alternatives to other OSs. IBM and other giant companies around the world support Linux and its ongoing development, more than a decade after its initial release. The OS is incorporated into microchips through a process called “embedding” and is increasing the performance of appliances and devices.

History of Linux

Through the 1990s, computer-savvy technicians and hobbyists with an interest in computers developed desktop management systems. These systems, including GNOME and KDE, which run applications on Linux, are available to anyone regardless of the person’s motive for using them. Linus Torvalds was interested in learning the capabilities and features of an 80386 processor for task switching. The application, originally named Freax, was first used with the Minix operating system.

Both the Freax and Minix designs seemed to sacrifice performance for academic research and study. Many of the assumptions computing specialists made then have changed since the 90s. Portability is now a common goal in the computer industry, and it is certainly no longer only an academic requirement for software. Various ports to IA-32, PowerPC, MIPS, Alpha, and ARM followed, along with supporting products made and sold to wholesalers and retailers, and commercial enterprises gave Linus an Alpha-based system when the tasks on his priority list moved up to a notably busy point.

History of Windows

Bill Gates and Paul Allen shared the leadership of Microsoft until 1977, when Bill Gates became president and Paul Allen vice president. In 1978 the disk drives of the Tandy and Apple machines were 5.25-inch. The first COMDEX computer show in Las Vegas introduced a 16-bit microprocessor, Intel’s 8086 chip. Al Gore came up with the phrase “information highway.” The same year, Apple co-founder Steve Wozniak developed Integer Basic, the Apple’s first programming language, which was quickly replaced by Microsoft’s Applesoft Basic.

Also in 1978, there was a machine with an integrated, self-contained design priced at less than $800, known as the Commodore PET (Personal Electronic Transactor). On April 11, 1978, Microsoft announced its third language product, Microsoft COBOL-80. On November 1, 1978, after that third language introduction, Microsoft opened its first international sales office in Japan, delegating ASCII Microsoft, located in Tokyo, as its exclusive sales agent for the Far East. Finally, on New Year’s Eve of 1978, Microsoft announced that its year-end sales were over $1 million. The following year, in April 1979, Microsoft 8080 BASIC became the first microprocessor software product to win the ICP Million Dollar Award. The big-computer market had been dominated by software for the mainframe; the recognition of a personal computer product indicated growth and acceptance in the industry.

Both Allen and Gates returned home to Bellevue, Washington, and announced plans to open offices in their home town, thus becoming the first microcomputer software company in the Northwest.

Technical Details of both Linux and Windows OS’s

An OS takes care of all input and output coming to a computer. It manages users, processes, memory, printing, telecommunications, networking, and so on. The OS sends data to the disk, the printer, the screen, and other peripherals connected to the computer. A computer can’t work without an OS. The OS tells the machine how to process instructions coming from input devices and from software running on the computer. Because every computer is built differently, commands for input or output have to be treated differently. In most cases an operating system is not a gigantic nest of programs but a small system of programs that operate around a core, or kernel. The system is so compact that with these small supporting programs it is easier to rewrite parts or packages of the system than to redesign an entire program.

When first created, OSs were designed to help applications interact with the computer hardware. That is still true today, but the importance of the OS has risen to the point where the operating system defines the computer. The OS provides a layer of abstraction between the user and the machine when they communicate. Users don’t see the hardware directly, but view it through the OS. This abstraction can be used to hide certain hardware details from the application and the user.
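As a rough illustration of that abstraction, here is a minimal C sketch, assuming a POSIX-style system such as Linux; the file name “example.txt” is hypothetical. The program asks the OS to read a file and never touches the disk hardware itself:

    /* Minimal sketch: the program reads a file through the standard C
     * library; the OS and its drivers handle the disk hardware.
     * Assumes a POSIX-style system such as Linux; "example.txt" is a
     * hypothetical file name used only for illustration. */
    #include <stdio.h>

    int main(void)
    {
        char buffer[128];
        FILE *fp = fopen("example.txt", "r");   /* ask the OS to open the file */
        if (fp == NULL) {
            perror("fopen");                    /* the OS reports why it failed */
            return 1;
        }
        /* The same call works whether the file lives on SCSI, SATA, or a
         * network share: the OS hides those hardware details. */
        while (fgets(buffer, sizeof buffer, fp) != NULL)
            fputs(buffer, stdout);
        fclose(fp);
        return 0;
    }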

Applied software is software that is not generic but written for one single task or machine; it will not run on any other machine. Applications like this include SABRE, the airline reservation system, and defense systems. Computer Aided Software Engineering (CASE) addresses the fact that creating software is an expensive and time-consuming process: CASE programs support, and in some cases replace, the engineer in creating computer programs. CAD/CAM systems combine computer-aided design and computer-aided manufacturing, an electronic drawing board in a computer program whose features keep multiplying, such as premanufactured elements, strength calculations, and simulations of how a construction will hold up in earthquakes.

In the Linux world there has been a question going back and forth for a while now: is SCSI dead for workstations? There have been many advancements in SATA, and the mainstream acceptance of the 10K RPM Western Digital Raptor may have made SCSI too expensive for what is needed in a workstation. It’s time we take a look at Linux and how the Western Digital Raptor WD740GD compares to the three latest Ultra320 SCSI drives: the Seagate Cheetah 10K.7, Seagate Cheetah 15K.3, and Seagate Cheetah 15K.4. This section covers the technology of the drives, acoustics, heat, size, and performance.

Let’s take a look at the latest generation of the Seagate 10K and 15K Cheetah lines. We will also be taking an in-depth look at the latest 10K SATA drive from Western Digital, the 74GB WD740GD. Starting with the Western Digital Raptor, WD pushes this drive as the low-cost answer to SCSI. On their website, they like to show off the drive’s 1,200,000-hour MTBF (Mean Time Between Failures), which matches the last-generation MTBF of the Seagate Cheetah 15K.3 and is very close to the reliability rating of today’s Cheetahs.

The Cheetah’s datasheet also mentions that the drive is designed for “high performance around the clock” usage. Both the Cheetah and the Western Digital Raptor drives have the same amount of cache memory. When speaking of operations in a multi-tasking/multi-user environment, various queuing techniques are an advantage. All Ultra320 SCSI drives support what is called Native Command Queuing, or NCQ: all commands sent to the disk drive can be queued up and reordered in the most efficient order. This stops the drive from having to serve a request on one side of the disk, go to the other side of the disk to serve another request, and then return for the next request. While some SATA drives do support NCQ, the Raptor does not. The Raptor does have another form of queuing called Tagged Command Queuing, or TCQ. This method is not as effective as NCQ and requires support in both the drive and the host controller. From what has been determined so far, TCQ support is sparse, even under Windows.
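To make the reordering idea concrete, here is a simplified C sketch. It is only a conceptual illustration, not the firmware algorithm of any real NCQ or TCQ implementation, and the request numbers are invented: pending requests are sorted by logical block address so the head can service them in one sweep instead of seeking back and forth.

    /* Simplified illustration of command queuing: pending requests are
     * reordered by logical block address before being serviced.
     * Conceptual sketch only; real drive firmware is far more involved. */
    #include <stdio.h>
    #include <stdlib.h>

    static int by_lba(const void *a, const void *b)
    {
        unsigned long x = *(const unsigned long *)a;
        unsigned long y = *(const unsigned long *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        /* Hypothetical queue of requests, in the order the host sent them. */
        unsigned long queue[] = { 90210, 1024, 500000, 2048, 499999 };
        size_t n = sizeof queue / sizeof queue[0];

        qsort(queue, n, sizeof queue[0], by_lba);   /* reorder the queue */

        for (size_t i = 0; i < n; i++)
            printf("service LBA %lu\n", queue[i]);
        return 0;
    }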

Western Digital backs up its durability claim by citing the use of fluid dynamic bearings in its drives. Fluid dynamic bearings replace ball bearings to cut down on drive wear and tear and to decrease operating noise.

Microsoft Windows XP technologies make it easy to enjoy games, music, and movies, as well as to create movies and enhance digital photos. DirectX 9.0 technology drives high-speed multimedia and various games on the PC. DirectX provides the exciting graphics, sound, music, and three-dimensional animation that bring games to life, and it is also the link that allows software engineers to develop high-speed, multimedia-driven games for the PC. DirectX was introduced in 1995, and its popularity soared as multimedia application development reached new heights. Today DirectX has progressed to an Application Programming Interface (API) built into Microsoft Windows operating systems, so software developers can access hardware features without having to write hardware-specific code.

The features of the Windows Media Player 9 Series, with its smart jukebox, give users more control over their music. Easy CD transfer to the computer and CD burning are included, with compatibility across portable players. Users can also discover more through premium entertainment services. Windows Media Player 9 Series works well with Windows XP, using the built-in digital media features to deliver a state-of-the-art experience.

When Windows Millennium Edition came out in 2000, it was specifically designed for home users and included the first Microsoft version of a video editing product. Movie Maker is used to capture, organize, and edit video clips, and then export them for PC or web playback. Movie Maker 2, released in 2003, adds new movie-making transitions, jazzy titles, and neat special effects. Based on Microsoft DirectShow and Windows Media technologies, Movie Maker was originally included only with Windows Millennium Edition. Now Movie Maker 2 is available for Windows XP Home Edition and Windows XP Professional.

With the release of Windows XP in 2001 came Windows Messenger, bringing instant messaging to users across the internet. Users communicate with text messages in real time in Windows Messenger. Real-time messaging with video conferencing had been available for some time before then, but Windows Messenger was the first communication tool to integrate easy-to-use text chat, voice and video communication, and data collaboration.

Linux is developed openly and is thus freely redistributable in source code form. Linux is available and developed over the internet; many of the engineers who took part in producing it are from overseas and have never met one another. Developing the operating system at the source code level, and on this large a scale, has led the way to it becoming a featureful and stable system.

Eric Raymond has written a popular essay on the development of Linux entitled The Cathedral and the Bazaar. He describes the way the Linux kernel uses a Bazaar approach in which code is released quickly and very often, and this invites input that provides improvement to the system. This Bazaar approach is contrasted with the Cathedral approach used by other projects such as the GNU Emacs core. The Cathedral approach is characterized by more beautiful code when it is released, but unfortunately it is released far less often, and it offers a poor opportunity for people outside the group, who cannot contribute to the process.

The highlights and successes of Bazaar projects do not include opening the code to everyone at the design level; at that stage the Cathedral approach is widely viewed as appropriate. Once debugging of the code is under way, it pays to open the Bazaar so that everyone can find different errors in the code; if they can fix the code, that is a great help to the coders.

Advantages and Disadvantages of the two OS’s

Chris Browne, the writer of a Linux OS web page, describes the way Linux efforts are distributed and some of the advantages and disadvantages of the Linux OS. Linux comes with experimental versions, such as the 2.5.x series, where version numbers go steadily upwards every week. The stable version changes only when bugs are detected in the system, and the bugs must first be fixed in the experimental series, an occurrence that does not happen very often. Linux users know that this happens, and they work to resolve the bugs.

It is not guaranteed that all users will immediately fix problems with their systems if they are not being affected (or don’t notice they are affected) by them, but fixes are quickly available, sometimes distributed across the internet after only a few hours of diagnosis. For Linux, fixes are often available more quickly than from commercial vendors like Microsoft, HP, and IBM, usually before those vendors even know there’s a problem. This openness is in contrast to other companies’ behavior: Bill Gates claims in his press releases that Microsoft code has no bugs, which seems to mean that there are no bugs Microsoft cares to fix.

Microsoft came to the conclusion that the majority of bugs detected in its systems are present because users don’t use the software correctly, and that the problems which remain are few in number and caused by actual errors. On the Linux side, the remaining work is to get a stable system: suitably configured Linux kernels, with suitably configured software on top, should and do run their workload for hundreds of days without rebooting the computers. Some of the general public, as well as computer professionals like engineers and technicians, complain that Linux is always changing. Chris says that “effort and interest in the Linux kernel will stop when people want to stop building and enhancing the Linux kernel.” As long as new technology and devices like video cards are being constructed and people interested in Linux keep coming up with improvements, work on the Linux OS will progress.

The disadvantage of the Linux OS is that work on it may end because a better platform for kernel hacking appears, or because Linux in the future becomes so unwieldy that it is unmanageable. This has not happened yet, but many researchers say that in the future, with various plans for providing services to consumers and businesses, Linux is moving away from the base kernel and into user space, which creates less room for data and information. The announcement of a Debian Hurd effort suggests an alternative for those interested in kernel hacking. The Hurd kernel, which runs as a set of processes on top of a microkernel such as Mach, may provide a system for those people who are not satisfied with changes to the Linux kernel. Mach has a “message passing” abstraction that allows the OS to be created as a set of components that work in conjunction with one another.
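As a conceptual sketch of that message-passing style, not Mach’s actual interface, the following C program shows one “component” sending a request to another over a POSIX pipe instead of both living inside one monolithic kernel; the message format and component roles are invented for illustration.

    /* Conceptual sketch of message passing between two OS "components",
     * using a POSIX pipe in place of Mach's real messaging primitives.
     * The message layout and roles here are invented for illustration. */
    #include <stdio.h>
    #include <unistd.h>

    struct message {
        int  type;        /* what the sender wants done */
        char body[32];    /* request payload */
    };

    int main(void)
    {
        int channel[2];
        if (pipe(channel) != 0)
            return 1;

        /* "Client" component sends a request message... */
        struct message request = { 1, "read block 2048" };
        if (write(channel[1], &request, sizeof request) != (ssize_t)sizeof request)
            return 1;

        /* ...and the "server" component receives and acts on it. */
        struct message received;
        if (read(channel[0], &received, sizeof received) != (ssize_t)sizeof received)
            return 1;
        printf("server got request type %d: %s\n", received.type, received.body);
        return 0;
    }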

Competitive, Collaborative Efforts

To start this section, I’ll tell about the beginning of the personal computer and its roots with IBM. Vertically integrated, proprietary, de facto standards architectures were the norm for the first three decades of the postwar computer industry. Each computer manufacturer made most if not all of its technology internally and sold that technology as part of an integrated computer. This systems era was ascendant from IBM’s 1964 introduction of its System 360 until the release of the IBM personal computer in 1981. It was challenged by two different approaches. One was the fragmentation of proprietary standards in the PC industry between different suppliers, which led Microsoft and Intel to seek industry-wide dominance for their proprietary components of the overall system architecture, making what Moschella (1997) terms the “PC era” (1981-1994). The second was a movement by users and second-tier producers to construct industry-wide “open” systems, in which the standard was not owned by a single firm.

The adoption of the Linux system in the late 1990s was a response to these earlier approaches. Linux was the most commercially accepted example of a new wave of “open source” software, in which the software and the source code are freely distributed to use and modify. Its claimed advantages stood in contrast to the proprietary PC standards, particularly the software standards controlled by Microsoft. Product compatibility standards have typically been considered using a simple unidimensional typology, bifurcated between “compatible” and “incompatible.” To illuminate differences between proprietary and open standards strategies, Gabel’s (1987) multi-dimensional classification is more useful, with each dimension assuming one of several (discrete) levels:

“multivintage” compatibility between successive generations of a product;

“product line” compatibility, providing interoperability across the breadth of the company’s product line, as Microsoft has with its Windows CE, 95/98/ME, and NT/2000 product families;

“multivendor” compatibility, i.e. compatibility of products between competing producers.

The first successful multi-vendor operating system was Unix, developed by a computer science research group at Bell Telephone Laboratories (BTL) in New Jersey beginning in 1969. As with the earlier Multics research project between MIT, BTL, and mainframe computer maker General Electric, Unix was a multi-user, time-shared OS designed as a research project by programmers for their own use. Other characteristics key to Unix’s success reflected path dependencies of its developers and early users (Salus 1994):

AT&T was forbidden by its 1956 consent decree from being in the computer business, so it did not sell the OS commercially. After research papers were published, Bell Labs was flooded with requests from university computer science departments, which received user licenses and source code but little support. Budget constraints limited BTL researchers to DEC minicomputers rather than large mainframe computers, so Unix was simpler and more efficient than its Multics predecessor and was based on the simplified C programming language rather than the more widely used PL/I. Although originally developed for DEC minicomputers, Unix was converted to run on other models by users who found programmer time less expensive than buying a supported model, thus setting the stage for it to become a hardware-independent OS.

Maybe one of the most important developments was the licensing of Unix to the U.C. Berkeley Computer Science Department in 1973. The Berkeley group issued its own releases from 1977 to 1994, with much of its funding provided by the Defense Advanced Research Projects Agency (DARPA). The results of the Berkeley development included (Garud and Kumaraswamy 1993; Salus 1994):

The first Unix version to support TCP/IP, later the standard protocols of the internet;

Academic adoption of BSD Unix as the preferred OS by many computer science departments throughout the world;

Commercial spread of BSD-derived Unix through Sun Microsystems, cofounded by former BSD programmer Bill Joy;

Fragmentation of Unix developers and adopters into rival “BSD” and “AT&T” camps as they evolved their versions of Unix.

AT&T Unix provided a multivendor standard which, when coupled with the BSD advancements, helped spur the adoption of networked computing. Helped by Sun, whose slogan was “the network is the computer,” Unix rapidly gained acceptance during the 1980s as the preferred OS for networked engineering workstations (Garud and Kumaraswamy 1993). At the same time, it became a true multivendor standard as minicomputer producers with small customer bases, weak R&D, and immature OSs licensed Unix from AT&T. The main exceptions to the Unix push were the early leaders in workstations (Apollo) and minicomputers (DEC), which used their proprietary OSs as a source of competitive advantage and were the last to switch to Unix in their respective segments.

Some of the advocates from the two camps formed a number of trade associations to promote Unix and related operating systems. By fueling the adoption and standardization of Unix, they hoped to increase the amount of application software available to compete with sponsored, proprietary architectures (Gabel 1987; Grindley 1995). These groups promoted their efforts under the rubric “open systems”; the editors of a book series on such systems summarized their goals as follows:

Open systems allow users to move their applications between systems easily; purchasing decisions can be made on the basis of cost-performance ratio and vendor support, rather than on which systems will run a user’s application suite (Salus 1994: v).

Despite these goals, the Unix community spent the 1980s and early 1990s fragmented into warring AT&T and Berkeley factions, each of which sought control of the OS APIs to maximize the software available for its versions. Each faction had its own adherents. To avoid repeating earlier mainframe switching costs, U.S. Department of Defense procurement decisions began to favor Unix over proprietary systems. As AT&T formalized its System V Interface Definition and encouraged hardware makers to adopt System V, it became the multivendor standard required by DoD procurements.

Because the BSD variant was developed only for DEC minicomputers, it was not multivendor and was less attractive and appealing for DoD procurements. The numerous innovations of the BSD group in usability, software development tools, and networking nonetheless made it more attractive to university computer scientists for their own research and teaching, making it the minicomputer OS preferred by computer science departments in the U.S., Europe, and Japan (Salus 1994). The divergent innovation meant that the two major Unix variants differed in internal structure, user commands, and application programming interfaces (APIs). It was the latter difference that most seriously affected computer buyers, as custom software developed for one type of Unix could not directly be recompiled on the other, adding switching costs between the two systems. Also, both modem-based and DARPA networking facilitated the distribution of user-donated source code libraries, which were free but often required site-specific custom programming if the Unix APIs at the user’s site differed from those faced by the original contributor.
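One sketch of that kind of API divergence, not drawn from any particular package, is string handling: BSD code commonly used index() and bzero() where System V code used strchr() and memset(), so contributed sources often carried small portability shims. The SYSV macro below is a hypothetical configuration flag used only for illustration.

    /* Sketch of a BSD/System V portability shim.  SYSV is a hypothetical
     * build flag; on System V-style systems the BSD names are mapped onto
     * the strchr()/memset() equivalents. */
    #include <stdio.h>
    #ifdef SYSV
    #include <string.h>
    #define index(s, c)   strchr((s), (c))
    #define bzero(p, n)   memset((p), 0, (n))
    #else
    #include <strings.h>   /* BSD-style declarations of index() and bzero() */
    #endif

    int main(void)
    {
        char line[64];
        bzero(line, sizeof line);                   /* clear the buffer */
        snprintf(line, sizeof line, "user@host:/usr/src");
        char *sep = index(line, ':');               /* find the separator */
        if (sep != NULL)
            printf("path part: %s\n", sep + 1);
        return 0;
    }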

“Microsoft Windows continues to invest in products based on the Itanium processor family, and the Itanium Solutions Alliance will further this investment by helping to grow the ecosystem of applications and solutions available on the Windows platform and SQL Server 2005,” said Bob Kelly, general manager, Windows infrastructure, Microsoft Corp. “We look forward to working with the members of the Itanium Solutions Alliance to help IT managers transition from RISC-based Unix servers to Itanium-based systems running on the Windows platform.”


Source by Kevin M Papenhaus