Wednesday, February 9, 2011

INTRODUCTION TO OPERATING SYSTEM

1.1.1 Operating system meaning
An operating system is a software program that controls the hardware. The
definition of an operating system can be seen in four aspects:
i. A group of programs that acts as an intermediary between a user
and the computer hardware.
ii. It controls and coordinates the use of computer resources among
various application programs and users.
iii. It allows programs to communicate with one another.
iv. It acts as a manager.
In other words...
An operating system is a collection of resident programs (programs that are always in memory) that manage resources, oversee the execution of processes, and provide useful services while keeping the computer system secure.
Basically, there are two types of software:
i. System software
System software is a group of programs that control the
hardware.
ii. Application software
Application software is a group of programs used by the end user
for various applications such as word processing, spreadsheets,
information processing, etc.
An operating system has three objectives, as listed
below:
- Convenience: an operating system makes a computer more
convenient to use.
- Efficiency: an operating system allows the computer system's
resources to be used in an efficient manner.
- Ability to evolve: an operating system is constructed in such a way
as to permit the effective development, testing and introduction of
new system functions without at the same time interfering with
service.
Meanwhile, the functions of an operating system are to:
- control the computer's resources
- manage program execution
- manage data and information

There are 4 types of operating system:
a. Batch system
b. Single user system
c. Multiprogramming system
d. Multiprocessor system
Batch system
- Early computers were physically enormous machines run from
a console.
- The common input devices were card readers and tape devices.
- The common output devices were line printers, tape drives and card punches.
- The users of such systems did not interact directly with the computer.
- Rather, the user prepared a job, which consisted of a program, the data, and some
control information about the nature of the job, and submitted it to the computer
operator.
- The operating system in these early computers was fairly simple.
- Its major task was to transfer control automatically from one job to the
next. The operating system was always resident in memory.
Single User System
- The input/output devices have certainly changed: panels of
switches and card readers were replaced by typewriter-like keyboards and
mice, while line printers and card punches gave way to display screens
and small, fast printers.
- PC operating systems were therefore neither multi-user nor multitasking. However,
the goals of these operating systems changed with time: instead of
maximizing CPU and peripheral utilization, they opted for maximizing user convenience and responsiveness.
- These systems include PCs running Microsoft Windows and the Apple Macintosh.
Multiprogramming system
- Multiprogramming organizes jobs (code and data) so that the CPU always has
one to execute.
- One job is selected and run via job scheduling.
- When that job has to wait (for I/O, for example), the OS switches to another job, as the sketch below illustrates.
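
To make the idea concrete, here is a minimal, hypothetical sketch in C (all names invented purely for illustration): the "OS" keeps a small job table and, whenever the running job blocks for I/O, it simply moves the CPU on to the next job that is ready.

    /* Hypothetical sketch of multiprogramming: when the running job
     * blocks for I/O, the OS switches the CPU to another ready job. */
    #include <stdio.h>

    enum state { READY, WAITING_IO, DONE };

    struct job {
        int id;
        enum state st;
        int steps_left;           /* remaining work, in abstract steps */
    };

    int main(void) {
        struct job jobs[3] = { {1, READY, 3}, {2, READY, 2}, {3, READY, 4} };
        int done = 0, cur = 0;

        while (done < 3) {
            struct job *j = &jobs[cur];
            if (j->st == READY) {
                printf("CPU runs job %d\n", j->id);
                if (--j->steps_left == 0) {
                    j->st = DONE;            /* job finished */
                    done++;
                } else {
                    j->st = WAITING_IO;      /* job blocks for I/O... */
                }
            } else if (j->st == WAITING_IO) {
                j->st = READY;               /* ...pretend the I/O completed */
            }
            cur = (cur + 1) % 3;             /* OS switches to the next job */
        }
        printf("all jobs finished\n");
        return 0;
    }

The point of the sketch is only that the CPU is never idle while a ready job exists: each time one job blocks, control passes immediately to another.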
Multiprocessor system
- The architecture of a multiprocessor operating system is complicated
because it needs several processors to operate and carry out various
tasks, which are not synchronized with one another.
- One reason for using a multiprocessor system is to provide reliability and
graceful degradation when failures occur.
- The efficiency of multiprocessor operating systems is influenced by the communication
scheme, the synchronization mechanism and the processor communication
structure.
1.1.3 A brief history of operating systems
The first generation (1945-1955): Vacuum Tubes and Plugboards
- In the 1940s, Howard Aiken at Harvard, John von Neumann at the Institute
for Advanced Study in Princeton, J. Presper Eckert and William Mauchley
at the University of Pennsylvania, and Konrad Zuse in Germany, among
others, all succeeded in building calculating engines using vacuum tubes.
These machines were enormous, filling up entire rooms with tens of
thousands of vacuum tubes, but were much slower than even the cheapest
home computer available today.
The second generation (1955-1965): Transistors and batch systems
- The introduction of the transistor in the mid-1950s changed the picture
radically. Computers became reliable enough that they could be
manufactured and sold to customers with the expectation that they would
continue to function long enough to get some useful work done. There
was a clear separation between designers, builders, operators,
programmers and maintenance personnel.
- When the computer finished whatever job it was currently running, an
operator would go over to the printer, tear off the output and carry it
over to the output room, so that the programmer could collect it later.
- Then he would take one of the card decks that had been brought from
the input room and read it in. If the FORTRAN compiler was needed, the
operator would have to get it from a file cabinet and read it in. Much of the
computer's time was wasted while operators were walking around the
machine room.
- After each job finished, the operating system automatically read the next job
from the tape and began running it.
The Third generation (1965-1980): ICs and Multiprogramming
- By the early 1960s, most computer manufacturers had two distinct, and
totally incompatible, product lines. There were word-oriented, large-scale
scientific computers, such as the 7094, which were used for numerical
calculation in science and engineering.
- The 360 was the first major computer line to use (small-scale) integrated
circuits (ICs), thus providing a major price/performance advantage over
the second-generation machines, which were built up from individual
transistors. It was an immediate success, and the idea of a family of
compatible computers was soon adopted by all the other major
manufacturers. The descendants of these machines are still in use at large
computer centers today.
- They also popularized several key techniques absent in second-generation
operating systems. Probably the most important of these was
multiprogramming. On the 7094, when the current job paused to wait
for a tape or other I/O operation to complete, the CPU simply sat idle
until the I/O finished.
- Another major feature present in third-generation operating systems was
the ability to read jobs from cards onto the disk as soon as they were
brought to the computer room.
The Fourth generation (1980-1990): Personal Computers
- With the development of LSI (Large Scale Integration) circuits, chips
containing thousands of transistors on a square centimeter of silicon, the
age of the personal computer dawned. The most powerful personal
computers used by businesses, universities and government installations are
usually called workstations, but they are really just large personal
computers. Usually they are connected together by a network.
- An interesting development that began taking place during the mid-1980s
was the growth of networks of personal computers running network
operating systems and distributed operating systems. In a network
operating system, the users are aware of the existence of multiple
computers, and can log in to remote machines and copy files from one
machine to another. Each machine runs its own local operating system and
has its own users.



Multiprocessing System


Definition
Multiprocessing is the use of two or more CPUs within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
- Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system, as opposed to a single process at any one instant. However, the terms multitasking or multiprogramming are more appropriate to describe this concept, which is implemented mostly in software, whereas multiprocessing is more appropriate to describe the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither of the two.
- A computer system in which two or more CPUs share full access to a common RAM
- Continuous need for faster computers
- Shared-memory model
- Message-passing multiprocessor
- Wide-area distributed system


Multiprocessor Hardware
NUMA multiprocessor characteristics:
1. Single address space visible to all CPUs
2. Access to remote memory via LOAD and STORE commands
3. Access to remote memory is slower than access to local memory

Real-Time System
A real-time operating system (RTOS) is an OS intended to serve real-time application requests.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.
A real-time OS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency, but a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

Design philosophies

The most common designs are:
  • Event-driven designs switch tasks only when an event of higher priority needs service; this is called preemptive priority, or priority scheduling (see the sketch after this section).
  • Time-sharing designs switch tasks on a regular clock interrupt, and on events; this is called round robin.
Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.
Early CPU designs needed many cycles to switch tasks, during which the CPU could do nothing else useful. For example, with a 20 MHz 68000 processor (typical of late 1980s), task switch times are roughly 20 microseconds. (In contrast, a 100 MHz ARM CPU (from 2008) switches in less than 3 microseconds.) Because of this, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
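
As a rough illustration of the difference (hypothetical code, not taken from any real kernel), the core decision of an event-driven design reduces to a single priority comparison, while a time-sharing design switches on every clock tick once the slice runs out:

    /* Event-driven (preemptive priority): switch only when an event wakes
     * a task of higher priority than the one currently running. */
    struct task { int priority; };   /* higher number = more urgent */

    int should_preempt(const struct task *running, const struct task *woken) {
        return woken->priority > running->priority;
    }

    /* Time-sharing (round robin): the clock interrupt handler switches
     * unconditionally once the running task's time slice is used up. */
    int slice_expired(int ticks_used, int slice_length) {
        return ticks_used >= slice_length;
    }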

Scheduling
In typical designs, a task has three states:
1) running (executing on the CPU)
2) ready (ready to be executed)
3) blocked (waiting for input/output).
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready-to-be-executed state.
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled. But the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That way, finding the highest priority task to run does not require iterating through the entire list. Inserting a task then requires walking the ready list until reaching either the end of the list, or a task of lower priority than that of the task being inserted.
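
A minimal sketch of that priority-sorted ready list in C (singly linked here for brevity, and purely illustrative rather than any particular RTOS's API):

    #include <stddef.h>

    struct task {
        int priority;              /* higher number = higher priority */
        struct task *next;
    };

    static struct task *ready_head = NULL;

    /* O(1): the highest-priority ready task is always at the head. */
    struct task *pick_next_task(void) {
        return ready_head;
    }

    /* Walk the list until reaching the end, or a task of lower priority
     * than the one being inserted; insert there to keep the list sorted. */
    void make_ready(struct task *t) {
        struct task **pp = &ready_head;
        while (*pp != NULL && (*pp)->priority >= t->priority)
            pp = &(*pp)->next;
        t->next = *pp;
        *pp = t;
    }

The trade-off described above is visible here: finding the next task to run is constant time, at the cost of a linear walk on every insertion.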




Algorithms
Some commonly used RTOS scheduling algorithms are:
  • Cooperative scheduling
  • Fixed-Priority Scheduling with Deferred Preemption
  • Fixed-Priority Non-preemptive Scheduling
  • Critical section preemptive scheduling
  • Static time scheduling


Intertask communication and resource sharing

Multitasking systems must manage sharing data and hardware resources among multiple tasks. It is usually "unsafe" for two tasks to access the same specific data or hardware resource simultaneously. "Unsafe" means the results are inconsistent or unpredictable.
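
The usual remedy for such unsafe access is to serialize it with a lock, so that only one task touches the resource at a time. A minimal sketch using POSIX threads (an illustrative choice of API; a real RTOS would supply its own mutex or semaphore primitives):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter = 0;        /* the shared data */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            shared_counter++;             /* safe: no other task is here */
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", shared_counter);  /* always 200000 */
        return 0;
    }

Without the lock, the two increments can interleave and updates get lost, which is exactly the "inconsistent or unpredictable" result described above.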




Message Passing
The other approach to resource sharing is for tasks to send messages in an organized message-passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than that of semaphore systems, simple message-based systems avoid most protocol deadlock hazards and are generally better behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high-priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
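
A toy sketch of this paradigm in C (hypothetical names, single-threaded for clarity; a real system would make send_msg block or fail safely under concurrent senders):

    #include <stdio.h>

    struct msg { int sender; int request; };

    #define QSIZE 8
    static struct msg queue[QSIZE];       /* the manager's mailbox */
    static int head = 0, tail = 0;

    /* Client tasks never touch the resource; they just post requests. */
    int send_msg(struct msg m) {
        if ((tail + 1) % QSIZE == head)
            return -1;                    /* mailbox full */
        queue[tail] = m;
        tail = (tail + 1) % QSIZE;
        return 0;
    }

    /* The one task that owns the resource drains its mailbox in order. */
    void manager_task(void) {
        while (head != tail) {
            struct msg m = queue[head];
            head = (head + 1) % QSIZE;
            printf("manager handles request %d from task %d\n",
                   m.request, m.sender);
        }
    }

    int main(void) {
        send_msg((struct msg){ .sender = 1, .request = 42 });
        send_msg((struct msg){ .sender = 2, .request = 7 });
        manager_task();
        return 0;
    }

Note how the priority-inversion hazard mentioned above arises naturally here: the manager drains its mailbox in arrival order, so an urgent request can sit behind a low-priority one.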



Memory Allocation
Memory allocation is more critical in an RTOS than in other operating systems.
First, the speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block. This is unacceptable in an RTOS, since memory allocation has to occur within a certain amount of time.
The simple fixed-size-blocks algorithm works quite well for simple systems because of its low overhead.
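
For instance, a fixed-size-blocks pool allocates and frees in constant time, because both operations are just a push or pop on a free stack. A simplified, hypothetical sketch (single pool, no locking):

    #include <stddef.h>

    #define BLOCK_SIZE 64
    #define NBLOCKS    32

    static unsigned char pool[NBLOCKS][BLOCK_SIZE];
    static void *free_stack[NBLOCKS];
    static int top = -1;

    void pool_init(void) {
        for (int i = 0; i < NBLOCKS; i++)
            free_stack[++top] = pool[i];  /* every block starts free */
    }

    void *pool_alloc(void) {              /* O(1): pop a free block */
        return (top >= 0) ? free_stack[top--] : NULL;
    }

    void pool_free(void *p) {             /* O(1): push the block back */
        free_stack[++top] = p;
    }

Because the worst-case path through pool_alloc is a single comparison and one array access, allocation time is strictly bounded, which is exactly the property an RTOS needs.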

Distributed Networking System.
A distributed computer system consists of multiple software components that are on multiple computers, but run as a single system. The computers that are in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on. The goal of distributed computing is to make such a network work as a single computer.



Scalability
The system can easily be expanded by adding more machines as needed.
Redundancy
Several machines can provide the same services, so if one is unavailable, work does not stop. Additionally, because many smaller machines can be used, this redundancy does not need to be prohibitively expensive.



Disk Operating System (DOS) - one of the first operating systems for the personal computer. When you turned the computer on, all you saw was the command prompt, and you had to type all commands at that prompt. This is called a command-line interface, and it was not very "user friendly". DOS is a master control program that is automatically run when you start your PC. It stays in the computer all the time, letting you run programs and manage files. It is a single-user operating system from Microsoft for the PC. It was the first OS for the PC and is the underlying control program for Windows 3.1, 95, 98 and ME. Windows NT, 2000 and XP emulate DOS in order to support existing DOS applications. To use DOS, you must know where your programs and data are stored and how to talk to DOS.


UNIX operating systems are used in widely-sold workstation products from Sun Microsystems, Silicon Graphics, IBM, and a number of other companies. The UNIX environment and the client/server program model were important elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. LINUX, a UNIX derivative available in both "free software" and commercial versions, is increasing in popularity as an alternative to proprietary operating systems.

Windows is a personal computer operating system from Microsoft that, together with some commonly used business applications such as Microsoft Word and Excel, has become a de facto "standard" for individual users in most corporations as well as in most homes. Windows contains built-in networking, which allows users to share files and applications with each other if their PCs are connected to a network. In large enterprises, Windows clients are often connected to a network of UNIX and NetWare servers.
