Operating Systems in a Nutshell
March 12, 2017
Since I started using computers as a young child, I have always had a fascination with operating systems. The first operating system my home computer (an Intel 486 DX4 at 100 MHz) had was MS-DOS 6.22 with a Windows 3.1 GUI on top of it. I also vaguely remember using System 7.5 on 68K and early PowerPC Macintoshes back in elementary school. Those were some good times. I played lots of computer games and pretended I was a boss at a company while slamming my hands on the keyboard running Microsoft Word 6.0.
Initially, I thought that there was a little man in these computers controlling every aspect of them, much like how telephone operators used to route phone calls from one person to another. As it turns out, my imagination was actually pretty close! The little man inside the computer that I had assumed to be controlling everything was, in fact, the operating system! Instead of a physical, conscious being twisting the knobs of the computer's hardware, we have software. That's the only difference.
The operating system, frequently abbreviated as the OS, is a program in your computer system or device designed to manage the hardware and software resources and to control how processes execute on that specific system. Processes are the applications that are executing in the system. With the ability to run applications, memory becomes an important resource that must be managed properly: the operating system needs to be able to allocate and deallocate memory to processes as needed. It also runs device drivers, which are programs that tell the computer how to manipulate hardware devices connected to it.
File management is also a job for the operating system. Storing your files and having them available for reference at any time is something the operating system must handle. With personal files comes the need for security. The operating system has security mechanisms to protect data and resources from any malicious executable code that may destroy or modify data in unintended ways. This is the very reason why we are able to safely and sanely store the paper we are currently writing, turn off the computer and go for a coffee -- all knowing that we will be able to come back and load that same paper in our word processor at a later time.
Personally, one of the coolest things about computers is the operating system. It is definitely a program that I take for granted; I have always just accepted that it "just works". Aside from the minor hiccup here and there with some system file or library that needs tinkering, it is amazing how stable the operating system is. Operating systems these days are pretty robust. It doesn't really matter which computer system you prefer using day to day -- whether it be Mac, Windows or Linux, the operating system is important. In this modern age these operating systems do a lot to manage the physical hardware in your computer. There is a lot of complexity involved.
Although the "it just works" mentality is something we readily accept from an operating system, it is important to know how the operating system works and why it does things the way it does.
There are a few things the operating system is responsible for. Some of the questions the operating system answers through its actions are:
- How does the computer know how to boot up and start running the OS?
- How are applications executed and managed by the OS?
- How does the OS resolve any resource conflicts such as deadlocks?
- How does the OS manage a finite amount of memory?
- How does the OS store your files and know where to look?
I'll go through each one at a very high level and hope that this is a good introduction to pave the way into learning more about any of the above topics.
How the Operating System Boots
The operating system is special in that it is the program which a personal computer knows how to execute upon power-up. This is accomplished with a concept called bootstrapping. To "bootstrap", in the software sense, essentially means to be able to start up and load appropriate resources without further intervention. Hit the switch and everything will just "go". The operating system, in this case, knows how to load all the necessary services it needs to perform the functions it is responsible for.
Achieving this is not really magic; it is rather systematic.
Basically, the personal computer has one special program hardcoded into a ROM chip, called the BIOS. Although UEFI is now the modern piece of software that takes care of this, for simplicity I'll refer to both as simply "BIOS". Since the CPU needs a specific set of instructions to execute at initial power-on, it is hardwired to fetch from a specific memory address in the ROM upon power-up, where the BIOS resides. The BIOS then feeds instructions to the CPU and gets the system ready to boot into the operating system.
The purpose of the BIOS is to load, test and configure the system hardware at a minimal level and then hand control to the operating system by calling the boot loader. The boot loader is the special piece of software that manages the booting process of the operating system once the BIOS has done its job.
The BIOS is a simple program that has several responsibilities. It is in a sense the basic interface between the hardware and the operating system. Here's what occurs during the time when the BIOS has control over the system:
- The BIOS does a power-on self-test (POST) to check that the basic devices the computer needs in order to boot are all functioning properly.
- Assigns any DMA channels and IRQ lines to be used by devices to send interrupts to the CPU.
- Loads interrupt handlers.
- Loads other data into registers needed for OS booting.
Note: An interrupt is basically a doorbell that the CPU listens for in order to fulfill any high-priority tasks at any given moment. Interrupt handlers are the routines that the CPU executes to get those things done. When an interrupt occurs, the operating system transitions from user mode to kernel mode to handle any privileged operations. Oftentimes, this is to invoke system calls that manipulate the computer hardware.
Each interrupt is associated with an interrupt vector, an entry in a table that points to the routine which handles that interrupt.
For more information about the concept of interrupts you can head on over to here!
The BIOS then looks for a device to boot from. In many cases, once a device is found, the CPU will go to a fixed address on that device and execute from there. The portion of data that resides at this location on a bootable device is called the master boot record (MBR). Keep in mind this device can be a hard drive, floppy drive, CD-ROM or even a USB drive. The program residing in the MBR knows where to find the boot loader, and the boot loader in turn knows where the instructions to boot the operating system lie. Usually the boot loader will immediately jump to that address and begin the operating system boot process.
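The "fixed address" hand-off can be illustrated with the classic MBR boot signature: the BIOS treats a device's first 512-byte sector as bootable if it ends in the bytes 0x55 0xAA. A minimal sketch using a synthetic, made-up sector:

```python
def is_bootable(sector: bytes) -> bool:
    """Check the classic MBR boot signature at byte offsets 510-511."""
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

# A synthetic sector: 446 bytes of boot code, 64 bytes of partition table,
# then the 2-byte signature. (Contents are zeroed; only the layout matters.)
fake_sector = bytes(446) + bytes(64) + b"\x55\xaa"
print(is_bootable(fake_sector))   # True
print(is_bootable(bytes(512)))    # False: signature missing
```

In practice the BIOS does this check against real device sectors, of course; the byte array here just stands in for one.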
The operating system boot process is more specific than the first part of the computer boot-up process. The operating system has its own program to load all the services and resources it needs to function correctly. The operating system loads up the program most important to it, called the kernel. The kernel is a program that essentially serves as a mediator between user programs and hardware. A simple diagram to depict this is shown below:
An application never really makes a direct call to the hardware to perform a task. These applications actually interact with the kernel by invoking system calls. The kernel receives the calls and translates them into instructions for the particular piece of hardware. The kernel is the interface between the user application and the computer hardware. The operating system as a whole is the kernel plus the software that supports it, which can include a CPU manager, process manager, file manager, network manager and much more.
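To make that concrete, here is a tiny sketch in Python: `os.write` is a thin wrapper over the `write(2)` system call, so the snippet asks the kernel -- not the display hardware -- to emit bytes to a file descriptor.

```python
import os

def syscall_write(fd, data):
    """os.write wraps the write(2) system call: the process asks the kernel
    to move bytes to the file behind the descriptor. The process never
    touches the device hardware itself."""
    return os.write(fd, data)

# Descriptor 1 is standard output; the kernel returns the byte count written.
n = syscall_write(1, b"hello via a system call\n")
print(n)  # 24
```

Even `print` itself eventually funnels down to this same system call on a UNIX system.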
Processes and Threads
A computer would not be useful if we could not have it perform jobs to solve problems. That's why we have applications! A process is an application in an executing context -- basically, a program in execution. The operating system represents a process as a collection of resources wrapped in a data structure that describes the state of a program currently being executed by the CPU. For each process, the operating system keeps:
- The state of the registers, stacks, and data structures to describe the current state of the running program.
- The address space information allocated in memory for the running code to "work with". Usually this is abstracted away from physical memory and is represented by virtual memory. This is done because modern processes can be very large and may not physically fit in memory. Therefore, it is important for any modern operating system to abstract away physical memory and translate memory references to and from virtual memory via the CPU's memory management unit.
- The executable code that belongs to the process.
- Any threads that have been created by the process to aid in executing non-blocking tasks.
A process is essentially a large task in the operating system, and it comes in the form of a program. The operating system's job is to schedule that process and execute it. A process is represented as a data structure the operating system understands and can manipulate. Oftentimes, the service of the operating system that manages processes is called the process manager.
The process information is represented as a process control block (PCB). It is a data structure that wraps the information mentioned above, into an abstraction that can be manipulated by the system. Having just the program code is not enough as the operating system needs to know more information about the current running program in order to manage it properly.
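A toy PCB can be sketched as a plain data structure. Every field below is illustrative -- real PCBs carry far more (accounting information, scheduling priority, pointers to page tables, and so on):

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    FINISHED = "finished"

@dataclass
class PCB:
    """A toy process control block: just enough state for the OS to
    suspend and later resume the program."""
    pid: int
    state: State = State.READY
    program_counter: int = 0                       # where to resume execution
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    open_files: list = field(default_factory=list) # resources held

pcb = PCB(pid=42)
print(pcb.state)            # State.READY: every new process starts ready
pcb.state = State.RUNNING   # the scheduler dispatches it
print(pcb.state)
```

The point is only that the "context" of a process is ordinary data the operating system can save and restore.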
The job of the operating system's process manager is to schedule and manipulate the list of processes to be run, executing them in a fashion that, in the best case, allows each to complete eventually.
A process moves through several states, and the flow can be described as:
- Every process enters a ready state.
- A process then enters a running state as it is being executed.
- A process in the running state can then go to either:
  - a waiting state, usually because it needs some external resource or is waiting for something to let it proceed again, or
  - back to the ready state, due to its CPU time being exhausted or an interrupt being invoked.
- The process enters the finished state once it has completed its task.
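The legal transitions just listed can be written down as a small lookup table; the state names mirror the list above:

```python
# Sketch: the legal process state transitions, as a lookup table.
TRANSITIONS = {
    "new":     {"ready"},
    "ready":   {"running"},
    "running": {"waiting", "ready", "finished"},
    "waiting": {"ready"},
}

def can_move(src, dst):
    """Return True if a process may move directly from src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: blocked on a resource
print(can_move("waiting", "running"))  # False: must pass through ready first
```

Notice that waiting never leads straight back to running: a woken-up process rejoins the ready queue and waits its turn like everyone else.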
An operating system actually controls processes through a collection of process control blocks (PCBs). The PCB holds the current processor state, which we can refer to as the "context" of the current execution: the contents of the processor's registers and memory information. The OS has a collection of PCBs that it can switch between, back and forth. Assuming a single-core CPU, we call this context switching. The operating system works with the CPU to switch between processes -- saving and restoring the state of each -- so fast that programs look like they are all running at the same time. Another term for this is pseudo-concurrency.
When processes are created, the operating system determines a quantum, an allocation in units of time for which the process can run before being context-switched. This can be anywhere from 10 ms to 100 ms at a time.
The basic flow of process creation involves a parent process invoking a system call to the operating system to create a child process. In UNIX, this system call is fork. When a process fork happens, the child process inherits a copy of all the contents of the parent's PCB. The parent will also acquire the process ID of the child. From there, another system call such as exec will load the appropriate program data into the child.
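Here is a minimal sketch of that flow on a UNIX system, using Python's wrappers around fork, execv and waitpid. The `/bin/echo` path is an assumption about a typical UNIX layout:

```python
import os

def spawn_child():
    """Sketch of the UNIX process-creation flow: fork() clones the parent,
    and the child then calls an exec* function to become a new program."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with /bin/echo
        # (path assumed; it exists on typical UNIX systems).
        os.execv("/bin/echo", ["echo", "hello from the child"])
    # Parent: fork returned the child's process ID; wait for it to finish.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print("child exit status:", spawn_child())
```

Note how fork returns twice -- zero in the child, the child's PID in the parent -- which is exactly how each side knows which role it is playing.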
The diagram shows a very simplified version of what a PCB queue looks like along with a process manager accessing it. We can see that the process manager is pointing to the PCB belonging to some arbitrary process named Process 2 (PCB 2) and from here, the PCBs can be rearranged in priority by having one PCB point to the next. Again, oversimplified, but the point is that there is a container of all these PCBs and that it is up to the operating system to manage it appropriately.
The arrows pointing down below essentially symbolize a context switch. Each time a process is cycled out, the OS will save the state of CPU registers, stacks and data to the process's control block and then load the next process's state information into the CPU. This, paired with the CPU executing instructions at an extremely fast rate, gives us the illusion that all processes are executing at the same time. But it is important to remember that this is not the case: everything executes one at a time on a single-CPU system!
With a queue of multiple processes to execute until finished, the concept of CPU scheduling becomes important, as processes need to be arranged so that they are cycled and executed appropriately. Typically, these PCBs are stored in some sort of queue or list data structure, which the operating system then manipulates. There are several process scheduling algorithms that can be used to decide which process runs next. They boil down to just two main types:
- Non-Preemptive Switching
- Preemptive Switching
Most older operating systems used a form of non-preemptive switching for CPU scheduling, while modern operating systems tend to use preemptive switching.
The basics of non-preemptive switching: when a process is ready to be executed, the operating system will let that process execute without interruption until it has completed its job.
In preemptive switching, the operating system schedules processes for an allocated amount of time using one of several algorithms. With preemptive switching, the process manager can create the illusion of multi-tasking. I won't discuss the algorithms in detail here, but it is important to know that these two types of context switching exist.
A running process keeps executing until it needs a resource, or it has exhausted the amount of time allocated to it by the CPU. If it needs a resource, it moves to the waiting state; if its time ran out, it moves back to the ready state. Either way, the operating system saves its context into its PCB, loads the context of the next process from that process's PCB, and that next process enters the running state.
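A toy round-robin scheduler illustrates preemptive switching. The burst times below are invented, and the "context switch" is just popping the next entry off a ready queue:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling. bursts maps process name to
    remaining CPU time; quantum is the time slice per turn.
    Returns the order in which processes finish."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()        # "context switch" in
        remaining -= min(quantum, remaining)     # run for up to one quantum
        if remaining == 0:
            finished.append(name)                # process reaches finished state
        else:
            ready.append((name, remaining))      # preempted: back to ready queue
    return finished

# Invented burst times in ms, with a 10 ms quantum as mentioned above.
print(round_robin({"A": 30, "B": 10, "C": 20}, quantum=10))  # ['B', 'C', 'A']
```

The short job B finishes first even though A was submitted earlier -- exactly the responsiveness preemption buys you, at the cost of more switches.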
Processes also consist of threads. A thread, in most cases, is just like a process. In fact, a lot of resources floating around the internet make the claim that "threads are lightweight processes". It is really hard to visualize what that means if a person is just beginning to learn about processes. I'm going to try to keep the explanation of threads brief, but hope to give enough information that the difference between a thread and a process can be seen.
A lot of confusion happens when we try to distinguish between the concept of a process and a thread. Threads are similar to processes in that:
- Threads get their own processor time and their own execution state (stack, registers and program counter).
- Threads are also scheduled.
The main differentiating attribute that draws threads away from processes is that threads can only be created by the process itself. Therefore, a thread must belong to a process. Knowing this, a thread belonging to a process can access that process's address space. This is not true for processes. A process has its own address space and cannot directly access another process's address space without some inter-process communication mechanism or a remote call to the other process.
In this sense, a thread is essentially a spawn, or underling, of a process. It is a smaller entity that can do the job of a process, but using the resources that were allocated to the process by the operating system. Many threads, then, can access the address space of the process that created them and do separate work. The reason for having multiple threads is that it allows each thread within the process to be scheduled and run by the OS. That is key.
Thinking about this further, we can say that a single process is essentially one thread: an individual thread consists of its own stack, CPU register state and program counter. When multiple threads are created, each obtains its own stack, register state and program counter. All these threads are then handled by a thread scheduler and can communicate with each other through the same address space in memory.
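A small sketch of that shared address space: two threads append to the same Python list, with no inter-process communication involved. (In CPython, `list.append` is atomic, so this particular sketch needs no lock.)

```python
import threading

shared = []  # one object in the process's address space, visible to all threads

def worker(name, count):
    """Append into the shared list: both threads write to the same memory."""
    for _ in range(count):
        shared.append(name)

def run_workers():
    shared.clear()
    threads = [threading.Thread(target=worker, args=(n, 1000))
               for n in ("t1", "t2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(shared)

print(run_workers())  # 2000: both threads wrote into the same list
```

Two separate processes could not do this so directly -- each would see only its own copy of the list, which is precisely the process/thread distinction made above.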
When each thread is assigned a CPU quantum by the scheduler, the system can effectively do more work by context-switching between the threads in the process. There are two ways for an OS to implement this:
- A thread scheduler within the process in user mode.
- A thread scheduler within kernel mode.
There are advantages and disadvantages to both. If the thread table -- the list of threads that need to be scheduled -- is kept within the process, then each process in user mode keeps its own table and needs its own user-level thread manager to schedule those threads properly. This can get complicated. On the flip side, we gain the advantage of not having to manage threads at the kernel level, as it can be expensive to invoke a kernel system call and have it perform a context switch. So at the expense of more management at the user level, we can potentially gain more speed in driving threads.
What about managing threads at the kernel level? The first advantage is that we do not need any external management for our threads. The thread table lies within the kernel, which allows the kernel to schedule another thread for execution whenever a blocking system call is invoked by the running thread.
In the real world and in a practical sense, it may be wise to consider a hybrid of the two approaches.
A good application of threads we can immediately think of is a video game. In a video game, we have a main loop executing the game logic indefinitely. Within that loop, several threads are created to handle other tasks in the background, such as playing the background music, handling persistent AI, background file manipulation and other things. A thread of a process is scheduled for execution along with the other threads of the same process by the operating system. The result is the ability of one process to perform the illusion of several tasks of its own at once.
Now, the discussion comes to whether spawning lots of processes yields better performance. If we have many processes and threads, the system can, in theory, get more work done. In practice this is not necessarily so, as process switching can be quite expensive. Consider the steps the operating system must take in order to perform a context switch:
- The operating system transitions from user mode to kernel mode.
- The state of the current process must be saved.
- A scheduling algorithm must be run in order to choose the next process to be executed.
- A context switch typically invalidates cached data (including the TLB), forcing reloads from main memory.
As you can see, it is not at all obvious that spawning many processes to reduce blocking is a good idea. Too many processes will result in perceptual slowness.
Deadlocks and Resolution
When processes are switched back and forth, oftentimes a process will obtain the use of a resource that is not sharable with other processes in the system. This could be a printer, or some file that cannot be read or written at the same time by another process.
The pattern is typically like so:
- Process A grabs a non-sharable resource and wants to grab a second non-sharable resource, but it is in use by Process B.
- Process A then goes into a paused state and waits for the second non-sharable resource, currently held by Process B, to be released. But in the transition to the waiting state, Process A never releases the first non-sharable resource it grabbed earlier.
- Process C comes along and tries to grab the first non-sharable resource that Process A holds, but it is unavailable, so C goes into a waiting state.
A deadlock can now occur because of the following:
- B ties up a resource A needs.
- A goes into a waiting state, but holds a resource that C needs.
- C goes into a waiting state because A holds that resource.
A and C are in a perpetual waiting state until B releases its resource. If B never releases the resource that caused A to wait -- and, in turn, C to wait -- then we have a deadlock. (Strictly speaking, a full deadlock requires a circular wait -- say, if B were itself waiting on the resource A holds -- but the effect on A and C here is the same: indefinite blocking.)
When a situation like this occurs, there is a potential deadlock. Formally speaking, a deadlock is when a process waits forever for some dedicated resource (one that cannot be shared) of a system because another process has grabbed it and never releases it, while that process in turn waits on something the first holds. This waiting game between the processes can bring the system to a standstill if multiple processes end up in a deadlocked state.
Resolving deadlocks can be pretty complicated. Preventing them is even more so. Usually when a deadlock happens, the operating system can perform several operations to put CPU and process management back into a normal state:
- Figuring out which process is causing the deadlock, killing that process and restarting it. The killed process becomes the sacrifice (often called the victim).
- Forcing the user to take manual intervention. Telling the user the system is unresponsive due to a process not responding allows them to take some sort of action.
- Halting any processes in the ready state and focusing on running one process at a time to completion. This sometimes causes the computer system to feel slow. The hope here is that the deadlock eventually resolves itself.
Most of the time, a process under normal conditions will release the resources needed by others and a potential deadlock will resolve itself. But when a deadlock does happen, the most popular thing Windows users do is the three-finger salute: CTRL-ALT-DEL.
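One classic way to prevent the circular wait described above is to make every process acquire its locks in the same fixed global order. A sketch, with two invented resources modeled as locks:

```python
import threading

# Two non-sharable resources, modeled as locks. If every job acquires them
# in the same fixed global order (printer first, then datafile), no circular
# wait can form, so this program always terminates.
printer = threading.Lock()
datafile = threading.Lock()

def job(name, log):
    with printer:         # everyone takes the printer lock first...
        with datafile:    # ...then the datafile lock: one global order
            log.append(name)

def run_jobs(n=3):
    log = []
    workers = [threading.Thread(target=job, args=(f"job{i}", log))
               for i in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(log)

print(run_jobs())  # ['job0', 'job1', 'job2'] -- every job completes
```

If one job took datafile first while another took printer first, the two could each end up holding what the other needs -- the deadlock pattern from the bullet list above.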
Avoiding race conditions to maintain consistency of data is also important. This becomes especially so when multiple processes begin to communicate with each other through IPC or RPC.
How do we avoid race conditions? The following can help:
- Define critical regions of code, and find some way to prohibit more than one process from reading and writing shared data at the same time, through the use of locks.
- Use mutual exclusion to make sure that only one process is using a shared resource.
Achieving mutual exclusion can be done in a few ways:
- Disable interrupts for a short while. This is the easiest thing for a process to do, but if interrupts are disabled, context switching is disabled too. This is potentially bad in that a process can forget to turn interrupts back on.
- Use lock variables. Have a single shared lock variable, where only the process that acquires the lock can enter the critical section of code.
- Take turns entering the critical region of code, using busy-waiting (a spin-lock) to achieve this.
- Allow a process to sleep when the critical region cannot be entered, and have another process wake it up.
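The lock-variable approach can be sketched with a shared counter; the `with lock:` block marks the critical region, and only one thread is inside it at a time:

```python
import threading

lock = threading.Lock()
counter = 0

def increment(n):
    """Increment the shared counter n times under mutual exclusion."""
    global counter
    for _ in range(n):
        with lock:        # entering the critical region
            counter += 1  # the read-modify-write is now safe

def run(threads=4, per_thread=100_000):
    global counter
    counter = 0
    ts = [threading.Thread(target=increment, args=(per_thread,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print(run())  # 400000 -- without the lock, some increments could be lost
```

The `counter += 1` line is really a read, an add, and a write; without mutual exclusion, two threads can interleave those steps and silently drop updates.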
Memory Management
In a perfect world, memory would be infinite and we would never have to care about managing it. Unfortunately, that isn't true in our world. But modern operating systems have gotten really good at managing the computer's memory when it comes to executing our jobs.
Historically, a job could only execute if all of its instructions and data fit in memory. Nowadays, with the concepts of paging and virtual memory, those days are essentially a thing of the past.
Older techniques used by operating systems to partition memory -- single-user allocation, fixed partitions, dynamic partitions and best-fit allocation -- all required that a job's complete program code and data fit in memory before its process could be loaded. These techniques had drawbacks, such as not being able to load any process that required more memory than was physically available, and fragmentation issues, both internal and external.
Internal fragmentation is caused by fixed partitioning, where each job gets a fixed allocated portion of memory. When a process is smaller than the partition it is loaded into, the unused space inside that partition is wasted: no other process can be loaded into it.
External fragmentation is introduced with dynamic partitioning. With dynamic partitioning, a job is loaded into memory with a portion of memory dynamically allocated to exactly fit it. However, when a process is unloaded from main memory and a new one is introduced, the new process must be less than or equal to the size of the hole the old one left behind. If the new process is smaller than the space available, the leftover sliver of memory is essentially wasted, and over time these scattered slivers add up.
Regardless of all this, older memory management techniques share one great weakness: the operating system is confined to the amount of physical memory available to the system. Going back to my old 486, which had only 8 megabytes of RAM, older memory management techniques would limit all processes to those 8 megabytes.
Modern memory management techniques make use of concepts called paging and virtual memory.
Paging is a very interesting concept in that jobs are divided into chunks of a fixed size that the operating system specifies. Going back to our theoretical system with 8 megabytes of RAM: if our operating system defines 1 page of memory to be 512 bytes, we would have 16,384 pages to store our jobs (8 MB / 512 B).
The advantage of paging is that each job can be split into pages and dispersed anywhere in memory. The operating system keeps track of the order through the use of a page table -- think of it as an index of all the pages that belong to a process. This eliminates the requirement that a process be loaded whole before the operating system can begin executing it: pages can be swapped in and out of memory as needed. With the page table, the operating system can maintain coherency regarding which piece of code to go to. If a page is not available in memory, the operating system will look to disk and load the appropriate page. This is the basis of virtual memory.
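Address translation with a page table can be sketched in a few lines. The frame numbers below are invented, and a missing page table entry stands in for a page fault:

```python
PAGE_SIZE = 512  # bytes, matching the example above

# A toy page table: virtual page number -> physical frame number.
# The frame numbers are made up for illustration.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr):
    """Split a virtual address into (page, offset) and map it to a physical
    address, roughly as a hardware MMU does using the page table."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    if page not in page_table:
        raise LookupError("page fault: OS must load the page from disk")
    return page_table[page] * PAGE_SIZE + offset

print(translate(1000))  # page 1, offset 488 -> frame 3 -> 2024
```

Virtual address 1000 is page 1 (1000 // 512) with offset 488; frame 3 begins at physical address 1536, so the translation lands at 2024. A lookup for page 5 would raise the "page fault" instead.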
Virtual memory takes the concept of paging a bit further: it allows the operating system to address more memory than is physically available. The operating system combines main memory with a file residing in secondary storage, such as a hard disk, called the page file. The page file can be seen as an extension of main memory. All of this is kept under the same address space, which abstracts the two and transparently treats them as one. Again, yet another oversimplified diagram is shown here:
With virtual memory, only bits and pieces of a process need be loaded into main memory, while the rest can lie on disk. The main advantage is that processes can be as large as needed and the operating system will swap pages in and out of main memory. The disadvantage is that if a lot of swapping happens, we end up with thrashing. Accessing memory is on the order of nanoseconds, while accessing secondary storage can be on the order of milliseconds. If a lot of disk hits are needed to access pages because main memory is too constrained in size, system performance will suffer.
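The cost of thrashing falls out of simple arithmetic: the effective access time is a weighted average of a RAM hit and a disk-serviced page fault. The latencies below are illustrative, not measurements:

```python
# Illustrative latencies: RAM access in nanoseconds, a page fault serviced
# from disk in milliseconds (expressed here in nanoseconds).
RAM_NS = 100
DISK_NS = 8_000_000  # ~8 ms

def effective_access_ns(fault_rate):
    """Weighted average of a memory hit and a page fault."""
    return (1 - fault_rate) * RAM_NS + fault_rate * DISK_NS

print(effective_access_ns(0.0))    # 100.0 ns: no faults at all
print(effective_access_ns(0.001))  # one fault per 1000 accesses
```

Even a one-in-a-thousand fault rate pushes the average access from 100 ns to roughly 8,100 ns -- an 80x slowdown -- which is why heavy swapping makes a machine crawl.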
File Management and Filesystems
What is an operating system without a filesystem!? Files are just data on a storage medium. To keep things simple, I'll refer to secondary storage only as the hard drive. When a user creates a file in the operating system, a bunch of bytes are written to the hard drive. It is up to the operating system to wrap those bytes with information telling it how to read the file properly at a later time, such as the file format and the file path.
A file path is basically the traversal of tree nodes to the file. We can imagine a file being a leaf of an N-ary tree. The root node is then the root directory: in a Windows system, this is typically C:\, while in UNIX-based systems it is just /. In this N-ary tree, the file path represents the traversal needed to get to the file, and each node passed through can be thought of as a directory.
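The tree view of paths can be sketched with nested dictionaries standing in for directories. The miniature filesystem below is entirely made up:

```python
# A made-up miniature filesystem: dicts are directories, and the string
# "file" stands in for file contents at a leaf.
tree = {
    "home": {
        "alice": {
            "paper.txt": "file",
        },
    },
    "etc": {},
}

def resolve(path):
    """Walk each component of a UNIX-style path from the root node.
    A KeyError here is the moral equivalent of 'No such file or directory'."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

print(resolve("/home/alice/paper.txt"))  # file
```

Each dictionary lookup is one step down the N-ary tree -- one directory traversed -- which is all a path really encodes.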
This system described above is referred to as the filesystem of the operating system. The filesystem is the structure the operating system needs in order to organize data -- basically a database. With a filesystem, the operating system can confidently save and load files on disk to give the illusion of persistence.
The filesystem is the index of where everything gets stored, and with that, it is important for an operating system to have an efficiently designed filesystem, as it can determine the number of files that can be stored along with how fast files can be read and written. Think of it as the card catalog for all the data in your computer. You remember card catalogs at the library, right?
Phew, we made it through all that! Though operating systems are complex pieces of software, there is a lot of elegance in their design and architecture. The operating system is the driver of the computer system. Without it, computers would be less interesting and, honestly, probably less versatile than what we are accustomed to.
Hopefully this article gave you an interesting high-level overview of basic operating system concepts. Now that you have an idea of how operating systems work, go learn more!