Thursday, September 2, 2021





An algorithm is a set of instructions for completing a task. A data structure is a specific way of organizing information, such as a stack or a queue. Both algorithms and data structures are used to design programs, and the goal is to choose the best algorithm or data structure for the information at hand to maximize efficiency.

When looking at the complexity of an algorithm, it is good to consider both the time complexity and the space complexity. Time complexity focuses on how long the algorithm takes to run, which includes how many memory accesses are performed, the work done between those accesses, and how many times inner loops are executed (www.cs.utexas.edu, n.d.). Space complexity focuses on how much memory the algorithm needs in order to do its work (www.cs.utexas.edu, n.d.).

According to TutorialsPoint, algorithms commonly include the following operations, each of which can be designed with time and space efficiency in mind:

·      Order (sort) – Puts the items in a certain order so they can be found faster.

·      Find (search) – Looks for a particular item.

·      Add (insert) – Adds an item.

·      Remove (delete) – Takes out an item.

·      Update – Changes an existing item.

These are some of the operations an algorithm can perform to create efficiency (www.tutorialspoint.com, n.d.).

Here are some examples of different types of searching and sorting algorithms.

Linear search simply searches through all elements in order. An example of this is scrolling through the DVR of recorded programs: I hit the down arrow until I find the show I want to watch (Lysecky, Vahid, & Givargis, 2017).
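
Here is a rough sketch of linear search in Java (the array contents and names are just made up for illustration):

```java
// A minimal linear search sketch: scan each element in order until the target is found.
public class LinearSearchDemo {
    // Returns the index of target in arr, or -1 if it is not present.
    static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] shows = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(shows, 9)); // prints 2
    }
}
```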

Binary search starts in the middle of a sorted list, decides which half could contain the element, and repeats on that half until the element is found. This is similar to looking up a phrase in a physical encyclopedia, back when they came in books: a person does not read every entry but narrows down to the entry they are looking for.
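
A minimal Java sketch of the idea, assuming the array is already sorted (the example values are made up); Java's standard library also provides the same operation through Arrays.binarySearch:

```java
import java.util.Arrays;

// A minimal binary search sketch; the array must already be sorted.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;    // middle element
            if (sorted[mid] == target) {
                return mid;                      // found it
            } else if (sorted[mid] < target) {
                low = mid + 1;                   // keep searching the right half
            } else {
                high = mid - 1;                  // keep searching the left half
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] entries = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(binarySearch(entries, 23));        // prints 5
        System.out.println(Arrays.binarySearch(entries, 23)); // prints 5
    }
}
```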

Selection sort repeatedly looks for the next smallest item and moves it into the sorted part of the array. Its running time is O(N²), where N is the size of the array. An example of this is a farmer picking strawberries and arranging them from smallest to largest.
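
A hedged sketch of selection sort in Java (the method and array names are illustrative only):

```java
// A minimal selection sort sketch: repeatedly find the smallest remaining
// element and swap it into the front (sorted) part of the array.
public class SelectionSortDemo {
    static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;            // remember the smallest unsorted element
                }
            }
            int temp = arr[i];               // swap it into position i
            arr[i] = arr[minIndex];
            arr[minIndex] = temp;
        }
    }
}
```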

Merge sort is thought of as divide and conquer. It splits the array in half repeatedly until every item stands alone, then merges the pieces back together in order using temporary arrays, ending with one final merge that leaves the whole array sorted. A real-life example could be a bride asking her two bridesmaids to help with the seating chart: each works on her own list, and then the lists are merged together into one final seating chart (GeekforGeeks, 2021).
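
Here is a short, illustrative Java sketch of that divide-and-conquer idea, with a temporary array for the merge step (the method names are my own):

```java
// A minimal merge sort sketch: split the array in half, sort each half
// recursively, then merge the two sorted halves back together.
public class MergeSortDemo {
    static void mergeSort(int[] arr, int left, int right) {
        if (left >= right) return;           // one element is already sorted
        int mid = (left + right) / 2;
        mergeSort(arr, left, mid);
        mergeSort(arr, mid + 1, right);
        merge(arr, left, mid, right);
    }

    static void merge(int[] arr, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];   // temporary array for the merge
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        }
        while (i <= mid) temp[k++] = arr[i++];    // copy any leftovers
        while (j <= right) temp[k++] = arr[j++];
        System.arraycopy(temp, 0, arr, left, temp.length);
    }
}
```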

Insertion sort works from one end of the array to the other, comparing each item to the ones before it as it goes. It divides the array into a sorted part and an unsorted part, then places each new item where it belongs in the sorted part. It can be very time consuming, especially if every item is out of order; that is its worst case, with a time complexity of O(N²). The best case is O(N). An example is arranging the playing cards in your hand during a card game so they are in order. Another example is a stack of books that need to be put away at the library: you place each book in its correct spot in the stack, one by one, until they are all in order before shelving them (Programming with Mosh, 2020).
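
A small illustrative Java sketch of insertion sort, growing the sorted part from the left (names are made up):

```java
// A minimal insertion sort sketch: grow a sorted region on the left and
// insert each new element into its correct spot, like sorting cards in hand.
public class InsertionSortDemo {
    static void insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int current = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > current) {
                arr[j + 1] = arr[j];   // shift larger elements to the right
                j--;
            }
            arr[j + 1] = current;      // drop the current element into place
        }
    }
}
```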

Shell sort takes advantage of the good parts of insertion sort. It sorts the array in stages by dividing it into sublists separated by a gap; for example, it compares every third element and sorts those, so the bigger elements drift to the right and the smaller elements to the left. It keeps shrinking the gap until it finishes with a regular insertion sort, which is now faster because most elements no longer need to move far, leaving insertion sort close to its best case (GeekforGeeks, 2021). Depending on the gap sequence, its running time tends to fall between O(N) and O(N²), around O(N^(3/2)) for some gap sequences. Please see the chart from a book I found online (Miller & Ranum, n.d.); it shows how the sublists are sorted until the entire list is sorted.
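
A minimal Java sketch of shell sort using the simple n/2, n/4, ..., 1 gap sequence (one of several possible gap choices):

```java
// A minimal shell sort sketch: do gapped insertion sorts with a shrinking gap
// (n/2, n/4, ..., 1) so the final pass is close to insertion sort's best case.
public class ShellSortDemo {
    static void shellSort(int[] arr) {
        for (int gap = arr.length / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < arr.length; i++) {
                int current = arr[i];
                int j = i;
                while (j >= gap && arr[j - gap] > current) {
                    arr[j] = arr[j - gap];   // shift elements that are one gap apart
                    j -= gap;
                }
                arr[j] = current;
            }
        }
    }
}
```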

Quick sort is recursive. It chooses a pivot, and the pivot must end up meeting three requirements: it must be in its final position, everything to its left must be smaller, and everything to its right must be bigger. First, move the pivot to the very end of the array so it is out of the way. Next, scan from the left for an item that is larger than the pivot and from the right for an item that is smaller, and swap them; repeat this until the left and right scans cross. Lastly, swap the pivot into the crossing point, which is its final position. Then do the same thing to the first and second halves of the array until both are sorted the same way. It has a worst case of O(N²) and a best case of O(N log N) (Sambol, 2016). A real-life example would be working through a shopping list at the grocery store when you are rushing: you look for the items grouped near one side of the store first, then the other.
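
Here is a hedged Java sketch of quick sort; it uses the last element as the pivot with a common partition scheme (the bookkeeping differs slightly from the step-by-step description above, but the idea is the same):

```java
// A minimal quick sort sketch using the last element as the pivot.
public class QuickSortDemo {
    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int p = partition(arr, low, high);   // pivot lands in its final spot
            quickSort(arr, low, p - 1);          // sort the left part
            quickSort(arr, p + 1, high);         // sort the right part
        }
    }

    static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {                // smaller elements go to the left side
                i++;
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
        }
        int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
        return i + 1;                            // final position of the pivot
    }
}
```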

Radix sort uses counting sort to sort the array by the last digit, then the second-to-last digit, then the third, and so on. It is considered a stable algorithm. If n is the number of elements in the array (the length of the array), d is the number of digits in the elements, and b is the base of the number system, its running time is O(d(n + b)), so it is pretty fast (CS DoJo, 2017).
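
A short illustrative Java sketch of least-significant-digit radix sort in base 10, using a stable counting sort for each digit (for non-negative integers only):

```java
// A minimal LSD radix sort sketch: run a stable counting sort on the ones
// digit, then the tens digit, then the hundreds digit, and so on.
public class RadixSortDemo {
    static void radixSort(int[] arr) {
        int max = 0;
        for (int v : arr) max = Math.max(max, v);
        for (int exp = 1; max / exp > 0; exp *= 10) {
            countingSortByDigit(arr, exp);
        }
    }

    static void countingSortByDigit(int[] arr, int exp) {
        int[] output = new int[arr.length];
        int[] count = new int[10];                      // base b = 10
        for (int v : arr) count[(v / exp) % 10]++;
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];
        for (int i = arr.length - 1; i >= 0; i--) {     // reverse pass keeps it stable
            int digit = (arr[i] / exp) % 10;
            output[--count[digit]] = arr[i];
        }
        System.arraycopy(output, 0, arr, 0, arr.length);
    }
}
```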

When looking at stacks and queues, the best time to use them is when the information needs to be accessed in a specific order (last in, first out for a stack; first in, first out for a queue) rather than by position as in an array.
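
As a quick illustration in Java (the variable names are made up), ArrayDeque can play either role:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A small sketch of the two access patterns: a stack is last-in, first-out,
// while a queue is first-in, first-out.
public class StackQueueDemo {
    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        System.out.println(stack.pop());         // prints "second" (LIFO)

        Deque<String> queue = new ArrayDeque<>();
        queue.addLast("first");
        queue.addLast("second");
        System.out.println(queue.removeFirst()); // prints "first" (FIFO)
    }
}
```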

It seems the best algorithm to choose depends on what the program is attempting to accomplish. A linear search works better for smaller arrays, while a binary search works better for larger arrays, because with only a few elements it takes less time to look through them all than to break them down first. If the array is bigger, divide and conquer is more efficient. It also depends on the best- and worst-case scenarios listed above.

My personal favorite is quick sort because, just like the name states, it is fast. It is a recursive sort, which helps with large data sets, and in its best case it can be the most efficient.

References

CS DoJo. (2017, March 26). Radix Sort Algorithm Introduction in 5 Minutes. Retrieved from YouTube: https://www.youtube.com/watch?v=XiuSW_mEn7g

GeekforGeeks. (2021, July 20). ShellSort. Retrieved from GeekforGeeks: https://www.geeksforgeeks.org/shellsort/

Lysecky, R., Vahid, F., & Givargis, T. (2017). Data Structures Essentials. Retrieved from https://zybooks.zyante.com/#/zybook/DataStructuresEssentialsR25/chapter/1/section/3

Miller, B., & Ranum, D. (n.d.). Problem Solving with Algorithms and Data Structures using Python. Retrieved from Pythonds: https://runestone.academy/runestone/books/published/pythonds/index.html

Programming with Mosh. (2020, June 29). Insertion Sort Algorithm Made Simple [Sorting Algorithms]. Retrieved from YouTube: https://www.youtube.com/watch?v=nKzEJWbkPbQ

Sambol, M. (2016, August 14). Quick sort in 4 minutes. Retrieved from YouTube: https://www.youtube.com/watch?v=Hoixgm4-P4M

www.cs.utexas.edu. (n.d.). Complexity Analysis. Retrieved from www.cs.utexas.edu: https://www.cs.utexas.edu/users/djimenez/utsa/cs1723/lecture2.html

www.tutorialspoint.com/. (n.d.). Data Structures & Algorithms - Quick Guide. Retrieved from www.tutorialspoint.com/: https://www.tutorialspoint.com/data_structures_algorithms/dsa_quick_guide.htm

 

Wednesday, August 4, 2021



Hello, and welcome back to the blog. If you are new to programming, this is a good place to start. This post will go over object-oriented programming, how to download the programs you will need, and how to create a simple program.

Object-oriented programming (OOP) is a set of techniques and guidelines for creating a program out of objects. Essentially, an object-oriented programming system is a way to build a program from classes and objects, where an object can be thought of as an instance of a class, often modeling an item from the physical world. One real-world comparison is a box of crayons: each crayon in the box is an object, and the class is the shared definition of what a crayon is (JavaPoint, n.d.).
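
As a rough sketch of the crayon analogy in Java (the Crayon class and its color field are my own illustration), the class is the blueprint and each crayon created from it is an object:

```java
// Crayon is the class (the blueprint); each crayon created from it is an object.
public class Crayon {
    private final String color;   // each crayon object has its own color

    public Crayon(String color) {
        this.color = color;
    }

    public String getColor() {
        return color;
    }

    public static void main(String[] args) {
        Crayon red = new Crayon("red");     // one object
        Crayon blue = new Crayon("blue");   // another object of the same class
        System.out.println(red.getColor() + " and " + blue.getColor());
    }
}
```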

If you want to practice with Java, it will need to be downloaded. Here is a link to download Java:

https://www.java.com/en/.  

Click the download button and agree to the terms. The file will be saved to your downloads folder; open it and it will install the latest version of Java.

You will also need a program to run and test your code. The IDE I used was IntelliJ IDEA. Here is the link for it:

https://www.jetbrains.com/idea/. I clicked Download in the upper right and chose the Community edition. After it downloaded, I opened the file (from downloads again) and allowed it to make changes to my computer. I opened the program, and it let me create a new project.

After you get the IDE, you can make a new program. A super simple program tutorial is located at this link: https://docs.oracle.com/javase/tutorial/getStarted/index.html. It can be followed to make the simple "Hello World" program.
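
The classic first program from that tutorial looks roughly like this; it just prints one line to the console:

```java
// The classic first Java program: print a single line of text.
public class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // display the string
    }
}
```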

Here are some key concepts that are related to programming with java:

Inheritance is when an object takes on all the characteristics of a parent object. This helps accomplish runtime polymorphism and makes code reusable. Polymorphism just means one job done in various ways, the way "love" in English is "amour" in French: the same idea expressed differently.
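
Here is a small hedged Java sketch of inheritance and runtime polymorphism using that love/amour idea (the class names and method are made up for illustration):

```java
// Greeting is the parent class; FrenchGreeting inherits from it and does
// the same job in a different way (runtime polymorphism).
class Greeting {
    String sayLove() {
        return "love";
    }
}

class FrenchGreeting extends Greeting {
    @Override
    String sayLove() {
        return "amour";                        // same job, done a different way
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Greeting g = new FrenchGreeting();     // parent reference, child object
        System.out.println(g.sayLove());       // prints "amour"
    }
}
```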

Encapsulation is when each class acts as its own private capsule: the data and the code that works on it are tied together in one unit. This keeps the data safe by making the variables private, while the data can still be accessed through methods.

One kind of method that is used to ask an object about itself is the accessor. In OOP this is most often a property, and the accessor is typically seen as a "get" method. That is just one example; an accessor can be any method that gives information about the state of the object.

Abstraction means focusing on the important parts. It can be thought of as a small group of publicly known methods that any other class can use without having to know how they work inside (raymondlewallen, 2005).

Mutators are public methods that can change the object's state while hiding precisely how the change is made. A mutator is usually part of an abstraction and is considered the "set" method, which allows the work to happen in the background (raymondlewallen, 2005).
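
A brief Java sketch tying encapsulation, the accessor ("get"), and the mutator ("set") together; the Account class and its balance field are hypothetical, just for illustration:

```java
// Encapsulation: the data is private, and the accessor and mutator are the
// only public way to read or change it.
public class Account {
    private double balance;            // private data, hidden inside the class

    public double getBalance() {       // accessor: reports the object's state
        return balance;
    }

    public void setBalance(double newBalance) {   // mutator: changes the state
        if (newBalance >= 0) {                    // the "how" stays hidden inside
            this.balance = newBalance;
        }
    }
}
```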

I hope this helps any new programmers!

References

JavaPoint. (n.d.). Java OOPs Concepts. Retrieved from JavaPoint: https://www.javatpoint.com/java-oops-concepts

raymondlewallen. (2005, July 19). Codebetter.com. Retrieved from 4 major principles of Object-Oriented Programming: http://codebetter.com/raymondlewallen/2005/07/19/4-major-principles-of-object-oriented-programming/

Sunday, March 28, 2021


 

Operating Systems Uncovered

By: Jasmin Wind

CPT304: Operating Systems Theory & Design (IND2109A)

Dr. Reichard 

03/28/2021

Describe features of contemporary operating systems and their structures

An operating system's functions can be categorized into two sections. One section is the parts of the operating system that are beneficial to the user. The other section is the parts that are essential to ensuring operations run.

The parts of an operating system that are beneficial to the user include the user interface, I/O operations, file-system manipulation, communications, and error detection. The parts of an operating system that are essential to ensuring operations run are resource allocation, accounting, program execution, and protection and security. The following paragraphs detail each of these (Silberschatz, Galvin, & Gagne, 2014, pp. 55-56).

User interfaces come in a few forms. Graphical user interfaces are the most prevalent and are used on most desktops, laptops, and smartphones; they provide a visual means of interacting with a computer through elements such as icons, menus, and windows. A command-line interface is just what it sounds like and handles instructions typed as lines of text; one example is running a Python program from the shell. A batch interface takes commands that are placed into files, and those files are executed, often without anything visible happening on screen (Silberschatz, Galvin, & Gagne, 2014, p. 56).

I/O operations, pronounced "eye-oh," means input/output. This is the way data moves between the world outside the computer and the computer itself; one example is reading information from a disk into the computer's memory (Silberschatz, Galvin, & Gagne, 2014, p. 56).

File-system manipulation gives the user a place to create and delete files. For example, this can be done in Windows Explorer on a Windows computer (Silberschatz, Galvin, & Gagne, 2014, p. 56).

Communication on an operating system runs both between processes on the same computer and between separate computers on a network. This can happen through shared memory or message passing (Silberschatz, Galvin, & Gagne, 2014, p. 57).

Error detection needs to be running at all times. It can fix some errors, notify the user to fix an error (such as adding ink to the printer), or sometimes stop the entire system (Silberschatz, Galvin, & Gagne, 2014, p. 57).

Figure 1

Operating System



Resource allocation includes redirecting resources from a program that was closed to the programs that are still open and in use. Some devices keep many apps constantly running in the background, while others close them completely to send resources to the open programs. It takes many factors into consideration, such as CPU speed, the registers currently in use, and the jobs at hand (Silberschatz, Galvin, & Gagne, 2014, p. 57).

Accounting keeps track of usage. This can be used to determine what users utilize the most or to charge customers fees for usage (Silberschatz, Galvin, & Gagne, 2014, p. 57).

Protection and security ensure that access is controlled. Security includes authenticating users who are authorized to use the operating system or its programs and, conversely, stopping unauthorized users from hacking in (Silberschatz, Galvin, & Gagne, 2014, p. 57).

Program execution is loading a program into memory and running it. The program must end at some point; hopefully it runs and ends normally, but it can also end in an error (Silberschatz, Galvin, & Gagne, 2014, p. 56).

Operating systems have many moving parts, and it is good to be able to categorize and organize the kinds of functions a system might need. This section covered both kinds of functions in an operating system: those that are beneficial to the user and those that ensure the system runs smoothly. It discussed the user interface, program execution, I/O operations, file-system manipulation, communications, and error detection, as well as resource allocation, accounting, and protection and security. An operating system's functions work in unison to keep the device not only running, but running smoothly and in harmony with itself (Silberschatz, Galvin, & Gagne, 2014, p. 57).

 

If you zoom in on the concept map, you can see that this is a broad generalization of what an operating system can do, and the section on ensuring operations run smoothly covers resource allocation, accounting, protection and security, and program execution.

Figure 2

Ensuring operations run smoothly, part of the concept map  Figure 1, but zoomed in.



All of these factors contribute to the system running smoothly. Different operating systems might have more space allocated for protection and security or program execution, but this is a way to show that these categories fall into the section of the OS running without complication.

Discuss how operating systems enable processes to share and exchange information

A process is a program in execution: a program becomes a process when it is loaded and begins performing its tasks (Silberschatz, Galvin, & Gagne, 2014, p. 108).

Process states are the different stages a process goes through: new, ready, running, waiting, and terminated. Please see the chart based on figure 3.2 from the lecture and reading (Silberschatz, Galvin, & Gagne, 2014, p. 108).

Figure 3

Diagram of Process State



 

Process control blocks are information structures the computer uses to store all the data about each job. Please see the chart from the lecture, figure 3.3 (Silberschatz, Galvin, & Gagne, 2014, p. 108):

Figure 4

Process Control Block

 


Single- and multi-threaded motivations and models

There is an excellent advantage to multi-threaded models because multiple tasks can be carried out at the same time. Users now expect multiple threads and the ability to do many things on a computer at once. An example is using spellcheck in a word processor: users expect to keep typing and to receive a pop-up notification for a new email, and students might expect all of this to happen while Grammarly is running. The benefits include responsiveness, resource sharing, economy, and scalability. Please see the chart from the reading, figure 4.1 (Silberschatz, Galvin, & Gagne, 2014, p. 164):
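
As a rough sketch (not from the reading) of what a multi-threaded Java program can look like, two tasks run at the same time, a bit like spellcheck and an email notification running while you type:

```java
// Two threads run concurrently; main waits for both to finish.
public class MultiThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread spellCheck = new Thread(() -> System.out.println("Checking spelling..."));
        Thread mailAlert  = new Thread(() -> System.out.println("New email received"));

        spellCheck.start();   // both threads run at the same time
        mailAlert.start();

        spellCheck.join();    // wait for them to finish
        mailAlert.join();
    }
}
```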

Figure 5

Single and Multi-Threaded Process



Critical-Section Problem with A Software Solution

The critical-section problem is that only one process can run in the critical section at a time. The solution is to synchronize the different processes. The requirements for a solution are mutual exclusion (only one process is in the critical section at once), progress (processes not using the critical section do not block others from entering), and bounded waiting (there is a limit on how long a process must wait), and these can be enforced through software (Barnes, 2018). The OS scheduler determines which process to run.
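
One hedged way to picture this in Java is a synchronized block, which enforces mutual exclusion so only one thread is inside the critical section at a time (the class and counter here are made up for illustration):

```java
// The synchronized block acts as the "line": one thread at a time.
public class CriticalSectionDemo {
    private int counter = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {     // entry section: acquire the lock
            counter++;            // critical section: only one thread at a time
        }                         // exit section: release the lock
    }

    public int getCounter() {
        synchronized (lock) {
            return counter;
        }
    }
}
```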

Figure 6

Process State and Sharing the Critical Section



Figure 7

Process State and Sharing the Critical Section zoomed in on the termination. 

This zoomed-in part of the figure above illustrates a crucial part of the process: each process needs to terminate after using the critical section, which makes room for other processes to use the critical section as well as more room in memory.

Explain How Main Memory and Virtual Memory Can Solve Memory Management Issues

The goal of memory management is to use memory most efficiently by loading and keeping in memory only the processes that are currently being used (Tutorials Point (India) Ltd., 2018). The memory manager moves processes back and forth between main memory and the disk and keeps track of memory locations and free space. Topics in memory management include basic hardware, the binding of symbolic memory addresses to actual physical addresses, and the distinction between logical and physical addresses (Silberschatz, Galvin, & Gagne, 2014, p. 326).

Each process needs its own separate memory area. To check whether a memory access stays within a process's space, the hardware uses a base register and a limit register: the base register holds the smallest legal address, while the limit register gives the size of the range (Silberschatz, Galvin, & Gagne, 2014, p. 326). Please see the concept map below, which illustrates the memory being used.

Figure 8  

Base and Limit Register


It shows that memory must leave some space for the operating system and for each of the separate processes using the memory, and the map shows each process's base and limit (Silberschatz, Galvin, & Gagne, 2014, p. 327).

The CPU keeps memory secure by comparing every address generated in user mode against the registers. To prevent a user from overwriting code or data or accessing unauthorized memory, the CPU hardware treats such attempts as a fatal error. Only instructions running in kernel mode can load the base and limit registers. The following concept map, based on figure 7.2, illustrates how an access is allowed to go through only if it falls between the base and the base plus the limit (Silberschatz, Galvin, & Gagne, 2014, p. 327).
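
A small illustrative sketch of the check the hardware performs, written here in Java with made-up register values: an address is legal only if it falls between base and base + limit, and anything else would trap to the operating system:

```java
// An address from user mode is legal only if base <= address < base + limit.
public class BaseLimitCheck {
    static boolean isLegal(long address, long base, long limit) {
        return address >= base && address < base + limit;
    }

    public static void main(String[] args) {
        long base = 300_040, limit = 120_900;            // illustrative values
        System.out.println(isLegal(300_050, base, limit)); // true: inside the range
        System.out.println(isLegal(500_000, base, limit)); // false: trap to the OS
    }
}
```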

Figure 9

Hardware Access Protection with Base and Limit Registers

 


 
The binding of symbolic memory addresses to actual physical addresses can be done at compile time, load time, or execution time. Please see the map below, based on figure 7.3 from the reading. If it is already known at compile time where the process will be stored, the compiler generates absolute code. If the location is not known at compile time, relocatable code is generated and the final binding happens at load time. The most common approach is binding during execution of the process; this mapping is completed by the memory management unit (MMU), whose relocation register is added to the logical address to produce the physical address in memory (Silberschatz, Galvin, & Gagne, 2014, pp. 330-331).

Figure 10

Multistep processing of a user program




 

Figure 11

Logical and Physical Addresses



Physical Address Space Vs Virtual Address Space As They Relate To Different Memory Mapping Techniques In Operating Systems

A logical address is generated by the CPU, while a physical address is the one seen by the memory unit. The logical address is also referred to as a virtual address. The logical address space is the set of all logical addresses generated by a program, and the physical address space is the set of all physical addresses corresponding to those logical addresses. Logical and physical addresses are the same under compile-time and load-time binding; they differ under execution-time binding (Education 4u, 2018).

It is easier to visualize each new program as starting from line 0 and continuing until it is done, so each process is stored in its own logical addresses. When the process gets stored in physical memory, it must be placed sequentially, one process after another. This concept map is based on the drawing from the educational YouTube video by Solving Skills (Solving Skills, 2020, 7:10).

Figure 12, Zoomed in part of Figure 13

Logical and Physical Addresses 



With dynamic loading, programmers can tell the program not to load everything at once; routines are loaded as they are called. This helps because memory does not need to have room for the entire program at once in order to run it. The same idea applies to dynamically linked libraries: if the system supports more than static linking, dynamic linking places a stub that records where the routine is located, and the routine is found and loaded the first time it is needed (Silberschatz, Galvin, & Gagne, 2014, pp. 331-332).

Another technique that helps save memory space is swapping: a process can be swapped out momentarily and brought back later, so processes can run even if there is not enough physical memory for all of them at once (Silberschatz, Galvin, & Gagne, 2014, p. 332). Part of my concept map is based on figure 7.2 from the reading (Silberschatz, Galvin, & Gagne, 2014, p. 330), part is based on figure 7.5 (Silberschatz, Galvin, & Gagne, 2014, p. 332), and the middle section is referenced from the YouTube video mentioned above (Solving Skills, 2020, 7:10).

Here is my concept map to illustrate these memory models:

 


 

Figure 13


Memory



The operating system breaks memory into pieces as a memory-management tool. Segmented memory uses blocks of related information, while paged memory splits memory into small, equal-sized sections. Memory can be stored and swapped from physical memory to the hard drive and back again, and when paging is used, the locations are stored in a page table (Silberschatz, Galvin, & Gagne, 2014, pp. 357-358).

Binding can be done at any time, during compile, load, or execution time. Hardware is required, but the address binding can be delayed until it actually runs.  (Silberschatz, Galvin, & Gagne, 2014, p. 329).

 

Explain How Files, Mass Storage, And I/O Are Handled in A Modern Computer System

For most computer users, the use of files and file storage is second nature; it is how the operating system shows what is being stored. "In general, a file is a sequence of bits, bytes, lines, or records, the meaning of which is defined by the file's creator and user" (Silberschatz, Galvin, & Gagne, 2014, p. 478). Files must be named and put into a folder to be saved.

File systems aim to provide I/O storage that can be organized and accessed by users. This helps keep track of files and reduces the number of files with damaged data. One important aspect of a file management system is that it gives the operating system a standardized view of, and access to, the stored data. It can also allow more than one user to use input and output in a multiuser environment. File systems coordinate and manage essential information and offer a catalog that can be searched for fast retrieval (GURU99, n.d.).

File operations include creating, writing, and reading a file, as well as repositioning within a file, deleting, and truncating a file. Some file systems can also append information to the end of a file and rename files. Opening and closing a file are also essential tasks that must be included in file operations (Silberschatz, Galvin, & Gagne, 2014, pp. 480-481).
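
As a hedged example of a few of these operations using Java's standard library (the file name is made up for illustration), a program can create, write, read, and delete a file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Create/write a file, read it back, then delete it.
public class FileOperationsDemo {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("notes.txt");
        Files.writeString(file, "hello file system");   // create and write
        String contents = Files.readString(file);        // read it back
        System.out.println(contents);
        Files.delete(file);                              // delete the file
    }
}
```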

In order to perform the functions above on open files, the system uses file pointers to track the current location within each file. A file-open count keeps track of how many processes have the file open; when they all close it, the entry can be removed from the open-file table, making room for new files to be opened. An open file's access rights and disk location are also managed (Silberschatz, Galvin, & Gagne, 2014, p. 482).

File directories are system tables that hold the files and list them by name. A directory supports many functions, including searching, creating, deleting, listing, renaming, and navigating through files (Silberschatz, Galvin, & Gagne, 2014, p. 492).

Single-level directories are the plainest type of directory: all files are in one place. This can create issues, because no two files can have the same name (Silberschatz, Galvin, & Gagne, 2014, p. 493). Please see the concept map below, which is based on figure 10.9 from the reading (Silberschatz, Galvin, & Gagne, 2014, figure 10.9).

 

Figure 14

Single File Directory

  


Two-level directories have the advantage of allowing users to save files with the same name. They also protect files from being deleted by other users who do not have access. The problem with the two-level directory is that some files would be better shared between users, and the solution is to allow sharing. Each user has their own user file directory (UFD), which can be accessed by a file path (Silberschatz, Galvin, & Gagne, 2014, pp. 493-495). The concept map below is modeled after figure 10.10 in the reading (Silberschatz, Galvin, & Gagne, 2014, p. 493).

Figure 15

Two Level Directory



Tree-structured directories are an extension of the two-level directory. In this setup, the directory has a root (just like a tree) and can hold many files and subdirectories for the users (Silberschatz, Galvin, & Gagne, 2014, p. 495). The concept map below is based on figure 10.11 from the reading (Silberschatz, Galvin, & Gagne, 2014, p. 495).

Figure 16

Tree Structured Directories



Acyclic-graph directories are designed so that subdirectories and files can be shared. When a file is shared, any user can make changes and the original document is changed for all users, which is great when working as a team. There are two ways to accomplish the shared file: duplicate the file, or leave a link with a pointer to the correct location (Silberschatz, Galvin, & Gagne, 2014, pp. 497-498). The figure below is based on figure 10.12 from the reading (Silberschatz, Galvin, & Gagne, 2014, p. 497).

Figure 17

Acyclic-Graph Directories



General graph directories deal with the problem of cycles that arises from acyclic-graph directories: when acyclic-graph directories create sharable links, searches can fall into endless loops. The general graph directory was created to help with moving between shared directories and files; it allows cycles and is more flexible. A disadvantage is the cost, and this type of directory also needs a garbage-collection feature (Silberschatz, Galvin, & Gagne, 2014, p. 499). The model below is based on figure 10.13 from the reading (Silberschatz, Galvin, & Gagne, 2014, p. 499).

 

Figure 18

General Graph Directory

 


  

One essential duty of an operating system is to control its I/O devices. This can be divided into the categories of hardware and software.

Hardware includes monitors, keyboards, the mouse, the power switch, USB drives, printers, and any other I/O component that is a physical part of the computer system. Hardware can be divided into two categories as well: block devices, which transmit chunks of information (for example, a hard disk or USB drive), and character devices, which send and receive single characters (for example, a sound card or a serial port) (Tutorials Point, n.d., a).

Device drivers help the operating system use the hardware. An example is when a person plugs in a wireless mouse for the first time: it takes a moment for the computer to get the driver working correctly. A printer also illustrates this scenario. The printer is connected to the computer by plug and socket, typically USB for standardization, and the device controller sits behind that socket. Either the CPU waits for the I/O to finish, called synchronous I/O, or the I/O proceeds at the same time as other CPU work, called asynchronous I/O (Tutorials Point, n.d., a).

The I/O devices and the CPU can communicate in three ways. The first is special-instruction I/O, where the CPU uses instructions made specifically for controlling a device. The second is memory-mapped I/O, where a region of memory is shared by the I/O device and the CPU, so information can be transferred directly back and forth with memory; this is used for most disks and communication interfaces because it is faster. The third way is direct memory access (DMA), where the CPU allows the I/O device to read and write memory directly to avoid interrupting the CPU for every transfer (Tutorials Point, n.d., a).

Now, let's discuss software. I/O software falls into three layers: user-level libraries (the interface for I/O programming), kernel-level modules (where device drivers interact with the hardware), and the hardware itself, which in this context refers to the software that talks directly to the physical hardware (Tutorials Point, n.d., b).

One of the software's main goals is to work with any device that has I/O capabilities. Software components written to deal with a specific device are called device drivers. Device drivers receive calls from the I/O-management software above them and make sure the device runs smoothly (Tutorials Point, n.d., b).

One way software keeps devices running smoothly is by using interrupt handlers. An interrupt handler is a callback function in a device driver. When an interrupt occurs, the handler figures out why and deals with it if it can, using a stored table of locations to find the correct routine or function to run.

Device-independent software aims to provide the functions needed by all devices and to make I/O more uniform across many devices. It must include features such as device naming, error reporting, and device protection. This is different from user-space I/O software, which is mainly libraries that give programs an easier way to reach the kernel; reaching the kernel leads to the device driver, and these libraries mostly store procedures (Tutorials Point, n.d., b).

The kernel I/O subsystem oversees several vital tasks: scheduling, buffering, caching, spooling, device reservation, and error handling (Tutorials Point, n.d., b).

 

Figure 19

Zoomed in on general graph directory.

 


The general graph directory moves between shared directories and files. As you can see, the arrows go back and forth between Directory 4 and Subdirectory 3. Two directories can also share one subdirectory, as the arrows from D2 and D3 both go to SD1.

Outline the mechanisms necessary to control the access of programs or users to the resources defined by a computer system

On some operating systems, the protection of files from users is achieved through the kernel, which checks each request to access a resource. This is a big task for the OS to handle, so sometimes hardware is added to help, or the system designer gives up some aspects of security; this can happen if the protection environment is bigger than needed or if the system is not built with enough flexibility. The goal of protecting resources has grown into checking both the resource and how the resource is being accessed. Because of this change, application designers need to be able to grant access rights along with the OS designers, so access can also be granted through a programming language when designing the program (Silberschatz, Galvin, & Gagne, 2014, p. 620). This section goes over the objectives of domain- and language-based protection in a modern computer system, the use of an access matrix, and how security is used to safeguard computers and networks.

This type of protection, compiler-based enforcement, has pros and cons. One advantage is that protection needs can simply be declared rather than programmed into the kernel, and they can be independent of the OS; the subsystem does not need enforcement already built in, and the protection can be designed precisely for the type of data it is protecting. One disadvantage is that the protection will not be as strong as protection enforced by the kernel (Silberschatz, Galvin, & Gagne, 2014, p. 621). There are a few ways a programming language can enforce protection, depending on what the OS already provides; to use the support already there, a language implementation might call on the existing mechanisms. Comparing the kernel to a compiler, the kernel provides more security but less flexibility. Efficiency is trickier to measure: if the kernel has hardware support, it has the advantage, but language-based enforcement can be customized to the application's needs and reduce the number of kernel calls, which is better overall (Silberschatz, Galvin, & Gagne, 2014, p. 622).

An access matrix is a model of protection represented as a table. It shows the privileges of each domain over each object, that is, which user or process has been granted which kind of access. Please see the sample access matrix below, based on figure 13.5 from the reading (Silberschatz, Galvin, & Gagne, 2014, p. 610). In the example, Domain 2 is not allowed to print or to access Function 2; this user may not need access or may not have been granted it yet, which helps protect the files. The domain with the most access to Function 1 is Domain 3, because it can execute, read, and write the file, while Domain 1 can only execute and read it. This is one way to protect the system. Each domain can be a user, a process, or a procedure (Silberschatz, Galvin, & Gagne, 2014, p. 605).

Figure 20

Access Matrix

 

             Function 1              Function 2       Printer
Domain 1     execute, read           execute          print
Domain 2     execute, read           (no access)      (no access)
Domain 3     execute, read, write    execute, write   print
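
One hedged way to picture the matrix in code is as a lookup table from (domain, object) to the set of rights granted; this Java sketch mirrors the sample matrix above, with illustrative names only:

```java
import java.util.Map;
import java.util.Set;

// Each (domain, object) pair maps to the set of rights that domain holds.
public class AccessMatrixDemo {
    static final Map<String, Map<String, Set<String>>> MATRIX = Map.of(
        "Domain 1", Map.of("Function 1", Set.of("execute", "read"),
                           "Function 2", Set.of("execute"),
                           "Printer",    Set.of("print")),
        "Domain 2", Map.of("Function 1", Set.of("execute", "read")),
        "Domain 3", Map.of("Function 1", Set.of("execute", "read", "write"),
                           "Function 2", Set.of("execute", "write"),
                           "Printer",    Set.of("print"))
    );

    // Returns true only if the domain has been granted this right on this object.
    static boolean allowed(String domain, String object, String right) {
        return MATRIX.getOrDefault(domain, Map.of())
                     .getOrDefault(object, Set.of())
                     .contains(right);
    }

    public static void main(String[] args) {
        System.out.println(allowed("Domain 3", "Function 1", "write")); // true
        System.out.println(allowed("Domain 2", "Printer", "print"));    // false
    }
}
```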

 

Security is used to protect systems from threats as well. While protection mainly deals with internal threats, security focuses on external threats such as attacks by hackers or crackers. Attacks try to break through security by violating confidentiality, integrity, or availability, or through theft of service or denial of service. Security must also consider the environment where the network or operating system is being used. It is crucial to keep information safe from users with malicious intent. It is impossible to protect a system from all threats, but being aware of the different types of dangers can help block them. Encryption and authentication are good defenses against some attacks, monitoring with intrusion-detection software adds another layer, and good firewalls also improve security (Silberschatz, Galvin, & Gagne, 2014, p. 678). Please see the concept map of potential threats I created below.

 

Figure 21

Security, Protection and Threats



Figure 22

Zoomed in on the access matrix from Figure 20 above.



This shows the access matrix allowing Domains 1-3 to execute and read Function 1. Only Domain 3 has permission to write to Function 1. Domain 2 has the least access, as it does not have permission for Function 2 or the printer. The access matrix depicts the permissions and can be updated.

Recommend How You Will Use These Concepts About Operating Systems Theory In Future Courses And/Or Future Jobs

Operating systems on the inside were new to me at the beginning of this course. I knew it was complicated and intricate, but not much else. After meticulously studying the pages of the reading, I discovered this subject does not come easy to me. I would read and re-read the selections for the assignments. I struggled through it, but the discussions and additional texts helped. This is important information for a major in computer science. It is an essential material to build upon in future courses because there is no part of the information technology field that will not come back around to how the actual operating system works. This course will be a great base to build the rest of the information technology courses on top of securely. When we talk about any subject such as a memory or security and protection, I will now have a visual in my mind and a strong understanding of the exact components that are being dealt with in the OS. I will take this knowledge and develop it throughout my future courses and my career. The information gained in this class will enrich future courses due to my newfound understanding. It will also help me to be better equipped for my job ahead.

                      


 

References

Barnes, R. (2018, October 10). Critical Section Problem. Retrieved from TutorialsPoint: https://www.tutorialspoint.com/critical-section-problem

Education 4u. (2018, May 9). Logical vs Physical address space | OS | Lec-32 | Bhanu Priya. Retrieved from YouTube: https://www.youtube.com/watch?v=dDs53dBjErA

GURU99. (n.d.). File Systems in Operating System: Structure, Attributes, Type. Retrieved from GURU99: https://www.guru99.com/file-systems-operating-system.html

Operating System - Processes. (n.d.). Retrieved from tutorialspoint: https://www.tutorialspoint.com/operating_system/os_processes.htm

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating System Concepts Essentials (Second ed.). Danvers, MA, United States of America: John Wiley & Sons. Retrieved from https://platform.virdocs.com/r/s/0/doc/547369/sp/174454196/mi/561747206?cfi=%2F4%2F4&menu=table-of-contents

Solving Skills. (2020, June 8). Main Memory Management [by OS]. Retrieved from YouTube: https://www.youtube.com/watch?v=Ag4p5yCqte8

Tutorials Point (India) Ltd. (2018, January 18). Dynamic Loading, Linking & Overlay. Retrieved from YouTube.com: https://www.youtube.com/watch?v=lWVQsld8hMI

Tutorials Point. (n.d., a). Operating System - I/O Hardware. Retrieved from Tutorials Point: https://www.tutorialspoint.com/operating_system/os_io_hardware.htm

Tutorials Point. (n.d., b). Operating System - I/O Softwares. Retrieved from Tutorials Point: https://www.tutorialspoint.com/operating_system/os_io_software.htm