Thursday, September 2, 2021

Algorithmic Design and Data Structure Techniques




An algorithm is a set of instructions for completing a task. A data structure is a specific way of organizing information, such as a stack or a queue. Both algorithms and data structures are used to help design programs, and the goal is to choose the best algorithm or data structure for the information in order to maximize efficiency.

When looking at the complexity of an algorithm, it is good to consider both the time complexity and the space complexity. Time complexity focuses on how long the algorithm takes to run, including how many memory accesses are performed and how many times the inner loops execute (www.cs.utexas.edu, n.d.). Space complexity focuses on how much memory the algorithm needs while it runs (www.cs.utexas.edu, n.d.).

According to TutorialsPoint, algorithms can include the following operations to enhance time and space efficiency:

·      Order (sort) – Puts the items in a certain order so they can be found faster.

·      Find (search) – Looks for a certain item.

·      Add (insert) – Adds an item.

·      Remove (delete) – Takes out an item.

·      Update – Changes an existing item.

These are some of the functions where an algorithm can help to create efficiency (www.tutorialspoint.com, n.d.).

Here are some examples of different types of searching and sorting algorithms.

Linear search is simply searching through all elements in order. An example of this is scrolling through the DVR list of recorded programs: I hit the down arrow until I find the show I want to watch (Lysecky, Vahid, & Givargis, 2017).
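To make this concrete, here is a minimal linear search sketch in Python (the list of shows is just a made-up example):

def linear_search(items, target):
    # scan every element in order, like stepping through a DVR list
    for index, item in enumerate(items):
        if item == target:
            return index  # found it at this position
    return -1  # the target is not in the list

shows = ["News", "Cartoons", "Cooking", "Movies"]
print(linear_search(shows, "Cooking"))  # prints 2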

Binary search works on a sorted list. It checks the middle element and then keeps narrowing the search to whichever half could contain the element until it is found. This is similar to looking up a phrase in a physical encyclopedia, back when they came in books: a person does not read every entry but opens near the right spot and narrows down to the entry they are looking for.
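Here is a minimal binary search sketch in Python. Note that it assumes the list is already sorted, just like the encyclopedia entries are already alphabetized:

def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # check the middle element
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # the target can only be in the upper half
        else:
            high = mid - 1  # the target can only be in the lower half
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # prints 4

Each comparison throws away half of the remaining list, which is why it beats linear search on large sorted arrays.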

Selection sort repeatedly looks for the next smallest item and moves it into the sorted part of the array. Its time complexity is O(N²), where N is the size of the array. An example of this is a farmer picking strawberries and organizing them from smallest to largest.
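A minimal selection sort sketch in Python, with made-up numbers standing in for the strawberries:

def selection_sort(items):
    n = len(items)
    for i in range(n - 1):
        smallest = i
        # look through the unsorted part for the next smallest item
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # move it to the end of the sorted part
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([64, 25, 12, 22, 11]))  # prints [11, 12, 22, 25, 64]

The two nested loops over the array are where the O(N²) running time comes from.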

Merge sort is thought of as divide and conquer. It splits the array in half repeatedly until every item stands alone, then uses temporary arrays to merge them back into larger arrays in order, ending with one final merge that sorts the whole array. A real-life example could be a bride asking her two bridesmaids to help with the seating chart: each sorts her own list, and then the lists are merged together into one final seating chart (GeeksforGeeks, 2021).
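A minimal merge sort sketch in Python, following the split-then-merge idea above:

def merge_sort(items):
    if len(items) <= 1:
        return items  # a single item is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # divide: sort each half separately...
    right = merge_sort(items[mid:])
    merged = []                      # ...then merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # whatever remains is already in order
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # prints [3, 9, 10, 27, 38, 43, 82]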

Insertion sort works from one end of the array to the other, comparing items as it goes. It divides the array into a sorted part and an unsorted part, then places each item where it belongs in the sorted part. It can be very time consuming, especially if every item is out of order; that is its worst case, with a time complexity of O(N²), while the best-case scenario is O(N). An example is rearranging the playing cards in your hand when you are playing a card game and want them in order. Another example is a stack of books that need to be put away at the library: you slot each book into the correct place in the stack, one by one, until they are all in order before you shelve them (Programming with Mosh, 2020).
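A minimal insertion sort sketch in Python; think of current as the next card being slotted into your hand:

def insertion_sort(items):
    for i in range(1, len(items)):
        current = items[i]  # the next unsorted item
        j = i - 1
        # shift larger sorted items to the right to open a slot
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current  # drop the item into place
    return items

print(insertion_sort([7, 2, 4, 1, 5]))  # prints [1, 2, 4, 5, 7]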

Shell sort takes advantage of the good parts of insertion sort. It sorts the array in stages by comparing elements that are a certain gap apart; for example, it might look at every third element and sort just those. After each pass, the bigger elements have drifted to the right and the smaller elements to the left. It keeps shrinking the gap, finishing with a plain insertion sort, which is now much faster because hardly any elements still need to move; this leaves insertion sort near its best-case scenario (GeeksforGeeks, 2021). The running time depends on the gap sequence; a commonly cited worst case is around O(N^(3/2)). There is a helpful chart in a book I found online (Miller & Ranum, n.d.) that shows how the sublists get sorted until the entire list is sorted.
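A minimal shell sort sketch in Python. Gap sequences vary; this sketch uses the simple option of halving the gap each pass:

def shell_sort(items):
    gap = len(items) // 2
    while gap > 0:
        # a gapped insertion sort: compare items that are `gap` apart
        for i in range(gap, len(items)):
            current = items[i]
            j = i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2  # shrink the gap; the last pass (gap 1) is plain insertion sort
    return items

print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44]))  # prints [10, 14, 19, 27, 33, 35, 42, 44]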

Quick sort is recursive. It picks a pivot, and the pivot must meet three requirements: it must end up in the right (final) position, everything to its left must be smaller, and everything to its right must be bigger. First, move the pivot to the very end of the array for now, so it is not in the way. Second, scan from the left for an item that is larger than the pivot and from the right for an item that is smaller, then swap those two items. Repeat this process, swapping, until the left scan crosses the right scan. Lastly, swap the pivot back in at that point. Then do the same thing to the first and second halves of the array until both are sorted. It has a worst-case scenario of O(N²) and a best case of O(N log N) (Sambol, 2016). A real-life example would be working through a shopping list at the grocery store when you are rushing: you grab the items on one side of the store first, then the other.
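A minimal quick sort sketch in Python. The video describes a two-pointer partition; for simplicity, this sketch uses the Lomuto partition scheme instead, which still meets the three requirements (the pivot lands in its final spot, smaller items end up on its left, bigger items on its right):

def partition(items, low, high):
    pivot = items[high]  # use the last item as the pivot
    i = low - 1
    for j in range(low, high):
        if items[j] <= pivot:
            i += 1
            items[i], items[j] = items[j], items[i]  # smaller items go left
    items[i + 1], items[high] = items[high], items[i + 1]  # pivot to its final spot
    return i + 1

def quick_sort(items, low=0, high=None):
    if high is None:
        high = len(items) - 1
    if low < high:
        p = partition(items, low, high)
        quick_sort(items, low, p - 1)   # recursively sort the left part
        quick_sort(items, p + 1, high)  # recursively sort the right part
    return items

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # prints [10, 30, 40, 50, 70, 80, 90]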

Radix sort sorts the array by running a stable counting sort on the last digit, then the second-to-last digit, then the third-to-last, and so on; because each pass is stable, the order from earlier passes is preserved. If n is the number of elements in the array, d is the number of digits in the elements, and b is the base of the number system, the time complexity is O(d(n+b)), so it is pretty fast (CS DoJo, 2017).
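A minimal radix sort sketch in Python, assuming non-negative integers. It uses digit buckets, which play the same stable role as the counting sort step described in the video:

def radix_sort(items, base=10):
    if not items:
        return items
    place = 1  # 1s digit, then 10s, then 100s, ...
    while max(items) // place > 0:
        buckets = [[] for _ in range(base)]
        for num in items:
            buckets[(num // place) % base].append(num)  # stable: keeps earlier order
        items = [num for bucket in buckets for num in bucket]
        place *= base
    return items

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # prints [2, 24, 45, 66, 75, 90, 170, 802]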

When looking at stacks and queues, the best time to use them is when the information needs to be accessed in a certain order, last in, first out for a stack or first in, first out for a queue, rather than by position as in an array.
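A quick sketch of both in Python; a plain list works as a stack, and collections.deque works as a queue:

from collections import deque

# Stack: last in, first out, like a pile of plates
stack = []
stack.append("first")
stack.append("second")
print(stack.pop())      # prints "second": the most recent item comes off the top

# Queue: first in, first out, like a checkout line
queue = deque()
queue.append("first")
queue.append("second")
print(queue.popleft())  # prints "first": the oldest item leaves the front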

It seems the best algorithm to choose depends on what the program is attempting to accomplish. A linear search works better for smaller arrays, while a binary search works better for larger arrays that are already sorted. This is because it takes less time to look straight through a few elements than to break them down first; if the array is bigger, then divide and conquer becomes more efficient. It also depends on the best- and worst-case scenarios listed above.

My personal favorite is quick sort because, just like the name states, it is fast. It is a recursive sort, and that helps with large data. In its best-case scenario, it can be the most efficient.

References

CS DoJo. (2017, March 26). Radix Sort Algorithm Introduction in 5 Minutes. Retrieved from YouTube: https://www.youtube.com/watch?v=XiuSW_mEn7g

GeeksforGeeks. (2021, July 20). ShellSort. Retrieved from GeeksforGeeks: https://www.geeksforgeeks.org/shellsort/

Lysecky, R., Vahid, F., & Givargis, T. (2017). Data Structures Essentials. Retrieved from https://zybooks.zyante.com/#/zybook/DataStructuresEssentialsR25/chapter/1/section/3

Miller, B., & Ranum, D. (n.d.). Problem Solving with Algorithms and Data Structures using Python. Retrieved from Pythonds: https://runestone.academy/runestone/books/published/pythonds/index.html

Programming with Mosh. (2020, June 29). Insertion Sort Algorithm Made Simple [Sorting Algorithms]. Retrieved from YouTube: https://www.youtube.com/watch?v=nKzEJWbkPbQ

Sambol, M. (2016, August 14). Quick sort in 4 minutes. Retrieved from YouTube: https://www.youtube.com/watch?v=Hoixgm4-P4M

www.cs.utexas.edu. (n.d.). Complexity Analysis. Retrieved from www.cs.utexas.edu: https://www.cs.utexas.edu/users/djimenez/utsa/cs1723/lecture2.html

www.tutorialspoint.com. (n.d.). Data Structures & Algorithms - Quick Guide. Retrieved from www.tutorialspoint.com: https://www.tutorialspoint.com/data_structures_algorithms/dsa_quick_guide.htm

 
