Algorithms cheat-sheet — Program efficiency, asymptotic behavior, complexity cases…
“Algorithm” defined:
An algorithm is a process or a set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Beyond simply producing a solution to a real-world problem, the study of algorithms is also concerned with the efficiency of that solution in terms of time, resources (memory) and effort (operations).
Example of a computer algorithm:
One of the simplest algorithms is finding the largest number in a list of numbers in random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in high-level English prose:
- If there are no numbers in the set, then there is no highest number.
- Assume the first number in the set is the largest number.
- For each remaining number in the set: if this number is larger than the current largest number, consider it the largest number in the set.
- When there are no numbers left in the set to iterate over, the current largest number is the largest number in the set.
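The steps above can be sketched directly in Python (an illustrative sketch; the function name find_largest is my own):

```python
def find_largest(numbers):
    # If there are no numbers in the set, there is no largest number.
    if not numbers:
        return None
    # Assume the first number in the set is the largest.
    largest = numbers[0]
    # For each remaining number, keep it if it is larger.
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest
```

Note how each bullet of the prose description maps to one line or block of the code.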
Classification of algorithms:
One way to classify algorithms is by implementation means:
- Recursion
- Logical
- Serial, parallel or distributed
- Deterministic or non-deterministic
- Exact or approximate
- Quantum algorithm
Algorithms can also be classified by their design paradigm:
- Brute-force
- Divide and conquer
- Search and enumeration
- Randomized algorithm
- Asymptotically optimal algorithm or reduction of complexity
Time complexity
Algorithms can be classified by the amount of time they need to complete relative to their input size:
Constant time O(1)
If the time needed by the algorithm is the same regardless of input size. E.g. accessing an array element
Linear time O(n)
If the time is proportional to the input size. E.g. traversing a list
Logarithmic time O(log n)
If the time is a logarithmic function of the input size. E.g. binary search algorithm
Polynomial time O(nᵏ)
If the time is a fixed power of the input size. E.g. bubble sort algorithm, which runs in O(n²)
Exponential time O(2ⁿ)
If the time is an exponential function of the input size. E.g. brute-force search over all subsets
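The first three classes above can be illustrated with small Python functions (illustrative sketches; the function names are my own):

```python
def constant_access(arr):
    # O(1): indexing an array takes the same time regardless of its size.
    return arr[0]

def linear_traverse(arr):
    # O(n): traversing the list visits every element exactly once.
    total = 0
    for x in arr:
        total += x
    return total

def halving_steps(n):
    # O(log n): the problem size halves on each iteration,
    # the pattern underlying binary search.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

For example, halving_steps(1024) returns 10, since 1024 can be halved ten times before reaching 1.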
Scope of algorithms
Algorithm engineering focuses on the design, analysis, implementation, optimization, profiling and experimental evaluation of computer algorithms, bridging the gap between algorithm theory and practical applications of algorithms in software engineering.
It is the general study of the algorithmic method.
Sorting algorithms
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) which require input data to be in sorted lists; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions:
- The output is in non-decreasing order (each element is no smaller than the previous element according to the desired total order).
- The output is a permutation (a reordering that keeps all of the original elements) of the input.
Further, the data is often taken to be in an array, which allows random access, rather than a list, which only allows sequential access, though often algorithms can be applied with suitable modification to either type of data.
Classification of sorting algorithms:
Factors by which sorting algorithms can be classified:
- Computational complexity of comparisons (worst, average and best behavior)
- Computational complexity of swaps (for in-place algorithms)
- Memory usage
- Recursion (recursive, iterative, or both like merge sort)
- Stability
- General method: insertion, selection, merging, exchange, etc.
- Serial or parallel execution
Comparison of sorting algorithms
Popular sorting algorithms
Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list.
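A common in-place formulation of insertion sort, sketched in Python on a copy of the input (the shifting of larger elements replaces the "new sorted list" of the prose description):

```python
def insertion_sort(items):
    result = list(items)  # work on a copy, leaving the input untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift elements larger than key one slot to the right,
        # opening the correct position for key in the sorted prefix.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result
```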
Selection sort
Selection sort is an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.
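A minimal Python sketch of selection sort: on each pass it finds the smallest element of the unsorted region and swaps it to the front (shown on a copy of the input for convenience):

```python
def selection_sort(items):
    result = list(items)
    n = len(result)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted region.
        min_idx = i
        for j in range(i + 1, n):
            if result[j] < result[min_idx]:
                min_idx = j
        # Swap it into place at the end of the sorted region.
        result[i], result[min_idx] = result[min_idx], result[i]
    return result
```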
Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4…) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list.
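The same idea is usually expressed recursively: split the list in half, sort each half, then merge the two sorted halves. A Python sketch of that top-down formulation:

```python
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge two already-sorted lists into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the lists is exhausted; append the remainder of the other.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Using `<=` in the merge keeps equal elements in their original relative order, which makes this sketch a stable sort.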
Heap sort
In computer science, heapsort is a comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum.
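A Python sketch of heapsort using an array-embedded max-heap with a sift-down helper (one standard way to realize the heap the paragraph describes; shown on a copy of the input):

```python
def heap_sort(items):
    result = list(items)
    n = len(result)

    def sift_down(start, end):
        # Restore the max-heap property for the subtree rooted at start,
        # considering only indices up to end.
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            # Pick the larger of the two children, if both exist.
            if child + 1 <= end and result[child] < result[child + 1]:
                child += 1
            if result[root] < result[child]:
                result[root], result[child] = result[child], result[root]
                root = child
            else:
                return

    # Build a max-heap over the whole array.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    # Repeatedly move the maximum to the end of the shrinking unsorted region.
    for end in range(n - 1, 0, -1):
        result[0], result[end] = result[end], result[0]
        sift_down(0, end - 1)
    return result
```

In practice Python's standard library `heapq` module provides a min-heap that can serve the same purpose.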
Quick sort
Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm.
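One concrete realization of the in-place partition operation is the Lomuto scheme, sketched in Python below (pivot choice and partition scheme vary across implementations; this is just one simple variant):

```python
def quick_sort(arr, lo=0, hi=None):
    # In-place quicksort using the Lomuto partition scheme.
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        pivot = arr[hi]          # choose the last element as the pivot
        i = lo
        # Partition: move every element smaller than the pivot to the left.
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        # Place the pivot between the lesser and greater sublists.
        arr[i], arr[hi] = arr[hi], arr[i]
        quick_sort(arr, lo, i - 1)
        quick_sort(arr, i + 1, hi)
    return arr
```

Always choosing the last element as pivot degrades to O(n²) on already-sorted input; real implementations pick the pivot randomly or via median-of-three to avoid this.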
Bubble sort
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass.
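The passes described above translate directly into Python (sketched on a copy of the input; the swapped flag implements "repeat until no swaps have occurred"):

```python
def bubble_sort(items):
    result = list(items)
    n = len(result)
    swapped = True
    while swapped:
        swapped = False
        # Compare each pair of adjacent elements, swapping when out of order.
        for i in range(n - 1):
            if result[i] > result[i + 1]:
                result[i], result[i + 1] = result[i + 1], result[i]
                swapped = True
    return result
```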
Binary search algorithm
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until it is successful. If the search ends with the remaining half being empty, the target is not in the array.
Binary search runs in at worst logarithmic time, making O(log n) comparisons, where n is the number of elements in the array, the O is Big O notation, and log is the logarithm. Binary search takes constant (O(1)) space, meaning that the space taken by the algorithm is the same for any number of elements in the array. Although specialized data structures designed for fast searching — such as hash tables — can be searched more efficiently, binary search applies to a wider range of problems.
Although the idea is simple, implementing binary search correctly requires attention to some subtleties about its exit conditions and midpoint calculation.
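An iterative Python sketch that handles those subtleties (inclusive bounds, loop exit when the interval is empty, safe midpoint):

```python
def binary_search(arr, target):
    # Returns the index of target in the sorted array arr, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:              # the interval [lo, hi] is still non-empty
        mid = (lo + hi) // 2     # Python ints don't overflow; in C-like
                                 # languages use lo + (hi - lo) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1         # target can only lie in the upper half
        else:
            hi = mid - 1         # target can only lie in the lower half
    return -1                    # interval became empty: target is absent
```

Common bugs include using `lo < hi` as the loop condition (missing the last element) and forgetting the `+ 1`/`- 1` adjustments, which can cause an infinite loop.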
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays, efficiently solving a series of search problems in computational geometry and numerous other fields.
Sources and references:
- Wikipedia
- YouTube (for benchmarks study and algorithm visualization)