An algorithm is a set of instructions for solving a problem or performing a calculation. Algorithms are like recipes for computer programs: follow the instructions of the algorithm, and you have a program that solves the problem.

Algorithms come in many different sizes and complexities. Take, for example, the following algorithm for adding two numbers:

```
def addTwoNumbers(a, b):
    return a + b
```

This algorithm is very basic, but every time you perform its steps, it produces the correct answer.

Some algorithms are very complicated, and some solve very complicated problems with very small solutions.

Most of the commonly used algorithms in computer science can be organized into a handful of different categories.

## Sorting Algorithms

Sorting algorithms provide the solution to the sorting problem. The sorting problem is a common occurrence in computer science. It is the need to produce a list of data elements in order, according to a particular set of rules. For example, given a list of book titles, produce the list in alphabetical order by title.

There are many different algorithms that provide solutions to the sorting problem. Some sorting algorithms may provide more academic use than practical use, but studying them all is good practice for understanding algorithms in general. These are some of the most frequently used sorting algorithms in computer science:

### Sorting Algorithms

- Bubble Sort
- Selection Sort
- Insertion Sort
- Quicksort
- Merge Sort
- Heap Sort
- Timsort
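
As an illustration, one of the simplest of these, bubble sort, can be sketched in Python (a minimal version for teaching purposes, not an optimized one):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order elements until the list is sorted."""
    items = list(items)  # work on a copy so the input is left unchanged
    for n in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(n):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # no swaps on a full pass means the list is sorted
            break
    return items
```

Bubble sort is rarely used in practice, but it makes the anatomy of a sorting algorithm easy to see: a comparison rule and a rearrangement step, repeated until the order is correct.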

## Searching Algorithms

Searching algorithms provide the solution to the searching problem. The searching problem is also a common occurrence in computer science. It is the need to find a data element or elements from a list of data elements that meet some criteria. For example, given a list of book titles, find where a given book title is in the list, if it is in the list at all.

As with sorting algorithms, there are many different algorithms to solve the searching problem. Different searching algorithms have different tradeoffs, and provide different benefits for different types of data elements.

### Searching Algorithms

- Linear Search
- Binary Search
- Hash-table Search
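
As an illustration, the simplest of these, linear search, can be sketched in Python as (a minimal version that reports the position of the first match):

```python
def linear_search(items, target):
    """Return the index of the first occurrence of target, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```

Linear search makes no assumptions about the data, which is its main tradeoff: it works on any list, but must examine every element in the worst case.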

## Recursive Algorithms

Recursive algorithms provide solutions to many different types of problems. A recursive algorithm works by breaking the problem into smaller variants of the problem, until the problem can be solved trivially. This trivial solution is called a “base case” in the recursive algorithm.

Recursive algorithms are best explained through example. One of the simplest algorithms for explaining recursion is an algorithm that solves the factorial computation problem. In mathematics, the factorial of a non-negative integer `n` is written `n!`. The value of `n!` is equal to `n * (n - 1) * (n - 2) * ... * 1` — that is, the product of all positive integers less than or equal to `n`. For example, `4! = 4 * 3 * 2 * 1 = 24`.

As a recursive algorithm, the factorial of `n` is represented as `n * factorial(n - 1)`. That is, we can break the problem of n-factorial down into the problem of n multiplied by (n - 1)-factorial. The problem of (n - 1)-factorial can be further broken down into `(n - 1) * factorial(n - 2)`. We continue this trend until we reach our "base case," `factorial(1)`, which is given to be equal to 1.

The factorial algorithm can be represented very simply in Python as:

```
def factorial(n):
    if n <= 1:  # base case: 0! and 1! are both 1
        return 1
    else:
        return n * factorial(n - 1)
```

### Recursive Algorithms

- Fibonacci (Recursive Solution)
- Greatest Common Divisor
- Binary Search

## Greedy Algorithms

Greedy algorithms are used to solve many different types of problems in computer science and mathematics. A greedy algorithm works by making the best available choice at each point in its execution. Some greedy algorithms produce optimal results, while others simply produce correct results. Even when greedy algorithms do not produce optimal solutions, they often provide useful solutions or good approximations to the optimal solution. For some of the problems greedy algorithms are applied to, such as the Traveling Salesperson Problem, no efficient method for finding an optimal solution is known.

### Greedy Algorithms

- Knapsack
- Job Scheduling
- Minimum Spanning Tree
- Huffman Coding
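
A minimal sketch of the greedy strategy, applied here to making change (an illustrative example, not one of the problems listed above; the greedy approach is only guaranteed optimal for "canonical" coin systems such as US denominations):

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedily take the largest coin that still fits, until amount is zero."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:  # the best available choice at this step
            amount -= coin
            coins.append(coin)
    return coins
```

At each step the algorithm commits to the locally best choice (the largest usable coin) and never reconsiders it — the defining trait of a greedy algorithm, and also the reason it can fail on unusual denomination sets.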

## Dynamic Programming Algorithms

Dynamic programming (DP) algorithms are similar to recursive algorithms. Dynamic programming algorithms work by breaking a problem into smaller subproblems, and using the solutions to those subproblems to answer the larger problem. Dynamic programming algorithms can work in one of two ways: memoization or tabulation. The difference between memoization and tabulation can be thought of as solving a problem top-down versus bottom-up.

Using memoization, a problem is solved top-down, as many subproblems. Each subproblem may be broken down into further subproblems. Whenever the solution to a subproblem is found, it is memorized; when the same subproblem is seen again, the solution is recalled from memory rather than recomputed (this is what "memoization" refers to). In many cases this avoids a great deal of repeated work, because the same subproblem can appear within the solutions of several larger subproblems.

This is best illustrated using the dynamic programming solution to the Fibonacci problem. The Fibonacci sequence defines the solution `fibonacci(n) = fibonacci(n - 1) + fibonacci(n - 2)`. The base cases to this problem are `fibonacci(1) = 1` and `fibonacci(2) = 1`.

Take, for example, the problem `fibonacci(5)`. We can break this problem down into subproblems: `fibonacci(5)` becomes `fibonacci(4)` and `fibonacci(3)`; `fibonacci(4)` becomes `fibonacci(3)` and `fibonacci(2)`; and each `fibonacci(3)` becomes `fibonacci(2)` and `fibonacci(1)`.

Looking at this breakdown, it is easy to see that in order to calculate `fibonacci(5)` we have to repeat several calculations: `fibonacci(3)` is performed twice, `fibonacci(2)` is performed three times, and `fibonacci(1)` is performed twice. But if we memorize the solution to a subproblem the first time we see it, we can recall that answer whenever the subproblem appears again. For example, the first time we encounter `fibonacci(3)` and calculate its value to be two, we memorize this answer; the second time we see `fibonacci(3)`, we recall the solution from memory and skip breaking it into separate subproblems. Each distinct subproblem is then computed only once.

Thanks to memoization, we perform fewer calculations and still arrive at the same answer.
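
A minimal top-down sketch of this in Python, using a dictionary as the memo:

```python
def fibonacci(n, memo=None):
    """Top-down dynamic programming: recurse, but remember each answer."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]  # recall a previously computed subproblem
    if n <= 2:
        return 1        # base cases: fibonacci(1) = fibonacci(2) = 1
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
```

The structure is identical to the naive recursive solution; the only addition is the `memo` dictionary, which turns an exponential number of calls into a linear number.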

Tabulation solves a dynamic programming problem from the bottom up. Where the memoization technique starts at the problem statement and divides the problem into subproblems until it reaches the base cases, tabulation starts at the base cases and builds up until it reaches the original problem statement.

Writing an algorithm in this form is often much more complicated than writing it using memoization, but the resulting algorithms often perform much better and use much less memory.
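
For Fibonacci, a bottom-up sketch only needs to keep the two most recent values, so it uses constant memory (a hypothetical `fibonacci_tab` name, to distinguish it from the recursive versions above):

```python
def fibonacci_tab(n):
    """Bottom-up dynamic programming: build from the base cases upward."""
    if n <= 2:
        return 1                # base cases: fibonacci(1) = fibonacci(2) = 1
    previous, current = 1, 1    # fibonacci(1) and fibonacci(2)
    for _ in range(3, n + 1):   # build each value from the two before it
        previous, current = current, previous + current
    return current
```

There is no recursion and no memo table to store: each value is computed exactly once, in order, from the values beneath it.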

### Dynamic Programming Algorithms

- Coin Change
- Longest Common Subsequence
- Fibonacci (Dynamic Programming Solution)

## Divide and Conquer Algorithms

Divide-and-conquer algorithms use recursive solutions to solve problems by breaking the problem into smaller subproblems, and combining the results of the subproblems to arrive at the solution.

The binary search algorithm is a good example of a divide-and-conquer algorithm. The algorithm takes a sorted list of data elements and determines whether a given element is present in the list.

In this case, we are given the sorted list of numbers `{0, 3, 4, 5, 5, 7, 8, 8}` and are asked to determine whether the element `4` exists in the list. The problem is solved by comparing the element to the middle value of the list and discarding the half that cannot contain it, breaking the problem down into smaller subproblems until it becomes trivial.

`Search({0, 3, 4, 5, 5, 7, 8, 8}, 4)` compares `4` to the middle value `5`; since `4 < 5`, the problem becomes `Search({0, 3, 4, 5}, 4)`. This subproblem compares `4` to its middle value `4`, finds a match, and reports back the final answer of "yes."
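
A recursive sketch of this search in Python, assuming the input list is already sorted (list slicing is used for clarity; an index-based version would avoid the copies):

```python
def binary_search(items, target):
    """Return True if target is in the sorted list items, else False."""
    if not items:
        return False  # trivial case: an empty list cannot contain the target
    mid = len(items) // 2
    if items[mid] == target:
        return True
    if target < items[mid]:
        return binary_search(items[:mid], target)   # search the left half
    return binary_search(items[mid + 1:], target)   # search the right half
```

Each recursive call discards half of the remaining list, which is why binary search needs only a logarithmic number of comparisons.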

### Divide and Conquer Algorithms

- Karatsuba
- Quicksort
- Strassen Algorithm

## Brute Force Algorithms

Brute-force algorithms are the most basic of algorithms. A brute-force algorithm works by trying every possible solution to a problem, stopping whenever a solution is found.

An example of a brute-force algorithm would be to determine whether a number is prime by checking whether any number greater than 1 and less than the number divides it evenly.

For example:

```
def isPrime(n):
    if n < 2:               # 0 and 1 are not prime
        return False
    for j in range(2, n):   # try every possible divisor
        if n % j == 0:
            return False    # found a divisor, so n is not prime
    return True
```