# Classic problems

*By Afshine Amidi and Shervine Amidi*

## Traveling salesman

Given $n$ cities $c_1, ..., c_n$, the traveling salesman problem (TSP) is a classic problem that aims at finding the shortest path that visits all cities exactly once and then returns to the starting city.

The distance between each pair of cities $(c_i, c_j)$ is noted $d_{i,j}$ and is known.

A naive approach would consist of enumerating all possible solutions and finding the one that has the minimum cumulative distance. Given a starting city, there are $(n-1)!$ possible paths, so this algorithm would take $\mathcal{O}(n!)$ time. This approach is impractical even for small values of $n$.

Luckily, the Held-Karp algorithm provides a bottom-up dynamic programming approach that has a time complexity of $\mathcal{O}(n^22^n)$ and a space complexity of $\mathcal{O}(n2^n)$.

Suppose $c_1$ is the starting city. This arbitrary choice does not influence the final result since the resulting path is a cycle. We define the following quantities:

- $\mathscr{C}_k$ contains all sets of $k$ distinct cities in $\{c_2, ..., c_n\}$:

  $\textrm{for }k\in[\![0, n-1]\!],\quad\quad\mathscr{C}_k=\Big\{C_k \mid C_k\subseteq\{c_2, ..., c_n\}, \#C_k=k\Big\}$

  We note that $\mathscr{C}_k$ has a size of $\displaystyle\binom{n-1}{k}$.
- $g(C_k, c_j)$ is the distance of the shortest path that starts from $c_1$, goes through each city in $C_k\in\mathscr{C}_k$ exactly once and ends at $c_j\notin C_k$.

The solution is found iteratively:

*Initialization:* The shortest path between the starting city $c_1$ and each of the other cities $c_j$ with no intermediary city is directly given by $d_{1, j}$:

$\forall j\in[\![2, n]\!], \quad\boxed{g(C_0, c_j) = d_{1, j}}$

*Compute step:* Suppose that for all $C_{k-1}\in\mathscr{C}_{k-1}$ and $c_i\not\in C_{k-1}$, we know the distance of the shortest path $g(C_{k-1}, c_i)$ between $c_1$ and $c_i$ via the $k-1$ cities in $C_{k-1}$.

Let's take $C_k\in\mathscr{C}_k$. For all $c_i\in C_k$, we note that $C_k\backslash\{c_i\}\in\mathscr{C}_{k-1}$ and that (naturally) $c_i\not\in C_k\backslash\{c_i\}$, which means that $g(C_k\backslash\{c_i\}, c_i)$ is known.

We can deduce the distance of the shortest path between $c_1$ and $c_j\not\in C_k$ via $k$ cities by taking the minimum value over all second-to-last cities $c_i\in C_k$:

$\forall C_k\in\mathscr{C}_k, \forall c_j\notin C_k,\quad\boxed{g(C_k, c_j) = \underset{c_i\in C_k}{\textrm{min}}\Big\{g(C_k\backslash\{c_i\}, c_i)+d_{i, j}\Big\}}$

Each step $k$ has the following complexities:

| Time | Space |
|---|---|
| $\displaystyle k\times(n-1-k)\times\binom{n-1}{k}$ | $\displaystyle(n-1-k)\times\binom{n-1}{k}$ |

*Final step:* The solution is given by $g(\{c_2, ..., c_{n}\}, c_1)$. Since $C_{n-1}=\{c_2, ..., c_{n}\}$, the last step of the algorithm gives:

$\boxed{g(C_{n-1}, c_1)=\underset{i\in[\![2, n]\!]}{\textrm{min}}\Big\{g(C_{n-1}\backslash\{c_i\}, c_i)+d_{i,1}\Big\}}$
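The three steps above can be sketched in Python, encoding each subset $C_k$ as a bitmask over the non-starting cities. The function name `held_karp`, the dictionary-based storage of $g$, and the 0-indexed city numbering (city $0$ plays the role of $c_1$) are choices of this sketch, not part of the original formulation:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic programming solution to the TSP.

    dist[i][j] is the known distance between cities i and j (0-indexed);
    city 0 is the arbitrary starting city. Returns the length of the
    shortest tour that visits every city once and returns to city 0.
    """
    n = len(dist)
    # g[(mask, j)]: shortest path from city 0 through the cities in
    # `mask` (a bitmask over cities 1..n-1, all excluding j), ending at j.
    g = {}

    # Initialization: direct edges from the starting city (C_0 = empty set).
    for j in range((1), n):
        g[(0, j)] = dist[0][j]

    # Compute step: grow the subsets one city at a time (k = 1, ..., n-2).
    for k in range(1, n - 1):
        for subset in combinations(range(1, n), k):
            mask = sum(1 << c for c in subset)
            for j in range(1, n):
                if mask & (1 << j):
                    continue  # the end city j must lie outside the subset
                # Minimize over all second-to-last cities i in the subset.
                g[(mask, j)] = min(
                    g[(mask ^ (1 << i), i)] + dist[i][j] for i in subset
                )

    # Final step: close the cycle back to the starting city.
    full = sum(1 << c for c in range(1, n))
    return min(g[(full ^ (1 << i), i)] + dist[i][0] for i in range(1, n))
```

Storing only the entries for the current subset sizes would recover the $\mathcal{O}(n2^n)$ space bound; the dictionary above trades that for simplicity.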

## Knapsack

The 0/1 knapsack problem is a classic problem where the goal is to maximize the sum of values of items put in a bag that has a weight limit. Here, "0/1" means that for each item, we want to know whether we should include it (1) or not (0).

More formally, we have a bag of capacity $C$ and $n$ items where each item $i\in[\![1, n]\!]$ has value $v_i$ and weight $w_i$. We want to find a subset of items $\mathscr{I}\subseteq\{1, ..., n\}$ such that:

$\boxed{\sum_{i\in\mathscr{I}}v_i\textrm{ is maximized}\quad\textrm{and}\quad\sum_{i\in\mathscr{I}}w_i\leqslant C}$

A naive approach would consist of trying out every combination of the $n$ items and taking the one with the maximum value that also satisfies the cumulative weight constraint. Such an approach has a time complexity of $\mathcal{O}(2^n)$.

Luckily, this problem can be solved with a bottom-up dynamic programming approach that has a time complexity of $\mathcal{O}(nC)$ and a space complexity of $\mathcal{O}(nC)$.

We note $V_{i, j}$ the maximum value of a bag of capacity $j\in[\![0, C]\!]$ that contains items among $\{1, 2, ..., i\}$. Our objective is to get $V_{n, C}$.

*Initialization:* The maximum values of the following bags are known:

- Bags of capacity 0: $V_{i, 0}=0$.
- Bags with 0 items: $V_{0, j}=0$.

*Compute maximum value:* Starting from $i=1$ and $j=1$, we iteratively fill the 2D array of maximum values from left to right, top to bottom.

In order to determine the maximum value $V_{i, j}$ of a bag of capacity $j$ containing items among $\{1, ..., i\}$, we need to choose between two hypothetical bags:

*Bag 1: contains item $i$.* $V_{B_1}$ is found by adding the value $v_i$ of item $i$ to the maximum value of the bag of capacity $j-w_i$ containing items among $\{1, ..., i-1\}$. This option is only available when item $i$ fits in the bag, i.e. when $w_i\leqslant j$.

$\boxed{V_{B_1}=V_{i-1, j-w_i} + v_i}$

*Bag 2: does not contain item $i$.* $V_{B_2}$ is already known: it is the maximum value of the bag of capacity $j$ containing items among $\{1, ..., i-1\}$.

$\boxed{V_{B_2}=V_{i-1, j}}$

We choose the best potential bag (falling back to bag 2 when item $i$ does not fit):

$\boxed{V_{i, j}=\textrm{max}(V_{B_1}, V_{B_2})}$

When the 2D array is filled, the desired value is $V_{n, C}$.

*Get final items:* In order to know which items were selected, we start from position $(i, j)=(n, C)$ of the 2D array and traverse it iteratively:

*Case $V_{i, j}\neq V_{i-1, j}$:* This means that item $i$ was included. We go to position $(i-1, j-w_i)$ and include item $i$ in the set of included items.

*Case $V_{i, j} = V_{i-1, j}$:* This means that item $i$ was not included. We go to position $(i-1, j)$.

We repeat this process until reaching row $i=0$, at which point all selected items have been retrieved.
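The fill and traceback steps above can be sketched as follows; the function name `knapsack` and the 0-indexed Python lists (item $i$ of the text is `values[i - 1]`, `weights[i - 1]`) are conventions of this sketch:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via bottom-up dynamic programming.

    Returns (best_value, chosen_items), where chosen_items lists the
    1-indexed items of an optimal subset.
    """
    n = len(values)
    # V[i][j]: maximum value using items among {1, ..., i} with capacity j.
    # Row 0 and column 0 stay at 0 (initialization step).
    V = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for j in range(1, capacity + 1):
            if w <= j:
                # Bag 1 (take item i) vs. bag 2 (skip item i).
                V[i][j] = max(V[i - 1][j - w] + v, V[i - 1][j])
            else:
                V[i][j] = V[i - 1][j]  # item i does not fit: bag 2 only

    # Traceback: recover which items were selected.
    items, j = [], capacity
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:  # item i was included
            items.append(i)
            j -= weights[i - 1]
    return V[n][capacity], items[::-1]
```

For example, with values $(60, 100, 120)$, weights $(10, 20, 30)$ and $C=50$, the optimal bag keeps items 2 and 3 for a total value of 220.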

## $N$-Queens

Given a chessboard of size $N\times N$, the $N$-Queens problem aims at finding a configuration of $N$ queens such that no two queens attack each other.

A queen is a chess piece that can move horizontally, vertically or diagonally by any number of steps.

The solution can be found using a backtracking approach:

*Step 1:* Put the $1^{st}$ queen in the first column of the chessboard.

*Step 2:* Put the $2^{nd}$ queen in the second column of the chessboard. As long as the partial configuration is invalid, keep changing its position until conditions are satisfied.

*Step 3:* If there is nowhere to place the next queen, go back one step and try the next partial configuration.

*Step 4:* Continue this process until finding a valid solution.
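A minimal sketch of this backtracking procedure, assuming (as the column-by-column steps suggest) one queen per column and representing a configuration as the list of row indices of the queens placed so far:

```python
def n_queens(n):
    """Backtracking solution to the N-Queens problem.

    Returns a list `cols` where cols[c] is the row of the queen in
    column c, or None if no valid configuration exists.
    """
    cols = []  # partial configuration: one row index per filled column

    def is_valid(row):
        # The candidate queen sits in column len(cols); check every
        # previously placed queen for row and diagonal conflicts.
        c = len(cols)
        for prev_c, prev_row in enumerate(cols):
            if prev_row == row or abs(prev_row - row) == c - prev_c:
                return False
        return True

    def place(c):
        if c == n:
            return True  # all queens placed: valid solution found
        for row in range(n):  # try every row in column c
            if is_valid(row):
                cols.append(row)
                if place(c + 1):
                    return True
                cols.pop()  # backtrack: undo and try the next row
        return False  # nowhere to place this queen: go back one step

    return cols if place(0) else None
```

For instance, no solution exists for $N=3$, while any $N\geqslant 4$ admits at least one.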

## Coin change

Given an unlimited number of coins of values $\{c_1, ..., c_k\}$, the coin change problem aims at finding the minimum number of coins that sum up to a given amount $S$. By convention, we assume that $c_1<...<c_k$.

In other words, we want to find the minimum value of $x = x_1 + ... + x_k$ such that:

$\boxed{x_1c_1 + ... + x_kc_k = S}$

where $x_i\geqslant0$ is the number of coins of value $c_i$ that are used.

The bottom-up dynamic programming solution iteratively computes the minimum amount of coins $d_s$ for each amount $s\in\{1, ..., S\}$.

This approach runs in $\mathcal{O}(kS)$ time, since each of the $S$ amounts considers up to $k$ coins, and takes $\mathcal{O}(S)$ space:

*Initialization:*Initialize array $D=[d_0, ..., d_S]$ with zeros.

For each $i\in[\![1, k]\!]$ with $c_i\leqslant S$, set $d_{c_i}$ to $1$ since we already know that the corresponding amount only needs 1 coin.

*Compute step:* Starting from $s=c_1+1$, we iteratively fill array $D$.

In order to obtain amount $s$, we distinguish two cases:

*Case $\exists i, d_{s-c_i} > 0$:* We look back at valid values $d_{s-c_i}$ and see which coin $c_i$ minimizes the total number of coins needed to obtain amount $s$:

$\boxed{d_s=\underset{\substack{i\in[\![1, k]\!]\\d_{s-c_i}>0}}{\textrm{min}}\Big\{d_{s-c_i}+1\Big\}}$

*Case $\forall i, d_{s-c_i} = 0$:* We cannot obtain amount $s$ using coins $c_i$.

At the end of this step, amounts $s\in\{1, ..., S\}$ with $d_s=0$ cannot be obtained with the given coins.

The answer to this problem is given by $d_S$.
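The initialization and compute steps can be sketched as follows; the function name `coin_change` is a choice of this sketch, and the convention $d_s=0$ for unreachable amounts follows the text:

```python
def coin_change(coins, S):
    """Minimum number of coins summing to S via bottom-up DP.

    Returns d_S, the minimum number of coins for amount S, with the
    text's convention that 0 marks an amount that cannot be obtained.
    """
    # Initialization: array D = [d_0, ..., d_S] filled with zeros,
    # then d_{c_i} = 1 for every coin value that fits within S.
    d = [0] * (S + 1)
    for c in coins:
        if c <= S:
            d[c] = 1

    # Compute step: fill D for increasing amounts s. Amounts below the
    # smallest coin have no valid candidates and simply stay at 0.
    for s in range(1, S + 1):
        if d[s] == 1:
            continue  # a single coin is already optimal
        # Look back at reachable amounts s - c_i and take the best coin.
        candidates = [d[s - c] for c in coins if c < s and d[s - c] > 0]
        if candidates:
            d[s] = min(candidates) + 1
    return d[S]
```

For example, with coins $\{1, 2, 5\}$ and $S=11$, the answer is 3 (e.g. $5+5+1$), while with a single coin $\{2\}$ the amount $S=3$ cannot be obtained.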
