Key Takeaways: The 0/1 Knapsack Problem is a classic Dynamic Programming problem. It teaches us how to maximize the total value of items placed in a bag with a strict weight limit, where items cannot be broken into smaller fractions (you either take an item entirely, or leave it).
The 0/1 Knapsack Problem is one of the most famous optimization problems in Computer Science. It perfectly illustrates the power of Dynamic Programming and is a frequent topic in top-tier technical interviews.
Imagine you are a thief who has broken into a jewelry store. You have a single knapsack (a bag) that can carry a maximum weight of W.
In the store, there are n items. Each item has:
A weight (wt).
A value (val).

Your goal is to steal the optimal combination of items to maximize your total profit, without exceeding the maximum weight limit W of your knapsack.
The "0/1" in the name means you have only two choices for every item:
1 (Take it): You put the entire item in your bag.
0 (Leave it): You leave the item behind.

Let's look at a classic example. Your knapsack can hold exactly 50 kg. There are three items available:

Item 1: weight 10 kg, value $60
Item 2: weight 20 kg, value $100
Item 3: weight 30 kg, value $120
Calculating the possible combinations:

Item 1 + Item 2: weight 30 kg, value $160
Item 1 + Item 3: weight 40 kg, value $180
Item 2 + Item 3: weight 50 kg, value $220
All three items: weight 60 kg, which exceeds the 50 kg limit, so this combination is not allowed.
The optimal solution is to take Item 2 and Item 3, achieving a total value of $220.
There are several ways to approach this problem, starting from a naive solution to a highly optimized one.
The simplest way is to evaluate every single possible subset of items. For each item, we branch out into two possibilities: we either include the item in the knapsack, or we exclude it.
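The include/exclude branching described above can be sketched as a plain recursive function (a minimal illustration; the function name is ours, not from the original article):

```python
def knapsack_brute_force(W, wt, val, n):
    # Base case: no items left to consider, or no capacity left
    if n == 0 or W == 0:
        return 0
    # If the nth item is too heavy for the remaining capacity, we must skip it
    if wt[n - 1] > W:
        return knapsack_brute_force(W, wt, val, n - 1)
    # Branch into two possibilities: include the nth item, or exclude it
    include = val[n - 1] + knapsack_brute_force(W - wt[n - 1], wt, val, n - 1)
    exclude = knapsack_brute_force(W, wt, val, n - 1)
    return max(include, exclude)

print(knapsack_brute_force(50, [10, 20, 30], [60, 100, 120], 3))  # 220
```

Every call spawns up to two more calls, which is exactly where the exponential blow-up discussed next comes from.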
Time Complexity: O(2^n), because we make 2 recursive calls for every item. For just 50 items, that is 2^50 = 1,125,899,906,842,624 combinations. It is completely unscalable!

In the brute force approach, we end up recalculating the same subproblems repeatedly. We can use a 2D array (cache) to store the results of subproblems as we compute them.
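One way to add that cache, sketched here using Python's built-in memoization decorator rather than a hand-rolled 2D array (the function names are ours):

```python
from functools import lru_cache

def knapsack_memo(W, wt, val, n):
    @lru_cache(maxsize=None)  # caches results keyed on (i, w)
    def solve(i, w):
        # Base case: no items considered or no capacity yields 0 profit
        if i == 0 or w == 0:
            return 0
        # Item i-1 is too heavy for the remaining capacity w
        if wt[i - 1] > w:
            return solve(i - 1, w)
        # Best of: take item i-1, or skip it
        return max(val[i - 1] + solve(i - 1, w - wt[i - 1]), solve(i - 1, w))

    return solve(n, W)

print(knapsack_memo(50, [10, 20, 30], [60, 100, 120], 3))  # 220
```

Each distinct (i, w) pair is now computed at most once, which is what collapses the runtime from exponential to O(n * W).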
Time Complexity: O(n * W).
Space Complexity: O(n * W) for the cache + O(n) for the recursive call stack.

This is the industry standard. We build a 2D array from the ground up, avoiding the overhead of deep recursive calls entirely.
Let's build a DP Table dp[i][w] where:
i represents the number of items considered so far.
w represents the current maximum weight capacity of the knapsack.
dp[i][w] represents the maximum profit we can achieve.

The Core Logic:
For each item i and each weight capacity w, we ask:
Can item i fit within capacity w?

If it does not fit (wt[i-1] > w), we simply carry forward the previous best: dp[i-1][w].
If it fits, we take the better of two options:
Take it: Item_Value + Profit_of_Remaining_Weight, i.e. val[i-1] + dp[i-1][w-wt[i-1]].
Skip it: dp[i-1][w].

Here is the Bottom-Up Dynamic Programming approach written in Python.
def knapsack(W, wt, val, n):
    # Create a 2D array filled with 0s.
    # Rows = n + 1 (items), Cols = W + 1 (weights)
    dp = [[0 for x in range(W + 1)] for x in range(n + 1)]

    # Build the DP table in a bottom-up manner
    for i in range(n + 1):
        for w in range(W + 1):
            # Base Case: 0 items or 0 weight capacity yields 0 profit
            if i == 0 or w == 0:
                dp[i][w] = 0
            # If item weight is less than or equal to current capacity
            elif wt[i-1] <= w:
                # Max of: (value of current item + value of remaining weight)
                # OR (value without current item)
                dp[i][w] = max(val[i-1] + dp[i-1][w-wt[i-1]], dp[i-1][w])
            # If item weight is greater than current capacity, we cannot take it
            else:
                dp[i][w] = dp[i-1][w]

    # The bottom-right cell contains the maximum profit
    return dp[n][W]

# Given inputs from our visual example
values = [60, 100, 120]
weights = [10, 20, 30]
max_weight = 50
n_items = len(values)

max_profit = knapsack(max_weight, weights, values, n_items)
print(f"The maximum profit is: ${max_profit}")
Time Complexity: O(N * W), where N is the total number of items and W is the maximum capacity of the knapsack. The nested for loops iterate strictly N times and W times.
Space Complexity: O(N * W). We require a 2D array of size (N+1) * (W+1) to store the intermediate subproblem results. (Note: Space complexity can be further optimized to O(W) using only a 1D array, since we only ever look back at the previous row.)

Why is this specific algorithmic variation called the "0/1" Knapsack problem?
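The O(W) space optimization mentioned in the note above can be sketched like this (the function name is ours; the key detail is the reversed inner loop):

```python
def knapsack_1d(W, wt, val, n):
    # Single row of the DP table; dp[w] = best profit achievable with capacity w
    dp = [0] * (W + 1)
    for i in range(n):
        # Iterate capacities in reverse so each item is counted at most once:
        # a forward pass would let dp[w - wt[i]] already include item i,
        # which would turn this into the unbounded knapsack problem
        for w in range(W, wt[i] - 1, -1):
            dp[w] = max(dp[w], val[i] + dp[w - wt[i]])
    return dp[W]

print(knapsack_1d(50, [10, 20, 30], [60, 100, 120], 3))  # 220
```

Because row i of the 2D table depends only on row i-1, overwriting a single array in place preserves the same answers while cutting memory from O(N * W) to O(W).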
What is the Time Complexity of the Dynamic Programming (Tabulation) solution for the 0/1 Knapsack Problem?