Programming #8: Knapsack Problem

In computer science, complexity classes help us talk about the difficulty of a problem. If there is a fast way to find the solution of a problem (search), the problem is said to be Polynomial-time (P). If there is a fast way to check the solution of a problem (verification), the problem is said to be Nondeterministic Polynomial-time (NP).

For example, there exists a fast way to solve the sorting problem and a fast way to verify that the sort is correct. The sorting problem is thus both P and NP. For a problem like Sudoku, it is easy to check whether a proposed solution is correct (just check that every row, column, and box contains each number once), but hard to search for the solution in the first place (do we have to try every combination?). The Sudoku problem is considered NP, at least until it can be proved that a quick way to find solutions exists.

NP-hard is another class of problems; they may or may not themselves be in NP, but a fast solution to any one of them would give us a fast solution to every NP problem. NP-complete problems are those that are both NP and NP-hard. Solving an NP-complete problem in a reasonable amount of time would be world-changing. The Knapsack problem is an NP-complete problem.

Modeling the problem

Imagine that your house just caught fire and you are inclined to save some of your possessions before fleeing the inferno. However, you are limited in what you can take based on the size of a knapsack that you will be using to carry the items. Furthermore, let’s say that your goal is to maximize the dollar value of items you escape with. The question becomes: what items do you take given the objective and the capacity constraint? More formally:

Given a set of items I, each item i in I characterized by:

  • Its weight Wi
  • Its value Vi

and a knapsack with capacity K, find the subset of items in I that:

  • Has the maximum total value
  • Does not exceed the capacity K of the knapsack

The data for the problem are the items’ weights, their values, and the capacity of the knapsack (W, V, K). Our decision variables are whether or not we take each item. Our constraint is that the sum of the chosen items’ weights must not exceed the knapsack’s capacity (∑Wi ≤ K, summed over the chosen items). Our objective is to maximize the sum of the chosen items’ values (∑Vi).
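Written out as a compact 0/1 optimization model, with xi used here as the decision variable that is 1 if item i is taken and 0 otherwise:

    maximize    ∑ Vi · xi
    subject to  ∑ Wi · xi ≤ K
                xi ∈ {0, 1}   for every item i in I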

Brute force solution

It’s almost always a good strategy to start with the simplest solution. For this problem, we could simply try all possible combinations of items and pick the combination that works best. For example, if we had 3 items, each item would have one of two choices: either we take it or we don’t. The total number of combinations would be 2³, or 8. If we had 4 items, it would be 16, and if we had 10 items, it would be 1024.

It becomes apparent that our algorithm’s run time is growing exponentially. In fact, given just 50 items there would be 2⁵⁰ (more than a quadrillion) combinations to check, far more than we could enumerate in any reasonable amount of time. We are going to need to do better.
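To make this concrete, here is a minimal sketch of the exhaustive search, assuming (as in the sections below) that each item exposes value and weight attributes; the function name bruteforceoptimize is just for illustration.

# Brute Force Algorithm (sketch)
from itertools import combinations

def bruteforceoptimize(items, capacity):
    """
    Try every possible subset of items and keep the best one that fits.
    """
    best_value = 0
    best_subset = ()
    
    # Subsets of every size, from the empty set up to all items
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(item.weight for item in subset)
            value = sum(item.value for item in subset)
            # Keep this subset if it fits and beats the best so far
            if weight <= capacity and value > best_value:
                best_value = value
                best_subset = subset
                
    return best_value, best_subset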

Greedy solution

A greedy solution takes advantage of our intuition by creating heuristics that allow us to make locally optimal choices at each stage. The hope is that if we make enough locally optimal choices, we will fall into a global optimum. For example, by taking each item one by one, we could have a simple heuristic that says if the item is blue, we take it. Of course this is a silly heuristic, but you get the point.

In our case, a more practical heuristic might be that more items are better, so we start with the lightest item and keep going until the knapsack is full. Another heuristic might be that the more valuable items should go first, so we start with the most expensive item and go from there. Let’s put solid numbers on this. Consider the following 4 items and a weight capacity of 11:

Item     Value   Weight
Hat      8       4
Jacket   10      5
Laptop   15      8
Shoes    4       3

Using the more-items-is-better heuristic, we would end up taking the Shoes and Hat for a value of 12. Using the highest-value heuristic, we would end up taking the Laptop and Shoes for a value of 19. We can play around with this; for example, we could take items based on value per weight. Consider the following implementation of a greedy solution where we take the highest value/weight items until the pack is full.

# Import libraries
from collections import namedtuple

# Prepare input data
Item = namedtuple("Item", ['index', 'value', 'weight'])
hat = Item(0, 8, 4)
jacket = Item(1, 10, 5)
laptop = Item(2, 15, 8)
shoes = Item(3, 4, 3)
items = [hat, jacket, laptop, shoes]
k = 11

# Greedy Algorithm
def greedyoptimize(items, capacity):
    """
    A greedy solution to the knapsack problem.
    """
    # Initialize variables
    weight = 0
    value = 0
    taken = [0]*len(items)
    
    # Sort input by value / weight (highest first)
    sort = sorted(items, key=lambda x: x.value / x.weight, reverse=True)
    
    # Take highest value per weight items first
    for item in sort:
        if weight + item.weight <= capacity:
            taken[item.index] = 1
            value += item.value
            weight += item.weight
            
    # Output format
    output_data = str(value) + '\n'
    output_data += ' '.join(map(str, taken))
    
    return output_data
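As a quick sanity check, running the implementation above on the example items prints:

print(greedyoptimize(items, k))
# 18
# 1 1 0 0

The value-per-weight heuristic grabs the Hat and Jacket (ratios of 2.0 each) for a value of 18, which misses the best possible value of 19.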

The problem with this approach is that a heuristic that does great on one input might completely fail on another. There is no guarantee that the quality of our answer will be up to par. Instead, we will want to find higher-quality solutions that are robust to different inputs.

Dynamic programming solution

A dynamic programming approach breaks the problem down into smaller subproblems, in a divide and conquer spirit, and reuses the answers to those subproblems rather than recomputing them. For example, could we solve this problem in the case that we are only allowed 1 item? How about for 2 items? Is the solution for 2 items better than the solution for 1 item? And so on.

When we solve the smallest version of the problem, we can store that answer in a table and then refer back to it when solving a larger version of the problem. Consider the following implementation of a dynamic programming solution.
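Concretely, if lookup[i][w] holds the best value achievable using only the first i items with capacity w, the code below fills the table with the standard 0/1 knapsack recurrence (Wi and Vi are the weight and value of item i):

    lookup[i][w] = lookup[i-1][w]                                   if Wi > w
    lookup[i][w] = max(lookup[i-1][w], lookup[i-1][w - Wi] + Vi)    otherwise

The answer for the full problem is then lookup[N][K].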

# Dynamic Programming Algorithm
def dynamicoptimize(items, capacity):
    """
    Dynamic Programming solution to knapsack problem.
    """
    # Initialize lookup table
    lookup = [[0 for w in range(capacity+1)] for i in range(len(items)+1)]
    
    # Fill in table
    for i in range(1, len(items) + 1):
        wt, val = items[i-1].weight, items[i-1].value
        for w in range(1, capacity + 1):
            if wt > w:
                lookup[i][w] = lookup[i-1][w]
            else:
                lookup[i][w] = max(lookup[i-1][w],
                                   lookup[i-1][w-wt] + val)
    
    # Get solution
    taken = [0]*len(items)
    value = lookup[len(items)][capacity]
    w = capacity
    
    for i in range(len(items), 0, -1):
        if lookup[i][w] != lookup[i-1][w]:
            wt = items[i-1].weight
            taken[i-1] = 1
            w -= wt
    
    # Output format
    output_data = str(value) + '\n'
    output_data += ' '.join(map(str, taken))
    
    return output_data
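On the same example items, the table-based approach recovers the true optimum:

print(dynamicoptimize(items, k))
# 19
# 0 0 1 1

That is the Laptop and Shoes for a value of 19, which the value-per-weight greedy above missed.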

This solution takes O(NW) time, where N is the number of items and W is the weight capacity of the knapsack. Since we store everything in a table, the space complexity is also O(NW). We get an exact answer when the input is modest, but the running time is only pseudo-polynomial: the capacity W can be exponentially large relative to the number of bits needed to write it down, so for huge inputs this approach still blows up.

Branch and Bound solution

In the case that our input size is huge, we would be better off using the greedy algorithm or trying something different. For example, a branch and bound approach attempts to solve the problem using two steps.

  1. Branch the problem into a number of sub-problems.
  2. Bound the problem with an optimistic estimate of the best solution to the sub-problem.

For example, if we relax the requirement that we take each item whole, we can easily compute the best possible arrangement: sort by value per weight, take whole items until one no longer fits, then take a fraction of that item. For the example items above, this relaxation takes the Hat and Jacket (weight 9, value 18) plus 2/8 of the Laptop, for an optimistic bound of 18 + 2 × 15/8 = 21.75. Using this kind of bound, we can explore branches of different solutions and stop early whenever a branch’s optimistic estimate falls short of the best answer found so far. Consider the following implementation of a branch and bound solution.

import queue

class Node:
    def __init__(self, level, value, weight, bound, contains):
        self.level = level
        self.value = value
        self.weight = weight
        self.bound = bound
        self.contains = contains
            
            
def upper_bound(u, k, n, v, w):
    """
    Find the bound for each level of branch search.
    """
    # An over-capacity node cannot lead to a feasible solution
    if u.weight > k:
        return 0
    else:
        bound = u.value
        wt = u.weight
        j = u.level
        
        # Fill knapsack
        while j < n and wt + w[j] <= k:
            bound += v[j]
            wt += w[j]
            j += 1
            
        # Fill knapsack with fraction of remaining
        if j < n:
            bound += (k - wt) * v[j] / w[j]
            
        return bound


def branchboundoptimize(items, capacity):
    """
    Branch and bound solution to the knapsack problem.
    """
    # Initialize variables
    n = len(items)
    k = capacity
    value = 0
    taken = [0] * n
    best = set()
    
    # Sort items by value per weight; the fractional bound relies on this order
    sort = sorted(items, key=lambda x: x.value / x.weight, reverse=True)
    v = [item.value for item in sort]
    w = [item.weight for item in sort]
            
    # Build queue and set root   
    q = queue.Queue()
    root = Node(0, 0, 0, 0.0, [])
    root.bound = upper_bound(root, k, n, v, w)
    q.put(root)
    
    # Branch through queue
    while not q.empty():
        c = q.get()
        
        # Prune nodes whose optimistic bound cannot beat the current best
        if c.bound <= value:
            continue
        level = c.level + 1
        
        # Left child: take the item at this level
        left = Node(level, c.value + v[level-1], c.weight + w[level-1], 0.0, c.contains[:])
        left.contains.append(level)
        left.bound = upper_bound(left, k, n, v, w)
        if left.weight <= k:
            if left.value > value:
                value = left.value
                best = set(left.contains)
            if left.bound > value:
                q.put(left)
        
        # Right child: skip the item at this level
        right = Node(level, c.value, c.weight, 0.0, c.contains[:])
        right.bound = upper_bound(right, k, n, v, w)
        if right.weight <= k:
            if right.value > value:
                value = right.value
                best = set(right.contains)
            if right.bound > value:
                q.put(right)
                
    # Map the chosen 1-based positions (in sorted order) back to original indices
    for b in best:
        taken[sort[b-1].index] = 1
    value = sum(item.value for item in items if taken[item.index])
    
    # Output format
    output_data = str(value) + '\n'
    output_data += ' '.join(map(str, taken))
    
    return output_data
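With the implementation above, the example items give the same answer as the dynamic programming solution:

print(branchboundoptimize(items, k))
# 19
# 0 0 1 1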

There is a trade-off in how we bound the problem: a tighter bound prunes more of the search tree but costs more to compute at each node, and if we settle for looser bounds or cut the search off early, we trade away the quality of our answer for speed.

Conclusion

None of these solutions work great in every case, but they give us a framework for solving complex combinatorial optimization problems. These problems pop up everywhere, from allocating resources efficiently to setting daily fantasy sports lineups.

In the real world, it often takes a hybrid of these approaches and a constant trade-off between quality and scalability.

References:
https://en.wikipedia.org/wiki/Knapsack_problem
