Previously, we explored how a Recurrent Neural Network could be used to translate French text into English text and how a Convolutional Neural Network could be used to predict a dog’s breed based on a picture. Today, we’ll be playing around with combining the two in order to solve a difficult natural language processing problem:
Given a user comment from the internet, classify whether the comment is:
As always, the full code for this project can be found on my GitHub.
Continue reading “Programming #15: NLP Classification”
This will be the first post in a series of posts on some advanced Python features. Future topics will include:
- Context Managers
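As a small preview of the decorator topic, here is a minimal sketch — the `timed` decorator and its timing behavior are just an illustrative example, not code from the post:

```python
import functools
import time

def timed(func):
    """Decorator that reports how long the wrapped function took."""
    @functools.wraps(func)  # preserves the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.6f}s")
        return result
    return wrapper

@timed
def add(a, b):
    return a + b

print(add(2, 3))
```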
Continue reading “Programming #14: Decorators”
Last time, we explored how a Convolutional Neural Network could be trained to recognize and classify patterns in an image. With a slight modification, a CNN could also be trained to generate new images. But what if we were given a series of frames in an animation and wanted our CNN to predict the next frame? We could feed it a bunch of two-frame pairs and see if it could learn that after frame ‘a’ usually came frame ‘b’, but this wouldn’t work very well.

What we really need is a neural network that is able to learn from longer sequences of data. For example, if all the previous frames show a ball flying in an arc, the neural network might be able to learn how quickly the ball is moving in each subsequent time period and make a prediction on the next frame based on that. This is where Recurrent Neural Networks (RNN) come in.
Today, we’ll be conceptualizing and exploring RNNs by building a deep neural network that functions as part of an end-to-end machine translation pipeline. Our completed pipeline will accept English text as input and return the French translation as output. You can follow along with the code here.
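Before diving in, the core idea can be sketched in a few lines: an RNN applies the same weights at every time step and carries a hidden state forward, so each output depends on the whole sequence so far. A toy NumPy version of that recurrence — sizes and random weights are arbitrary, for illustration only:

```python
import numpy as np

# The RNN recurrence: hidden state h carries information from step to step.
rng = np.random.default_rng(0)
input_size, hidden_size, steps = 4, 8, 5

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                    # initial hidden state
sequence = rng.normal(size=(steps, input_size))

for x_t in sequence:
    # The same weights are reused at every time step; h summarizes the past.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h.shape)
```

The final `h` is a fixed-size summary of the whole sequence, which is what lets an encoder-decoder translation pipeline condense a sentence before generating the output language.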
Continue reading “Programming #13: Recurrent Neural Networks”
Last time, we explored how a simple MLP neural network could be used to classify the MNIST dataset. Today, we will work on a messier problem. We will use a modified version of the Stanford dogs dataset to train a neural network that can classify dog breeds. Since inter-class variations are small, and an obscure detail could be the deciding factor, we will need a model that can capture more detail. This is where convolutional neural networks (CNN) come in.
As always, we will start by explaining some of the high-level concepts. You can follow along with the code here.
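As a taste of the key concept, here is the convolution operation itself in plain NumPy — the vertical-edge kernel is just an illustrative example of the kind of local pattern a CNN filter can pick up:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the filter's response at one location.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge down the middle.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# This kernel responds strongly where values jump from left to right.
edge_kernel = np.array([[-1.0, 1.0]])

print(convolve2d(image, edge_kernel))
```

In a trained CNN these kernel values are learned, and stacks of such filters let the network capture the fine details that separate one dog breed from another.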
Continue reading “Programming #12: Convolutional Neural Networks”
In this post, we will attempt to conceptualize Deep Neural Networks (DNN) and apply one to a common problem. We’ll train a version of a DNN called a Multilayer Perceptron (or vanilla network) to classify images from the MNIST database. The MNIST database contains 70,000 handwritten digits from 0-9 and is one of the most famous datasets in machine learning. If all this sounds confusing so far, don’t worry; we’ll start at the beginning.
If you want to follow along with the code, the notebook can be found here.
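To preview the idea, here is the forward pass of a toy MLP in NumPy — the layer sizes and random weights are illustrative stand-ins, not the trained model from the post:

```python
import numpy as np

def softmax(z):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

# A tiny MLP: 784 input pixels (a flattened 28x28 MNIST image)
# -> 32 hidden units -> 10 digit scores.
rng = np.random.default_rng(42)
W1 = rng.normal(scale=0.01, size=(32, 784))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.01, size=(10, 32))
b2 = np.zeros(10)

x = rng.random(784)               # stand-in for one normalized image
h = np.maximum(0, W1 @ x + b1)    # ReLU hidden layer
probs = softmax(W2 @ h + b2)      # probability for each digit 0-9

print(probs.shape, round(float(probs.sum()), 6))
```

Training is then just adjusting `W1`, `b1`, `W2`, and `b2` so the probability of the correct digit goes up.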
Continue reading “Programming #11: Deep Neural Networks”
Sudoku is one of those NP-complete problems that brute-force solutions struggle with. Consider a board with a single blank space: we would have to work through 9 possibilities to find the right answer. For two blank spaces, we would work through 9 possibilities for the first space, and then for each of those possibilities, 9 for the second.
This simplifies to a time complexity of O(n^m), where n is the number of possibilities for each square (9 in normal Sudoku) and m is the number of blank spaces. A hard Sudoku problem with 50 blank spaces would take about 5.15 × 10^47 computations, which would take longer than the age of the universe to solve with a decent computer.
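That estimate is easy to sanity-check:

```python
# 9 possibilities per blank square, 50 blank squares.
search_space = 9 ** 50
print(f"{search_space:.2e}")  # about 5.15e+47
```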
This is where artificial intelligence (AI) comes into play. Think of AI less as a Skynet robot and more as a set of hacks to solve exponential problems like Sudoku. Code for this post can be found here.
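One of those hacks is constraint propagation: repeatedly eliminate, from each blank square, any value already taken by a solved neighbor, so the search space collapses before any brute force is needed. A toy sketch on a single row — the representation here (ints for solved cells, sets for candidates) is an illustrative choice, not necessarily the one used in the post:

```python
def eliminate_row(row):
    """Remove solved values from each unsolved cell's candidate set.

    A solved cell is an int; an unsolved cell is a set of candidates.
    """
    solved = {cell for cell in row if isinstance(cell, int)}
    new_row = []
    for cell in row:
        if isinstance(cell, int):
            new_row.append(cell)
        else:
            remaining = cell - solved
            # If only one candidate survives, the cell becomes solved.
            new_row.append(remaining.pop() if len(remaining) == 1 else remaining)
    return new_row

row = [5, {1, 2, 5}, 3, {2, 3, 5, 9}]
print(eliminate_row(row))  # [5, {1, 2}, 3, {2, 9}]
```

Applying rules like this to every row, column, and box, over and over, solves easy puzzles outright and shrinks hard ones to a manageable search.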
Continue reading “Programming #10: AI Sudoku”
We would be remiss in our study of programming if we did not devote some time to the crafting of quality code. This topic can be a bit subjective, and many programmers have a dogmatic attachment to what they believe qualifies as quality code. Luckily for us, some standards have emerged. The Python Enhancement Proposal (PEP 8) is the go-to style guide for Python code. The Google Python Style Guide is another great resource. For an interactive guide, consider Code Like a Pythonista.
This post will go through an example of crafting readable code to solve a problem. The code can be found on GitHub. We will stick to some best practices as summarized nicely in the Zen of Python:
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one—and preferably only one—obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea—let’s do more of those!
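The list above isn't just a blog staple; it ships with Python itself as an Easter egg:

```python
# Importing the `this` module prints Tim Peters' full Zen of Python.
import this
```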
Continue reading “Programming #9: Style and Testing”
In computer science, complexity classes help us talk about the difficulty of a problem. If there is a fast way to find the solution of a problem (search), the problem is said to be Polynomial-time (P). If there is a fast way to check the solution of a problem (verification), the problem is said to be Nondeterministic Polynomial-time (NP).
For example, there exists a fast way to solve the sorting problem and a fast way to verify that a sort is correct, so the sorting problem is both P and NP. For a problem like Sudoku, it is easy to check whether a solution is correct (just see if the numbers add up) but hard to search for the solution in the first place (try every combination?). The Sudoku problem is in NP, and it is assumed to sit outside P until it can be proved that a quick way to find solutions exists.
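The asymmetry between searching and verifying is easy to see in code: checking a proposed sort takes a single linear pass, no matter how the sort was found:

```python
def is_sorted(xs):
    """Verify a proposed solution to the sorting problem in O(n)."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

print(is_sorted([1, 2, 2, 7]))   # True
print(is_sorted([3, 1, 2]))      # False
```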
NP-hard is a class of problems that may or may not themselves be in NP, but that, if solved quickly, would give us the answer to every NP problem. NP-complete is the class of problems that are both NP and NP-hard. Solving an NP-complete problem in a reasonable amount of time would be world-changing. The Knapsack problem is NP-complete.
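For concreteness, here is the classic dynamic-programming approach to the 0/1 knapsack problem. It runs in time proportional to (number of items) × (capacity) — pseudo-polynomial, so fast for small capacities but still exponential in the size of the input's encoding, which is why knapsack remains NP-complete:

```python
def knapsack(values, weights, capacity):
    """Best total value using each item at most once (0/1 knapsack)."""
    best = [0] * (capacity + 1)  # best[c] = best value with capacity c
    for value, weight in zip(values, weights):
        # Iterate capacity downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```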
Continue reading “Programming #8: Knapsack Problem”
To illustrate the basic approach to developing and analyzing algorithms, let’s consider a simple case of the dynamic connectivity problem: given a list of connections between elements, and then given two elements, we want to return whether or not they are connected. More formally:
Continue reading “Programming #7: Dynamic Connectivity Problem”
Now that we have some of the programming basics under our belt, we can start using them to solve problems. Let’s start with one of the most famous problems in computer science, the shortest path problem.
As the name implies, the shortest path problem is about finding the shortest path in a graph. Imagine that we have a graph that represents our home town. Vertices or nodes would represent locations, and edges would represent streets connecting those locations. For simplicity, let’s say that each street goes both directions (undirected graph). We can also add weights to each edge that tell us the distance between two locations along a given street. In this example, the shortest path would be the path between two locations where the sum of the street distances along it is as small as possible.
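A standard way to solve this on a weighted graph is Dijkstra's algorithm; the sketch below uses a made-up "home town" graph for illustration, not data from the post:

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm on a weighted graph.

    `graph` maps each node to a list of (neighbor, distance) pairs.
    """
    distances = {start: 0}
    queue = [(0, start)]  # min-heap ordered by distance from start
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if dist > distances.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for neighbor, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances.get(neighbor, float("inf")):
                distances[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return float("inf")

# Nodes are locations; weights are street distances. Each street is listed
# in both directions, making the graph undirected.
town = {
    "home": [("cafe", 2), ("park", 5)],
    "cafe": [("home", 2), ("park", 1), ("work", 6)],
    "park": [("home", 5), ("cafe", 1), ("work", 3)],
    "work": [("cafe", 6), ("park", 3)],
}
print(shortest_path_length(town, "home", "work"))  # 6 (home -> cafe -> park -> work)
```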
Continue reading “Programming #6: Shortest Path Problem”