Monday, June 5, 2017

Training an RNN on the Archer Scripts


So all the hype these days is around "AI", as opposed to "machine learning" (though I've yet to hear an exact distinction between the two), and one of the tools that seems to get talked about most is Google's TensorFlow.

I wanted to play around with TensorFlow and RNNs a little bit, since they're not the type of machine learning I'm most familiar with, with a low investment of time, to see what kind of outputs I could come up with.


A little digging and I came across this tutorial, which is a pretty good brief intro to RNNs; it uses Keras and works character-wise.

This in turn led me to word-rnn-tensorflow, which, expanding on the work of others, uses a word-based model (instead of a character-based one).

I wasn't about to spend my whole weekend rebuilding RNNs from scratch - no sense reinventing the wheel - so just thought it'd be interesting to play around a little with this one, and perhaps give it a more interesting dataset. Shakespeare is ok, but why not something a little more culturally relevant... like I dunno, say the scripts from a certain cartoon featuring a dysfunctional foul-mouthed spy agency?
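To make that word-vs-character distinction concrete, here's a toy illustration (my own, not from the repo) of the two ways of tokenizing the same line of dialogue:

```python
# Toy illustration: the same line split into character tokens (what a
# char-rnn style model sees) vs. word tokens (what a word-based model
# like word-rnn-tensorflow trains on). The example line is made up.
line = "Do you want ants?"

char_tokens = list(line)    # ['D', 'o', ' ', 'y', ...]
word_tokens = line.split()  # ['Do', 'you', 'want', 'ants?']
```

A word-level model has a much larger vocabulary to predict over, but far shorter sequences to learn from, which is part of the appeal here.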

Data Acquisition
Googling the Archer scripts turns up a bunch of them at Springfield! Springfield!.

Unfortunately, since it looks like the scripts have been laboriously transcribed by ardent fans, there isn't any dialogue tagging like you'd see in a true script, but this is a limitation of the data set we'll just have to live with. Hopefully the style of the dialogue and writing will still come through when we train the RNN on it (especially since sometimes there's not much difference between the different characters' dialogue, given how terrible they all are, and the amount of non sequitur in the show).

I suppose I could have gone through and copy-pasted all 93 episodes into a corpus for training, but I'm pretty sure that would have taken longer than just putting together the Python script I did using BeautifulSoup and building on some previous work:

from bs4 import BeautifulSoup
import urllib2

def soupify(url):

    # Open the request and create the soup
    req = urllib2.Request(url)
    response = urllib2.urlopen(req, timeout = 10.0)
    soup = BeautifulSoup(response, "lxml")
    return soup

def get_script(url):
    soup = soupify(url)
    script = soup.findAll("div", {"class":"episode_script"})[0]
    # Clean
    for br in script.find_all("br"):
        br.replace_with('\n')
    scripttext = script.text
    scripttext = scripttext.replace('-',' ').replace('\n',' ')
    scripttext = scripttext.strip()

    return scripttext

def get_episode_urls(showurl):
    soup = soupify(showurl)

    # Get the urls and add the base URL to each in the list
    urls = soup.findAll("a", {"class":"season-episode-title"})
    baseurl = ''
    urls = map(lambda x: baseurl + '/' + x['href'], list(urls))

    return urls

### MAIN

def do_scrape():

    # Get the episode list from the main page
    urls = get_episode_urls('')

    # Scrape the script from each URL and add to a list
    episodes = list()
    for url in urls:
        print url
        episodes.append(get_script(url))

    # Write the output to a file
    f = open('archer_scripts.txt','w')
    for episode in episodes:
        f.write(episode + '\n')
    f.close()
Basically the script gets the list of episode URLs from the show page, then scrapes each script in turn and exports to a text file. And I didn't even have to do any error handling, it just worked on the first shot! Wow. (Isn't it nice when things just work?)

After a little manual data cleansing, we are ready to feed the data to the RNN model.
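The cleansing itself was manual, but a minimal automated pass might look something like this (the `clean_corpus` helper is hypothetical, not something from the repo):

```python
def clean_corpus(text):
    """Hypothetical cleanup pass: lowercase everything and collapse
    runs of whitespace (which also strips leading/trailing spaces)."""
    text = text.lower()
    text = ' '.join(text.split())
    return text

sample = "  Do you want  ants? Because that's how you get ants.  "
cleaned = clean_corpus(sample)
```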

Training the model
Since this is the easy part (we're relying on the already-built model), there's not much to say here. Just rename the file, plunk it into a data directory like the demo file, then run
python --data_dir data/archer
And let the model do its thing. My poor little laptop doesn't even have a GPU so the model was grinding away overnight and then some but eventually finished.

The end of the grind and testing the model output.

word-rnn-tensorflow also conveniently pickles your model, so you can use it again at a later time, or continue training a previously trained model. I'd have made the model files available, but unfortunately they're rather large (~200 MB).
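As an aside, the pickling itself is just the standard library's pickle round trip; a minimal sketch (with a made-up vocabulary, not the repo's actual files) looks like:

```python
import io
import pickle

# Made-up vocabulary mapping words to integer ids, the kind of
# lookup table a word-level model needs to persist between runs
vocab = {'lana': 0, 'archer': 1, 'danger': 2, 'zone': 3}

# Serialize to an in-memory buffer (a file on disk works the same way)
buf = io.BytesIO()
pickle.dump(vocab, buf)

# ...later: deserialize and pick up where you left off
buf.seek(0)
restored = pickle.load(buf)
```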

Anyhow, once the training is done, you can get the model to spit out text by running its sampling script. Here are some sample outputs from the model, which I split up and tidied a bit:

Oh, what do you mean "Lana hasn't called"? 
I mean, you don't know how to tell you how to discipline my servant! 
I think I was gonna say "flap hammered. " 
Oh, what are you talking about? 
Hitting on the communist insurgency. 
I don't do dossiers. 
Besides, this is a booty call, I'm flattered, but Oh my God, BPA! 
I Transgendered frogs! [frog bribbit]
Shh, shh, shh. 

Coyly. relieve in the cerebrospinal fluid at the dimmer switch in the bullpen, maybe spilled a miracle. And so sorry if you don't clamp that artery! 
One! Brett? What the hell is wrong with you?! And what are you doing? 
Esatto! Che cosa stiamo facendo?! 
Aww man, we go to Plan Manfred. And then Ruby Foo's. 
Yeah, I don't know what grits are, or hush puppies! 
Are you sure? 
I don't know. 
Push it out of that shitheap of a weak, like the rest of our business! Oh, and speaking of an exquisitely prepared entre... 
No, I don't even know what grits are, or hush puppies! 
Are you sure? 
That's what I was gonna say "flap hammered. " 
Oh, how are you bleaching mucho. 
But I don't know what grits are, or hush puppies! 
Are you sure? 
That's what I was gonna say "flap hammered. " 
Oh, how are you bleaching mucho. 
But I don't know what grits are, or hush puppies! 
Are you finished? 
No, no, no, no, no! [crashing] [crashing stops] [coughing] 
Oh, shit, uh whatcha madoo HVAC. 
God damn it. Off! 
peasy zombie squeezy. 

Yeah, of the sake of argument, let's leave him Fister Roboto. 
But it looks like bubble wrap. 
What is your proposal? 
I know the burns. And if you were "confused verging on angry" before...
Aaagh! Son of a fanboy request. 
And you don't know how to share, beyotch. 
Easy, tough guy. 
When do you think it was squash, sir. 
I don't know. I don't know. Warranty's in raspy Street, you know. 
No, coordinate with Rome, then let me go. (wheezy coughing) (gasping) 
Well, I am just a DJ?

Learning experience? Well, joke's on sale, will you not?
She's battled ivory poachers, raging rivers, even tuberculosis, and now Oscar winner Rona Thorne takes on the planet. 
Look: CIA, Ml6, Interpol. 
We can't believe you don't clamp that artery! 
One! Brett? 
What the hell was that? 
It was all the shocks damaged my frontal lobe. 
In the brain's language center?

About the output you'd expect. Nothing that's going to pass the Turing test, but if you've watched the show you can picture Archer, Lana, and Cyril having an argument that might contain some of the above (with maybe a couple of other cast members thrown in... like that Italian line from The Papal Chase). And it seems to stitch together whole phrases rather than just parroting existing lines, since many are unique.

Some of the output is not that bad - there are what could be some comedic gems in there if you look hard enough that aren't verbatim from the original scripts (e.g. "son of a fanboy request!").


A fun little romp doing some web scraping and playing with RNNs. Unfortunately, using someone else's code, the model was even more of a black box than neural networks usually are, but this was just for fun. If you want to know more or play around yourself, check out the resources below and what I've saved on github.


Google TensorFlow

Creating a Text Generator Using A Recurrent Neural Network


Archer scripts (at Springfield! Springfield!)

Python code and model input on github:

Monday, March 13, 2017

When to Use Sequential and Diverging Palettes


I wanted to take some time to talk about an important rule for the use of colour in data visualization.

The more I've worked in visualization, the more I have come to feel that one of the most overlooked and under-discussed facets (especially for novices) is the use of colour. A major pet peeve of mine, and a mistake I see all too often, is the use of a diverging palette instead of a sequential one or vice-versa. 

So what is the difference between a sequential and a diverging palette, and when is it correct to use each? The answer is one that arises very often in visualization: it all depends on the data, and what you're trying to show.

Sequential vs. Diverging Palettes

First of all, let's define what we are discussing here. 

Sequential Palettes
A sequential palette ranges between two colours (typically having one "main" colour), going from white or a lighter shade to a darker one by varying one or more of the parameters in the HSV/HSL colour space (usually saturation or value/luminosity, or both).

For me, at least, varying hue means going between two very distinct colours, and is usually not good practice if your data vary linearly, as it is much closer to a diverging palette, which we will discuss next. There are other reasons why this is bad visualization practice and, of course, exceptions to this rule, which we will discuss later in the post.

A sequential palette (generated in R)
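Something similar to the R palette above can be sketched in plain Python with the standard library's colorsys module. This is my own toy code (the hue and the saturation/value ramps are arbitrary choices), just to show the "fix hue, vary saturation and value" idea:

```python
import colorsys

def sequential_palette(hue, n=5):
    """Build an n-colour sequential ramp at a fixed hue (hue in [0, 1])
    by ramping saturation up and value (brightness) down."""
    colors = []
    for i in range(n):
        t = i / (n - 1)        # 0 = lightest step, 1 = darkest step
        s = 0.2 + 0.8 * t      # saturation increases along the ramp
        v = 1.0 - 0.5 * t      # value (brightness) decreases
        colors.append(colorsys.hsv_to_rgb(hue, s, v))
    return colors

palette = sequential_palette(hue=1 / 3.0)  # a green ramp
```

The first colour comes out light (low saturation, full brightness) and the last dark and saturated, giving the familiar light-to-dark ramp.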

Diverging Palettes
In contrast to a sequential palette, a diverging palette ranges between three or more colours with the different colours being quite distinct (usually having different hues). 

While technically a diverging palette could have as many colours as you'd like (such as the rainbow palette, which is the default in some tools like MATLAB), diverging palettes usually range between two contrasting colours at either end, with a neutral colour or white in the middle separating the two.

A diverging palette (generated in R)

When to Use Which

So now that we've defined the two different palette types of interest, when is it appropriate and inappropriate to use them?

The rule for the use of diverging palettes is very simple: they should only be used when there is a value of importance around which the data are to be compared.

This central value is typically zero, with negative values corresponding to one hue and positive the other, though this could also be done for any other value, for example, comparing numbers around a measure of central tendency or reference value.
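As a sketch of the idea (my own toy code, not how any particular tool implements it), mapping a value onto a diverging palette just means interpolating toward a neutral colour on either side of the chosen midpoint:

```python
def lerp(c1, c2, t):
    """Linearly interpolate between two RGB colours (0-1 floats)."""
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

def diverging_colour(value, vmin, vmid, vmax,
                     low=(0.7, 0.1, 0.1),    # red end
                     mid=(1.0, 1.0, 1.0),    # neutral at the midpoint
                     high=(0.1, 0.5, 0.1)):  # green end
    """Map a value onto a red-white-green diverging palette centred
    on vmid (e.g. zero for profit, or a median for sales)."""
    if value <= vmid:
        t = (value - vmin) / (vmid - vmin)
        return lerp(low, mid, t)
    t = (value - vmid) / (vmax - vmid)
    return lerp(mid, high, t)
```

For profit ranging from -100 to 100 centred on zero, a value of exactly zero lands on the neutral colour, and the two extremes land on the pure red and green endpoints.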

A Simple Example
For example, looking at the Superstore dataset in Tableau, a visualizer might be tempted to make a map such as the one below, with colour encoding the number of sales in each city:

Here points on the map correspond to the cities, sized by total number of sales and coloured by total sales in dollars. Looks good, right? The cities with the highest sales clearly stick out in green against the dark red.

Well, yes, but do you see a problem? Look at the generated palette:

The scale ranges from the minimum sales in dollars ($4.21) to the max (~$155K), so we cover the whole range of the data. But what about the midpoint? It's just the dead center between the two, which doesn't correspond to anything meaningful in the data - so why would the hue change from red to green at that point?

This is a case better suited to a sequential palette, since all the values are positive and we're not highlighting a meaningful value that the range of data falls around. A better choice is a sequential palette, as below:

Here, the full range is covered and there is no midpoint; the palette ranges from light green to dark. The extreme values still stand out in dark green, but there is no well-defined center where the hue arbitrarily changes, so this is a better choice.

There are other ways we could improve this visualization's encoding of quantity as colour, for one, by using endpoints that would be more meaningful to business users instead of just the range of the data (say, $0 to $150K+), and another which we will discuss later.

Taking a look at the two palettes together, it's clearer which is a better choice for encoding the always positive value of the metric sales dollars across its range:

Going Further
Okay, so when would we want to use a diverging palette? As per the rule, if there was a meaningful midpoint or other important value you wanted to contrast the data around.

For example, in our Superstore data, sales dollars are always positive, but profit can be positive or negative, so it is appropriate to use a diverging palette in this case, with one hue corresponding to negative values and another to positive, and the neutral colour in the middle occurring at zero:

Here it is very clear which values fall at the extremes of the range, but also which are closer to the meaningful midpoint (zero): that one city in Montana is in the negative, and the others don't seem to be very profitable either; we can tell they are close to zero by how washed out their colours are.

Tableau is smart enough to know to set the midpoint at zero for our diverging palette. Again, you could tinker with the range to make the end-points more meaningful (e.g. round values), as well as varying the range: sometimes a symmetrical range for a diverging palette is easier to interpret from a numerical standpoint, though of course you have to keep in mind how this is going to perceptually impact the salience of the colour values for the corresponding data.

So could we use a diverging palette for the always positive sales data? Sure. There just needs to be a point around which we are comparing the values. For example, I happen to know that the median sales per city over the time period in question is $495.82 - this would be a meaningful value to use for the midpoint of a diverging palette, and we can redo our original sales map as such:

Now we have a better version of our original sales map, where the cities coloured in red are below the median value per city, and those coloured in green are above. Much better!

But now something strange seems to be going on with the palette - what's that all about?

No Simple Answers
So what is going on with the palette in the last map from our example above? And what of my promise to discuss other ways the palette scaling can be improved, and of exceptions to the rule of not using differing hues in a continuous scale?

Well, the reason the map looks good above but the scale looks wrong has to do with how the data are distributed: the distribution of sales by city is not normal but follows a power law, with most of the data falling in the low end, so most points take on nearly the same colour when the palette is scaled linearly with the data:

One way to fix this is to transform the data by taking the log; the resulting palette looks more like we'd expect:

Though of course now the range is between transformed values. It's interesting to note that in this case the midpoint comes out nearly correct automatically (2.907 vs. log(495.82) ~= 2.695).
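That arithmetic is easy to check. Using the min, median, and (approximate) max sales figures quoted above, the automatic midpoint of the log scale is just the average of the transformed endpoints, and it lands close to the log of the median:

```python
import math

# Values from the post; the max is approximate (~$155K)
sales_min, sales_median, sales_max = 4.21, 495.82, 155000.0

# Linear midpoint of the raw range: nowhere near the median
linear_mid = (sales_min + sales_max) / 2        # ~77502

# Midpoint of the log-transformed range vs. the log of the median
log_mid = (math.log10(sales_min) + math.log10(sales_max)) / 2  # ~2.907
median_log = math.log10(sales_median)                          # ~2.695
```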

Further complicating all this is the fact that human perception of colour is not linear, but follows something like the Weber-Fechner law, depending on the various properties of the colours. Robert Simmon writes on this in his excellent series of posts from his time at NASA, which is definitely worth a read (and multiple re-reads).

There he also notes an exception to my statement that you shouldn't use continuous palettes with different hues: sometimes even that can be appropriate, as in the section on figure-ground when talking about Earth surface temperature.


So there you have it. Once again: use diverging palettes only when there is a meaningful point around which you want to contrast the other values in your data.

Remember, it all depends on the data. What is the ideal palette for a given data set, and how should you choose it? That's not an easy question to answer; it is always left up to the visualization practitioner, and the answer only comes with knowledge of proper visualization technique and the theoretical foundations behind it.

There are no right or wrong answers, only better or worse choices. It's all about the details.

References and Resources

Subtleties of Colour (by Robert Simmon)

Understanding Sequential and Diverging Palettes in Tableau

How to Choose Colours for Maps and Heatmaps

Saturday, January 14, 2017

How Often Does Friday the 13th Happen?


So yesterday was Friday the 13th.

I hadn't even thought anything of it until someone mentioned it to me. They also pointed out that there are two Friday the 13ths this year: the one that occurred yesterday, and there will be another one in October.

This got me to thinking: how often does Friday the 13th usually occur? What's the most number of times it can occur in a year?

Sounds like questions for a nice little piece of everyday analytics.


A simple Google search turned up a list of all the Friday the 13ths from August 2010 up until the end of 2050. It was a simple matter to plunk that into Excel and throw together some simple graphs.

So to answer the first question, how often does Friday the 13th usually occur?

It looks like the maximum number of times it can occur per year is 3 (those are the years Jason must have a heyday and things are really bad at Camp Crystal Lake) and the minimum is 1. So my hypothesis is:
a. it's not possible to have a year where a Friday the 13th doesn't occur, and 
b. Friday the 13th can't occur more than 3 times in a year, due to the way the Gregorian calendar works.

Of course, this is not proof, just evidence, as we are only looking at a small slice of data.
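That said, Python's standard library can check the hypothesis directly; this little sketch counts the Friday the 13ths per year over the same window:

```python
import datetime

def friday_13ths(year):
    """Months of the given year in which the 13th falls on a Friday."""
    return [m for m in range(1, 13)
            if datetime.date(year, m, 13).weekday() == 4]  # 4 = Friday

# 2017 has two: January and October
months_2017 = friday_13ths(2017)

# Count per year over 2010-2050
counts = {y: len(friday_13ths(y)) for y in range(2010, 2051)}
```

Over this window the count per year never leaves the range 1 to 3, consistent with the hypothesis (and in fact this holds for any Gregorian year).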

So what is the distribution of the number of unlucky days per year?

The majority of the years in the period have only one (18, or ~44%), but not by much, as nearly as many have two (17, or ~42%). Far fewer have three: only 6 (~15%). Again, this could just be an artifact of the interval of time chosen, but it gives a good idea of what to expect overall.

Are certain months favoured at all, though? Does Jason's favourite day occur more frequently in certain months?

Actually it doesn't really appear so - they look to be spread pretty evenly across the months and we will see why this is the case below.

So, what if we want even more detail? When we ask how frequently Friday the 13th occurs, what we really mean is: how long is it between each occurrence? Well, that's something we can plot over the 41-year period just by doing a simple subtraction and plotting the result.

Clearly there is periodicity and some kind of cycle to the occurrence of Friday the 13th, as we see repeated peaks at what looks like 420 days, and also at around 30 days on the low end. This is not surprising if you think about how the calendar works, leap years, etc.

If we pivot on the number of days and plot the result, we don't get a distribution that is spread out evenly or anything like that; there are only 7 distinct intervals between Friday the 13ths during the period examined:

So basically, depending on the year, the shortest time between successive Friday the 13ths will be 28 days, and the greatest will be 427 (about a year and two months), but usually it is somewhere in-between at around either three, six, or eight months. It's also worth noting that every interval is divisible by seven; this should not be surprising at all either, for obvious reasons.
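The intervals can be computed the same way as the yearly counts; a quick sketch with the standard library reproduces the figures above:

```python
import datetime

# All Friday the 13ths from 2010 through 2050
f13 = [datetime.date(y, m, 13)
       for y in range(2010, 2051)
       for m in range(1, 13)
       if datetime.date(y, m, 13).weekday() == 4]  # 4 = Friday

# Days between successive occurrences
gaps = [(b - a).days for a, b in zip(f13, f13[1:])]
distinct = sorted(set(gaps))  # only a handful of distinct values
```

The shortest gap comes out to 28 days, the longest 427, and every gap is a multiple of seven, exactly as observed in the pivot.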


Overall, a neat little bit of simple analysis. Of course, this is just how I typically think about things, by looking at data first. I know that in this case, the occurrence of things like Friday the 13th (or say, holidays that fall on a certain day of week or the like) is related to the properties of the Gregorian calendar and follows a pattern that you could write specific rules around if you took the time to sit down and work it all out (which is exactly what some Wikipedians have done in the article on Friday the 13th).

I'm not superstitious, but now I know when those unlucky days are coming up, and so do you... and when it's time to have a movie marathon with everyone's favourite horror villain who wears a hockey mask.

Monday, January 9, 2017

Top 100 CEOs in Canada by Salary 2008-2015, Visualized

I thought it'd been a while since I'd done some good visualization work with Tableau, and noticed that this report from the Canadian Centre for Policy Alternatives was garnering a lot of attention in the news.

However, most of the articles about the report did not have any graphs and simply restated data from it in narrative to put it in context, and I found the visualizations within the report itself to be a little lacking in detail. It wasn't a huge amount of work to extract the data from the report and quickly throw it into Tableau, and put together a cohesive picture using the Stories feature (best viewed on Desktop at 1024x768 and above).

See below for the details; it's pretty staggering, even for some of the bottom earners. To put things in context, the top earner made about $183M a year all-in, which, if you work 45 hours a week and take only two weeks of vacation per year, translates to about $81,000 an hour.
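That back-of-the-envelope number checks out (45 hours a week over 50 working weeks):

```python
total_pay = 183000000     # top earner's all-in pay, per the report
hours_per_year = 45 * 50  # 45 hours/week, two weeks of vacation

hourly = total_pay / hours_per_year  # roughly $81,000 an hour
```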

Geez, looks like I need to get into a new line of work.