
Performing runtime benchmarks with Python Monitoring Tool Benchy

Friday, March 22, 2013


Hi all,

Over the last few weeks I've been working on a little project of mine called benchy.  The goal of benchy is to answer some simple questions: which version of the code is faster?  Which algorithm consumes more memory?  I know that there are several tools suitable for this task, but I wanted to be able to create performance reports myself using Python.

Why did I create it?  Since the beginning of the year I have been rewriting all the code in Crab, a Python framework for building recommender systems, and one of the main components that required refactoring was the pairwise metrics, such as cosine, Pearson, Euclidean, etc.  I needed to benchmark the performance of several versions of those functions. Doing this manually? Boring. That's what benchy is for!


What can benchy do?

Benchy is a lightweight Python library for running performance benchmarks over alternative versions of code.  How can we use it ?

Let's take the cosine function, a popular pairwise function for comparing the similarity between two vectors or matrices in recommender systems.
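For reference, these are the alternative cosine implementations compared throughout this post, reconstructed from the benchmark setups listed further down in the report (the wrapper names cosine_scipy, cosine_sklearn, cosine_nltk and cosine_numpy are mine, just for illustration):

import math
import numpy
import scipy.spatial.distance as ssd
from sklearn.metrics.pairwise import cosine_similarity
from nltk import cluster

def cosine_scipy(X, Y):
    # cdist expects 2-D arrays, hence the X.reshape(-1, 1) in the setup below
    return 1. - ssd.cdist(X, Y, 'cosine')

def cosine_sklearn(X, Y):
    return cosine_similarity(X, Y)

def cosine_nltk(X, Y):
    return 1. - cluster.util.cosine_distance(X, Y)

def cosine_numpy(X, Y):
    return 1. - numpy.dot(X, Y) / (math.sqrt(numpy.dot(X, X)) *
                                   math.sqrt(numpy.dot(Y, Y)))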




Let's define the benchmarks to test:
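A sketch of how the benchmarks can be declared, assuming benchy exposes a Benchmark class that takes the statement and the setup as strings plus a name label (the exact import path and keyword names may differ in your benchy version); the setups are the ones shown in the report below:

from benchy.api import Benchmark

common_setup = """
import numpy
X = numpy.random.uniform(1, 5, (1000,))
"""

scipy_setup = common_setup + """
import scipy.spatial.distance as ssd
X = X.reshape(-1, 1)
def cosine_distances(X, Y):
    return 1. - ssd.cdist(X, Y, 'cosine')
"""

sklearn_setup = common_setup + """
from sklearn.metrics.pairwise import cosine_similarity as cosine_distances
"""

statement = "cosine_distances(X, X)"

scipy_bench = Benchmark(statement, scipy_setup, name='scipy.spatial 0.8.0')
sklearn_bench = Benchmark(statement, sklearn_setup, name='sklearn 0.13.1')
# ... nltk_bench and numpy_bench are built the same way from the
# nltk.cluster and pure-numpy setups shown later in the report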



With all benchmarks created, we can test a single benchmark by calling its run method:
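A minimal sketch of running one benchmark on its own (the result keys shown here follow the description in the next paragraph; the memory units are an assumption and may vary between benchy versions):

>>> results = scipy_bench.run()
>>> results
{'runtime': {'repeat': 3, 'timing': 18.36, 'loops': 10, 'units': 'ms'},
 'memory': {'repeat': 3, 'usage': ..., 'units': 'MB'}}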


The dict associated with the key 'memory' holds the memory performance results: the number of repeats of the statement and the average memory usage, along with its units. The key 'runtime' holds the timing results: the number of repeats followed by the average time to execute the statement, also with its units.

Do you want to see a more presentable output? Just call the method to_rst with the results as parameter:
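Something along these lines (a sketch; the method name to_rst comes from the post, the rest is assumed):

>>> rst_text = scipy_bench.to_rst(results)
>>> print(rst_text)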


Benchmark setup
import numpy
X = numpy.random.uniform(1,5,(1000,))

import scipy.spatial.distance as ssd
X = X.reshape(-1,1)
def cosine_distances(X, Y):
    return 1. - ssd.cdist(X, Y, 'cosine')
Benchmark statement
cosine_distances(X, X)
name                 repeat  timing  loops  units
scipy.spatial 0.8.0  3       18.36   10     ms


Now let's check which one is faster and which consumes less memory. Let's create a BenchmarkSuite, which is simply a container for benchmarks:
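A sketch of building the suite, assuming BenchmarkSuite exposes an add() method (nltk_bench and numpy_bench are the extra benchmarks sketched above):

from benchy.api import BenchmarkSuite

suite = BenchmarkSuite()
for bench in [scipy_bench, sklearn_bench, nltk_bench, numpy_bench]:
    suite.add(bench)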

Finally, let's run all the benchmarks together with the BenchmarkRunner. This class loads all the benchmarks from the suite, runs each individual analysis and prints out interesting reports:
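A sketch of the runner step; the constructor arguments (benchmarks, tmp_dir, name) are assumptions about benchy's API, not something confirmed in this post:

from benchy.api import BenchmarkRunner

runner = BenchmarkRunner(benchmarks=suite, tmp_dir='.', name='cosine_benchmarks')
n_benchs, results = runner.run()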



Next, we will plot the relative timings. It is important to measure how much faster or slower the other benchmarks are compared to the reference one. Just call the method plot_relative:
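For example (a sketch; the horizontal flag and the use of matplotlib's savefig are assumptions):

import matplotlib.pyplot as plt

runner.plot_relative(results, horizontal=True)
plt.savefig('%s_relative.png' % runner.name, bbox_inches='tight')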




As you can see in the graph above, the scipy.spatial.distance function is 2129x slower than the reference and the sklearn approach is 19x slower. The best one is the numpy approach. Let's see the absolute timings; just call the method plot_absolute:
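Again as a sketch, mirroring the relative plot above:

runner.plot_absolute(results, horizontal=False)
plt.savefig('%s_absolute.png' % runner.name)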



You may notice that besides the bars representing the timings, there is a line plot representing the memory consumption for each statement. The one that consumes the least memory is the nltk.cluster approach!

Finally, benchy also provides a full report for all benchmarks by calling the method to_rst:
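A sketch of generating the full report (the image file names passed in are just the ones saved above, and the signature is an assumption):

rst_text = runner.to_rst(results, '%s_relative.png' % runner.name,
                         '%s_absolute.png' % runner.name)
with open('cosine_benchmarks.rst', 'w') as f:
    f.write(rst_text)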




Performance Benchmarks

These historical benchmark graphs were produced with benchy.
Produced on a machine with
  • Intel Core i5 950 processor
  • Mac OS X 10.6
  • Python 2.6.5 64-bit
  • NumPy 1.6.1

scipy.spatial 0.8.0

Benchmark setup
import numpy
X = numpy.random.uniform(1,5,(1000,))

import scipy.spatial.distance as ssd
X = X.reshape(-1,1)
def cosine_distances(X, Y):
    return 1. - ssd.cdist(X, Y, 'cosine')
Benchmark statement
cosine_distances(X, X)
name                 repeat  timing  loops  units
scipy.spatial 0.8.0  3       19.19   10     ms

sklearn 0.13.1

Benchmark setup
import numpy
X = numpy.random.uniform(1,5,(1000,))

from sklearn.metrics.pairwise import cosine_similarity as cosine_distances
Benchmark statement
cosine_distances(X, X)
name            repeat  timing  loops  units
sklearn 0.13.1  3       0.1812  1000   ms

nltk.cluster

Benchmark setup
import numpy
X = numpy.random.uniform(1,5,(1000,))

from nltk import cluster
def cosine_distances(X, Y):
    return 1. - cluster.util.cosine_distance(X, Y)
Benchmark statement
cosine_distances(X, X)
name          repeat  timing   loops  units
nltk.cluster  3       0.01024  1e+04  ms

numpy

Benchmark setup
import numpy
X = numpy.random.uniform(1,5,(1000,))

import numpy, math
def cosine_distances(X, Y):
    return 1. -  numpy.dot(X, Y) / (math.sqrt(numpy.dot(X, X)) *
                                     math.sqrt(numpy.dot(Y, Y)))
Benchmark statement
cosine_distances(X, X)
name   repeat  timing    loops  units
numpy  3       0.009339  1e+05  ms

Final Results

name                 repeat  timing    loops  units  timeBaselines
scipy.spatial 0.8.0  3       19.19     10     ms     2055
sklearn 0.13.1       3       0.1812    1000   ms     19.41
nltk.cluster         3       0.01024   1e+04  ms     1.097
numpy                3       0.009339  1e+05  ms     1

Final code!

I should say this micro-project is still a prototype; however, I tried to build it to be easily extensible. I have several ideas to extend it, but feel free to fork it and send suggestions and bug fixes.  This project was inspired by the open-source project vbench, a framework for performance benchmarks over your source repository's history. I recommend it!

As for me, benchy will help me test several alternative pairwise functions in Crab. :)  Soon I will publish the performance results we got with the pairwise functions we built for Crab. :)

I hope you enjoyed it,

Regards,

Marcel Caraciolo

Graph Based Recommendations using "How-To" Guides Dataset

Friday, March 1, 2013


Hi all,

In this post I'd like to introduce another approach to recommender engines, using graph concepts to recommend novel and interesting items. I will build a graph-based how-to tutorials recommender engine using data available on the website SnapGuide (by the way, I am a huge fan and user of this tutorials website), the graph database Neo4j and the graph traversal language Gremlin.

What is SnapGuide ?

Snapguide is a web service for anyone who wants to create and share step-by-step "how to" guides.  It is available on the web and as an iOS app. There you can find several tutorials with easy visual instructions for a wide array of topics including cooking, gardening, crafts, projects, fashion tips and more.  It is free, and anyone is invited to submit guides in order to share their passions and expertise with the community.  I extracted the corpus of tutorial likes from their website for research purposes only. Several users may like the same tutorial, and this signal can be quite useful for recommending similar tutorials based on what other users liked.  Unfortunately I can't provide the dataset for download, but you can follow the code below with your own data set.

Snapguide 



Getting Started with Neo4J


To create and explore your own graph with Neo4j you would normally need to use Java or Groovy.  Instead I found Bulbflow, an open-source Python ORM for graph databases that supports pluggable backends using the Blueprints standards.  In this post I used it to connect to Neo4j Server.  The snippet below is a simple example of Bulbflow in action, creating some vertices and edges.


>>> from people import Person, Knows
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.add_proxy("people", Person)
>>> g.add_proxy("knows", Knows)
>>> james = g.people.create(name="James")
>>> julie = g.people.create(name="Julie")
>>> g.knows.create(james, julie)

Generating our tutorials Graph


I decided to define my graph schema so as to map the raw data into a property graph in which the traversals required to get recommendations of which tutorials to check out are as natural as possible.


SnapGuide Graph Schema


The data will be inserted into the Neo4j graph database. The code below creates a new Neo4j graph from the whole data set.

#-*- coding: utf-8 -*-
from bulbs.neo4jserver import Graph
from nltk.tag.hunpos import HunposTagger
from nltk.tokenize import word_tokenize

ht = HunposTagger('en_wsj.model')

likes = open('likes.csv')
tutorials = open('tutorials.csv')
users = open('users.csv')
g = Graph()
def filter_nouns(words):
    return [word.lower() for word, cat in words if cat in ['NN', 'NNP', 'NNPS']]

#Loading tutorials and categories
for tutorial in tutorials:
    tutorial = tutorial.strip()
    try:
        # named n_likes so it does not shadow the likes file handle above
        ID, title, n_likes, category = tutorial.split(';')
    except ValueError:
        try:
            ID, title, category = tutorial.split(';')
        except ValueError:
            t = tutorial.split(';')
            ID, title, category = t[0], t[1].replace('&Yuml', ''), t[-1]

    tut = g.vertices.create(type='Tutorial', tutorialId=int(ID), title=title)
    keywords = filter_nouns(ht.tag(word_tokenize(tutorial.split(';')[1])))
    keywords.append(category)

    for keyword in keywords:
        resp = g.vertices.index.lookup(category=keyword)
        if resp is None:
            ct = g.vertices.create(type='Category', category=keyword)
        else:
            ct = resp.next()
        g.edges.create(tut, 'hasCategory', ct)

#Loading user dataset.
for user in users:
    user = user.strip()
    username = user.split(';')[0]
    user = g.vertices.create(type='User', userId=username)
#Loading the likes dataset.
for like in likes:
    like = like.strip()
    item_id, user_id = like.split(';')
    p = g.vertices.index.lookup(tutorialId=int(item_id))
    q =  g.vertices.index.lookup(userId=user_id)
    g.edges.create(q.next(), 'liked', p.next())
There are three input files: tutorials.csv, users.csv and likes.csv. The file tutorials.csv contains the list of tutorials; each row has three columns: tutorialId, title and category (some rows also carry a likes count). The file users.csv contains the list of users; each row has the columns userId and user name.  Finally, likes.csv holds the tutorials each user marked as interesting; each row has a tutorialId and a userId.
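To make the layout concrete, here are hypothetical sample rows in the ';'-separated format the loading script expects (the real dataset is not distributed):

# tutorials.csv: tutorialId;title;category
11890;Make Sous Vide Chicken at Home;food
# users.csv: userId;user name
emma-rushin;Emma Rushin
# likes.csv: tutorialId;userId
11890;emma-rushin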

Given that there are more than 1 million likes, it will take some time to process all the data. An important note before going on: don't forget to create the vertex indexes; if you forget, your queries will take ages to process.


//These indexes are a must, otherwise querying the graph database will take so looong
g.createKeyIndex('userId', Vertex.class)
g.createKeyIndex('tutorialId', Vertex.class)
g.createKeyIndex('category', Vertex.class)
g.createKeyIndex('title', Vertex.class)


Before moving on to recommender algorithms, let's make sure the graph is ok.

For instance,  what is the distribution of keywords amongst the tutorials repository ?

//Distribution frequency of categories
def dist_categories(){
    m = [:]
    g.V.filter{it.getProperty('type')=='Tutorial'}.out('hasCategory').category.groupCount(m).iterate()
    return m.sort{-it.value}
}
>>> script = g.scripts.get('dist_categories')
>>> categories = g.gremlin.execute(script, params=None)
>>> sorted(categories.content.items(), key=lambda keyword: -keyword[1])[:10]
[(u'food', 4537), (u'make', 3840), (u'arts-crafts', 1609), (u'cook', 1362), (u'desserts', 1247), (u'beauty', 1108), (u'technology', 943), (u'drinks', 587), (u'home', 508), (u'chicken', 452)]

What about the average number of likes per tutorial ?

//Get the average number of likes per tutorial
def avg_likes(){
    return g.V.filter{it.getProperty('type')=='Tutorial'}.transform{it.in('liked').count()}.mean()
}

>>> script = g.scripts.get('avg_likes')
>>> likes = g.gremlin.command(script, params=None)
>>> likes
111.089116326

Traversing the Tutorials Graph

Now that the data is represented as a graph, let's make some queries. Behind the scenes, what we are really doing is traversals.  In recommender systems there are two general types of recommendation approaches: collaborative filtering and content-based filtering.

In collaborative filtering, the liking behavior of users is correlated in order to recommend the favorites of one user to another. In this case, we look for similar users:

I like the tutorials Amanda preferred, what other tutorials does Amanda like that I haven't seen ?

The content-based strategy, on the other hand, relies on the features of a recommendable item. The attributes are analyzed in order to find other items with analogous features:

I  like food tutorials, what other food tutorials are there ?


Making Recommendations

Let's begin with collaborative filtering.  I will run some complex traversal queries on our graph.  Let's start with the tutorial "How to Make Sous Vide Chicken at Home".  Yes, I love chicken! :)

Great dish by the way!
Which users liked Make Sous Vide Chicken at Home ?
//Get the users who liked a tutorial
def users_liked(tutorial){
    v = g.V.filter{it.getProperty('title') == tutorial}
    return v.inE('liked').outV.userId[0..4]
}
>>> tuts = g.vertices.index.lookup(title='Make Sous Vide Chicken at Home')
>>> tut = tuts.next()
>>> tut.title 
Make Sous Vide Chicken at Home
>>> tut.tutorialId
11890
>>> tut.type  
Tutorial
>>> script = g.scripts.get('n_users_liked')
>>> users_liked = g.gremlin.command(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> users_liked
1000
This traversal (here just counting the users who liked the tutorial) doesn't give us very useful information on its own, but we can now put collaborative filtering into action with an extended query:

Which users liked Make Sous Vide Chicken at Home, and what other tutorials did they like too?


//Get the users who liked the tutorial and what other tutorials did they like too?
def similar_tutorials(tutorial){
    v = g.V.filter{it.getProperty('title') == tutorial}
    return v.inE('liked').outV.outE('liked').inV.title[0..4]
}


>>> script = g.scripts.get('similar_tutorials')
>>> similar_tutorials = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> similar_tutorials.content
[u'Make Potato Latkes', u'Make Beeswax and Honey Lip Balm', u'Make Sous Vide Chicken at Home', u'Cook the Perfect & Simple Chicken Ramen Soup', u'Make a Simple (But Authentic) Paella on Your BBQ']

What does the query above express?

It filters all the users that liked the tutorial (inE('liked')), finds out what else they liked (outV.outE('liked')), and fetches the titles of those tutorials (inV.title). It returns the first five items ([0..4]).

In recommendations we have to find the most commonly purchased or liked items.  Using Gremlin, we can build a simple collaborative filtering algorithm by joining several steps together.

//Get similar tutorials
def topMatches(tutorial){
    m = [:]
    v = g.V.filter{it.getProperty('title') == tutorial}
    v.inE('liked').outV.outE('liked').inV.title.groupCount(m).iterate()
    return m.sort{-it.value}[0..9]
}


>>> script = g.scripts.get('topMatches')
>>> topMatches = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> sorted(topMatches.content.items(), key=lambda keyword: -keyword[1])[:10]
{u'Make Cake Pops!!': 75, u'Make Sous Vide Chicken at Home': 1000, u'Make Potato Latkes': 124, u'Make Incredible Beef Jerky at Home Easily!': 131, u'Cook the Perfect & Simple Chicken Ramen Soup': 96, u'Make Mint Juleps': 74, u"Solve a 3x3 Rubik's Cube": 89, u'Cook Lamb Shanks Moroccan Style': 74, u'Make Beeswax and Honey Lip Balm': 75, u'Make an Aerium': 74}

This traversal will return a list of tutorials.  But you may notice that if you look at all the matches, there are many duplicates. That happens because users who like How to Make Sous Vide Chicken at Home also like many of the same other tutorials.  These repetitions are exactly what collaborative filtering algorithms use to represent the similarity between users.


How many of the tutorials highly correlated with How to Make Sous Vide Chicken at Home are unique?

//Get the number of unique similar tutorials
def n_similar_unique_tutorials(tutorial){
    v = g.V.filter{it.getProperty('title') == tutorial}
    return v.inE('liked').outV.outE('liked').inV.dedup.count()
}

//Get the number of similar tutorials
def n_similar_tutorials(tutorial){
    v = g.V.filter{it.getProperty('title') == tutorial}
    return v.inE('liked').outV.outE('liked').inV.count()
}

>>> script = g.scripts.get('n_similar_tutorials')
>>> similar_tutorials = g.gremlin.command(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> similar_tutorials
37323
>>> script = g.scripts.get('n_similar_unique_tutorials')
>>> similar_tutorials = g.gremlin.command(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> similar_tutorials
8766

There are 37323 paths from Make Sous Vide Chicken at Home to other tutorials, and only 8766 of those tutorials are unique. We can use these duplications to build a ranking mechanism for recommendations.

Which tutorials are most highly co-rated with How to Make Sous Vide Chicken?


>>> script = g.scripts.get('topMatches')
>>> topMatches = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> sorted(topMatches.content.items(), key=lambda keyword: -keyword[1])[:10]
[(u'Make Sous Vide Chicken at Home', 1000), (u'Make Incredible Beef Jerky at Home Easily!', 131), (u'Make Potato Latkes', 124), (u'Cook the Perfect & Simple Chicken Ramen Soup', 96), (u"Solve a 3x3 Rubik's Cube", 89), (u'Make Cake Pops!!', 75), (u'Make Beeswax and Honey Lip Balm', 75), (u'Make Mint Juleps', 74), (u'Cook Lamb Shanks Moroccan Style', 74), (u'Make an Aerium', 74)]

So we have the top similar tutorials. In other words, people who like Make Sous Vide Chicken at Home also like Make Sous Vide Chicken at Home, oops! Let's remove these reflexive paths by filtering out the Sous Vide Chicken tutorial itself.

//Get similar tutorials, excluding the tutorial itself
def topUniqueMatches(tutorial){
    m = [:]
    v = g.V.filter{it.getProperty('title') == tutorial}
    possible_tutorials = v.inE('liked').outV.outE('liked').inV
    possible_tutorials.hasNot('title',tutorial).title.groupCount(m).iterate()
    return m.sort{-it.value}[0..9]
}




>>> script = g.scripts.get('topUniqueMatches')
>>> topMatches = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> topMatches.content
[(u'Make Incredible Beef Jerky at Home Easily!', 131), (u'Make Potato Latkes', 124), (u'Cook the Perfect & Simple Chicken Ramen Soup', 96), (u"Solve a 3x3 Rubik's Cube", 89), (u'Make Cake Pops!!', 75), (u'Make Beeswax and Honey Lip Balm', 75), (u'Make Mint Juleps', 74), (u'Cook Lamb Shanks Moroccan Style', 74), (u'Make an Aerium', 74), (u'Make a Leather iPhone Flip Wallet', 73)]

The recommendation above starts from a particular tutorial (i.e. Make Sous Vide Chicken), not from a particular user. This collaborative filtering method is called item-based filtering.   

Given a tutorial that a user likes, who else likes this tutorial, and from those users, what other tutorials do they like that are not already liked by the initial user?

And what about recommendations for a particular user?  That is where user-based filtering comes in.


Which tutorials that similar users liked should be recommended to a given user?


//User-based recommendations
def userRecommendations(user){
    m = [:]
    x = [] as Set
    v = g.V.filter{it.getProperty('userId') == user}
    v.out('liked').aggregate(x).in('liked').dedup.out('liked').except(x).title.groupCount(m).iterate()
    return m.sort{-it.value}[0..9]
}
>>> script = g.scripts.get('userRecommendations')
>>> recommendations = g.gremlin.execute(script, params={'user': 'emma-rushin'})
>>> recommendations.content
[(u'Create a Real Fisheye Picture With Your iPhone', 1156), (u'Make a DIY Galaxy Print Tshirt', 933), (u'Make a Macro Lens for Free!', 932), (u'Make Glass Marble Magnets With Any Image', 932), (u'Make DIY Nail Decals', 932), (u'Make a Five Strand Braid', 929), (u'Create a Pendant Lamp From Coffee Filters', 928), (u'Make Avocado Toast', 926), (u'Make Instagram Magnets for Less Than $10', 923), (u'Make a Recycled Magazine Tree (Christmas Tree)', 923)]

Emma Rushin will really like arts and crafts suggestions! :D

OK, we have interesting recommendations, but if I want to make another style of chicken, like Chicken Ramen Soup, for my dinner, I probably do not want a tutorial on How to Solve a 3x3 Rubik's Cube.  To adapt to this situation, it is possible to mix collaborative filtering and content-based recommendation into a single traversal, so it recommends similar chicken and food tutorials based on shared keywords.
Now let's play with content-based recommendation! 
Which tutorials are most highly correlated with Sous Vide Chicken that share the same category of food?

//Top recommendations mixing content + collaborative, sharing all categories.
def topRecommendations(tutorial){
    m = [:]
    x = [] as Set
    v = g.V.filter{it.getProperty('title') == tutorial}
    tuts = v.out('hasCategory').aggregate(x).back(2).inE('liked').outV.outE('liked').inV
    tuts.hasNot('title',tutorial).out('hasCategory').retain(x).back(2).title.groupCount(m).iterate()
    return m.sort{-it.value}[0..9]
}
>>> script = g.scripts.get('topRecommendations')
>>> recommendations = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> recommendations.content
[(u'Make Incredible Beef Jerky at Home Easily!', 131), (u'Make Potato Latkes', 124), (u'Cook the Perfect & Simple Chicken Ramen Soup', 96), (u'Make Cake Pops!!', 75), (u'Make Beeswax and Honey Lip Balm', 75), (u'Make Mint Juleps', 74), (u'Cook Lamb Shanks Moroccan Style', 74), (u'Cook an Egg in a Basket', 72), (u'Make Banana Fritters', 72), (u'Prepare Chicken With Peppers and Gorgonzola Cheese', 71)]

This ranking makes sense, but it still has a flaw: a tutorial like Make Mint Juleps may not be interesting for me. How about only considering those tutorials that share the keyword 'chicken' with Sous Vide Chicken?

Which tutorials are most highly co-rated with Sous Vide Chicken and also share its keyword 'chicken'?
//Top recommendations mixing content + collaborative, sharing the chicken category.
def topRecommendations(tutorial){
    m = [:]
    v = g.V.filter{it.getProperty('title') == tutorial}
    v.inE('liked').outV.outE('liked').inV.hasNot('title',tutorial).out('hasCategory').
        has('category','chicken').back(2).title.groupCount(m).iterate()
    return m.sort{-it.value}[0..9]
}

>>> script = g.scripts.get('topRecommendations')
>>> recommendations = g.gremlin.execute(script, params={'tutorial': 'Make Sous Vide Chicken at Home'})
>>> recommendations.content
{u'Make a Whole Chicken With Veggies in the Crockpot': 28, u'Bake Crispy Chicken With Doritos': 30, u'Cook Chicken Rollatini With Zucchini & Mozzarella': 28, u'Make Beer Can Chicken': 23, u'Roast a Chicken': 54, u'Cook the Perfect & Simple Chicken Ramen Soup': 96, u'Pesto Chicken Roll-Ups Recipe': 31, u'Cook Chicken in Roasting Bag': 23, u'Make Chicken Enchiladas': 29, u'Prepare Chicken With Peppers and Gorgonzola Cheese': 71}


Conclusions
In this post I presented one strategy for recommending items using graph concepts. What I explored here is the flexibility of the property graph data structure and the notion of derived and inferred relationships. This strategy could be further explored to use other features available in your dataset (I am sure SnapGuide has richer information to use, such as age, sex and the category taxonomy).  I am working on a book about recommender systems in which I will explain graph-based recommendations in more detail, so stay tuned on my blog!

The performance?  OK, I didn't run tests to compare it with the current solutions out there.  What I can say is that Neo4j can theoretically hold billions of entities (vertices + edges) and Gremlin makes advanced queries possible. I will perform some tests, but based on what I have studied, runtimes vary depending on the complexity of the graph structure.

I also would like to thank Marko Rodriguez for his help on the Gremlin-Users community; his posts inspired me to take a further look into Neo4j + Gremlin. It amazed me! :)

Regards,

Marcel Caraciolo