Everybody can classify

Creating your own classifiers has never been easier: we have developed a Click’n’Classify Graphical User Interface (GUI). This means that you can manually create and train your classifiers without knowing any programming at all. It is a very good way to test an idea; if the classifier works well, build your web site around it or use it for whatever purpose you like.

The GUI lets you do everything that you can do via our Application Programming Interface (API). Also, just as phpMyAdmin shows the SQL queries it runs, the uClassify GUI shows the corresponding XML requests, so you can easily understand and use the API from your own site.
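To give a feel for what an API call could look like, here is a minimal Python sketch that posts an XML classification request over HTTP. Note that the endpoint URL, the XML element names and the classifier name are illustrative placeholders only, not the exact uClassify schema – use the XML the GUI shows you (or the API documentation) for the real format.

```python
import urllib.request

# Illustrative sketch only: the endpoint URL, XML element names and the
# classifier name below are placeholders, not the exact uClassify schema.
API_URL = "http://api.uclassify.example/"   # hypothetical endpoint

request_xml = """<?xml version="1.0" encoding="UTF-8"?>
<classifyRequest apiKey="YOUR_READ_KEY" classifier="MySpamFilter">
  <text>Cheap watches!!! Click here now!</text>
</classifyRequest>"""

req = urllib.request.Request(
    API_URL,
    data=request_xml.encode("utf-8"),
    headers={"Content-Type": "text/xml"},
)
with urllib.request.urlopen(req) as response:
    print(response.read().decode("utf-8"))   # XML reply with per-class scores
```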

Features

  • Create and remove classifiers
  • Add and remove classes
  • Train and untrain classes
  • See basic information about your classifiers

Screenshot – Create a classifier

This screenshot shows what it looks like when you are about to create a classifier. Just log in and try it yourself!

Creating a classifier is easy

Screenshot – Training a classifier

Just copy and paste the texts you want to use as training data.

Training a classifier is easy

Happy classifying!

Developing the development

Since we released the beta version a couple of weeks ago we have seen a few websites pop up that build on the uClassify technology. This is very encouraging for us! Right now we are trying to reach out to more users who want to use our classifier API.

We have spent a lot of time on the development of our service – making it parallel, robust, low on memory, fast and so on. This is what we are really good at. The remaining part, which is just as important – reaching out to users, advertising ourselves and being seen in the right places – is not our sharpest skill.

Besides writing this blog and posting the uClassify link on a couple of sites we haven’t done much to show our muscles – yet! We thought that perhaps we should use our own API ourselves – that is probably an easier way to create some buzz! We have a couple of ideas to make ourselves seen (feel free to use these ideas yourself):

Build an Anti Spam Comment Plugin for WordPress?

We are quite confident that we could do really well here, as the classifier engine has shown very good results in Cactus Spam Filter. This would compete with, or be a good complement to, Akismet, Defensio and similar services. Is there anyone who needs another blog comment spam filter?

Build a Spam Blog Filter?

This seems to be a problem for many blog communities, and building a splog (spam blog) filter could give us some good attention. What would be really nice is if somebody could provide us with dynamic training data on splogs and legitimate blogs – then we could automate the training process and find the undetected spam! Anyone who wants to donate their spam? :)

Implement a JSON API for uClassify?

Building a JSON API would not only broaden our API (we only have an XML API right now), it would also let users access our classification service via Yahoo! Pipes. Yahoo! Pipes lets you combine different RSS feeds into one and use external web services (via JSON) – which is madly cool.

Language Detection – talar du svenska?

We already have a language detection classifier (not published yet) that only needs training data refinement (removal of noise such as English words in the Filipino class). It supports 40 languages. This would be fairly simple and could give us some buzz.

Ideas, anyone?

Do you have any ideas? Let us know – or use the uClassify API to create your own classifier (spam filter, language detection or whatever comes to mind).

Classifier performance – part II

In the first part I explained some guidelines to keep in mind when selecting a test corpus. In this part I will give a brief introduction to how to run tests on your corpuses. Given a corpus of labeled documents, how can it be used to determine a classifier’s performance? There are several ways; one of the simplest is to divide the corpus into two parts and use one part for training and the other for testing. How the corpus is divided affects the result and is very likely to introduce bias, and there is also great data loss (50% of the corpus is never used for training, and 50% never for testing). We can do a lot better.

Leave one out cross validation (LOOCV)

A well-established technique is to train on all documents except one, which is left out and used for testing. This procedure is repeated so that every document is used for testing exactly once, and the performance is averaged over all the runs. An advantage of this method is that it uses almost the full corpus as training data (no data waste). The downside is that it is expensive, as it must be repeated as many times as there are documents. k-fold cross validation solves this problem by dividing the corpus into k piles.
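As a rough illustration, here is a minimal Python sketch of leave-one-out cross validation; the train and predict functions are placeholders for whichever classifier you want to evaluate.

```python
def leave_one_out_accuracy(documents, labels, train, predict):
    """Leave-one-out cross validation.

    documents, labels: parallel lists of texts and their true classes.
    train(docs, labels) -> model and predict(model, doc) -> label are
    placeholders for the classifier under test.
    """
    correct = 0
    for i in range(len(documents)):
        # Train on every document except document i ...
        train_docs = documents[:i] + documents[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        model = train(train_docs, train_labels)
        # ... and test on the single held-out document.
        if predict(model, documents[i]) == labels[i]:
            correct += 1
    # Average over all runs: the fraction of documents classified correctly.
    return correct / len(documents)
```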

k fold cross validation

Perhaps the most common way to run tests is k-fold cross validation. The corpus is divided into k parts; k-1 parts are used for training and 1 part for testing. This is repeated k times so that every part of the corpus is used once for testing and k-1 times for training. 10-fold cross validation is a common choice: start by training the classifier on parts 2 to 10 and testing it on part 1, then train it on parts 1 and 3 to 10 and test it on part 2, and so on. For every rotation the performance is measured, and when the tests have completed the performance is averaged. k-fold cross validation gives a more robust performance measure, as every part is used both as training and as test data.
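If you would rather not write the bookkeeping yourself, libraries such as scikit-learn ship k-fold cross validation out of the box. A small sketch, using a naive Bayes text classifier and a made-up toy corpus purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny made-up corpus, just to show the mechanics.
texts = ["cheap pills buy now", "meeting at noon", "win money fast",
         "lunch tomorrow?", "free offer click", "project status report"] * 5
labels = ["spam", "ham", "spam", "ham", "spam", "ham"] * 5

model = make_pipeline(CountVectorizer(), MultinomialNB())

# 3-fold cross validation: train on two folds, test on the remaining one,
# rotate, then average the per-fold accuracies.
scores = cross_val_score(model, texts, labels, cv=3)
print(scores, scores.mean())
```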

Remember from part one that it can be useful to vary the size of the corpus, scaling it from small to large, and to run tests on unbalanced data as well.

Summary

  • Don’t pick a test method just because it is simple – the results will probably fool you.
  • Use an established method, such as k-fold cross validation or leave-one-out.
  • Always remember to specify which method you used when reporting the results.

In the next part I’ll show how performance can actually be measured! Happy classifying until then!

Classifier performance – Part I

There are several different kinds of classifiers, to name a few: Naive Bayesian classifiers, Support Vector Machines, k-Nearest Neighbors and Neural Networks. A crucial cornerstone when choosing a classifier is performance – how well it classifies the data. There are several methods for measuring how well a classifier performs. In three parts I will try to give an idea of how to avoid common pitfalls.

Part I: Choosing test corpus
Part II: Running tests
Part III: Measuring the performance

What test corpus should I use? Use many!

This is perhaps the hardest part when trying to determine the performance of a classifier: every subset of data is a model that is likely to be biased. Therefore you should always question which data (corpus) the tests are carried out on. For example, a classifier that reports high performance on a specific corpus is likely to show a different – and often lower – performance on real-world data: to avoid looking bad next to other classifiers, classifiers tend to be tuned for the test corpus, and this bias may degrade performance on other corpuses. Using many relevant corpuses helps prevent a classifier from becoming too narrowly specialized to one specific corpus.

In “The fallacy of corpus anti-spam evaluation”, Vipul explains why many anti-spam researchers’ results may not say much, since the test corpus is static. While researchers are busy measuring their performance on a corpus from 2005 (TREC 2005), spammers today have had three years to figure out how to fool their spam filters… I completely agree; it’s almost like:

– Look everyone, I’ve spent years inventing a highly accurate longbow to shoot down our nemesis Spam! It works really well back in my training yard!

– Did you not hear? Spam has evolved; it now comes in F-117 Nighthawk Stealth Fighters cloaked by deceiving words. Even if you could see it, an arrow couldn’t scratch it.

– Oh, dear...

Small or large test sets? Both!

The size of the test data is also vital: using too small a test set says more about how the classifier will perform during the training phase than after it. Using too large and rich a training set may invite overfitting (so much data that seemingly nonsensical patterns fit your model). The best approach is to measure performance at different sizes, scaling the training set from a few documents up to the full set. Unfortunately this is often disregarded.

Hint: A benefit of measuring performance as a function of a growing training set is that you can predict how much training data is needed to reach a certain level of performance. Just project the performance curve beyond the size of the current corpus.
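A minimal sketch of that idea, again with placeholder train and predict functions: evaluate the classifier on a fixed held-out test set while the training set grows, and look at the trend.

```python
def learning_curve(train_docs, train_labels, test_docs, test_labels,
                   train, predict, sizes=(10, 50, 100, 500, 1000)):
    """Accuracy on a fixed test set as a function of training-set size.

    Assumes the training data is already shuffled; train/predict are
    placeholders for the classifier under test. Projecting the resulting
    curve hints at how much more data is needed for a target accuracy.
    """
    curve = []
    for n in sizes:
        n = min(n, len(train_docs))
        model = train(train_docs[:n], train_labels[:n])
        correct = sum(predict(model, doc) == label
                      for doc, label in zip(test_docs, test_labels))
        curve.append((n, correct / len(test_docs)))
    return curve
```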

Balanced or unbalanced training sets? Both!

The test corpus can be heavily unbalanced, meaning that one of the classes is overrepresented and another is underrepresented in number of documents. As Amund Tveit points out in his blog:

“Quite frequently classification problems have to deal with unbalanced data sets, e.g. let us say you were to classify documents about soccer and casting (fishing), and your training data set contained about 99.99% soccer and 0.01% about casting, a baseline classifier for a similar dataset could be to say – “the article is about soccer”. This would most likely be a very strong baseline, and probably hard to beat for most heavy machinery classifiers.”

In many cases it’s desirable to run tests on unbalanced test sets. For example, imagine that you receive 2000 spam e-mails every day and 10 legitimate ones. You decide to install a machine learning spam filter. The spam filter requires training, so each day you train it on 2000 spam and 10 legitimate e-mails. This creates unbalanced training data for your classifier, and it’s extremely important that your spam filter doesn’t mark your legitimate e-mail as spam (it could be a matter of life, death, peace or war – literally).
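A quick back-of-the-envelope sketch of why plain accuracy is misleading on data like this: a “classifier” that blindly marks everything as spam already scores above 99% accuracy while throwing away every legitimate e-mail.

```python
spam, legit = 2000, 10   # e-mails per day, as in the example above

# A baseline that marks every message as spam:
accuracy = spam / (spam + legit)   # ~0.995 - looks impressive on paper
legit_lost = legit                 # but all 10 legitimate e-mails are gone

print(f"accuracy: {accuracy:.1%}, legitimate e-mails lost per day: {legit_lost}")
```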

Summary

  • Understand that corpuses are biased, and therefore so are the test results.
  • Use up-to-date corpuses if the classification domain is dynamic.
  • Make sure the test data is as representative as possible of the domain. For example, don’t trust test results from a spam corpus to apply to sentiment classification.
  • Prefer running tests on many different corpuses.
  • Run tests on the corpuses as they scale in size.
  • Make sure that the classifier is robust on unbalanced training data, especially when correct classifications can be a matter of life and death.

Gender Text Analysis

Do males and females express themselves differently in text? The answer is yes, if we look at the research carried out at the University of Texas: in the article “Effects of Age and Gender on Blogging” [1] it is found that author gender can be determined with an accuracy of 80% by looking at a text. This is achieved with a classifier trained on 37,478 blogs written by males and females at blogger.com.

Gender stereotypes in the blogosphere

The research also shows the most discriminating terms for males and females (using information gain).

Male favorite words

– linux
– microsoft
– gaming
– server
– software
– gb
– programming
– google
– data
– graphics
– india
– nations
– democracy
– users
– economic

Female favorite words

– shopping
– mom
– cried
– freaked
– pink
– cute
– gosh
– kisses
– yummy
– mommy
– boyfriend
– skirt
– adorable
– husband
– hubby

They conclude: “Male bloggers of all ages write more about politics, technology and money than do their female cohorts. Female bloggers discuss their personal lives – and use more personal writing style – much more than males do.”
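For those curious how such “most discriminating terms” can be ranked, here is a minimal sketch of information gain for a single term over a labeled corpus. The binary term-presence model and all names are our own simplification, not taken from the paper.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

def information_gain(term, documents, labels):
    """How much knowing whether `term` occurs reduces class uncertainty.

    documents: list of token lists; labels: parallel list of classes
    (e.g. "male"/"female").
    """
    with_term = [lab for doc, lab in zip(documents, labels) if term in doc]
    without_term = [lab for doc, lab in zip(documents, labels) if term not in doc]
    p = len(with_term) / len(labels)
    remainder = p * entropy(with_term) + (1 - p) * entropy(without_term)
    return entropy(labels) - remainder

# Toy example: "linux" perfectly separates the two classes here, so IG = 1 bit.
docs = [["linux", "server"], ["shopping", "cute"], ["linux", "gaming"], ["mom", "pink"]]
labs = ["male", "female", "male", "female"]
print(information_gain("linux", docs, labs))
```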

Try it on your blog

GenderAnalyzer.com uses the same approach as described in the article; they have collected 2000 blogs from blogger.com written by men and women. They also have a poll that lets us see how well it’s working – as we speak it shows an accuracy of 70%.

Trying this blog in the analyzer gives us the correct answer:

Results
We think http://blog.uclassify.com is written by a man.

[1] J. Schler, M. Koppel, S. Argamon and J. Pennebaker (2006), “Effects of Age and Gender on Blogging”, in Proc. of the AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, March 2006.

uClassify beta!

Today we are very pleased to announce the beta release of a new web service that allows everyone to access text classifiers for free. In short, through a web API (much like the Google Maps API), everyone can create and train their own classifiers.

Two sites using the API already exist – be inspired and come up with your own classifiers:

Typealyzer.com – Analyzes the personality of a blog author.

GenderAnalyzer.com – Figures out if a text is written by a man or woman.

During beta we will test the server for usability, stability, scalability and performance.

All comments and feedback are very much appreciated!

Best regards,

Jon Kågström, Roger Karlsson and Emil Kågström.