UrlAi.com – who are you?

UrlAi

We have created a new service called UrlAi.com. The basic concept is to run blog posts through a number of classifiers over time. To begin with we use Gender, Age, Mood and Tonality, but the system is dynamic, so we can add new classifiers at any time. If you have created a classifier that would fit on urlai.com, let us know!

Some ideas

We have many ideas for developing this project further. For example, right now we only show a summary pie chart; it would be nice to see posts over time. User feedback for online training and classifier improvement may also be possible. Another thing we could do is make classified posts searchable, for example letting users see the mood of everyone who mentioned ‘Avatar’.

Some kudos

We just want to thank the people who have been involved in this project: Roger Karlsson for coding, Johanna Forsman for the awesome logo and Mattias Östmar for sharing his Tonality and Mood classifiers. Mattias has also contributed many ideas around this, being the idea fountain he is 😀

Artificial Intelligence to determine an author’s age

Young and old people

We have just released ageanalyzer.com, a site that reads a blog and guesses the age of the author!

Background

Our writing style reflects us in many ways; for example, texts written in anger probably differ from words written in joy. Reading a text intuitively gives us clues about the author, as we start forming a picture in our heads. Sometimes it’s easy to pinpoint how we got this picture, and at other times it’s harder.

We wanted to know if we could give computers the same intuition. In this particular project we are interested in finding out whether a computer can tell the age of an author, given only a text.

For this experiment we collected 7000 blogs that had age information in the profile and split them into six age groups: 13-17, 18-25, 26-35, 36-50, 51-65 and 65+. We then created a classifier on uClassify and fed it the training data. Voilà!
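As a rough illustration of how such training data can be bucketed before feeding it to a classifier (a minimal sketch in Python; the input format and helper names are hypothetical, not our actual pipeline):

```python
# Minimal sketch: bucket blogs into the six age groups used as classes.
# The (author_age, blog_text) input format is an assumption for illustration.

AGE_GROUPS = [
    (13, 17, "13-17"),
    (18, 25, "18-25"),
    (26, 35, "26-35"),
    (36, 50, "36-50"),
    (51, 65, "51-65"),
    (66, 200, "65+"),
]

def age_to_group(age):
    """Map an author's age to one of the six class labels, or None if out of range."""
    for low, high, label in AGE_GROUPS:
        if low <= age <= high:
            return label
    return None

def build_training_set(blogs):
    """blogs: iterable of (author_age, blog_text) pairs.
    Returns a dict mapping class label -> list of training texts."""
    training = {label: [] for _, _, label in AGE_GROUPS}
    for age, text in blogs:
        label = age_to_group(age)
        if label is not None:
            training[label].append(text)
    return training
```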

Expected results

After running tests on the training data (10-fold cross-validation) it was clear that our classifier was able to find differences between the six age groups. We expect the proportion of correctly classified blogs to be around 30%, compared to a baseline of about 17% (one in six) that would be expected if the classifier were guessing out of the blue.

We have added a poll to the site to help us see how well (or poorly) it works!

Try AgeAnalyzer out here!

An experiment – predicting the stock market

During my three weeks of vacation, I had an interesting conversation with a company that has a bot that trades soccer bets. This inspired me to set up a classifier model that tries to predict the stock market.

Seeing into the future by looking at the past

My idea is to start with something that is really simple to implement and test – predicting the stock market tomorrow based on the history. I have no particular interest in stocks and the only thing I really know is that you should buy cheaper than you sell, hence this is the only thing the classifiers know as well =)

Google stock chart: what’s going to happen tomorrow?

Setup

For training data I’ve used historical stock prices downloaded from Yahoo Finance. I’ve then automatically created one classifier per stock (in total about 3100 classifiers) that, given today’s stock state, predicts tomorrow’s by inferring over historical data. From the 3100 classifiers (stocks) I will pick the top X that are most confident. This is done by evaluating the training data and picking those that historically would have worked best; it is the most time-consuming task and takes several hours to run with 10-fold cross-validation.
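As a rough sketch of how one stock’s price history could be turned into training examples of the kind described above (the coarse up/down/flat encoding and thresholds are my own illustrative choices, not necessarily the features actually used):

```python
# Sketch: turn a daily closing-price series for one stock into
# (today's state, tomorrow's label) training pairs.

def daily_move(prev_close, close):
    """Encode a day's price move as a coarse token (illustrative thresholds)."""
    change = (close - prev_close) / prev_close
    if change > 0.01:
        return "up"
    if change < -0.01:
        return "down"
    return "flat"

def training_pairs(closes):
    """closes: list of daily closing prices, oldest first.
    Yields (today's move token, label for what to do before tomorrow)."""
    moves = [daily_move(p, c) for p, c in zip(closes, closes[1:])]
    for today, tomorrow in zip(moves, moves[1:]):
        label = "buy" if tomorrow == "up" else "dont_buy"
        yield today, label

# A short synthetic price history, just to show the output shape.
prices = [100.0, 101.5, 101.4, 99.8, 100.9, 102.0]
for state, label in training_pairs(prices):
    print(state, "->", label)
```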

Evaluation

As I evaluate this project I will post predictions in this blog and follow up with the accuracy when the correct answers are known.

More to come!!

Donate your spam!

We are evaluating our next move and are running preliminary tests on spam comments (spaments?). We only have a few corpuses to test on, and it looks good on those (I’ll get back with exact performance figures later).

We want your blog comments for a good cause

Following our own guidelines, we are looking for more data to test on. If you have a WordPress installation, you can help us out as follows:

  1. Log into phpMyAdmin
  2. Select your WordPress database
  3. Click on the table ‘wp_comments’
  4. Click on ‘Export’
  5. Select the XML format
  6. Check ‘Save to file’ and click ‘Run’
  7. Attach the exported XML to an e-mail to contact AT uclassify DOT com

We will not publish any comments without asking for your permission first. Also, you will be credited with your name and blog when we return with the classifier results for your comments.

Thank you!

Classifier performance – Part II

In the first part I explained some guidelines to keep in mind when selecting a test corpus. In this part I will give a brief introduction to how to run tests on your corpuses. Given a corpus of labeled documents, how can it be used to determine a classifier’s performance? There are several ways; one of the simplest is to divide the corpus into two parts and use one part for training and the other for testing. However, how the corpus is divided affects the measured performance and is very likely to introduce bias, and there is also a great data loss (half the data is never used for training and the other half never for testing). We can do a lot better.

Leave one out cross validation (LOOCV)

A well-established technique is to train on all documents except one, which is left out and used for testing. This procedure is repeated so that every document is used for testing exactly once, and the performance is averaged over all the runs. An advantage of this method is that it uses almost the full corpus as training data (no data waste). The downside is that it’s expensive, as the training must be repeated as many times as there are documents. k fold cross validation addresses this problem by dividing the corpus into k piles instead.

k fold cross validation

Perhaps the most common way to run tests is k fold cross validation. The corpus is divided into k parts; k-1 parts are used for training and 1 part for testing. This is repeated k times so that every part of the corpus is used once for testing and k-1 times for training. 10 fold cross validation is a common choice. In that case you start by training the classifier on parts 2-10 and testing it on part 1, then training it on parts 1 and 3-10 and testing it on part 2, and so on. For every rotation the performance is measured, and when the tests have completed the performance is averaged. k fold cross validation gives a more robust performance measure, as every part is used as both training and test data.
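As an illustration of the procedure, here is a minimal sketch in Python, assuming generic train and test callbacks (placeholders, not the uClassify API):

```python
import random

def k_fold_cross_validation(documents, k, train, test):
    """documents: list of (text, label) pairs.
    train: function taking a list of (text, label) pairs and returning a model.
    test: function taking (model, list of (text, label)) and returning accuracy.
    Returns the accuracy averaged over the k rotations."""
    docs = documents[:]
    random.shuffle(docs)                    # avoid ordering bias
    folds = [docs[i::k] for i in range(k)]  # k roughly equal piles
    scores = []
    for i in range(k):
        test_fold = folds[i]
        train_folds = [d for j, fold in enumerate(folds) if j != i for d in fold]
        model = train(train_folds)
        scores.append(test(model, test_fold))
    return sum(scores) / k
```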

Remember from part one that it can be useful to vary the size of the corpus, scaling it from small to large, and to test on unbalanced data as well.

Summary

  • Don’t choose a test method just because it is simple; the results will probably fool you.
  • Use an established method, such as k fold cross validation or leave one out.
  • Always specify which method you used when reporting the results.

In the next part I’ll show how performance can actually be measured! Happy classifying until then!

Classifier performance – Part I

There are several different kinds of classifiers, to name a few: Naive Bayesian classifiers, Support Vector Machines, k Nearest Neighbor and Neural Networks. A crucial consideration when choosing a classifier is its performance: how well it classifies the data. There are several methods for measuring how well a classifier performs. In three parts I will try to give an idea of how to avoid common pitfalls.

Part I: Choosing test corpus
Part II: Running tests
Part III: Measuring the performance

What test corpus should I use? Use many!

This is perhaps the hardest part when trying to determine the performance of a classifier: every subset of data is a sample that is likely to be biased. Therefore you should always question what data (corpus) the tests were carried out on. For example, a classifier that reports high performance on a specific corpus is likely to perform differently on real-world data, and often worse: to avoid looking bad in comparison with other classifiers, classifiers are tuned for the test corpus, but this bias may degrade performance on other corpuses. Using many relevant corpuses helps prevent a classifier from becoming too narrowly specialized to one specific corpus.

In “The fallacy of corpus anti-spam evaluation”, Vipul explains why many anti-spam researchers’ results may not say much, since the test corpus is static. While researchers are busy measuring their performance on a corpus from 2005 (TREC 2005), spammers today have had three years to figure out how to fool their spam filters… I completely agree; it’s almost like:

– Look everyone, I’ve spent years inventing a highly accurate longbow to shoot down our nemesis Spam! It works really well back in my training yard!

– Did you not hear? Spam has evolved; it now comes in F-117 Nighthawk stealth fighters cloaked by deceiving words. Even if you could see it, an arrow couldn’t scratch it.

– Oh, dear...

Small or large test sets? Both!

The size of the test data is also vital; using too small a test set says more about how the classifier will perform during the training phase than after it. Using too large and rich a training set may invite overfitting (so much data that seemingly nonsensical patterns will fit your model). The best approach is to measure performance at different sizes, scaling the training set from a few documents up to the full set. Unfortunately this is often disregarded.

Hint: a benefit of measuring the performance as a function of corpus size is that you can predict how much training data is needed to reach a certain level of performance. Just project the performance curve beyond the size of the corpus you have.
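A minimal sketch of such a learning-curve measurement, using the same kind of placeholder train and test callbacks as in the cross validation sketch earlier (not a specific library):

```python
import random

def learning_curve(documents, sizes, train, test, held_out_fraction=0.2):
    """Measure accuracy as the training set grows.
    documents: list of (text, label) pairs.
    sizes: training-set sizes to try, e.g. [50, 100, 200, 400].
    train/test: placeholder callbacks, as in the cross validation sketch.
    Returns a list of (training size, accuracy) points to plot or extrapolate."""
    docs = documents[:]
    random.shuffle(docs)
    cut = int(len(docs) * held_out_fraction)
    test_set, pool = docs[:cut], docs[cut:]
    return [(n, test(train(pool[:n]), test_set)) for n in sizes]
```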

Balanced or unbalanced training sets? Both!

The test corpus can be heavily unbalanced, meaning that one of the classes is overrepresented and another is underrepresented in number of documents. As Amund Tveit points out on his blog:

“Quite frequently classification problems have to deal with unbalanced data sets, e.g. let us say you were to classify documents about soccer and casting (fishing), and your training data set contained about 99.99% soccer and 0.01% about casting, a baseline classifier for a similar dataset could be to say – “the article is about soccer”. This would most likely be a very strong baseline, and probably hard to beat for most heavy machinery classifiers.”

In many cases it’s desirable to run tests on unbalanced test sets. For example, imagine that you get 2000 spam e-mails every day and 10 legitimate ones. You decide to install a machine learning spam filter. The spam filter requires training, so each day you train it on 2000 spam and 10 legitimate e-mails. This creates unbalanced training data for your classifier, and it’s extremely important that your spam filter doesn’t mark your legitimate e-mail as spam (it could be a matter of life, death, peace or war, literally).
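To see why plain accuracy can be deceiving on such data, a quick back-of-the-envelope check using the numbers from the example above:

```python
# A "classifier" that blindly labels everything as spam looks very accurate
# on 2000 spam vs. 10 legitimate mails per day, yet loses every legitimate mail.
spam, legit = 2000, 10
majority_baseline = spam / (spam + legit)
print(f"Accuracy of always answering 'spam': {majority_baseline:.1%}")  # 99.5%
print("Legitimate mail correctly delivered: 0%")
```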

Summary

  • Understand that corpuses are biased, and therefore so are the test results.
  • Use up-to-date corpuses if the classification domain is dynamic.
  • Make sure the test data is as representative as possible of the domain. E.g. don’t trust test results from a spam corpus to apply to sentiment classification.
  • Prefer running tests on many different corpuses.
  • Run tests on the corpuses as they scale in size.
  • Make sure that the classifier is robust on unbalanced training data, especially when correct classifications can be a matter of life and death.

Gender Text Analysis

Do males and females express themselves differently in text? Yes, according to research carried out at the University of Texas. In the article “Effects of Age and Gender on Blogging” [1] it is found that author gender can be determined with an accuracy of 80% by looking at a text. This is achieved with a classifier trained on 37478 blogs written by males and females at blogger.com.

Gender stereotypes in the blogosphere

The research also lists the most discriminating terms for males and females (ranked by information gain).

Male favorite words

– linux
– microsoft
– gaming
– server
– software
– gb
– programming
– google
– data
– graphics
– india
– nations
– democracy
– users
– economic

Female favorite words

– shopping
– mom
– cried
– freaked
– pink
– cute
– gosh
– kisses
– yummy
– mommy
– boyfriend
– skirt
– adorable
– husband
– hubby

They conclude “Male bloggers of all ages write more about politics, technology and money than do their female cohorts. Female bloggers discuss their personal lives – and use more personal writing style – much more than males do.”
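For reference, here is a minimal sketch of how an information gain ranking like the one above can be computed for a single term over a two-class corpus (my own illustration of the standard formula; the counts in the example are made up):

```python
import math

def entropy(counts):
    """Shannon entropy of a class distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def information_gain(with_term, without_term):
    """with_term / without_term: (male_count, female_count) of documents that
    do / do not contain the term. Returns the information gain of splitting
    the corpus on the presence of the term."""
    n_with, n_without = sum(with_term), sum(without_term)
    n = n_with + n_without
    before = entropy((with_term[0] + without_term[0],
                      with_term[1] + without_term[1]))
    after = (n_with / n) * entropy(with_term) + \
            (n_without / n) * entropy(without_term)
    return before - after

# Made-up counts: a term appearing in 300 of 1000 male blogs and 40 of 1000
# female blogs. A higher value means a more discriminating term.
print(information_gain((300, 40), (700, 960)))
```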

Try it on your blog

GenderAnalyzer.com uses the same approach as described in the article; they have collected 2000 blogs from blogger.com written by men and women. They also have a poll that lets us see how well it’s working; as we speak it has an accuracy of 70%.

Trying this blog in the analyzer gives us the correct answer:

Results
We think http://blog.uclassify.com is written by a man.

[1] J. Schler, M. Koppel, S. Argamon and J. Pennebaker (2006), Effects of Age and Gender on Blogging, in Proc. of the AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, March 2006.

What is a text classifier?

A text classifier places documents into their relevant classes (categories): for example, placing spam in the spam folder or web pages about Artificial Intelligence into the AI category. There are different types of text classifiers; the one I will be addressing here is a machine learning one!

Training

To make the classifier understand where documents should go, you must first train it. During training you manually set up two or more classes (e.g. spam and legitimate) and describe each class by showing it typical documents. In the case of a spam classifier you would train the classifier on spam and legitimate documents, basically saying to Mrs. Classifier, “Hey, look at this bunch of documents, they are all spam!” after which you show her the legitimate documents: “and these are legitimate!”

By doing so the classifier learns characteristics for each class. This is called supervised training. The training documents are often referred to as the training corpus.

Classifying

Once a classifier has been trained, it can be used to find out which of the predefined classes a previously unseen document most likely belongs to. You ask Mrs. Classifier something like “To which of the classes (that I have trained you on) is this document most likely to belong?” She would then kindly answer something like “I am 96% certain that it should go into the spam folder.”

It’s not necessary to stop training a classifier when you start classifying. Training and classifying can take place at the same time.
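As a toy illustration of the train-then-classify idea described above (a minimal Naive Bayes style sketch, not the actual classifier behind uClassify):

```python
import math
from collections import Counter, defaultdict

class TinyClassifier:
    """A toy Naive Bayes text classifier: train on labeled texts,
    then ask which class an unseen text most likely belongs to."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # class -> word frequencies
        self.doc_counts = Counter()              # class -> number of documents

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in self.doc_counts:
            total_words = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label])
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) /
                                  (total_words + vocab + 1))
            scores[label] = score
        return max(scores, key=scores.get)

clf = TinyClassifier()
clf.train("cheap pills buy now", "spam")
clf.train("meeting agenda for tomorrow", "legitimate")
print(clf.classify("buy cheap pills"))  # -> spam
```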

Using our XML API you can communicate with “Mrs. Classifier”!