Trve or Emo?

Another interesting uClassify web application has seen the light: Trve VS Emo. It tests whether a site is “True Black Metal” or “Emotional”. The author, Albert Örwall, writes:

To “train” the trve-classification I’ve used lyrics by Norwegian black metal bands, such as Mayhem, Burzum and Darkthrone. The emo-classification is based on lyrics by emo bands like My Chemical Romance and Fall Out Boy…

I tested it with a hard rock blog I found randomly, Hard Rock Hideout, which proved to be 81% Trve (true black metal). I then tested this blog, which turned out to be 100% Emo 🙂

Is there any need for automatic music tagging?

This is really cool. Another cool thing would be a classifier trained on texts from all genres (hip-hop, country, soul etc.). That would not only be a fun way to test your blog – it could also be used for automatic lyric tagging (and hence track and album tagging). Does anyone know if there is any need for such a web service?

An experiment – predicting the stock market

During my three weeks of vacation, I had an interesting conversation with a company that runs a bot that trades soccer bets. This inspired me to set up a classifier model that tries to predict the stock market.

Seeing into the future by looking at the past

My idea is to start with something that is really simple to implement and test – predicting tomorrow’s stock market based on its history. I have no particular interest in stocks, and the only thing I really know is that you should buy cheaper than you sell – hence that is the only thing the classifiers know as well =)

Google Stock Chart

What’s going to happen tomorrow?

Setup

For training data I’ve used historical stock prices downloaded from Yahoo Finance. I’ve then automatically created one classifier per stock (about 3100 classifiers in total) that, given today’s stock state, predicts tomorrow’s by inferring over the historical data. From the 3100 classifiers (stocks) I will pick the top X that are most confident. This is done by evaluating the training data and picking those that historically would have worked best. This is the most time-consuming task – running 10-fold cross-validation takes several hours.
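The setup described above – one tiny per-stock model, ranked by 10-fold cross-validation so only the most confident stocks survive – can be sketched roughly as follows. This is a hypothetical reconstruction in plain Python (the real models are uClassify classifiers, not shown here), using a trivial day-direction transition model just to illustrate the ranking step:

```python
# Sketch: per-stock next-day-direction predictor, ranked by k-fold CV accuracy.
from collections import Counter

def directions(prices):
    """Convert a price series into a list of 'up'/'down' day labels."""
    return ['up' if b > a else 'down' for a, b in zip(prices, prices[1:])]

def train(pairs):
    """Count transitions (today's direction -> tomorrow's direction)."""
    counts = {'up': Counter(), 'down': Counter()}
    for today, tomorrow in pairs:
        counts[today][tomorrow] += 1
    return counts

def predict(model, today):
    """Predict the most frequent follow-up direction seen in training."""
    if not model[today]:
        return 'up'  # arbitrary fallback when no transitions were seen
    return model[today].most_common(1)[0][0]

def cross_validate(prices, folds=10):
    """k-fold CV accuracy of the transition model on one stock's history."""
    labels = directions(prices)
    pairs = list(zip(labels, labels[1:]))
    fold_size = max(1, len(pairs) // folds)
    correct = total = 0
    for i in range(0, len(pairs), fold_size):
        test, rest = pairs[i:i + fold_size], pairs[:i] + pairs[i + fold_size:]
        model = train(rest)
        for today, tomorrow in test:
            correct += predict(model, today) == tomorrow
            total += 1
    return correct / total if total else 0.0

# Rank stocks by CV accuracy and keep the top X most confident ones.
histories = {'TREND': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
             'NOISY': [5, 3, 6, 2, 7, 8, 1, 9, 2, 1, 10, 2]}
ranked = sorted(histories, key=lambda s: cross_validate(histories[s]), reverse=True)
top_x = ranked[:1]
```

The ticker names and the transition model are made up; the point is only that each stock gets its own model and that cross-validation on history decides which models to trust.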

Evaluation

As I evaluate this project I will post predictions in this blog and follow up with the accuracy when the correct answers are known.

More to come!!

Using published classifiers

We’ve just implemented support for everyone with a (free) uClassify account to access public classifiers.

Once a classifier is published, everyone can use it via the GUI or the web API, and in return authors get a link to their website from everyone who uses their classifiers. This should hopefully inspire more people to share their cool classifiers!

As an example of a published classifier check out the mood classifier by prfekt.se. Here is the list of all published classifiers.

Tutorial – Creating your own classifier

This is a brief tutorial on how to create your own classifier. I use the term “class” synonymously with “category”, and “classifier” with “categorizer”.

1. Determine the classifier domain

Before a classifier can start to classify, it needs to be created and trained. First you should ask yourself what you want the classifier to do: is it a spam filter? A news categorizer? Let’s assume it’s a news categorizer for this tutorial, so we create a classifier with the name ‘Example News Categorizer’.

Fig 1. Create the classifier

2. Define the relevant classes

Secondly, you need to define what classes your classifier should include. Choosing relevant classes is straightforward – just ask yourself which categories are relevant for the domain you have chosen. Once you have selected the classes you want the classifier to distinguish between, you create them. This is easy in our Graphical User Interface but can also be done via our web API. For our small example we create the following three classes: Science, Sports and Entertainment. You can create as many classes as you want.

Fig 2. Create the classes (categories)

You can also add and remove classes dynamically – so don’t worry if you aren’t 100% sure that you have included them all.

3. Collect training data

Before the classifier can start to categorize texts into the classes, we need to teach it what texts belonging to the different classes look like. This is the hardest part, as it requires you to collect actual training data. You can collect it from any source you find appropriate.

3.1 Amount of training data

It’s hard to generalize about the amount of text needed for a classifier to work, as it’s highly dependent on the domain. Simple domains, such as classifying the language of a text, only require a small amount, while harder problems, such as telling the difference between texts written by males and females, require much more training data. However, to test an idea I suggest at least 20 documents per category, with each document in the same format as those that will be classified later (e.g. a spam filter is trained on e-mails). 20 is the bare minimum – from there the classifier only gets more accurate.

For our news categorizer I collected 20 plain-text articles per class from random sources on the Internet.

3.2 Automate the collecting!

In some cases you can automate the data collection by finding trusted sources on the Internet. For example, for our news classifier I could jack into three RSS feeds for Science, Sports and Entertainment and gather the data automatically. Ahhh, no manual collecting!! Nice.
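A minimal sketch of that idea: pull one training document per feed item straight out of an RSS feed. The feed URLs below are hypothetical placeholders, and the parsing uses only the Python standard library:

```python
# Sketch: collect training documents for each class from an RSS feed.
import urllib.request
import xml.etree.ElementTree as ET

def training_texts_from_rss(rss_xml):
    """Extract one training document (title + description) per feed item."""
    root = ET.fromstring(rss_xml)
    docs = []
    for item in root.iter('item'):
        title = item.findtext('title', default='')
        description = item.findtext('description', default='')
        docs.append((title + ' ' + description).strip())
    return docs

def collect(feed_url):
    """Download a feed and return its items as training documents."""
    with urllib.request.urlopen(feed_url) as response:
        return training_texts_from_rss(response.read())

# Hypothetical per-class feeds; each class gets its own trusted source.
feeds = {'Science': 'http://example.com/science.rss',
         'Sports': 'http://example.com/sports.rss',
         'Entertainment': 'http://example.com/entertainment.rss'}
```

Title plus description is usually enough text for a quick experiment; for longer documents you would follow each item’s link and fetch the full article.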

4. Train the classifier

So you have collected training data in some form (perhaps text files on your hard drive, lists of URLs or some feeds) – now it’s time to train the classifier. This can be done manually in the GUI, or automated if you have some basic programming skills. For this tutorial I found 20 news articles per class and copied and pasted them manually into the GUI; it took me about 30 minutes.

Screenshot of training

Fig 3. Training the classifier via the GUI

4.1 Automate the training! (requires novice programming skills)

Training a classifier through the GUI can be cumbersome when large amounts of training data are involved. My suggestion is to create a small script in your favorite language that trains the classifier automatically. If your training data is lying around locally on your machine (perhaps automatically collected? =) you can just batch it into our web API. If you haven’t collected the training data yet, you could create a script that automatically collects it and trains the classifier with it!
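Such a training script might look something like this. The XML layout below is an assumption modelled on the kind of requests the GUI displays – copy the exact XML your own GUI session generates rather than trusting these element names:

```python
# Sketch: build one XML train call for a single training document.
# The schema (element names, namespace, version) is assumed, not authoritative.
import base64

def build_train_request(api_key, classifier, class_name, text):
    """Return the XML body for training one document into one class."""
    encoded = base64.b64encode(text.encode('utf-8')).decode('ascii')
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<uclassify xmlns="http://api.uclassify.com/1/RequestSchema" version="1.01">\n'
        '  <texts>\n'
        f'    <textBase64 id="doc1">{encoded}</textBase64>\n'
        '  </texts>\n'
        f'  <writeCalls writeApiKey="{api_key}" classifierName="{classifier}">\n'
        f'    <train id="train1" textId="doc1" className="{class_name}"/>\n'
        '  </writeCalls>\n'
        '</uclassify>\n'
    )

# One call per collected document; POSTing the XML to the API endpoint is
# left out here -- check the GUI for the URL to send it to.
request = build_train_request('MY-WRITE-KEY', 'Example News Categorizer',
                              'Science', 'The probe sent back new images...')
```

Loop this over every file you collected and the whole training run becomes a single script instead of 30 minutes of copy-and-paste.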

5. Start classifying

This is the fun part: once you have created your classifier you can start to use it. You can always test it in our GUI. Furthermore, you can (and should) build your own web site around it via our web API – providing the world with more semantics and cool classifications that have never been seen before! Also, remember that you can use your classifiers commercially and make money from them!

I’ve published the example classifier. Don’t expect it to work perfectly – it has only been trained on 20 articles per class! Test it here – Example News Categorizer
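For completeness, here is a matching sketch of a classification call over the web API. As with the training sketch, the element names are assumptions based on the XML the GUI displays – verify them against what your GUI actually generates:

```python
# Sketch: build one XML classify call for a single text.
# The schema (element names, namespace, version) is assumed, not authoritative.
import base64

def build_classify_request(api_key, classifier, text):
    """Return the XML body for classifying one text with one classifier."""
    encoded = base64.b64encode(text.encode('utf-8')).decode('ascii')
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<uclassify xmlns="http://api.uclassify.com/1/RequestSchema" version="1.01">\n'
        '  <texts>\n'
        f'    <textBase64 id="doc1">{encoded}</textBase64>\n'
        '  </texts>\n'
        f'  <readCalls readApiKey="{api_key}">\n'
        f'    <classify id="classify1" classifierName="{classifier}" textId="doc1"/>\n'
        '  </readCalls>\n'
        '</uclassify>\n'
    )

request = build_classify_request('MY-READ-KEY', 'Example News Categorizer',
                                 'The team won the championship last night.')
```

The response contains one probability per class; your web site would simply pick the class with the highest value.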

Summary

  • Find out what you want to classify and create a classifier
  • Define and create the categories
  • Collect training data for each category
  • Train each category on the gathered data
  • Build a really cool web site around it!

What’s your mood?

Today, 2 months after our launch, our users have created over 200 classifiers. Most are unpublished and under construction. PRfekt, the team behind the popular Typealyzer, recently published a new classifier that determines the mood of a text – whether a text is happy or upset. You can try it for yourself here!

So let’s test some snippets!

Jamis is (justly) upset and writes:

Is anyone else annoyed by the “just speak your choice” automation in so many telephone menus? I feel like an idiot mumbling “YES!” or “CHECK BALANCE!” into my phone. Maybe it’s the misanthrope in me coming to the front, but I’d much rather push buttons than talk to a pretend person.

The mood classifier says 98.1% upset.

Spam is no fun either, or as Ed-Anger notes:

“I’m madder than a rooster in an empty hen house at Internet spammers and I won’t take it anymore. Those creeps clutter up my e-mail with their junk, everything from penis enlargement pills to some lady telling me she’ll give me a million dollars if I’ll help her get her money out of Africa. “Rush me 10 grand quick as possible and we’ll get the whole thing started,” she says.”

The mood classifier says 97.0% upset.

Now over to some happy blogs; amour-amour has a confession:

“I love my iphone in a way I never thought possible!! When my fiance got his and spent 23 hours gazing at it lovingly, uploading (or is it downloading??) apps and buying accessories for it I put it down to him just being a technology geek.”

The mood classifier says 79.8% happy.

Finally, Nitwik Nastik comments on a Ricky Gervais routine:

“This is a hilarious stand-up routine by British Comedian Ricky Gervais on Bible and Creationism. It’s really funny how he ridicules the creationist stories from the book of Genesis (the book of genesis can be found here)and point out to it’s obvious logical blunders. Sometimes it may be difficult to understand his accent and often he will make some funny comments under his breath, so try to listen carefully.”

The mood classifier says 69.7% happy.

The author recommends at least two hundred words (more text than my samples), which seems reasonable!

GenderAnalyzer thoughts

First, thanks to everyone who is testing GenderAnalyzer – we have had incredible feedback. We have received emails from many people who are fascinated, and a few who think it sucks =) GenderAnalyzer is still generating a lot of traffic and people are blogging about it.

Our learnings

Determining the gender of an author is not easy; besides the classification itself, there is a chain of technical events that must work in order to get a reliable result. As many of you have noticed, the accuracy has dropped to 53%, which is far lower than expected based on our tests. There may be several reasons for this low accuracy, and I will mention some of them here.

  • Our training data of 2000 blogs is automatically collected from Blogspot. Running internal tests (10-fold cross-validation) on this data gives us an accuracy of 75%. This effectively means: “Given that the corpus is a perfect representation of real-world data, the classifier will give any real-world data the correct label 75% of the time.” So our training data is probably not very representative – as a matter of fact it’s very stereotypical (see for yourself here). Using data from all kinds of sources should give us a better model.
  • When someone tests a blog, we are not crawling through its posts to get a good amount of text. We only hit the given URL and use the text (and HTML) that appears there as test data, so a page with mostly images or frames will give bad test data. Does anyone know a nice library that, given a URL, crawls blog posts? Via RSS perhaps?
  • We are trying to encode the test data to UTF-8, which is the format of the training data – it could be that we are missing some encodings.
  • And of course – perhaps the difference between male and female writing simply isn’t significant?

What’s next?

We are currently collecting a new set of training data that is much more representative. We will switch to this classifier during the next week and start a new poll for it. It’s going to be very exciting!

Spam, huh?

We are currently working on a prototype to identify spam blogs – splogs. Spam blogs can be really tricky to identify, even to the human eye, as i-trepreneur.com writes in a recent post:

Why? These Splogs are user friendly. They were not made for search engines but for real visitors. There’s excellent design, well organized sections, working RSS feed. All the information on such Splogs is manually selected from the most popular resources on the net and is properly referenced. Only fresh content is used so it is not identified as duplicate instantly.

The post points out that madconomist dot com and business-opportunities dot biz are two well-made splogs that people are commenting on and linking to. I can’t tell by just looking at them with my bare eyes – so is it spam, huh? More on that philosophical aspect in a later post!

A prototype

We have set up a prototype to identify spam blogs. Right now it’s really rudimentary, but it shows potential. In the future, by using clusters of classifiers hosted here at uClassify, we think we can create a sufficiently good splog classifier.

Check out the project here, www.spamhuh.com. Remember that it’s only an early prototype!

As for the two hard-to-detect spam blogs above, spamhuh.com is able to correctly identify one of them :)

Try it out and let us know what you think!!

Everybody can classify

Creating your own classifiers has never been easier: we have developed a Click’n’Classify Graphical User Interface (GUI). This means that you can manually create and train your classifiers without knowing any programming at all. This is a very good way to test an idea – if the classifier works well, build your web site around it or use it for whatever purpose you like.

The GUI allows you to do everything that you can do via our Application Programming Interface (API). Also, just like phpMyAdmin shows the SQL queries, our uClassify GUI will show the XML queries so you can easily understand and use the API from your site.

Features

  • Create and remove classifiers
  • Add and remove classes
  • Train and untrain classes
  • See basic information about your classifiers

Screenshot – Create a classifier

This screenshot shows how it looks when you are about to create a classifier – just log in and try it yourself!

Creating a classifier is easy

Screenshot – Training a classifier

Just copy and paste the texts you want to use as training data.

Training a classifier is easy

Happy classifying!

Click’n’classify

As we suspected, most users who sign up find the threshold to get started too high, as it requires some programming to create and train classifiers. Therefore we have decided to add more GUI features that allow users to make all the API calls without any programming! Once the classifiers are set up, developers can start building their web applications around them via the API.

Copy’n’code

All GUI-driven API calls will display the generated XML so that users can easily see what’s going on and copy the XML directly into their code (much like phpMyAdmin does with SQL queries).

We expect this to take a couple of weeks.

Developing the development

Since we released the beta version a couple of weeks ago, we have seen a few websites pop up built on the uClassify technology. This is very encouraging for us! Right now we are trying to reach out to more users who want to use our classifier API.

We have spent a lot of time developing our service – making it parallel, robust, low on memory, fast, etc. This is what we are really good at. The remaining part, which is just as important – reaching out to users, advertising ourselves and being seen in the right places – is not our sharpest skill.

Besides writing this blog and posting the uClassify link on a couple of sites, we haven’t done much to show our muscles – yet! We thought that we could perhaps use our own API ourselves – that is probably an easier way to create some buzz! We have a couple of ideas to make ourselves seen (feel free to use these ideas yourself):

Build an Anti Spam Comment Plugin for WordPress?

We are quite confident that we could do really well, as the classifier engine has shown really good results in Cactus Spam Filter. This would compete with, or be a good complement to, Akismet, Defensio and similar services. Is there anyone who needs another blog comment spam filter?

Build a Spam Blog Filter?

This seems to be a problem for many blog communities; building a splog (spam blog) filter could give us some good attention. What would be really nice is if somebody could provide us with dynamic training data on splogs and blogs – then we could automate the training process and find the undetected spam! Anyone want to donate their spam? :)

Implement a JSON API for uClassify?

Building a JSON API would not only broaden our API (we only have an XML API right now), it would also let users use our classification service via Yahoo! Pipes. Yahoo! Pipes lets you combine different RSS flows into one and use external web services (via JSON) – which is madly cool.

Language Detection – talar du svenska?

We already have a language detection classifier (not published yet) that only needs training data refinement (removal of noise such as English words in the Filipino class). It supports 40 languages. This would be fairly simple to finish and could give us some buzz.

Ideas, anyone?

Do you have any ideas? Let us know – or use the uClassify API to create your own classifier (spam filter, language detector, or whatever comes to your mind).