Using published classifiers

We’ve just implemented support for public classifiers, so everyone with a (free) uClassify account can access them.

Once a classifier is published, everyone can use it via the GUI or the web API, and in return authors get a link to their website from everyone who uses their classifiers. This should hopefully inspire more people to share their cool classifiers!

As an example of a published classifier, check out the mood classifier. Here is the list of all published classifiers.

More memory or smaller memories?

To build a classification server that can handle thousands of classifiers and process huge amounts of data, we knew we would eventually have to do some major optimizations. To avoid doing any work prematurely, we postponed all optimizations that required design changes or reduced code readability until we were absolutely sure where to improve.

When we ran our first test in May, our first bottleneck was obvious – the memory consumption of classifiers. It was really bad: raw classifier data expanded by a factor of about 5 in primary memory, so a tiny classifier of 1 MB would take 5 MB as soon as it was loaded. It was really easy to pinpoint the memory thieves.

Partners in crime – STL strings and maps

We were using STL maps to hold frequency distributions for tokens (features): each token was mapped to its frequency, as map&lt;string, unsigned int&gt;. This is a very convenient and straightforward way to do it, but the memory overhead is not very attractive.

VS2005 STL string memory overhead

The actual sizes of types vary between platforms and STL implementations (these numbers are from the STL that comes with VS2005 on 32 bit Windows XP).

Each string takes at least 32 bytes:

size_type _Mysize = 4 bytes (string size)
size_type _Myres = 4 bytes (reserve size)
_Elem _Buf = 16 bytes (internal buffer for strings shorter than 16 bytes)
_Elem* _Ptr = 4 bytes (pointer to strings that don’t fit in the buffer)
this* = 4 bytes (this pointer)

Best case, the overhead for an STL string is 16 bytes, when the internal buffer is filled exactly. Worst case – for empty strings or strings longer than 15 bytes – the overhead is 32 bytes. String overhead therefore varies from 16 to 32 bytes.

VS2005 STL map memory overhead

Each entry in a map consists of an STL pair – the key and value (first and second). A pair only has the memory overhead of the this pointer (4 bytes), plus whatever is inherited from the types it’s composed of. However, the map is a red-black tree of linked nodes. Each pair is stored in a node, and nodes have quite a heavy memory overhead:

_Genptr _Left = 4 bytes (points to the left subtree)
_Genptr _Parent = 4 bytes (pointer to parent)
_Genptr _Right = 4 bytes (points to the right subtree)
char _Color = 1 byte (the color of the node)
char _Isnil = 1 byte (true if node is head)

this* = 4 bytes (this pointer)

So there is an 18-byte overhead per node plus 4 bytes per pair, which sums up to 22 bytes.

Strings in maps

Now, inserting a string shorter than 16 bytes into a map&lt;string, unsigned int&gt; consumes 32+22+4=58 bytes. It could be even more if memory alignment kicks in for any of the allocations. In most cases this is perfectly fine and not even worth optimizing. In our case, though, a memory overhead factor of 5 was not acceptable. Our language classifier takes about 14 MB on disk and should not take much more when loaded into memory – instead it blew up to about 65 MB. As it consists of 43 languages with roughly 30000 unique words per class (language), it gets really bloated.

One solution

We needed to maintain the search and insertion speed of maps (time complexity O(log n)) but get rid of the overhead. Insertions are needed when classifiers are trained.

Maintaining search speed

Since we had already limited features to a maximum length of 32 bytes, we could use that information to create what we call memory lanes. A memory lane consists only of tokens of the same size, each followed by its frequency. In that manner we created 32 lanes: lane 1 with all tokens of size 1, lane 2 with all tokens of size 2, and so on. Tokens in memory lanes are sorted so we can use binary search.

Memory lane 1 could look like this (tokens of size 1, each followed by its frequency; the values are made up for illustration):

a 13  b 7  x 2

and memory lane 3 like this:

and 310  fox 3  the 542

By doing so we get rid of all the overhead while maintaining O(log n) search.

Maintaining insertion speed (almost)

Maps allow fast insertions in O(log n), so we keep an intermediate map for each memory lane. When a classifier is trained, new tokens go into the map, while the frequency of tokens that already exist in the memory lane is increased in place. When the training session is over, the intermediate maps are merged into their respective memory lanes. This can be done in O(n) and is the major penalty. Note that explicit sorting is never required, since maps are ordered. Another penalty occurs when both the map and the memory lane contain tokens – at that point two lookups can happen (first in the memory lane, and if the token doesn’t exist there, a search through the map is required).

This solution reduced memory consumption by a factor of 4–5, at the penalty of having to merge new training data into the memory lanes every now and then. This is perfectly fine for us, as training tends to decrease over time (the training data gets good enough) while classification increases.

A similar optimization for Java is described on the LingPipe blog.


As we suspected, most users who sign up find the threshold to get started too high, since it requires some programming to create and train classifiers. Therefore we have decided to add more GUI features that allow users to do all the API calls without any programming! Once classifiers are set up, developers can start building their web applications around them via the API.


All GUI-driven API calls will display the generated XML so that users can easily see what’s going on and copy the XML directly into their code (much like phpMyAdmin does with SQL queries).

We expect this to take a couple of weeks.

Donate your spam!

We are evaluating our next move and are running preliminary tests on spam comments (spaments?). We only have a few corpora to test on, and it looks good on those (I’ll get back with exact performance figures later).

We want your blog comments for a good cause

Following our own guidelines we are looking for more data to test on. If you have a WordPress installation you can help us out by:

  1. Log into phpMyAdmin
  2. Select your WordPress database
  3. Click on the table ‘wp_comments’
  4. Click on ‘Export’
  5. Select the XML format
  6. Check ‘Save to file’ and click ‘Run’
  7. Attach the exported XML to an e-mail to contact AT uclassify DOT com

We will not publish any comments without asking you for permission first. Also you will be credited with your name and blog when we return with the classifier results for your comments.

Thank you!

Developing the development

Since we released the beta version a couple of weeks ago we have seen a few websites pop up building on the uClassify technology. This is very encouraging for us! Right now we are trying to reach out to more users who want to use our classifier API.

We have spent a lot of time developing our service – making it parallel, robust, low on memory, fast and so on. This is what we are really good at. The remaining part, which is just as important – reaching out to users, advertising ourselves and being seen in the right places – is not our sharpest skill.

Besides writing this blog and posting the uClassify link on a couple of sites, we haven’t done much to show our muscles – yet! We thought that we could perhaps use our own API ourselves – that is probably an easier way to create some buzz! We have a couple of ideas to make ourselves seen (feel free to use these ideas yourself):

Build an Anti Spam Comment Plugin for WordPress?

We are quite confident that we could do really well here, as the classifier engine has shown really good results in Cactus Spam Filter. This would compete with, or be a good complement to, Akismet, Defensio and similar services. Is there anyone who needs another blog comment spam filter?

Build a Spam Blog Filter?

This seems to be a problem for many blog communities, and building a splog (spam blog) filter could get us some good attention. What would be really nice is if somebody could provide us with dynamic training data on splogs and blogs – then we could automate the training process and find the undetected spam! Anyone who wants to donate their spam? :)

Implement a JSON API for uClassify?

Building a JSON API would not only broaden our API (XML only right now), it would also let users access our classification service via Yahoo! Pipes. Yahoo! Pipes lets you combine different RSS feeds into one and use external web services (via JSON) – which is madly cool.

Language Detection – talar du svenska? (do you speak Swedish?)

We already have a language detection classifier (not published yet) that only needs training data refinement (removal of noise such as English words in the Filipino class). It supports 40 languages. This would be fairly simple and could give us some buzz.

Ideas, anyone?

Do you have any ideas? Let us know – or use the uClassify API to create your own classifier (a spam filter, language detection or whatever comes to your mind).