Discourse classifier

We have added a new classifier that can determine the discourse type of a text. For example, it can distinguish questions from answers, and tell whether an answer expresses agreement or disagreement. It even tries to detect humor in the text. The classes are listed below.

  • Agreement
  • Announcement
  • Answer
  • Appreciation
  • Disagreement
  • Elaboration
  • Humor
  • Negative_reaction
  • Other
  • Question

Since long texts often have mixed discourse, containing questions, answers, elaborations, humor and so on, it may make sense to split the text and pass single sentences or phrases for classification.
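
For illustration, here is a minimal Python sketch of that approach. It assumes the JSON classify endpoint and a read API key; the classifier name “Discourse” is a placeholder:

import re
import requests

READ_KEY = "YOUR_READ_API_KEY"  # your uClassify read key
# Assumed endpoint layout; "Discourse" is a placeholder classifier name.
URL = "https://api.uclassify.com/v1/uClassify/Discourse/classify"

text = "What time is it? I think it is around noon. Haha, good one!"
# Naive split on sentence-ending punctuation followed by whitespace.
sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

response = requests.post(
    URL,
    json={"texts": sentences},
    headers={"Authorization": "Token " + READ_KEY},
)
print(response.json())  # one classification per sentence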

It’s based on the dataset from the paper “Characterizing Online Discussion Using Coarse Discourse Sequences” (ICWSM ’17). The dataset is built from annotated Reddit comments.

Spanish, French and Swedish classifier languages

During the last half year the Sentiment classifier has been beta enabled for Spanish, French and Swedish. The test period has been very successful and we have decided to expand multi-language support to more popular classifiers such as the Gender Analyzer, Mood and Myers Briggs classifiers.

Classifiers with multiple languages have flags displayed like the icons above. From the GUI you can test them by clicking the flag first; from the API you simply add the language code (/es, /fr, /sv) to the request URL. For more information, see the documentation.
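
For example (a sketch; see the documentation for the exact URL format and where the language code goes):

English (default): https://api.uclassify.com/v1/uClassify/Sentiment/classify
Spanish: https://api.uclassify.com/v1/uClassify/Sentiment/es/classify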

The service is still in beta, as we need to make sure it scales as more users start to use it. The API will probably not change.

New XML text element

Our XML API has been around since the release of uClassify back in 2008. It’s very flexible and powerful. Previously, to avoid breaking the XML, all texts passed needed to be base64 encoded in the <textBase64> element. With this release we introduce the <text> element, which doesn’t require base64 encoding. The <textBase64> element is of course still supported.

The new <text> element can take plain text. This saves some bandwidth, improves performance and makes the API easier to use. The string needs to be XML encoded so it doesn’t break the XML. Most languages have support functions for this; look for “escape XML” or similar. Basically it replaces 5 characters (<, >, &, ' and ") with their encodings (&lt; etc.).

<text>I love new features &amp; would like to see more in the future</text>

<textBase64>SSBsb3ZlIG5ldyBmZWF0dXJlcyAmIHdvdWxkIGxpa2UgdG8gc2VlIG1vcmUgaW4gdGhlIGZ1dHVyZQ==</textBase64>
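
As an illustration, in Python the escaping step could use the standard library; xml.sax.saxutils.escape replaces &, < and > by default, and the two quote characters are passed explicitly:

from xml.sax.saxutils import escape

raw = 'I love new features & would like to see "more" in the future'
# escape() handles &, < and > by default; quotes are added via the entities dict.
safe = escape(raw, {"'": "&apos;", '"': "&quot;"})
print("<text>" + safe + "</text>")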

The new <text> element makes the implementation of our next big feature easier… 😉

IAB Taxonomy V2

The Interactive Advertising Bureau (IAB) released version 2 of its taxonomy on the 1st of March 2017. The new taxonomy contains more topics than the old one and has gone through a general overhaul to make it clearer.

We have built a new classifier, IAB Taxonomy V2, that conforms to the latest standard.

The new ‘Content’ category has been left out, but you can get the content language by calling our Language Detector.

Any feedback is appreciated and we may add more training data if necessary.

Class name format

The new taxonomy has up to 4 tiers, and this is reflected in the class names. The format of the class names is level1_leaf_id1_id2_id3_id4, where the ids are integers corresponding to the IAB codes.
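
For example, a hypothetical tier-2 class name (ids invented purely for illustration) could look like Automotive_Auto Body Styles_1_2; consult the official IAB mapping for the real codes.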

You can read more about the taxonomy at their homepage where you also can find the complete id mapping.

Classifier accuracy improvements

We have updated some of our most popular classifiers to give better results.

Our most popular classifier for sentiment has been updated to give better performance. The major difference is that the data has gone through a cleaning pass, removing non-English texts (noise). Combined with a slightly improved feature extractor and optimized data, we can expect better accuracy.

We’ve also updated a number of other popular classifiers to use the new feature extractor. The result is better accuracy.

We might update more classifiers in the future.

Language Detector for 370+ major and rare languages

We have constructed a language detector covering 374 languages.

It can detect both living and extinct languages (e.g. English and Tupí), identify ancient and constructed ones (e.g. Latin and Klingon) and even distinguish different dialects.

Each language class has been named with its English name followed by an underscore and the corresponding ISO 639-3 three-letter code. E.g.

  • Swedish_swe
  • English_eng
  • Chinese_zho
  • Mesopotamian Arabic_acm

You can try it here; it needs a few words to make accurate detections.

Some of the rare languages (about 30) may have insufficient training data. The idea is to improve the classifier as more documents are gathered. We may also add more languages in the future, so make sure your code can handle that.
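
Since each class name follows the Name_iso pattern, splitting on the last underscore keeps your code robust as languages are added; a small Python sketch:

def parse_language_class(class_name):
    # "Mesopotamian Arabic_acm" -> ("Mesopotamian Arabic", "acm")
    name, iso_639_3 = class_name.rsplit("_", 1)
    return name, iso_639_3

print(parse_language_class("Swedish_swe"))
print(parse_language_class("Mesopotamian Arabic_acm"))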

Here is the full list of supported languages:

Language Name ISO 639-3 Type
Abkhazian abk living
Achinese ace living
Adyghe ady living
Afrihili afh constructed
Afrikaans afr living
Ainu ain living
Akan aka living
Albanian sqi living
Algerian Arabic arq living
Amharic amh living
Ancient Greek grc historical
Arabic ara living
Aragonese arg living
Armenian hye living
Arpitan frp living
Assamese asm living
Assyrian Neo-Aramaic aii living
Asturian ast living
Avaric ava living
Awadhi awa living
Aymara aym living
Azerbaijani aze living
Balinese ban living
Bambara bam living
Banjar bjn living
Bashkir bak living
Basque eus living
Bavarian bar living
Baybayanon bvy living
Belarusian bel living
Bengali ben living
Berber ber living
Bhojpuri bho living
Bishnupriya bpy living
Bislama bis living
Bodo brx living
Bosnian bos living
Breton bre living
Bulgarian bul living
Buriat bua living
Burmese mya living
Catalan cat living
Cebuano ceb living
Central Bikol bcl living
Central Huasteca Nahuatl nch living
Central Khmer khm living
Central Kurdish ckb living
Central Mnong cmo living
Chamorro cha living
Chavacano cbk living
Chechen che living
Cherokee chr living
Chinese zho living
Choctaw cho living
Chukot ckt living
Church Slavic chu ancient
Chuvash chv living
Coastal Kadazan kzj living
Cornish cor living
Corsican cos living
Cree cre living
Crimean Tatar crh living
Croatian hrv living
Cuyonon cyo living
Czech ces living
Danish dan living
Dhivehi div living
Dimli diq living
Dungan dng living
Dutch nld living
Dutton World Speedwords dws constructed
Dzongkha dzo living
Eastern Mari mhr living
Egyptian Arabic arz living
Emilian egl living
English eng living
Erzya myv living
Esperanto epo constructed
Estonian est living
Ewe ewe living
Extremaduran ext living
Faroese fao living
Fiji Hindi hif living
Finnish fin living
French fra living
Friulian fur living
Fulah ful living
Gagauz gag living
Galician glg living
Gan Chinese gan living
Ganda lug living
Garhwali gbm living
Georgian kat living
German deu living
Gilaki glk living
Gilbertese gil living
Goan Konkani gom living
Gothic got ancient
Guarani grn living
Guerrero Nahuatl ngu living
Gujarati guj living
Gulf Arabic afb living
Haitian hat living
Hakka Chinese hak living
Hausa hau living
Hawaiian haw living
Hebrew heb living
Hiligaynon hil living
Hindi hin living
Hmong Daw mww living
Hmong Njua hnj living
Ho hoc living
Hungarian hun living
Iban iba living
Icelandic isl living
Ido ido constructed
Igbo ibo living
Iloko ilo living
Indonesian ind living
Ingrian izh living
Interlingua ina constructed
Interlingue ile constructed
Iranian Persian pes living
Irish gle living
Italian ita living
Jamaican Creole English jam living
Japanese jpn living
Javanese jav living
Jinyu Chinese cjy living
Judeo-Tat jdt living
K’iche’ quc living
Kabardian kbd living
Kabyle kab living
Kadazan Dusun dtp living
Kalaallisut kal living
Kalmyk xal living
Kamba kam living
Kannada kan living
Kara-Kalpak kaa living
Karachay-Balkar krc living
Karelian krl living
Kashmiri kas living
Kashubian csb living
Kazakh kaz living
Kekchí kek living
Keningau Murut kxi living
Khakas kjh living
Khasi kha living
Kinyarwanda kin living
Kirghiz kir living
Klingon tlh constructed
Kölsch ksh living
Komi kom living
Komi-Permyak koi living
Komi-Zyrian kpv living
Kongo kon living
Korean kor living
Kotava avk constructed
Kumyk kum living
Kurdish kur living
Ladin lld living
Ladino lad living
Lakota lkt living
Lao lao living
Latgalian ltg living
Latin lat ancient
Latvian lav living
Laz lzz living
Lezghian lez living
Láadan ldn constructed
Ligurian lij living
Lingala lin living
Lingua Franca Nova lfn constructed
Literary Chinese lzh historical
Lithuanian lit living
Liv liv living
Livvi olo living
Lojban jbo constructed
Lombard lmo living
Louisiana Creole lou living
Low German nds living
Lower Sorbian dsb living
Luxembourgish ltz living
Macedonian mkd living
Madurese mad living
Maithili mai living
Malagasy mlg living
Malay zlm living
Malay msa living
Malayalam mal living
Maltese mlt living
Mambae mgm living
Mandarin Chinese cmn living
Manx glv living
Maori mri living
Marathi mar living
Marshallese mah living
Mazanderani mzn living
Mesopotamian Arabic acm living
Mi’kmaq mic living
Middle English enm historical
Middle French frm historical
Min Nan Chinese nan living
Minangkabau min living
Mingrelian xmf living
Mirandese mwl living
Modern Greek ell living
Mohawk moh living
Moksha mdf living
Mon mnw living
Mongolian mon living
Morisyen mfe living
Moroccan Arabic ary living
Na nbt living
Narom nrm living
Nauru nau living
Navajo nav living
Neapolitan nap living
Nepali npi living
Nepali nep living
Newari new living
Ngeq ngt living
Nigerian Fulfulde fuv living
Niuean niu living
Nogai nog living
North Levantine Arabic apc living
North Moluccan Malay max living
Northern Frisian frr living
Northern Luri lrc living
Northern Sami sme living
Norwegian nor living
Norwegian Bokmål nob living
Norwegian Nynorsk nno living
Novial nov constructed
Nyanja nya living
Occitan oci living
Official Aramaic arc ancient
Ojibwa oji living
Old Aramaic oar ancient
Old English ang historical
Old Norse non historical
Old Russian orv historical
Old Saxon osx historical
Oriya ori living
Orizaba Nahuatl nlv living
Oromo orm living
Ossetian oss living
Ottoman Turkish ota historical
Palauan pau living
Pampanga pam living
Pangasinan pag living
Panjabi pan living
Papiamento pap living
Pedi nso living
Pennsylvania German pdc living
Persian fas living
Pfaelzisch pfl living
Picard pcd living
Piemontese pms living
Pipil ppl living
Pitcairn-Norfolk pih living
Polish pol living
Pontic pnt living
Portuguese por living
Prussian prg living
Pulaar fuc living
Pushto pus living
Quechua que living
Quenya qya constructed
Romanian ron living
Romansh roh living
Romany rom living
Rundi run living
Russia Buriat bxr living
Russian rus living
Rusyn rue living
Samoan smo living
Samogitian sgs living
Sango sag living
Sanskrit san ancient
Sardinian srd living
Saterfriesisch stq living
Scots sco living
Scottish Gaelic gla living
Serbian srp living
Serbo-Croatian hbs living
Seselwa Creole French crs living
Shona sna living
Shuswap shs living
Sicilian scn living
Silesian szl living
Sindarin sjn constructed
Sindhi snd living
Sinhala sin living
Slovak slk living
Slovenian slv living
Somali som living
South Azerbaijani azb living
Southern Sami sma living
Southern Sotho sot living
Spanish spa living
Sranan Tongo srn living
Standard Latvian lvs living
Standard Malay zsm living
Sumerian sux ancient
Sundanese sun living
Swabian swg living
Swahili swa living
Swahili swh living
Swati ssw living
Swedish swe living
Swiss German gsw living
Tagal Murut mvv living
Tagalog tgl living
Tahitian tah living
Tajik tgk living
Talossan tzl constructed
Talysh tly living
Tamil tam living
Tarifit rif living
Tase Naga nst living
Tatar tat living
Telugu tel living
Temuan tmw living
Tetum tet living
Thai tha living
Tibetan bod living
Tigrinya tir living
Tok Pisin tpi living
Tokelau tkl living
Tonga ton living
Tosk Albanian als living
Tsonga tso living
Tswana tsn living
Tulu tcy living
Tupí tpw extinct
Turkish tur living
Turkmen tuk living
Tuvalu tvl living
Tuvinian tyv living
Udmurt udm living
Uighur uig living
Ukrainian ukr living
Umbundu umb living
Upper Sorbian hsb living
Urdu urd living
Urhobo urh living
Uzbek uzb living
Venda ven living
Venetian vec living
Veps vep living
Vietnamese vie living
Vlaams vls living
Vlax Romani rmy living
Volapük vol constructed
Võro vro living
Walloon wln living
Waray war living
Welsh cym living
Western Frisian fry living
Western Mari mrj living
Western Panjabi pnb living
Wolof wol living
Wu Chinese wuu living
Xhosa xho living
Xiang Chinese hsn living
Yakut sah living
Yiddish yid living
Yoruba yor living
Yue Chinese yue living
Zaza zza living
Zeeuws zea living
Zhuang zha living
Zulu zul living

Attribution

The classifier has been trained by reading texts in many different languages. Finding high-quality, non-noisy texts is really difficult. Many thanks to

  1. Wikipedia, which exists in so many languages
  2. Tatoeba, which is a great resource for clean sentences in many languages

New account limits

We have updated the daily quota limits for the different accounts. If you are already subscribed to an account (before the 3rd of March 2017), this will not affect you.

The adjustments were made after looking at our users’ statistics. Previously, the Indie and Professional accounts had the same rate limits but at different prices. The new model is a ladder from Indie to Professional, with a growing discount the higher you get.

This also affected our translation API, which now gives you 40 characters per call instead of 10.

The new pricing can be found here. We will see how it works out over the coming weeks; we might have to make some adjustments.

The feature extractor update

With the latest version of uClassify we have abandoned the old feature extractor for user-trained classifiers. This means that you will get better performance for all new classifiers created after 2017-03-26. Already created classifiers won’t be affected.

Background

In the beginning, around 2007, I thought it would be best to let users do the preprocessing and use a really simple feature extractor on the server. This feature extractor only separated words by the space character (decimal 32). The idea was that users could preprocess their texts to introduce more delimiters (e.g. replacing ‘!’ with a space on their side). Users could also generate their own bigrams by joining words with an underscore or similar.

Classifiers made by uClassify have used other feature extractors; e.g. the sentiment classifier uses unigrams, bigrams, some stemming and lower-casing. This improves the performance significantly.

Now, it’s not easy for someone who is new to machine learning and text classification to guess which delimiters to use. It can also be very counterintuitive. For that reason, I decided to replace the old unigram feature extractor with a new high-performance general-purpose extractor.

The new feature extractor

The new feature extractor has been found heuristically by running extensive tests over a large set of corpora. The datasets are a part of our internal testing suite that contains over 80 different test sets for a wide range of problems. Therefore I am very confident that the new feature extractor will do really well.

In short, here is what it does:

  • convert the text to lower case
  • separate words on whitespace, exclamation marks, parentheses, periods and slashes
  • generate unigrams
  • generate bigrams
  • generate parts of long unigrams

This is similar to what the Sentiment classifier has been using, which allows it to differentiate between “I don’t like” and “I like”.
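
As a rough illustration (a toy sketch, not the server implementation; the exact handling of “parts of long unigrams” is an assumption here), the steps could look like this in Python:

import re

def extract_features(text):
    text = text.lower()  # convert the text to lower case
    # separate on whitespace, exclamation marks, parentheses, periods and slashes
    words = [w for w in re.split(r"[\s!().\/]+", text) if w]
    features = list(words)                                       # unigrams
    features += [a + "_" + b for a, b in zip(words, words[1:])]  # bigrams
    features += [w[:6] for w in words if len(w) > 6]             # parts of long unigrams (assumed prefixes)
    return features

print(extract_features("I don't like slow feature extractors!"))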

Let me know if you have any feedback; you can reach me at contact AT uclassify DOT com or @jonkagstrom on Twitter!

Analyze your uClassify results with Excel

Did you know there’s a way to classify texts without having to leave Excel? We have paired up with SeoTools for Excel, a Swiss army knife of an Excel plugin, which offers a tailored “Connector” for all uClassify users.

In this blog post, we will show how SeoTools allows you to classify lists of texts or URLs with the classifiers of your choice and have the results ready for analysis in a matter of seconds.

Don’t be worried if your Excel spreadsheet doesn’t look like the example above. The extra ribbon tab “SeoTools” is added when SeoTools for Excel is installed. At the end of this post you will find all the links necessary to set up your uClassify account.

Selecting a classifier

The uClassify Connector is, as the name suggests, connected to the uClassify library. Clicking on “Select” opens a window listing all available classifiers. It is also possible to choose the input type (Text or URL) and whether the results should include classification and probability.

When you are satisfied with your settings, click “Insert”, and SeoTools will generate the data in columns A and onwards.

Save time and automate the process

Exporting and filtering Excel data from web-based platforms takes time, especially if it’s required on a daily or weekly basis. Manually filtering standardized files is also prone to human error. SeoTools solves this with saving and loading of “Configurations”:

Next time, just load a previous configuration and you will get classifications based on the same settings as last time.

Use Formula Mode to supercharge your classification

The beauty of combining uClassify with Excel is the ability to create large numbers of requests automatically. Instead of populating cells with values, select “Formula” before inserting the results:

Next, you can change the formula to reference a cell and the uClassify Connector will generate results based on the value or text in that cell.

In the following example, company A has been mentioned 100 times on Twitter in the last week and we want to determine the text Language and Sentiment for these tweets.

First, select the Text Language classifier and enter a random character in the Input field (we will change this in the formula to reference the tweets). Also, don’t forget to select “Exclude headers in result” since we only want the values for each row.

When the formula has been inserted in cell C2, change the input “y” to B2, and SeoTools will return the language with the highest probability. Repeat the same steps for the Sentiment classifier, but insert it in cell D2. It should look like this:

To get the results for all rows, select cells C2 and D2 and drag the formula down, and SeoTools will generate the classifications for all tweets. In the example below, we’ve started on row 16 to illustrate the results:

Do you want to try it with your uClassify account?

  • Sign up for a 14-Day Trial and follow the instructions to download and install the latest version of SeoTools.

  • Register your access key under “Upgrade to Pro” and access uClassify in the Connectors menu:

  • Next, go to API keys in the top menu of your uClassify account and copy the Read key.

  • Finally, copy your API key and paste it in the “Options” menu:

The complete documentation of the uClassify Connector features can be found here.

If you have any questions, feedback, or suggestions about ways to improve the Connector, please contact victor@seotoolsforexcel.com.

About the translation algorithm

A brief introduction to our machine translation algorithm

We have implemented statistical machine translation (SMT). SMT is completely data-driven: it works by calculating word and phrase probabilities from a large corpus. We have used OPUS and Wiktionary as our primary sources.

Data models

From the data sources (mostly bilingual parallel language sets) a dictionary of translations is constructed. For each translation we keep a count and part-of-speech tags for both source and target. These are our translation and POS models, and they look something like this:

Translation & POS models
source word|source POS tags|translation count|target word|target POS tags
om|conj|12|if|conj
om|adv|7|again|adj
övermorgon|adv|3|the day after tomorrow|det noun prep noun
...

For the target language, a language model and a grammar model are used. Each consists of 1–5 n-grams. The language model consists of word sequences and their frequencies, the grammar model of POS tags and their frequencies:

Language model
phrase|count
hello world|493920
hi world|19444
...
Grammar model
POS tags|count
prep noun|454991
prep prep|3183
...

Building a graph

So we have data. Plenty of data. Now we just need to make use of it. When a text is translated, a graph is built between all possible translations. Most of the time each word has multiple translations and meanings, so the number of combinations grows very quickly. During graph building we need to remember that source phrases can contract (e.g. ‘i morgon’ => ‘tomorrow’) and expand (‘övermorgon’ => ‘the day after tomorrow’).

We look at a maximum of 5 words. Once the graph is built, a traversal is initiated. As we traverse the graph, encountered subphrases are scored and the best path is chosen.

Graph for 'hej världen!'
hej       världen       !
--------------------------
Translations:
hi        world         !
hello     universe
howdy     earth
hey

Combinations:
hi        world         !
hi        universe      !
hi        earth         !
hello     world         !
hello     universe      !
hello     earth         !
...

Unfortunately there is no way to examine all translations, so we need to traverse the graph intelligently. We use a beam search with limited depth and width to get the search space down to a manageable scale.
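
Here is a toy Python sketch of the idea; the candidate lists and the scoring function are stand-ins (the real scorer combines the four models described below):

def beam_search(options, score, width=3):
    # options: one list of candidate translations per source position
    beams = [([], 0.0)]
    for candidates in options:
        expanded = [
            (path + [word], total + score(path, word))
            for path, total in beams
            for word in candidates
        ]
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:width]  # keep only the best partial paths
    return beams[0][0]

# Stand-in candidates for 'hej världen !' and a dummy scorer:
options = [["hi", "hello", "howdy"], ["world", "universe", "earth"], ["!"]]
print(beam_search(options, lambda path, word: -len(word)))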

Scoring phrases

The scoring of each phrase combines the four aforementioned aspects of the language:

Translation model: This is the dictionary (source -> target). Each entry has a frequency, from which we can calculate a probability (p1): “the most likely translation for ‘hej’ is ‘hello’”.

Source grammar model: The POS tags help us resolve ambiguity. A probability (p2) is calculated, basically saying “‘hej’/‘hello’ is likely an interjection”.

Target language model: We look at 1–5 grams. An n-gram is a sequence of words; for example “hello world” is a 2-gram. Each n-gram has a frequency indicating how common it is. Again a probability (p3) can be calculated: “the sequence ‘hello world’ is more likely than ‘hi world’”.

Target grammar model: Just like the language model, but with POS tags. A probability (p4) is calculated, indicating for example that “a verb followed by a preposition sounds better than two prepositions in a row”.

We use a sliding window moving over the phrase, combining probabilities using the chain rule into accumulated P1–P4. We end up with four probabilities that are finally mixed with different weights according to

score = P1^w1 * P2^w2 * P3^w3 * P4^w4

Working in log space makes life easier here. Then we just select the phrase with the highest score.
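
In log space the weighted product turns into a weighted sum, which also avoids numerical underflow. A minimal sketch, using the approximate weights mentioned below and made-up probabilities:

import math

def combined_score(probs, weights):
    # score = P1^w1 * P2^w2 * P3^w3 * P4^w4, computed as a weighted sum of logs
    return sum(w * math.log(p) for p, w in zip(probs, weights))

print(combined_score([0.8, 0.5, 0.3, 0.9], [1.0, 0.6, 0.3, 0.05]))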

We estimate the weights (w1–w4) by a randomized search that tries to maximize the BLEU score on a test set. The estimation only needs to be rerun when the training data changes. As expected, the highest weight is assigned to the translation model (w1=1), the second highest to the source grammar model (w2~0.6), the third highest to the language model (w3~0.3) and finally the target grammar model (w4~0.05). Yes, as it turns out the target grammar model is not very important; it helps to resolve uncertainty in some cases by predicting POS tags. But I might actually nuke it to favor simplicity in future versions.

There were plenty of unmentioned problems to solve along the way, but you get the overall idea. One thing that easily puts you off is the size of the data you are dealing with, e.g. downloading and processing TB-sized datasets like the Google n-grams. At one point, after 4 days of processing those huge zip files, Windows Update decided to restart the computer…