Gender Recognition on Dutch Tweets
The first set is derived from the tokenizer output, and can be seen as a kind of normalized character n-grams. For the bigrams (Figure 2) we see much the same picture, although there are differences in the details. The tokenizer is able to identify hashtags and Twitter user names to the extent that these conform to the conventions used on Twitter.
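As an illustration, a minimal sketch of such a tweet tokenizer, assuming the usual Twitter conventions (`@` for user names, `#` for hashtags); the regex and function name here are ours, not the paper's:

```python
import re

# Order matters: user names, hashtags, and emoticons must be tried
# before the generic word pattern, or '@' and '#' would be split off.
TOKEN_RE = re.compile(
    r"@\w+"                # Twitter user name, e.g. @vriend
    r"|#\w+"               # hashtag, e.g. #zon
    r"|[:;=][-']?[()DPp]"  # a few common emoticons, e.g. :-) ;D
    r"|\w+(?:'\w+)?"       # words and numbers
    r"|\S"                 # any other single non-space character
)

def tokenize(tweet):
    """Split a tweet into tokens, keeping hashtags and @-mentions intact."""
    return TOKEN_RE.findall(tweet)
```

For example, `tokenize("Mooi weer! #zon @vriend :-)")` yields `['Mooi', 'weer', '!', '#zon', '@vriend', ':-)']`, keeping the hashtag, user name, and emoticon as single tokens.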
Then, we used a set of feature types based on token n-grams, with which we already had previous experience (Van Bael and van Halteren). However, we do observe different behaviour when reversing the signs. If, in any application, unbalanced collections are expected, the effects of biases, and corrections for them, will have to be investigated.
Top ranking females in SVR on token unigrams, with ranks and scores for SVR with various feature types.
For each setting and author, the systems report both a selected class and a floating point score, which can be used as a confidence score. As we approached the task from a machine learning viewpoint, we needed to select text features to be provided as input to the machine learning systems, as well as machine learning systems to perform the classification on the basis of this input.
And also some more negative emotions, such as haat 'hate' and pijn 'pain'.
Then, as several of our features were based on tokens, we tokenized all text samples, using our own specialized tokenizer for tweets. The men, on the other hand, seem to be more interested in computers, leading to important content words like software and game, and correspondingly more determiners and prepositions.
This means that the content of the n-grams is more important than their form. Original 5-grams: about K features. Apart from normal tokens like words, numbers, and dates, it is also able to recognize a wide variety of emoticons. We achieved the best results; TiMBL peaks a bit later. All systems have no trouble recognizing him as a male, with the lowest scores around 1 for the top function words.
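To make the feature type concrete, here is a sketch of counting character n-grams over the tokenized stream; treating "normalized" as simply the space-joined tokenizer output is our assumption, not necessarily the paper's exact procedure:

```python
from collections import Counter

def char_ngrams(tokens, n=5):
    """Count character n-grams over the space-joined token stream,
    so the counts come from normalized text rather than the raw tweet."""
    text = " ".join(tokens)
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))
```

For instance, `char_ngrams(["goed", "weer"], n=3)` counts grams such as `"goe"` and `"d w"`, the latter spanning a token boundary, which is how such features pick up some token-bigram information.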
However, even style appears to mirror content.
For SVR, one would expect symmetry, as both classes are modeled simultaneously, and differ merely in the sign of the numeric class identifier. The age is reconfirmed by the endearingly high presence of mama and papa.
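The expected symmetry can be seen with any purely linear model: negating the numeric class identifiers exactly mirrors the fitted scores. A pure-Python sketch, using ordinary least squares as a stand-in for SVR (the +1/-1 target convention is assumed):

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b (a stand-in for the
    linear modelling step; the paper's systems use SVR)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

xs = [0.1, 0.3, 0.8, 0.9]        # one toy feature value per author
ys = [1.0, 1.0, -1.0, -1.0]      # signed numeric class identifiers
f_pos = fit_linear(xs, ys)
f_neg = fit_linear(xs, [-y for y in ys])  # reversed sign convention
# For a linear model the scores mirror exactly: f_pos(x) == -f_neg(x).
```

Any asymmetry observed in practice must therefore come from nonlinear components or implementation details, not from the sign convention itself.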
There is an extreme number of misspellings, even for Twitter, which may confuse the systems' models. Most of them rely on the tokenization described above. All users, obviously, should be individuals, and for each the gender should be clear.
Bigrams: two adjacent tokens. For only one feature type, character trigrams, LP with PCA manages to reach a higher accuracy than SVR, but the difference is not statistically significant. An interesting observation is that there is a clear class of misclassified users who have a majority of opposite-gender users in their social network.
The best performing character n-grams (normalized 5-grams) will be most closely linked to the token unigrams, with some token bigrams thrown in, as well as a smidgen of the use of morphological processes.
The dashed line represents the separation threshold. It then chose the class for which the final score is highest. If no cue is found in a user's profile, no gender is assigned.
Figures 1, 2, and 3 show accuracy measurements for the token unigrams, token bigrams, and normalized character 5-grams, for all three systems at various numbers of principal components.
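The shape of such curves can be reproduced in miniature by projecting data onto the top-k principal components and measuring accuracy as k grows. A numpy sketch, with a least-squares fit standing in for the paper's systems (all data and numbers here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))           # 40 "authors", 10 features
y = np.array([1.0] * 20 + [-1.0] * 20)  # signed gender labels
X[:20, 0] += 2.0                        # make one dimension informative

# Centre the data, then obtain principal directions via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

def accuracy_with_k_components(k):
    Z = Xc @ Vt[:k].T                          # top-k PCA projection
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)  # linear fit (stand-in)
    return float((np.sign(Z @ w) == y).mean())

accs = {k: accuracy_with_k_components(k) for k in (1, 2, 5, 10)}
```

With a single informative dimension, accuracy typically rises quickly for the first components and then flattens, which is the qualitative pattern the figures plot against the number of principal components.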
With these main choices, we performed a grid search for well-performing hyperparameters, with the following investigated values:

We start with the accuracy of the various features and systems (Section 5).
This turns out to be Judith Sargentini, a member of the European Parliament, who tweets under this name. Although clearly female, she is judged as rather strongly male.