Critical short essay

Based on the reading, write a 300-word essay. This essay must be a critical reflection on the reading you have chosen. Note that the term "critical" as used here does not mean exclusively finding fault or pointing out something negative in your chosen reading. Instead, the term is used in its broader sense: you are encouraged to explore both the merits and faults of the article and to position the article in its larger scientific and social context.

Here are a number of prompts to serve as guidelines for the writing of your thought essay:
  • What are some of the strengths of the article? Why should these strengths be considered positively?
  • What are some of the weaknesses of the article? Why do these weaknesses hurt the article?
  • What is the overall premise of the article? Do you agree with it? Why or why not?
  • In your opinion, why should this article exist? What purpose does it serve, and what questions or issues does it address?
  • What is the context of this article? Why is it the logical thing to do given existing research in this area? What methodological or theoretical problems does it address?
  • What are the broader implications of this article? Whether you believe this article makes a valid conclusion or is incorrect, what does this imply for future research in this area?

These prompts are meant to help you get started, but we discourage you from simply going down these bullet points and answering the questions in the order they are presented. During your reflection on the article, you may come up with questions or criticisms that you yourself are unable to address. If anything, coming up with difficult-to-answer questions is a sign that you are doing things correctly. If such questions or concerns come up during your essay, we highly encourage you to add them to your submission. Good, open-ended questions will boost the quality of your submission and may be used as prompts for class discussions.
m11___barbieri___automaticdetectionironyhumour.pdf

Attachment preview:

Automatic Detection of Irony and Humour in Twitter
Francesco Barbieri
Pompeu Fabra University, Barcelona, Spain
francesco.barbieri@upf.edu

Horacio Saggion
Pompeu Fabra University, Barcelona, Spain
horacio.saggion@upf.edu
Abstract
Irony and humour are just two of many forms of figurative language. Approaches that identify humorous or ironic statements in vast volumes of data such as the internet are important not only from a theoretical viewpoint but also for their potential applicability in social networks or human-computer interactive systems. In this study we investigate the automatic detection of irony and humour in social networks such as Twitter, casting it as a classification problem. We propose a rich set of features for text interpretation and representation with which to train classification procedures. In cross-domain classification experiments our model matches and improves on state-of-the-art performance.
Introduction
Irony and humour are just two examples of figurative language (Reyes, Rosso, and Veale 2013). Approaches that identify humorous or ironic statements in vast volumes of data such as the internet are important not only from a theoretical viewpoint but also for their potential applicability in social network analysis and human-computer interactive systems. Systems able to select humorous or ironic statements on a given topic to present to a user are important in human-machine communication. It is also important for a system to recognise when users are being ironic or humorous in order to deal appropriately with their requests. Irony is also relevant to sentiment analysis and opinion mining (Pang and Lee 2008), since it can be used to express a negative statement in an apparently positive way. However, irony detection appears to be a difficult problem, since ironic statements are used to express the contrary of what is being said (Quintilien and Butler 1953) and are therefore a tough nut to crack for current systems. Reyes et al. (2013) approach the problem as one of classification, training machine learning algorithms to separate ironic from non-ironic statements. Humour has been studied for a number of years in computational linguistics, in terms of both humour generation (Stock and Strapparava 2006; Ritchie and Masthoff 2011) and interpretation (Mihalcea and Pulman 2007; Taylor and Mazlack 2005). In particular, it has also been approached as classification by Mihalcea and Strapparava (2005), who created a specially designed corpus with one-liners (i.e., one-sentence jokes) as the positive class and headlines and other short statements as the negative class.
Following these lines of research, we first try to detect these two phenomena separately; then, since both are figurative language and may be correlated, we also try to detect them at the same time (using their union as the positive class). This last experiment is interesting because it gives us hints for figurative language detection in general, and hence helps us explore new aspects of creativity in language (Veale and Hao 2010b). It can be seen as a small step toward the design of a machine capable of evaluating creativity and, with further work, of generating creative utterances.
Our dataset is composed of text retrieved from the microblogging service Twitter (https://twitter.com/).
For the experiments presented in this paper we use a dataset created for the study of irony detection, which allows us to compare our findings with recent state-of-the-art approaches (Reyes, Rosso, and Veale 2013). The dataset also contains humorous tweets and is therefore appropriate for our purpose.
The contributions of this paper are as follows:
• the evaluation of our irony detection model (Barbieri and Saggion 2014) on humour classification;
• a comparison of our model with the state-of-the-art; and
• a novel set of experiments to demonstrate cross-domain
adaptation.
The paper will show that our model matches and improves on state-of-the-art performance, and that it can be applied to different domains.
Related Work
Verbal irony has been defined in several ways over the years, but there is no consensus on its definition. The standard definition is "saying the opposite of what you mean" (Quintilien and Butler 1953), where the opposition of literal and intended meanings is very clear. Grice (1975) believes that irony is a rhetorical figure that violates the maxim of quality: "Do not say what you believe to be false". Irony is also defined (Giora 1995) as any form of negation with no negation markers (as most ironic utterances are affirmative, and ironic speakers use indirect negation). Wilson and Sperber (2002) defined it as an echoic utterance that shows a negative aspect of someone else's opinion. Finally, irony has been defined as a form of pretence by Utsumi (2000) and Veale and Hao (2010b). Veale states that "ironic speakers usually craft their utterances in spite of what has just happened, not because of it. The pretence alludes to, or echoes, an expectation that has been violated".
Past computational approaches to irony detection are scarce. Carvalho et al. (2009) created an automatic system for detecting irony relying on emoticons and special punctuation, focusing on the detection of ironic style in newspaper articles. Veale and Hao (2010a) proposed an algorithm for separating ironic from non-ironic similes, detecting the common terms used in this kind of ironic comparison. Reyes et al. (2013) have recently proposed a model to detect irony in Twitter based on four groups of features: signatures, unexpectedness, style, and emotional scenarios. Their classification results support the idea that textual features can capture the patterns people use to convey irony. Among the proposed features, skip-grams (part of the style group), which capture word sequences that contain (or skip over) arbitrary gaps, seem to be the best.
Computational approaches to humour generation include, among others, the JAPE system (Ritchie 2003) and the STANDUP riddle generator (Ritchie and Masthoff 2011), which are largely based on the use of a dictionary for humorous effect. It has been argued that humorous discourse depends on the fact that it can have multiple interpretations, that is, it is ambiguous. These characteristics are explored in approaches to humour detection. Mihalcea and Strapparava (2005) study the classification of a restricted type of humorous discourse: one-liners, which aim to produce a humorous effect in very few words. They created a dataset semi-automatically by retrieving itemized sentences from web sites whose URLs contain words such as "oneliner", "humour", "joke", etc. Non-humorous data was created using Reuters titles, proverbs, and sentences extracted from the British National Corpus. They use two types of models to separate humorous from non-humorous texts: on the one hand, a specially designed set of features modelling alliteration, antonymy, and slang of a sexually oriented nature; on the other, a word-based text classification algorithm. Unsurprisingly, the word-based classifier is much more effective than the specially designed features. In (Mihalcea and Pulman 2007), additional features modelling violated expectations, human-oriented activities, and polarity are introduced. Veale (2013) also created a dataset of humorous similes by querying the web with specific simile patterns.
Data and Text Processing
The dataset used for the experiments reported in this paper was prepared by Reyes et al. (2013). It is a corpus of 40,000 tweets equally divided into four different topics: Irony, Education, Humour, and Politics. The tweets were automatically selected by looking at the Twitter hashtags (#irony, #education, #humour, and #politics) that users add to link their contributions to a particular subject and community. These hashtags are removed from the tweets for the experiments. According to Reyes et al. (2013), the hashtags were selected for three main reasons: (i) to avoid manual selection of tweets, (ii) to allow irony analysis beyond literary uses, and (iii) because the irony hashtag may reflect a tacit belief about what constitutes irony and humour.
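As a concrete illustration of this preprocessing step, here is a minimal sketch, not the authors' code, of hashtag-based labelling and hashtag removal; the TOPIC_TAGS mapping, the label_and_clean helper, and the example tweet are illustrative assumptions:

```python
import re

# Topic hashtags used to collect the corpus; each tag doubles as a class label.
TOPIC_TAGS = {"#irony": "irony", "#education": "education",
              "#humour": "humour", "#politics": "politics"}

def label_and_clean(tweet_text):
    """Return (label, text with the topic hashtag removed), or None if no tag matches."""
    for tag, label in TOPIC_TAGS.items():
        if tag in tweet_text.lower():
            # Strip the collection hashtag so a classifier cannot simply memorise it.
            cleaned = re.sub(re.escape(tag), "", tweet_text, flags=re.IGNORECASE).strip()
            return label, cleaned
    return None

print(label_and_clean("Rain on sports day. Just perfect. #irony"))
# ('irony', 'Rain on sports day. Just perfect.')
```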
Another corpus is employed in our approach to measure the frequency of word usage. We adopted the Second Release of the American National Corpus Frequency Data (Ide and Suderman 2004), which provides the number of occurrences of a word in the written and spoken parts of the ANC (http://www.anc.org/), a large electronic collection of American English of about 15 million words. From now on, by "frequency of a term" we mean the absolute frequency the term has in the ANC.
In order to process the tweets we used the GATE plugin TwitIE (Bontcheva et al. 2013), an open-source information extraction pipeline for microblog text, as tokeniser and part-of-speech tagger. We also adopted the RiTa WordNet API (Howe 2009) and the Java API for WordNet Searching (Spell 2009) to perform operations on WordNet synsets (Miller 1995).
Methodology
We approach the detection of irony and humour as a classification problem, applying supervised machine learning methods to the Twitter corpus previously introduced. When choosing the classifiers we avoided those requiring features to be independent (e.g. Naive Bayes), as some of our features are not. Since we approach the problem as a binary decision (deciding whether a tweet is ironic or not), we picked two tree-based classifiers: Random Forest and Decision Tree (the latter allows us to compare our findings directly with Reyes et al. (2013)). We use the implementations available in the Weka toolkit (Witten and Frank 2005).
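The paper itself uses the Weka implementations; as a minimal sketch of the same binary classification protocol, the scikit-learn equivalents might be used as follows. The feature matrix here is a random placeholder standing in for the feature vectors described below:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# X: one row per tweet, one column per feature (frequency, structure, ...).
# y: 1 for ironic (or humorous) tweets, 0 for the negative topic.
rng = np.random.default_rng(0)
X = rng.random((200, 10))          # placeholder feature matrix
y = rng.integers(0, 2, size=200)   # placeholder binary labels

for clf in (DecisionTreeClassifier(), RandomForestClassifier(n_estimators=100)):
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(type(clf).__name__, scores.mean())
```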
To represent each tweet we use seven groups of features.
Some of them are designed to detect imbalance and unexpectedness, others to detect common patterns in the structure of the tweets (such as type of punctuation, length, and emoticons). Below is an overview of the groups of features in our model:
• Frequency (gap between rare and common words)
• Written-Spoken (written-spoken style uses)
• Intensity (intensity of adverbs and adjectives)
• Structure (length, punctuation, emoticons, links)
• Sentiments (gap between positive and negative terms)
• Synonyms (common vs. rare synonyms use)
• Ambiguity (measure of possible ambiguities)
In the following sections we describe the theoretical motivations behind the features and how they have been implemented.
Frequency
Unexpectedness and incongruity can be signals of irony and humour (Lucariello 2007; Venour 2013). In order to study these aspects we explore the frequency imbalance between words, i.e. register inconsistencies between terms of the same tweet. The intuition is that using many words commonly used in English (i.e. with high frequency in the ANC) together with only a few terms rarely used in English (i.e. with low frequency in the ANC) in the same sentence creates an imbalance that may cause unexpectedness, since within a single tweet only one kind of register is expected. We explore this aspect using the ANC Frequency Data corpus.
Three features belong to this group: frequency mean, rarest word, and frequency gap. The first is the arithmetic average of the frequencies of all the words in a tweet, used to detect the overall frequency style of a tweet. The second, rarest word, is the frequency value of the rarest word, designed to capture the word that may create the imbalance. The third, frequency gap, is the difference between the first two.
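A minimal sketch of these three features, assuming freq is a dictionary mapping each word to its ANC frequency (the lookup table itself is not reproduced here, and frequency_features is a hypothetical helper):

```python
def frequency_features(words, freq):
    """Compute frequency mean, rarest word, and frequency gap for one tweet."""
    values = [freq.get(w.lower(), 0) for w in words]
    f_mean = sum(values) / len(values)   # overall register of the tweet
    rarest = min(values)                 # the word most likely to create imbalance
    return {"frequency_mean": f_mean,
            "rarest_word": rarest,
            "frequency_gap": f_mean - rarest}
```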
Written-Spoken
Twitter is composed of written text, but an informal, spoken-English style is often used. We designed this set of features to explore the unexpectedness and incongruity created by using spoken-style words in a mainly written-style tweet, or vice versa (formal words usually adopted in written text employed in a spoken-style context). We can analyse this aspect with the written and spoken parts of the ANC, since these corpora tell us whether a word is more often used in written or spoken English. There are three features in this group: written mean, spoken mean, and written spoken gap. The first and second are the means of the frequency values of all the words in the tweet in the written and spoken ANC corpora, respectively. The third, written spoken gap, is the absolute value of the difference between the first two, designed to see whether ironic writers use both styles (creating imbalance) or only one of them. A low difference between written and spoken styles means that both styles are used.
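These features follow the same pattern as the frequency group, but over two lookup tables; a sketch, where written_freq and spoken_freq are assumed word-to-frequency dictionaries for the two parts of the ANC:

```python
def written_spoken_features(words, written_freq, spoken_freq):
    """Mean written/spoken ANC frequencies of a tweet and the gap between them."""
    w_mean = sum(written_freq.get(w.lower(), 0) for w in words) / len(words)
    s_mean = sum(spoken_freq.get(w.lower(), 0) for w in words) / len(words)
    return {"written_mean": w_mean,
            "spoken_mean": s_mean,
            "written_spoken_gap": abs(w_mean - s_mean)}
```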
Structure
With this group of features we want to study the structure of the tweet: whether it is long or short (length), whether it contains long or short words (mean word length), and what kind of punctuation is used (exclamation marks, emoticons, etc.). This is a powerful group of features, as the ironic and humorous tweets in our corpora present specific structures: for example, ironic tweets are longer (the mean length of an ironic tweet is 94.7 characters, against 82.0467, 86.5776, and 86.5307 for the other topics), and humorous tweets use more emoticons than the other domains (the mean number of emoticons in a humorous tweet is 0.012, while in the other corpora it is only 0.003, 0.001, and 0.002). The Structure group includes several features, described below.
The length feature is the number of characters in the tweet, n. words is the number of words, and words length mean is the mean of the word lengths. Moreover, we use the numbers of verbs, nouns, adjectives and adverbs as features, naming them n. verbs, n. nouns, n. adjectives and n. adverbs. From these last four features we also compute the ratio of each part of speech to the number of words in the tweet, called verb ratio, noun ratio, adjective ratio, and adverb ratio. All these features have the purpose of capturing the style of the writer.
Inspired by Davidov et al. (2010) and Carvalho et al. (2009), we designed features related to punctuation: the numbers of commas, full stops, ellipses, exclamation marks and quotation marks that a tweet contains. We also added the feature laughs, which counts occurrences of hahah, lol, rofl, and lmao.
Additionally, there is the emoticon feature: the number of :), :D, :(, and ;) in a tweet. This feature works well on the Humour corpus, which contains four times more emoticons than the other corpora. The ironic corpus is the one with the fewest emoticons (there are only 360 emoticons in the Irony corpus, while in the Humour, Education, and Politics tweets there are 2065, 492, and 397 respectively). In the light of these statistics we can argue that ironic authors avoid emoticons and let words be the central thing: the audience has to understand the irony without explicit signs such as emoticons. Humour seems, on the other hand, more explicit. Finally, we added a simple but powerful feature, weblinks, which simply indicates whether or not a tweet includes an internet link. This feature works well for Humour and excellently for Irony, where internet links are not used frequently.
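A sketch of a few of these counts using simple regular expressions; the emoticon and laugh inventories are the ones listed above, while the regexes and the structure_features helper are assumptions (in the paper, word and POS counts come from the TwitIE tagger rather than whitespace splitting):

```python
import re

LAUGHS = re.compile(r"\b(?:a*(?:ha)+h?|lol|rofl|lmao)\b", re.IGNORECASE)
EMOTICONS = re.compile(r"(?::\)|:D|:\(|;\))")
LINKS = re.compile(r"https?://\S+")

def structure_features(text):
    """Character/word lengths plus punctuation, laugh, emoticon, and link counts."""
    words = text.split()
    return {
        "length": len(text),
        "n_words": len(words),
        "words_length_mean": sum(len(w) for w in words) / len(words),
        "n_exclamations": text.count("!"),
        "laughs": len(LAUGHS.findall(text)),
        "emoticons": len(EMOTICONS.findall(text)),
        "weblinks": int(bool(LINKS.search(text))),
    }

print(structure_features("Monday again! Just great :) http://example.com"))
```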
Intensity
We also study the intensity of adjectives and adverbs. We adopted the intensity scores of Potts (2011), who uses naturally occurring metadata (star ratings on service and product reviews) to construct adjective and adverb scales. An example of an adjective scale (with the relative scores in brackets) is the following: horrible (-1.9) → bad (-1.1) → good (0.2) → nice (0.3) → great (0.8).
With these scores we evaluate four features for adjective
intensity and four for adverb intensity (implemented in the
same way): adj (adv) tot, adj (adv) mean, adj (adv) max,
and adj (adv) gap. The sum of the AdjScale scores of all
the adjectives in the tweet is called adj tot. adj mean is
adj tot divided by the number of adjectives in the tweet.
The maximum AdjScale score within a single tweet is adj
max. Finally, adj gap is the difference between adj max
and adj mean, designed to see “how much” the most intense
adjective is out of context.
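A sketch of the four adjective features (the adverb ones are implemented the same way); the ADJ_SCALE dictionary here contains only the example scores quoted above and stands in for the full Potts scales:

```python
ADJ_SCALE = {"horrible": -1.9, "bad": -1.1, "good": 0.2, "nice": 0.3, "great": 0.8}

def intensity_features(adjectives, scale=ADJ_SCALE):
    """adj tot / adj mean / adj max / adj gap for the adjectives of one tweet."""
    scores = [scale[a] for a in adjectives if a in scale]
    if not scores:
        return {"adj_tot": 0.0, "adj_mean": 0.0, "adj_max": 0.0, "adj_gap": 0.0}
    tot = sum(scores)
    mean = tot / len(scores)
    mx = max(scores)
    return {"adj_tot": tot, "adj_mean": mean, "adj_max": mx, "adj_gap": mx - mean}
```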
Synonyms
As previously said, irony conveys two messages to the audience at the same time (Veale 2004). It follows that the choice of a term (rather than one of its synonyms) is very important in order to send the second, non-obvious message. The choice of synonym is an important feature for humour as well, and it seems that authors of humorous tweets prefer common terms.
For each word of a tweet we get its synonyms from WordNet (Miller 1995), then calculate their ANC frequencies and sort them into a decreasing ranked list (the actual word is part of this ranking as well). We use these rankings to define the four features of this group. The first is syno lower, the number of synonyms of a word w_i with frequency lower than the frequency of w_i, defined as in Equation 1:

    sl_{w_i} = |\{\, syn_{i,k} : f(syn_{i,k}) < f(w_i) \,\}|        (1)

where syn_{i,k} is the synonym of w_i with rank k, and f(x) is the ANC frequency of x. We then define syno lower mean as the mean of sl_{w_i}, i.e. the arithmetic average of sl_{w_i} over all the words of a tweet.

We also designed two more features, syno lower gap and syno greater gap, but to define them we need two more parameters. The first is word lowest syno, the maximum sl_{w_i} in a tweet, formally defined as:

    wls_t = \max_{w_i} |\{\, syn_{i,k} : f(syn_{i,k}) < f(w_i) \,\}|        (2)

The second is word greatest syno, defined as:

    wgs_t = \max_{w_i} |\{\, syn_{i,k} : f(syn_{i,k}) > f(w_i) \,\}|        (3)

We are now able to describe syno lower gap, which detects the imbalance created by a common synonym in a context of rare synonyms: it is the difference between word lowest syno and syno lower mean. Finally, we detect the gap of very rare synonyms in a context of common ones with syno greater gap, the difference between word greatest syno and syno greater mean, where syno greater mean is:

    sgm_t = \frac{1}{n_t} \sum_{w_i \in t} |\{\, syn_{i,k} : f(syn_{i,k}) > f(w_i) \,\}|        (4)

with n_t the number of words of tweet t.
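A minimal sketch of the per-word counts behind Equations 1-4, using NLTK's WordNet interface in place of the RiTa/JAWS APIs the paper uses; syno_counts is a hypothetical helper and freq is an assumed word-to-ANC-frequency dictionary:

```python
from nltk.corpus import wordnet as wn  # requires nltk and its wordnet data

def syno_counts(word, freq):
    """Count synonyms of `word` that are rarer / more frequent than `word` itself."""
    synonyms = {lemma.name().lower()
                for synset in wn.synsets(word)
                for lemma in synset.lemmas()} - {word.lower()}
    f_w = freq.get(word.lower(), 0)
    lower = sum(1 for s in synonyms if freq.get(s, 0) < f_w)    # sl_{w_i}
    greater = sum(1 for s in synonyms if freq.get(s, 0) > f_w)
    return lower, greater
```

The tweet-level features are then the mean and maximum of these counts over the words of the tweet, as in Equations 1-4.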
Ambiguity
Another interesting aspect of irony and humour is ambiguity.
We noticed that ironic tweets have the greatest arithmetic average number of WordNet synsets per word, and humorous tweets the least; this indicates that ironic tweets use words with more meanings, and humorous tweets words with fewer meanings. In the case of irony, our assumption is that if a word has many meanings, the possibility of "saying something else" with this word is higher than with a term that has only a few meanings, and hence there is a higher possibility of sending more than one message (literal and intended) at the same time.
There are three features that aim to capture these aspects:
synset mean, max synset, and synset gap. The first one
is the mean of the number of synsets of each word of the
tweet, to see if words with many meanings are often used in
the tweet. The second one is the greatest number of synsets
that a single word has; we consider this word the one with
the highest possibility of being used ironically (as multiple
meanings are available to say different things). In addition,
we calculate synset gap as the difference between the number of synsets of this word (max synset) and the average
number of synsets (synset mean), assuming that if this gap
is high the author may have used that inconsistent word intentionally.
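These three features reduce to synset counts per word, sketched here again with NLTK's WordNet as an illustration rather than the paper's own code:

```python
from nltk.corpus import wordnet as wn  # requires nltk and its wordnet data

def ambiguity_features(words):
    """synset mean / max synset / synset gap for the words of one tweet."""
    counts = [len(wn.synsets(w)) for w in words]
    if not counts:
        return {"synset_mean": 0.0, "max_synset": 0, "synset_gap": 0.0}
    mean = sum(counts) / len(counts)
    mx = max(counts)
    return {"synset_mean": mean, "max_synset": mx, "synset_gap": mx - mean}
```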
Sentiments
We also analyse the sentiment of irony and humour using the SentiWordNet lexicon (Esuli and Sebastiani 2006), which assigns to each synset of WordNet sentiment scores of positivity and negativity.
There are six features in the Sentiments group. The first is positive sum, the sum of all the positive scores in a tweet; the second is negative sum, the sum of all the negative scores. The arithmetic average of the previous two is another feature, positive negative mean, designed to reveal the sentiment that better describes the whole tweet. Moreover, there is positive negative gap, the difference between the first two features, as we wanted also to …
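A sketch of the four features defined before the preview cuts off, using NLTK's SentiWordNet interface; taking the first synset for each word is a simplifying assumption (the word-sense disambiguation step is not described in the preview):

```python
from nltk.corpus import sentiwordnet as swn  # requires nltk and sentiwordnet data

def sentiment_features(words):
    """positive sum / negative sum / positive negative mean / positive negative gap."""
    pos, neg = 0.0, 0.0
    for w in words:
        senses = list(swn.senti_synsets(w))
        if senses:                     # naively take the first sense (assumption)
            pos += senses[0].pos_score()
            neg += senses[0].neg_score()
    return {"positive_sum": pos,
            "negative_sum": neg,
            "positive_negative_mean": (pos + neg) / 2,
            "positive_negative_gap": pos - neg}
```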