Calculating term frequencies and inverse document frequencies

In this recipe, we will learn how to calculate term frequencies and inverse document frequencies.

Getting ready

Occurrence counts are good feature values, but they suffer from some problems. Let's say that we have four documents of unequal length. Raw counts will give higher weight to the terms in the longer documents than to those in the shorter ones. So, instead of using the plain occurrence count, we will normalize it: we divide the number of occurrences of a word in a document by the total number of words in that document. This metric is called term frequency.

Term frequency is not without problems either. Some words occur in many documents. These words would dominate the feature vector, yet they are not informative enough to distinguish one document in the corpus from another. Before we look at a new metric that avoids this problem, let's define document frequency. Unlike term frequency, which is local to a document, document frequency is the number of documents in the corpus in which the word occurs, divided by the total number of documents in the corpus.

The final metric that we will use for a word is the product of its term frequency and the inverse of its document frequency. This is called the TFIDF score.
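To make the arithmetic concrete, here is a minimal sketch of the calculation on a tiny, made-up two-document corpus. This is illustrative only; the TfidfTransformer we use later applies a smoothed, logarithmic variant of the same idea, so the numbers it produces will differ:

# Illustrative sketch only -- the toy corpus is made up for this example.
import math

docs = [["text", "mining", "is", "fun"],
        ["text", "analytics", "is", "useful"]]

word = "mining"
doc = docs[0]

tf = doc.count(word) / len(doc)                     # term frequency: 1/4 = 0.25
df = sum(1 for d in docs if word in d) / len(docs)  # document frequency: 1/2 = 0.5
tfidf = tf * math.log(1 / df)                       # tf times the inverse document frequency
                                                    # (taken on a log scale, as is common in practice)
print(tf, df, tfidf)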

How to do it…

Load the necessary libraries and declare the input data that will be used for the demonstration of term frequencies and inverse document frequencies:

# Load Libraries
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer


# 1.	We create an input document as in the previous recipe.

text = "Text mining, also referred to as text data mining, roughly equivalent to text analytics,
refers to the process of deriving high-quality information from text. High-quality information is 
typically derived through the devising of patterns and trends through means such as statistical 
pattern learning. Text mining usually involves the process of structuring the input text 
(usually parsing, along with the addition of some derived linguistic features and the removal 
of others, and subsequent insertion into a database), deriving patterns within the structured data, 
and finally evaluation and interpretation of the output. 'High quality' in text mining usually 
refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks 
include text categorization, text clustering, concept/entity extraction, production of granular 
taxonomies, sentiment analysis, document summarization, and entity relation modeling 
(i.e., learning relations between named entities).Text analysis involves information retrieval, 
lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, 
information extraction, data mining techniques including link and association analysis, 
visualization, and predictive analytics. The overarching goal is, essentially, to turn text 
into data for analysis, via application of natural language processing (NLP) and analytical 
methods.A typical application is to scan a set of documents written in a natural language and 
either model the document set for predictive classification purposes or populate a database 
or search index with the information extracted."

Let's see how to find the term frequency and inverse document frequency:

# 2.	Let us extract the sentences.
sentences = sent_tokenize(text)

# 3.	Create a matrix of term document frequency.
stop_words = stopwords.words('english')

count_v = CountVectorizer(stop_words=stop_words)
tdm = count_v.fit_transform(sentences)

# 4.	Calculate the TFIDF score.
tfidf = TfidfTransformer()
tdm_tfidf = tfidf.fit_transform(tdm)

How it works…

Steps 1, 2, and 3 are the same as in the previous recipe. Let's look at step 4, where we pass the output of step 3 to the transformer in order to calculate the TFIDF score:

>>> type(tdm)
<class 'scipy.sparse.csr.csr_matrix'>
>>>

tdm is a sparse matrix, and so is the tdm_tfidf matrix derived from it. Now, let's look at the values of these matrices using their data, indices, and index pointer attributes.
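A minimal sketch of how to print them follows; the exact numbers you see will depend on the tokenization and on the scikit-learn version you are running:

>>> print(tdm_tfidf.data)     # the non-zero TFIDF values
>>> print(tdm_tfidf.indices)  # the column (vocabulary) index of each value
>>> print(tdm_tfidf.indptr)   # where each row (sentence) begins in data and indices
>>>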


The data attribute shows the values: we no longer have raw occurrence counts, but the normalized TFIDF scores for the words.
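If you want to see which word each score belongs to, a quick, illustrative way is to pair the vectorizer's vocabulary with the weights of a single row. Note that on older scikit-learn releases the method is called get_feature_names() rather than get_feature_names_out():

# Illustrative sketch: map each vocabulary term to its TFIDF weight in the first sentence.
terms = count_v.get_feature_names_out()   # use count_v.get_feature_names() on older scikit-learn
first_row = tdm_tfidf.toarray()[0]
for term, weight in zip(terms, first_row):
    if weight > 0:
        print(term, round(weight, 3))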

There's more…

Once again, we can delve deeper into the TFIDF transformer by looking into the parameters that can be passed:

>>> tfidf.get_params()
{'use_idf': True, 'smooth_idf': True, 'sublinear_tf': False, 'norm': u'l2'}
>>>

The documentation for this is available at http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html.
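As a quick sketch, the same parameters can also be set when the transformer is constructed, for example to use a sublinear (1 + log) term frequency instead of the raw term frequency:

# Illustrative only: alternative parameter settings for TfidfTransformer.
tfidf_sublinear = TfidfTransformer(use_idf=True, sublinear_tf=True, norm='l2')
tdm_sublinear = tfidf_sublinear.fit_transform(tdm)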
