Latent semantic analysis

Typically, latent semantic analysis (LSA) is applied to text after it has been processed by TfidfVectorizer or CountVectorizer. In contrast to PCA, it applies SVD directly to the input dataset (which is usually a sparse matrix), producing sets of words that tend to be associated with the same concept. This is why LSA is used when the features are homogeneous (that is, they are all words in the documents) and present in large numbers.

Here is an example in Python using text data and TfidfVectorizer. The output shows part of the content of a latent vector:

In: import numpy as np
from sklearn.datasets import fetch_20newsgroups
categories = ['sci.med', 'sci.space']
twenty_sci_news = fetch_20newsgroups(categories=categories)
from sklearn.feature_extraction.text import TfidfVectorizer
tf_vect = TfidfVectorizer()
word_freq = tf_vect.fit_transform(twenty_sci_news.data)
from sklearn.decomposition import TruncatedSVD
tsvd_2c = TruncatedSVD(n_components=50)
tsvd_2c.fit(word_freq)
# Top 10 words loading on the 21st latent component
arr_vec = np.array(tf_vect.get_feature_names_out())
arr_vec[tsvd_2c.components_[20].argsort()[-10:][::-1]]

Out: array(['jupiter', 'sq', 'comet', 'of', 'gehrels', 'zisfein',
'jim', 'gene', 'are', 'omen'], dtype='<U79')