Data preprocessing

The transcripts consist of individual statements by company representatives and an operator, usually followed by a question-and-answer session with analysts. We treat each of these statements as a separate document and ignore operator statements, obtaining 22,766 items with mean and median word counts of 144 and 64, respectively:

import pandas as pd

# earnings_path (defined earlier) points to the directory of per-call transcript folders
documents = []
for transcript in earnings_path.iterdir():
    content = pd.read_csv(transcript / 'content.csv')
    # keep statements not made by the operator and longer than five characters
    documents.extend(content.loc[(content.speaker != 'Operator') &
                                 (content.content.str.len() > 5),
                                 'content'].tolist())
len(documents)
22766
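
The word-count statistics quoted above can be reproduced with a quick summary over the collected statements, for example:

word_counts = pd.Series(documents).str.split().str.len()
word_counts.mean(), word_counts.median()  # roughly 144 and 64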

We use spaCy to preprocess these documents as illustrated in Chapter 13, Working with Text Data (see the notebook), and store the cleaned and lemmatized text as a new text file.
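
A minimal sketch of this cleaning step follows; the model name (en_core_web_sm), the disabled pipeline components, the token filters, and the output file name are assumptions rather than the notebook's exact settings.

import spacy

# load a small English model; tagging is enough for lemmatization (assumed setup)
nlp = spacy.load('en_core_web_sm', disable=['ner', 'parser'])

clean_docs = []
for doc in nlp.pipe(documents, batch_size=100):
    # keep lowercased lemmas of alphabetic, non-stopword tokens
    lemmas = [t.lemma_.lower() for t in doc if t.is_alpha and not t.is_stop]
    clean_docs.append(' '.join(lemmas))

# store one cleaned statement per line (file name is illustrative)
with open('earnings_clean.txt', 'w') as f:
    f.write('\n'.join(clean_docs))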

Data exploration reveals domain-specific stopwords such as 'year' and 'quarter', which we remove in a second step; we also filter out statements with fewer than ten words, so that some 16,150 documents remain.
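
A sketch of this second pass might look as follows; beyond 'year' and 'quarter', the contents of the stopword set and the variable names are assumptions.

# domain-specific stopwords found during exploration (only the two named examples are from the text)
domain_stopwords = {'year', 'quarter'}

filtered_docs = []
for doc in clean_docs:
    tokens = [t for t in doc.split() if t not in domain_stopwords]
    # drop statements with fewer than ten remaining words
    if len(tokens) >= 10:
        filtered_docs.append(' '.join(tokens))

len(filtered_docs)  # roughly 16,150 documents remain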
