6. The Five Factors: Putting Data to Work

Everywhere you look, you’ll find a piece of data, each bit of it playing some kind of role in your life. How cold is it outdoors? Glance at the thermometer for data. Now you know whether to don your thick wool overcoat. How much do you weigh this month? The answer tells you if you can afford to order cheesecake for dessert. Every day of your life, you absorb and process countless pieces of data, often without conscious thought. Sometimes those decisions involve processing multiple bits of data: Ordering the cheesecake may be a function of your weight, how much you can afford to spend on lunch, and how much time you have before you need to be back at your desk. But when it comes to building an investment process, you can’t afford to handle data in quite such a slapdash manner. True, there’s a lot of it out there, from stock prices and valuation ratios to pages and pages of economic information, including such esoteric details as changes in employment levels within Wisconsin’s furniture-manufacturing industry.

“You can’t manage what you don’t measure.” It’s a hackneyed phrase because so many management gurus use it to describe the travails associated with running their companies. To discover whether their marketing campaigns are successful or a new business strategy is working out, business leaders and their senior managers must identify key performance indicators and monitor them. Investing isn’t like managing a business, of course, but the adage can still help you make informed and rational decisions. The challenge is the same for everyone, as I discussed in the preceding chapter: Each investor must decide, independently, what he or she needs to measure in order to make a thoughtful buy or sell decision. I also have come to realize that just because I can measure something doesn’t mean that the data I would produce is useful, much less significant.

That challenge of selecting and managing data is one that quantitative investors and money managers have wrestled with since the early 1970s, when technological advancements first made quantitative, or “quant,” investing possible. The idea that an investment process could be constructed around sets of data rather than the results of in-depth research by individual investment researchers was still new when I joined Keystone Funds as a quantitative analyst in 1982. It was quickly apparent that, as a quant, I wasn’t going to be part of the mainstream. The first task I set myself was to build a model that could tell me when stocks in the S&P 500 Index were over- or undervalued relative to their peers and to history. At the time, Keystone’s dozen or so fund managers, along with another dozen analysts in charge of monitoring trends in various industry groups, preferred to focus on stock “stories” rather than on data. These stories revolved around some kind of insight into the company’s business. A particular technology company was about to outperform, one analyst might argue, because it had slashed its debt levels and hired some brilliant new engineers to accelerate research and development even as the marketing was also gaining traction. Making the story better was news that the CEO of its chief competitor was trying to resolve a gender discrimination lawsuit and didn’t have his eye on the ball. The role of the quants was to serve as a kind of chorus, providing data supporting that narrative.

That was the state of the investment world when I joined it. Back then, of course, developing and pursuing a pure quantitative investment strategy would have been prohibitively expensive because it would have required any company to purchase vast quantities of data and develop a proprietary database to analyze it. Only the major brokerage firms—Salomon Brothers, in particular—had both the financial resources and the conviction required to make significant headway in quant research. When I resolved to find ways to use data more systematically in my own investing, sorting through mountains of data in search of the handful of pieces of the puzzle that could be counted on to be consistently helpful was frustrating. My very first valuation project showed me just how complex a task this would be. I decided that a good way to identify attractive stocks would be with some metric that would combine dividend yield (the size of a company’s dividend relative to its stock price) and a company’s historical stock price growth rate. With the right model, I could pick out companies offering an appealing combination of strong dividends and steady price growth.

Because this seemed like a recipe for successful investing, I set about ranking the 500 stocks in the S&P Index each month based on this methodology. Alas, the data I selected didn’t take into account everything that was happening in the market. It overlooked the fact that many fast-growing companies didn’t pay dividends. (For instance, it would be another 20 years before Microsoft—one of the decade’s biggest winners—paid out its first dividend.) A model based on dividend data that tried to identify the growth stocks most likely to outperform the rest of the market, it seemed, was inherently flawed. I would have been left with a portfolio stuffed with a bunch of boring utility companies and banks if I hadn’t reconsidered my approach.
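A rough sketch of that kind of composite ranking might look like the following. The weighting scheme, field names, and sample numbers are my own illustrative assumptions, not the model actually used at Keystone; the point is only to show how a dividend-based screen silently drops the very growth stocks the chapter describes.

```python
# Hypothetical composite ranking: weight dividend yield against historical
# price growth. Weights and sample data are illustrative assumptions.

def composite_rank(stocks, w_yield=0.5, w_growth=0.5):
    """Return tickers sorted best-first by a weighted composite score.

    stocks maps ticker -> (dividend_yield, price_growth_rate), both as
    decimals (0.04 means 4%). Non-dividend payers drop out entirely,
    mirroring the flaw described above.
    """
    scored = {
        ticker: w_yield * dy + w_growth * growth
        for ticker, (dy, growth) in stocks.items()
        if dy > 0  # a dividend-based model simply excludes non-payers
    }
    return sorted(scored, key=scored.get, reverse=True)

universe = {
    "UTIL": (0.06, 0.02),  # high yield, slow growth
    "BANK": (0.05, 0.04),
    "TECH": (0.00, 0.25),  # fast grower, no dividend: vanishes from the list
}
print(composite_rank(universe))  # -> ['BANK', 'UTIL']
```

Note how “TECH,” the hypothetical fast grower, never even makes the ranking: exactly the blind spot that would have left the portfolio stuffed with utilities and banks.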

It was back to the drawing board, and to the seemingly endless and unmanageable amounts of data that the market spits out every day: intraday stock price moves and ranges; trading volume statistics; details on the number of investors selling the stock short (betting that the stock will decline); volatility; correlation (the degree to which one stock or market sector matched or deviated from its peers or the market as a whole); earnings results; dividend payments. It was enough to make a grown man weep. I was a victim of “information overload,” the famous condition described by Alvin Toffler in his 1970 bestseller Future Shock. Trying to grasp all this information made taking a sip of water from a fire hose seem simple in comparison. And as for trying to figure out which bits of the puzzle were actually useful... my mind boggled.

Happily, the process of managing raw data streams has become a lot simpler since then. Today, data that brokerage firms once treated as proprietary information is easily and readily accessible to anyone. For years, for instance, clients of Salomon Brothers received updates—in the mail—to the brokerage firm’s Yield Book. Those updates ranked the various bond issues’ yields by credit quality, maturity, sector, and other factors. Until the early 1990s, if I wanted to try to analyze this kind of yield information on a historical basis, I would have to send an assistant in search of the three-ring binder in which these updates were stored, and then someone would have to type each data point into a spreadsheet computer program (with all the potential for human error that involved). There was no way to do this on a time-sensitive basis, much less winnow out what pieces of data were important. Nowadays, data like this is only a mouse click away. (While I may access it on Bloomberg, you, the reader, can find much of the historical yield data or government statistics you need via the Internet at no charge.)

In the absence of hard quantitative indicators, and still looking for clues to market behavior in various kinds of data, I tried new approaches. By the mid-1980s, I was working at Constitution Capital Management overseeing bond funds for institutional investors such as pension funds. Finding ways to predict interest rate movements was crucial: It was the investment world’s equivalent of the Holy Grail. That’s when I stumbled over an interesting piece of data. Every week, Barron’s reported the results of a survey by Investors Intelligence about the level of bullishness and bearishness in the bond market. One week I noticed that investors were rather pessimistic; only 35% of those surveyed expected interest rates to fall (sending bond prices higher) over the course of the coming month. Curiously, interest rates actually did fall over the course of that month. Hmm. To a natural contrarian like me, this data looked like gold dust. If I bought when everyone was bearish and sold when they turned bullish....

To build my model, I went from one stack of dusty old newspapers to the next, typing in the weekly sentiment figures. Every so often, a particular issue would be missing, leaving a gap in the data series, and I’d have to seek a copy elsewhere. Eventually, I had three years’ worth of data, and, I figured, the beginnings of another data-based investment model. Alas, it was all in vain. It seemed as if investor sentiment wasn’t a primary driver of the market’s performance after all. Sure, it worked out some of the time, but not always. And I couldn’t count on it in isolation. (Of course, in today’s Internet era, I would have been able to build and test that model in a matter of a few hours before moving on.)
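The contrarian rule being tested can be sketched in a few lines. The 35% buy threshold comes from the survey reading mentioned above; the 65% sell threshold is my own symmetric assumption, added purely for illustration.

```python
# Hypothetical contrarian sentiment rule based on a bullishness survey.
# Thresholds are illustrative; only the 35% figure appears in the text.

def contrarian_signal(pct_bulls, buy_below=35, sell_above=65):
    """Buy when the crowd is bearish, sell when it is euphoric."""
    if pct_bulls <= buy_below:
        return "buy"
    if pct_bulls >= sell_above:
        return "sell"
    return "hold"

print(contrarian_signal(30))  # few bulls -> contrarian buy
print(contrarian_signal(70))  # widespread bullishness -> sell
```

As the chapter notes, three years of weekly data showed such a rule working only some of the time, a reminder that a clean signal is not the same as a reliable one.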

Still, by the late 1980s, life was getting easier for would-be quantitative analysts such as myself. Despite my failures, I remained sure that finding the right data and monitoring it actively would give me an edge in the markets. The problem was discovering what that data was and how to build the right model to use it effectively. Suddenly, Bloomberg opened its vast databases to clients, allowing users to download information onto their desktop computers to analyze as they wanted. Some major brokerage firms followed suit. Just as suddenly, I was able to access all kinds of fresh sets of data instantaneously. New research doors flew open wide. How did small stocks fare during periods of recession? All I had to do was sit down at my computer, clatter away at the keyboard for a few minutes and, bingo, I could tell you. Of course, I still made mistakes (there is a lot of truth in the old adage about models, “garbage in, garbage out”), but at least the process of trial and error was faster. It took days rather than weeks or months for me to discover there wasn’t any pattern linking changes in companies’ credit ratings and their stock prices.

Around 1990, I began to suspect my focus had been too narrow. I had been looking for ways to predict what was going to happen within individual segments of the market, or even individual securities. Every day, I showed up at work armed with the determination to beat the market by finding new ways to select individual bonds. But it was an endless battle, with very few victories on my part. Cheap bonds were usually cheap for a reason: Someone was dumping them onto the market because the company’s credit profile was deteriorating. Even as I continued to hunt for another fraction of a percentage point to add to my portfolio’s return each day, I grew to realize that a long-term investor’s edge lies outside the daily frenzy of the trading desk. Even as I yelled and screamed and tore my hair out in frustration in reaction to the day-to-day bond price movements, a bigger dynamic was taking shape in the wings that would ultimately prove much more important to portfolio returns. Interest rates were falling, slowly and steadily. From 13.75% in June 1984, the yield on the 10-year Treasury note fell to only 7.375% by March 1986. Anyone who had done nothing but buy that benchmark Treasury and hang on to it throughout the long period during which interest rates declined would have walked away at the end of that period with a return of about 9% every year, on average.

This realization astounded me. While I had been running around in circles, chasing one tiny trading opportunity after another, I had missed the opportunity to put a single outsize trade in place, sit back calmly, and pull in the returns. All that was missing, I realized, was confidence in my own judgment. I would have to feel sure that I had correctly identified these major marketwide trading opportunities as they arose. But in the wake of this epiphany, it seemed clear that I needed to focus my research on identifying data that, together or alone, could tell me more about the direction of the various markets rather than the individual securities within them. If I could capture that trend, I told myself, the rest of the strategy would fall into place. But what I needed wasn’t just ordinary data: I needed some superior form of data that could tell me what I needed to know about the bigger picture. I needed factors.

At the time, I didn’t know what those factors were. In fact, the concept of factors, as they now appear to me, wasn’t yet clear in my mind. Nor did I realize that I might need multiple factors to help me impose some kind of order on the investment landscape the way I wanted. It wasn’t until I happened to pick up a copy of Bloomberg Markets Magazine one fall afternoon that the idea struck me. That issue contained a story that explained the basics of quantitative investing, and how these analytical techniques could help an investor better grasp what was going on across markets. It encouraged readers to think about financial markets from several perspectives simultaneously—with valuation issues and the economy among the key factors. Archimedes had his eureka moment in the bathtub; mine was more prosaic but felt like the same kind of breakthrough. It should be possible, I reasoned (still clutching the magazine in my hands), to assemble different data points onto a single investment “dashboard” that I could monitor on an ongoing basis. I had already recognized the need to step back and analyze the entire market. This latest revelation made me accept the logic that the key to understanding the markets, and thus to investment outperformance, lay in the interplay of multiple different factors. But which ones, and how did they work together? The article didn’t give me many clues: It was an overview of sorts. But its author had given me a starting point. If I could figure out which factors were important and then determine which metrics would best shed light on those factors, the rest would fall into place.

My first goal was to identify a set of factors that might predict interest rate movements. I took bond yield data and compared this factor to others, including economic indicators such as GDP growth rates and inflation. To no avail; just as no single factor could help me, it seemed that no combination of factors confined to a single market could predict that market’s direction. Fortunately, at around the same time (in the early 1990s), I moved to BankBoston, where one of my jobs was co-managing the 1784 Asset Allocation Fund. Working with my new colleague and friend Ron Claussen, my job would be to oversee a $50 million “balanced” retail mutual fund that kept roughly 60% of its assets in stocks and the remaining 40% in bonds and fixed-income securities. Like many bank-sponsored mutual funds, this product had been developed by BankBoston’s Trust department for its wealthy clients. My job was to run the bond side of the portfolio, leaving Ron to focus on stocks. But it wasn’t long before I began to think more broadly about the challenge the two of us faced. As its name implied, the fund’s success hinged on the ability of its managers to decide on the right balance between stocks and bonds. Getting that right would mean that security selection—choosing which specific stocks or bonds to buy—would become easier. This was a way for me to test which factors enabled me to make that call, in what combinations, and with what level of consistency.

At first I settled on three sets of metrics, or factors. The first, which remains at the base of my investment model to this day, revolved around the market’s fundamental valuations: The earnings yield model compares anticipated earnings divided by the current price of the index (the earnings yield) to the prevailing corporate borrowing rate. Because I knew that the economy was vital to whether stocks or bonds would outperform at any given point in time, for my second factor I turned to economic data, and specifically to data that would track the way interest rates on short-term and long-term securities shifted over time. The bigger the difference between them, I figured, the more likely that stocks would outperform, because the yield curve is good at predicting the economy’s performance. Since investors expect higher interest rates to accompany economic growth, I could watch that yield differential widen as investors bet that interest rates would rise in coming years. Then I could draw conclusions about the implications of growth for both stocks and bonds.
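As a sketch, the earnings yield comparison reduces to a single spread. The function below is my own minimal rendering of the idea, not the exact model; in practice the inputs would be consensus forward earnings for the index and a corporate bond yield such as a broad investment-grade benchmark.

```python
# Minimal sketch of an earnings-yield-versus-borrowing-rate comparison.
# Inputs and the zero-spread cutoff are illustrative assumptions.

def earnings_yield_signal(expected_earnings, index_level, corp_bond_yield):
    """Favor stocks when the index's forward earnings yield exceeds the
    prevailing corporate borrowing rate, bonds otherwise."""
    earnings_yield = expected_earnings / index_level
    return "stocks" if earnings_yield > corp_bond_yield else "bonds"

# $70 of expected earnings on an index at 1,000 is a 7% earnings yield;
# against a 5% corporate bond yield, the spread favors stocks.
print(earnings_yield_signal(70.0, 1000.0, 0.05))  # -> stocks
```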

I also believed momentum would prove vital in any decision to emphasize either stocks or bonds. Financial markets tend to keep moving in the direction they are going until new information arrives, attitudes change, or some other event forces investors to reconsider their opinions and change course. So I began to monitor the S&P 500’s position relative to its 200-day moving average, turning bullish on stocks when the market traded above that line and bearish when it sank below it.
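That momentum rule is mechanical enough to express directly. The code below uses a synthetic price series and a short window purely for illustration; real use would feed in daily S&P 500 closes with a 200-day window.

```python
# Sketch of the moving-average momentum rule described above.
# The 5-day window and prices are illustrative, not real market data.

def momentum_signal(prices, window=200):
    """'bullish' when the latest close sits above its moving average."""
    if len(prices) < window:
        raise ValueError("need at least `window` closing prices")
    moving_avg = sum(prices[-window:]) / window
    return "bullish" if prices[-1] > moving_avg else "bearish"

uptrend = [100, 101, 102, 103, 110]  # last close above its 5-day average
print(momentum_signal(uptrend, window=5))  # -> bullish
```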

The concept of using quantitative metrics to manage our stock-bond balancing act took some getting used to. Ron was an old-line stock picker who relied on fundamental analysis, along with intuition. Then I arrived on the scene, an upstart from a bond-trading desk armed only with a mathematics degree and a lot of chutzpah. Whereas Ron’s job was to select stocks that would collectively outperform the S&P 500, mine was to buy bonds whose returns would trounce the Lehman Bond Index. But we needed to work together to decide on our asset-allocation target. Thankfully, Ron was open-minded and eager to explore any approach that might help us boost the fund’s returns. Together, we refined my metrics process and integrated it into our decision making. The results were encouraging. Within 2 years, our fund was one of the top-ranked performers in its category, according to both Morningstar and Lipper. At last, I seemed to have achieved my ambition to apply quantitative measurements to investment decision making and push the emotions governing security selection into the background. Factors, it seemed clear, were the answer.

Of course, the process had to be refined on an ongoing basis. There was more to it than valuation and momentum, and my dashboard needed reconfiguring to reflect that. Ultimately I would decide that as well as market valuation, momentum, and the economy, I would need to track market sentiment and liquidity. These became my five factors. Encouragingly, all five worked most dramatically when applied to the big picture. That was precisely where, as I explained with the decision tree in Chapter 2, “The World of Global Macro,” investment decisions can have the biggest impact on returns, so I believed that my victory was twofold.

There were hiccups along the way, as in 1994 when these handpicked factors sent me contradictory signals. I was sure that the stock market was going to outperform bonds that year. Valuations were reasonable, and the yield curve had an upward slope (meaning that longer-term issues carried higher interest rates and yields, a sure sign that bond investors were anticipating a period of economic growth ahead, during which stocks typically outperform). My momentum factor was also sending me the same message. But in March, the picture suddenly changed. Federal Reserve policy makers unexpectedly began raising short-term interest rates, boosting them from 3% to as high as 4.75%, sending stocks into a tailspin. Within 2 weeks, the S&P 500 had plunged 5%, leaving it below its 200-day moving average. Suddenly, our metrics were completely out of whack. What set should I trust? The valuation argument in favor of stocks was intact, but my momentum factor was now screaming “sell, sell, sell!”

I realized that my approach, of which I had been so proud, was incomplete. For starters, I was treating it as if it were a system: as if all I had to do was input data points related to my chosen factors and the system would spit out the (correct) buy or sell decision. I had to face the harsh reality that my dream was doomed. No electronic Wizard of Oz would emerge to tell me precisely and infallibly when to be in or out of the market. What I needed wasn’t a system but, rather, a process. I knew that metrics could help me make investment decisions if I combined my factors with my own judgment and experience. Metrics made sense, but not in isolation. So I set out to refine the framework and review what factors would work best within this new approach.

Clearly, valuation had to remain at the heart of that process. Without examining valuation, it isn’t possible to even try to decide whether a market is attractive on an absolute or relative basis. This factor is at the core, one way or another, of nearly every investment decision made today. Someone who decides a particular stock is cheap and a “good buy” is making that call based on valuation. So, too, are those whose quantitative model tells them it’s time to shift assets out of overvalued U.S. small-cap stocks and into undervalued emerging-market bonds. Of course, every factor, including valuation, can and should be second-guessed. For instance, a key valuation metric is data that aggregates the opinions of investment research analysts regarding corporate earnings. Do they expect stocks to post higher or lower earnings results over the next 4 quarters, and what is the magnitude of that forecast gain or loss? Obviously, a shrinking rate of growth or a shift from positive earnings growth to losses would be bearish signals. But all of that hinges on the reliability of those earnings forecasts.

At the beginning of 2008, a lot of attention was placed on what analysts were thinking about earnings as the real estate market’s woes spilled over into the stock market and economists debated the likelihood of a recession. By January, economic indicators were predicting a bleaker environment than most pundits had expected. But most of the analysts who followed individual companies or industries at that time seemed to be living in some kind of parallel universe, one where the economic sun shone brightly and earnings were expected to soar. Indeed, just weeks before the government issued a bleak fourth-quarter GDP report, analysts were calling, in aggregate, for a 15%-plus jump in corporate earnings! Something was clearly amiss; one of these groups had completely misread the economic environment.

When I looked at data from Ned Davis Research, which has studied the behavior of corporate earnings in a variety of economic contexts, managing this conflict became easier. That data suggests that during times of recession, when the rate of GDP growth declines, corporate earnings tend to fall at an annualized rate of 5%. But only experience can help you know how to interpret and second-guess the behavior of the industry analysts and find ways to adjust your own strategy in response. Most of the time the input data I use is fine, but I know that sometimes I need to circle back and check to see whether it’s reasonable. It’s the same process my 11-year-old daughter follows when she tries to solve a story problem about a boy walking around town and comes up with an answer of 510 miles. Odds are that answer is unreasonable and she’ll have to sharpen her pencil and take another crack at her calculations.

When it comes to the investment markets, sometimes interest rates, particularly Treasury yields, move for reasons that don’t have that much to do with economic trends. As recently as 2002, for instance, the yield on the 10-year Treasury note declined in spite of aggressive Fed action. By February 2005, a confused Alan Greenspan admitted that the persistent decline in 10-year Treasury yields, even as he and his fellow Federal Reserve policy makers kept raising short-term rates, was a “conundrum.” What Greenspan didn’t realize was that the real reason for the sharp slide in Treasury yields had nothing to do with the economy and everything to do with a buying spree by foreign central banks and institutions from countries such as China and India. All these trading partners were coping with their own glut of savings by using them to buy all the U.S. Treasury securities they could lay their hands on. As happens whenever demand for a commodity spikes unexpectedly, the price also spiked, sending yields lower. This enormous flow of funds resulted in artificially low interest rates and, as a result, pushed domestic lenders to take on more risk to maintain their lending targets. Incidentally, this flood of cheap capital helped fuel the housing bubble that, in turn, was largely responsible for the market meltdown of 2008.

Obviously, the factors that collectively represent an investment process—and the elements they contain—can never be frozen in time. They are inherently dynamic, as I realized in 1994 when the sudden jump in interest rates made me understand that I needed to include liquidity as a factor in my investment model. The spike in short-term lending rates that I have already described made capital instantly more costly and scarce, turning liquidity into a crucial factor. By ratcheting up interest rates, the Fed was able to transform a market with ample liquidity (and positive stock market fundamentals) into one where a shortage of investment capital and a reluctance to put it to work in any risky investment translated into a bear market for stocks. Clearly, I needed to find metrics that would help me track this important factor.

It wasn’t until the late 1990s that psychology emerged as my fifth factor. Until then, I had viewed market psychology as something that provided headlines for journalists but not much useful data for professional investors. It was sometimes intriguing but rarely relevant. But after I started working for the wealthy clients of a private bank, my views began to change. For these wealthy families, the money I managed was their money, a symbol of their success. For the first time, I found myself understanding in a real sense just how an individual investor’s emotions can wreak havoc on rational decision making.

It took more than a decade, but by the late 1990s I was close to refining my personal version of the Holy Grail: a quantitatively based investment process. It was clear to me that five factors need to be considered by anyone hoping to develop a process that is both robust and successful. Collectively, valuation, the economy, liquidity, investor psychology, and momentum explain and shape a good deal of the major movements within financial markets. The weight of any single factor might vary from time to time, but investors who make their market selections based on all five factors are likely to end up with a stronger portfolio, one characterized by higher returns and lower risk levels.

That isn’t to say that understanding these factors, monitoring the data sets associated with each, and analyzing the results is always simple and straightforward. On the plus side, obtaining the data is a simple matter, especially compared to two decades ago when I started trying to do this. The Internet has decisively leveled the playing field, making all the data investors may need available at the click of a mouse (as readily available to the part-time investor monitoring his own retirement funds as it is to pension fund trustees or hedge fund managers who devote all their waking hours to the markets). Implementing those decisions has also become much simpler. By emphasizing factors that point investors toward major turning points in financial markets, the process steers you directly toward the kind of simple yet high-impact decisions that all of us should favor in our portfolios. Listen to what the five factors tell you about the markets and you’ll be able to step back and wait patiently for that handful of trading opportunities that offer true upside potential to appear.

Note the use of the word wait. Building and overseeing your own investment process based on the five factors isn’t going to be nearly as exciting as stock picking can be. And it requires patience. You will need to look at or gather data at regular intervals (every month, at least), and then evaluate it. But those evaluations should be made with a 12- to 18-month investment horizon. Sure, some investments pay off much more quickly. We all read about them in the newspapers. But the reason we read about them is because of their rarity; newspapers don’t celebrate routine events. (That’s why you’ll never see a headline that reads “12,327 planes landed safely yesterday.”) Alas, quick returns usually go hand in hand with an outsize degree of risk and inevitably with high volatility. So an overnight investment triumph isn’t likely to help you sleep more peacefully.

The key to any global macro approach is the quest for markets that are either relatively cheap or relatively expensive. That is what the five factors that I have identified and that I explain to you in the coming chapters are designed to help you find. However, valuation changes for whole asset classes or asset categories, such as bonds or large-cap stocks, don’t occur as rapidly as those for individual stocks. Just because your research steers you toward a particularly cheap asset class, there is no guarantee that other investors will follow your lead and start hitting the “buy” button themselves. Instead of expecting instant gratification, you should prepare for frustration as weeks and months pass before the market gradually comes around to the same point of view. All things being equal, expensive markets tend to become pricier, while the relative bargains you have discovered become even bigger bargains as their valuations slump further. It’s the classic conundrum that all value and contrarian investors must understand and accept.

To pursue a factor-based global macro strategy successfully, you must learn to be disciplined. That means recognizing that you might never buy at the lowest possible price and learning to be content with the fact that you picked up a relative bargain. Remind yourself as often as you need to that it is over periods of 12 to 18 months or longer that cheap markets tend to outperform their more expensive counterparts. Need proof? Just look back to the years from 2002 to 2006, a period during which the Russell 2000 Index outpaced the S&P 500. Of course, some of that move was rational and supported by the fundamentals. Still, by late 2004, the price/earnings ratio of the smaller stocks in the Russell benchmark exceeded that of the large-cap stocks in the S&P 500. That prompted strategists to declare (prematurely) that the small-stock rally was about to end. Instead, smaller stocks continued to beat their blue-chip counterparts until April 2007. Investors who reacted to the signal sent by one set of metrics—the valuation factor—would have seen their patience rewarded eventually. But those investors would also have had to deal with a lot more angst than someone who had waited until more factors were in place.

Heading into the discussion of the five factors, I can promise you that you will not experience all the excitement and drama you may have seen watching CNBC. Hopefully, the market meltdown of the autumn of 2008 has left you with an aversion to “thrills” of that kind and has instead instilled a willingness to be patient and tolerate short-term frustration in the pursuit of long-term gains. Major market shifts don’t happen overnight, and they don’t happen every 3 months or so. A factor-based approach can be about as exciting and drama-filled as watching paint dry. When it comes to investing, however, as recent events have shown, excitement can also be perilous. Incorporating a global macro process will leave you with a healthier portfolio.

Now it’s time to investigate the inner workings of each of these factors. Over the next five chapters, I’ll help you understand how to think about each of them and how to tackle your most important job as an investor: discerning what each can say about the markets and understanding how to look at the data sets that collectively make up each factor.
