13

Modelling Qualitative Data

In Chapter 11 we described how a fund's cash flows can be scaled to achieve a projected lifetime and a projected TVPI, but we have not discussed how projected TVPIs reflecting the fund's growth prospects could be determined. For investments in a new fund as a blind pool and in young funds with too little meaningful history, we need to use qualitative inputs to model such multiples. Therefore, we have to deal with the question of how we can use such data in a consistent manner to put funds into different classes according to their growth prospects.

Such classifications could, for example, take the form of what is commonly called a “fund rating”. Another question relevant for this discussion – and to which we will turn in more detail later – is how to translate such a classification into quantification, again to determine ranges for growth rates as inputs for the cash flow models.

13.1 QUANTITATIVE VS. QUALITATIVE APPROACHES

Quantitative approaches are concerned with the statistical analysis of data that are collected from empirical observations. In order to derive meaningful conclusions from the statistical analysis, the data sample must be sufficiently large and representative (i.e., unbiased). Unfortunately, such samples often do not exist as far as alternative assets are concerned. As a result, risk managers face the challenge of working with imperfect data, which severely limits the application of quantitative techniques to the ex-ante assessment of limited partnership funds. Instead, risk management in the area of illiquid assets frequently has to rely on the interpretation of sporadic, incomplete and often ambiguous information.

13.1.1 Relevance of qualitative approaches

In contrast to quantitative approaches, qualitative assessments focus on the classification of information, which is usually anecdotal and hence subject to interpretation as data samples are small and unrepresentative. Insights are sought from loosely structured qualitative information rather than quantitative data that allow the application of econometric techniques. However, working with small data samples and information that is difficult to measure – such as reputation, expertise or management style – inevitably introduces an element of subjectivity. Many risk management practitioners thus view qualitative analysis with suspicion: lack of repeatability and structure, inconsistencies in the analysis as well as problems in translating descriptive information into quantitative measures contribute to the perception that qualitative analysis is an inferior approach to be used only if data problems are insurmountable.

Another reason why many investment professionals feel uncomfortable with qualitative approaches is likely to be rooted in psychological factors: the principal weaknesses of such approaches may expose the decision maker to a higher degree of responsibility, whereas purely quantitative – and thus allegedly “objective” – models are, due to their “black box” nature, viewed as less subject to manipulation. Thus, their results tend to be more likely to be accepted by outside stakeholders, such as auditors and regulators. As Porter (1992) argues, “quantification appears as a strategy for overcoming distance and distrust […] We need to understand quantification as a response to a set of political problems.”

Nevertheless, regulators increasingly recognize that qualitative analysis can be of significant value as it may generate a more in-depth understanding about a particular issue (ESMA, 2011). Qualitative analysis can provide a competitive edge, as it is concerned with understanding the underlying dynamics. Arguably, therefore, it may be more forward looking than quantitative analysis, where information about key factors, such as management, may not be reflected in the data or may be available only with a substantial delay.

In order for qualitative analysis to provide meaningful input in the decision-making process, it has to be properly structured, for example, through a formalized scoring system. In this sense, qualitative analysis is principally no different from quantitative approaches, which are also subject to interpretation of the data and hence not entirely free from a potential decision bias. In this context, an important step in avoiding a decision bias in using qualitative analysis could lie in establishing an expert panel that is responsible for approving the evaluation and scoring of activities. Consistency and repeatability can be ensured through keeping matters as simple as possible, proper documentation of the methodologies used, training of analysts and through regular reviews.

13.1.2 Determining classifications

Classifications may take the form of ratings. To determine such ratings, two approaches are generally conceivable. First, the various classes and the delineation between them are clearly defined and described in as much detail as possible, but the methodology to arrive at the classification is left to analysts who are free to select the tools most appropriate for the purpose. Such an approach may be advantageous in situations where it is difficult to model how individual characteristics interact to produce outcomes and where combinations of factors determine the ultimate classification. There is certainly a high element of subjectivity and lack of transparency and consistency, but these problems could be mitigated, for example, through a structured review process.

Alternatively, the scoring methodology, and how the various rankings are to be aggregated to arrive at the classification, could be a formalized algorithm. The advantage of this approach is that the classification process is more transparent and repeatable. However, as mentioned above, aggregating various individual scores into one classification may be problematic in situations where combinations of factors determine the ultimate classification.

13.2 FUND RATING/GRADING

It has repeatedly been claimed that investing through limited partnership structures lends itself to techniques that are akin to assessing credit risk. For instance, the Basel Committee on Banking Supervision (BIS, 2001) has argued that it

“is a sound practice to establish a system of internal risk ratings for equity investments […]. For example, rating factors for investments in private equity funds could include an assessment of the fund's diversification, management experience, liquidity, and actual and expected performance. Rating systems should be used for assessments of both new investment opportunities and existing portfolio investments. The quantification of such risk ratings will vary based on the institution's needs […]. The policies, procedures and results of such quantitative efforts should be fully documented and periodically validated.”

Similarly, the International Swaps and Derivatives Association (ISDA, 2001) has taken the view that

“… some traded assets with little or low liquidity (e.g. private equity) may require risk rather [sic] analysis closer to that which accompanies assessment of bankruptcy or default risk rather than a market risk paradigm.”

Importantly, the traditional approach to assess credit risk is a rating system. The rating of borrowers is a widespread practice in capital markets. It is meant to summarize the quality of a debtor and, in particular, to inform the market about repayment prospects. All credit rating approaches are based on a combination of quantitative and qualitative components.1 The more limited the quantitative data, the more the rating will have to depend on the qualitative assessments.

13.2.1 Academic work on fund rating

Unfortunately, the views expressed by institutions like the Basel Committee and the ISDA have failed to encourage academic research in this field. To the extent that work on rating systems in private equity and similar asset classes has been done, it has been led by practitioners and commercial entities. Examples include Troche (2003) on private equity, Giannotti and Mattarocci (2009) on real estate and Ruso (2008), who discusses a rating system comprising a governance and risk rating for closed-end real estate, ship and private equity funds. In many ways, the proposed techniques are similar in the sense that the risk rating consists of several criteria for which either negative or positive points are awarded, depending on whether they increase or decrease the risk level. However, none of these studies link rating classes to quantitative measures.

13.2.2 Techniques

Studies on investment management and qualitative methods often use the terms “scoring”, “ranking” and “rating” interchangeably, which can lead to confusion. For our purpose, we differentiate between qualitative and quantitative methods to determine a ranking, the scores derived from a ranking and the various rankings or scores which are aggregated to come to a classification, such as a rating or grading (Meyer and Mathonet, 2005).

Ranking

Rankings from “best” to “worst” are usually designed to help users make decisions. Rankings, sometimes called “league tables”, are based on various measures (Bromley, 2002). Typically, several relevant dimensions are ranked independently: for example, in university league tables “research assessment”, “teaching assessment” and “staff/student ratio” are often ranked separately, as users may be interested in the individual factors that drive the overall rankings or have a particular interest in a single dimension. While rankings are generally quite straightforward as long as they entail only one dimension, it is far more challenging to aggregate rankings of different dimensions as a basis for making a decision between two or several alternative investments. This requires the rankings of a set of items or attributes to be translated into numerical scores.
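
As a purely illustrative sketch – all fund names, dimensions and rankings below are hypothetical – independent per-dimension rankings can be translated into numerical scores with a simple Borda-style count and then summed into one aggregate ranking:

```python
# A minimal sketch (hypothetical funds, dimensions and rankings) of converting
# independent per-dimension rankings into numerical scores via a simple
# Borda-style count and then aggregating them into one overall ranking.

funds = ["Fund A", "Fund B", "Fund C", "Fund D"]

# Each dimension lists the funds from "best" to "worst" (illustrative only).
rankings = {
    "track_record":   ["Fund B", "Fund A", "Fund D", "Fund C"],
    "team_stability": ["Fund A", "Fund C", "Fund B", "Fund D"],
    "strategy_focus": ["Fund A", "Fund B", "Fund C", "Fund D"],
}

def borda_scores(ranking):
    """Best-ranked item gets len(ranking) - 1 points, worst gets 0."""
    n = len(ranking)
    return {fund: n - 1 - position for position, fund in enumerate(ranking)}

# Sum the per-dimension scores into one aggregate score per fund.
aggregate = {fund: 0 for fund in funds}
for ranking in rankings.values():
    for fund, score in borda_scores(ranking).items():
        aggregate[fund] += score

for fund, score in sorted(aggregate.items(), key=lambda kv: -kv[1]):
    print(f"{fund}: {score}")
```

Note that equal weighting of the dimensions is itself a modelling choice; the aggregation could just as well weight the individual Borda scores, which leads to the scoring templates discussed next.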

Scoring

A scoring aims to evaluate a set of criteria that are relevant for the measurement and to place them within a meaningful categorization of predefined classes. When designing a scoring template, important questions relate to the number of dimensions that should be reflected in the evaluation and the weighting of these dimensions. For example, for an infrastructure fund it might be assumed that the ability to take advantage of cheap debt financing is more important than being able to provide operational support to the portfolio company. By contrast, while venture capital investments typically involve very little, if any, debt, operational factors are critical. Given the relative importance of various dimensions, the question arises as to how one may assign specific weights.
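
A minimal sketch of such a scoring template is shown below; the dimensions and weights are made up purely to illustrate the infrastructure versus venture capital contrast mentioned above and do not reflect any published methodology.

```python
# A minimal sketch (hypothetical dimensions and weights) of a scoring template
# in which each dimension receives a score on a predefined scale and the
# dimension weights differ by fund type.

# Illustrative dimension weights; not taken from any published methodology.
WEIGHTS = {
    "infrastructure": {"debt_financing": 0.5, "operational_support": 0.2, "team": 0.3},
    "venture":        {"debt_financing": 0.0, "operational_support": 0.5, "team": 0.5},
}

def weighted_score(fund_type, scores):
    """Aggregate per-dimension scores (e.g. on a 1-5 scale) into one weighted score."""
    weights = WEIGHTS[fund_type]
    return sum(weights[dim] * scores[dim] for dim in weights)

# Example: the same raw scores yield different aggregates depending on fund type.
raw = {"debt_financing": 4, "operational_support": 2, "team": 5}
print(weighted_score("infrastructure", raw))  # 3.9
print(weighted_score("venture", raw))         # 3.5
```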

13.2.3 Practical considerations

There are a number of limitations and trade-offs that need to be taken into consideration. First, while practitioners generally try to develop a single collective ranking from a set of rankings of different criteria, the aim of coming up with a “perfect ranking” is illusory. In fact, such a ranking cannot exist, a classical paradox in social choice theory as shown by Condorcet and Arrow.2

A good scoring method will result in classes where the intra-class similarity is high while the inter-class similarity is low. The funds within a class should be somehow “similar” to one another, so that the population of funds within the class can be treated collectively as one group. But the more classes we look at, the higher the probability that the proposal is assigned to the “wrong” class, i.e. that there is another class that fits its characteristics better. Consequently, the lower the number of classes, the more robust the scoring method will be. Weighting the various dimensions will often be difficult and ambiguous. In such situations, a pragmatic and robust approach may lie in assigning equal weights to each dimension.

A simple approach to guide decisions is “tallying”. In this approach, analysts look for cues that might help to make a choice between two or several options, with the preferred option being determined by the greatest excess of positive over negative cues without bothering to try to rate them in order of importance (Fisher, 2009). Tallying looks oversimplistic as it takes no account of the relative importance of different factors, but this simple method was found to do consistently better in predicting outcomes than experts' intuition (Dawes, 1979). Statistical weighting of the different factors is a better fit for known data. However, risk managers are dealing with situations of high uncertainty where it is not known which weights to give to these factors. For extrapolating data into the future – what risk managers should primarily be concerned with – simple tallying works just as well and sometimes even better. Rather than operating with “absolute truths”, in essence a risk manager – like a judge in a legal case – can only weight evidence pro and con. Terry Smith followed a comparable approach in his 1992 analysis of accounting techniques (Smith, 1996). He introduced “blob” scores for companies (with a “blob” representing the use of creative accounting techniques). For the companies he analysed, this “blob” scoring has proved to be a remarkably robust methodology for predicting financial distress.
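
A minimal sketch of tallying might look as follows; the funds and cues are entirely hypothetical and chosen only for illustration.

```python
# A minimal sketch of "tallying": each observed cue counts as +1 or -1 with no
# weighting, and the option with the greatest net excess of positive over
# negative cues is preferred. The cues and funds below are purely hypothetical.

proposals = {
    "Fund A": {"experienced_team": True, "stable_strategy": True,
               "key_person_departure": False, "excessive_size_increase": True},
    "Fund B": {"experienced_team": True, "stable_strategy": False,
               "key_person_departure": True, "excessive_size_increase": False},
}

# Cues whose presence counts against a proposal.
NEGATIVE_CUES = {"key_person_departure", "excessive_size_increase"}

def tally(cues):
    """Count +1 for each positive cue present and -1 for each negative cue present."""
    return sum((-1 if cue in NEGATIVE_CUES else 1)
               for cue, present in cues.items() if present)

for name, cues in proposals.items():
    print(name, tally(cues))   # Fund A: 1, Fund B: 0
```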

It also does not make sense to look at too many dimensions and be overly sophisticated with the scoring. The more dimensions that have to be taken into account for the scoring, the more pronounced the reversion to the mean will be: statistically speaking, an extreme event is likely to be followed by a less extreme event. The more dimensions are taken into consideration, the closer the aggregations are to the average.
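
The compression effect can be illustrated with a small simulation (hypothetical, equal-weighted scores drawn independently for each dimension); the figures are illustrative only and not based on any fund data.

```python
# A small illustrative simulation of the point above: the more independent
# dimensions are averaged into an aggregate score, the more the aggregates
# cluster around the overall mean, compressing differences between funds.
import random
import statistics

random.seed(0)

def aggregate_spread(n_dimensions, n_funds=1000):
    """Standard deviation of equal-weighted aggregate scores across hypothetical funds."""
    aggregates = [statistics.mean(random.randint(1, 5) for _ in range(n_dimensions))
                  for _ in range(n_funds)]
    return statistics.stdev(aggregates)

for dims in (1, 5, 20, 50):
    print(dims, round(aggregate_spread(dims), 2))
# The spread shrinks roughly with 1/sqrt(number of dimensions).
```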

13.3 APPROACHES TO FUND RATINGS

To review the different approaches to fund ratings that are currently in use, we differentiate between (i) assessments that are conducted by independent external parties and (ii) techniques for fund evaluation that are employed internally by investors. The term “rating” is typically used in the context of credit risk models and is associated with default probabilities of loans or bonds. While ratings are sometimes mentioned in the context of limited partnership funds, funds – as we argued in Chapter 8 – do not “default” in the sense of a credit default, which is generally defined as an event where the debtor misses a regular contractually agreed repayment of interest or principal. Later in this chapter, we shall discuss a classification of limited partnership funds that we call a “grading”, i.e. an assessment based on comparisons against a peer group population.

13.3.1 Rating by external agencies

As far as mutual funds are concerned, the term “rating” is the norm, although such ratings are fundamentally different from credit ratings in terms of their objectives and underlying methodologies. Ratings of mutual funds are conducted by independent agencies like Feri (Financial and Economic Research International), Lipper, Morningstar or S&P. According to the Feri Trust Funds Guide 2002, “fund ‘rating’ is a standardised valuation with a forward-looking prognosis”. For a fund to be rated by Morningstar, for example, it needs to have a minimum history of 5 years and at least 20 comparable funds. S&P, by contrast, requires a fund history of at least 3 years and a sufficient number of comparable funds. In contrast to mutual funds, however, limited partnership funds are typically not covered by external rating agencies. The concept of a rating assigned by an independent agency is difficult to apply to private equity and real assets.

Fiduciary rating of firms

Fiduciary rating measures the risk borne by investors who entrust their money to third-party organizations. Fiduciary risk is the risk that a firm breaches the investor's trust by failing to perform its contractual obligations. It reflects weaknesses, deficiencies and failures in the systems, processes and organization of an investment firm. According to RCP & Partners, a fiduciary rating is “a methodology for assessing, rating and monitoring asset management organisations through application of standardised process”.

The rating evaluates the stability of a firm and its ability to sustain relative performance over time and takes criteria such as the quality of the investment process, the financial strength, the quality of risk management, the avoidance or mitigation of conflicts of interest, the quality of controlling, customer service or management strategy into consideration. RCP & Partners assesses management companies by reviewing two families of risk:3

  • Structural risk, which relates to the “hardware” of a firm, covering overall resource allocation, risk control, compliance, administration and back-office, middle office, sales and marketing.
  • Performance risk, which relates to the firm's “software” and depends on the whole investment management process, from research to trade execution, including the firm's own investment track record.

A fiduciary rating is based on the assumption that a necessary condition for good investment performance lies in the appropriate organization of the investment process. The advantage of this approach is that it does not require a long investment history and therefore might help overcome a main obstacle to investing in private equity and real assets. However, there is no direct link between fiduciary rating and future performance, and a good fiduciary rating is not a sufficient condition for good investment returns. RCP & Partners use the same scale as Standard & Poor's, which could cause confusion, as RCP & Partners' scale is not based on the same investment risk model.

There is an additional challenge. Fiduciary rating relies on voluntary participation, a precondition that might be difficult to meet in the alternative investing industry. Ruso (2008), for example, bases his governance rating on information disclosed in issuance prospectuses: 12 main criteria are used to evaluate different fund features that determine the quality of a fund's governance structure. However, high-quality firms may not even be interested in providing more granular information, as they have an established investor base for the funds they raise.

Importantly, a fiduciary rating should not be confused with due diligence. Instead, it should be regarded as a complement, possibly suited for the monitoring phase and as a standard input for a fund's qualitative assessment. Such “ratings” signal the quality of a fund (typically focusing on the quality of the investment team or organization) but are not designed to predict performance.

Rating of firms

Instead of rating individual funds, Gottschalg focuses on what he labels the fitness of private equity firms.4 These fitness rankings have been published since 2009 as a joint product by HEC (the French business school) and Dow Jones. The HEC-Dow Jones Private Equity Fitness Ranking™ aims to list the best private equity firms “… in terms of their competitive fitness, specifically, their expected ability to yield a superior performance over the next 5–10 years”.5 More specifically, the rankings are designed to evaluate each firm's competitive positioning based on 10 different criteria, deriving an overall future competitiveness score based on the historic link between firm performance and each of the criteria. The criteria are chosen out of more than 30 criteria because collectively they are found to capture some of the most important value drivers. The calibration of the model is based on the proprietary HEC buyout database, which contains information on the investment characteristics and performance of a large sample of private equity transactions over the past 30 years.

The criteria thus selected include the scale of current activities, the ability to take advantage of cheap debt financing, the ability to time the stock market to benefit from market trends over the holding period, the ability to time the stock market to exit at high exit valuations, the level of industry focus, the change in the level of industry focus, the quality of deal flow (defined as the ability to continue to invest during periods when all other private equity firms are decreasing their investment pace), the flexibility to take advantage of investment opportunities of different sizes, the level of strategic uniqueness and differentiation and recent changes in the scale of activity.

While the fitness rankings are supposed to be forward-looking, they are thought to complement the HEC-Dow Jones Private Equity Performance Ranking™, which aims to rank the top GPs in terms of their past performance.

The actual rankings are based on data provided by Thomson Reuters VentureXpert, a large private equity database. For the 2012 rankings, this dataset includes a total of 33,025 investments of USD 631 billion by 2,544 funds and 1,295 firms into 15,690 distinct portfolio companies. From this large universe, private equity firms are selected which had (i) completed at least 50 transactions, (ii) raised at least 4 funds, (iii) invested at least USD 1 billion and (iv) been active for at least 10 years. These filters reduced the sample to 238 firms with over 1,000 funds that had raised almost USD 1 trillion and made investments in over 20,000 portfolio companies. Missing variables reduced the sample further to 217 private equity firms.

While the statistical tests suggest a high explanatory power of the model, the methodology is inevitably subject to two important limitations. As Gottschalg himself points out, the ranking of competitive fitness is based on the historic relationship between the criteria and subsequent performance. In situations where these criteria or their relationships change, the model's accuracy decreases. Furthermore, the analysis is based on data that are observable but do not reflect factors like the departure of key personnel or future changes in strategy that are not yet reflected in recent investment decisions but may influence the future performance of the firm. The ranking's value as a decision support tool thus rests heavily on the assumption of performance persistence, an assumption that is far from perfect despite its wide acceptance among practitioners.

Investment rating of funds

Introduced in 2000, Feri's investment rating of closed-end funds differentiates between the following asset classes: real estate, ships, aircraft, new energy, private equity, infrastructure, multi-assets.6 The objective of the rating is similar to traditional ratings of mutual funds, i.e. to help users select individual funds following a transparent, standardized and effective evaluation method, based on a defined list of criteria which include qualitative and quantitative factors: “The outcomes of fund managers' due diligence are benchmarked. The quantitative aspects of a fund are then compensated by qualitative aspects and both result in a fund rating from A to E, where A is the best rating.” Feri uses a scoring model to combine the various criteria into a single rating. The rating of funds is conducted at the request of Feri's clients, who require a systematic and independent analysis to ensure that their investment decisions are sound. The ratings are not made available to the general public. As far as real estate funds are concerned, a fund's rating is based on the evaluation of its structure (e.g., contract analyses, guarantees, financing and earnings, exit options), the quality of management and the property or properties under management. By contrast, Feri's private equity fund selection criteria are management7 (60%), economics8 (32.5%) and customer service9 (7.5%) (Söhnholz, 2002).

There are other advisers who have developed a rating process for limited partnership funds. One example is Mackewicz & Partner and its successor firm Fleischhauer, Hoyer & Partner (FHP), who regard the rating of private equity and VC funds as broadly comparable to traditional funds rating.10 The objective of their approach is to provide reliable decision-making support for potential investors in funds and in funds-of-funds. Its focus is on evaluating the probability of losses and gains for the capital invested. FHP uses a scoring model which is based heavily on qualitative criteria for five main assessment dimensions.11 This results in an “FHP-Rating” for the fund of either “very bad” (weighted aggregate score 0–39), “bad” (weighted aggregate score 40–59), “good” (weighted aggregate score 60–79), “very good” (weighted aggregate score 80–89) or “outstanding” (weighted aggregate score 90–100).
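
As an illustration only, the weighted dimensions quoted in footnote 11 could be combined and mapped to the FHP rating bands cited above roughly as sketched below; the rescaling of 1–5 dimension scores onto the 0–100 aggregate scale is our own assumption, not a description of FHP's actual algorithm.

```python
# A minimal sketch of mapping a weighted aggregate score to the FHP rating bands
# quoted above. Dimension weights follow footnote 11; rescaling the 1-5 dimension
# scores to a 0-100 aggregate is an assumption for illustration only.

BANDS = [(90, "outstanding"), (80, "very good"), (60, "good"),
         (40, "bad"), (0, "very bad")]

WEIGHTS = {"management_team": 0.30, "track_record": 0.30,
           "fund_structure_terms": 0.20, "investment_strategy": 0.10,
           "investment_process": 0.10}

def fhp_style_rating(scores):
    """scores: dict of dimension -> score on a 1 (very poor) to 5 (very good) scale."""
    weighted = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)   # between 1 and 5
    aggregate = (weighted - 1) / 4 * 100                      # rescaled to 0-100
    for threshold, label in BANDS:
        if aggregate >= threshold:
            return round(aggregate), label

print(fhp_style_rating({"management_team": 4, "track_record": 5,
                        "fund_structure_terms": 3, "investment_strategy": 4,
                        "investment_process": 3}))   # (75, 'good')
```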

Limitations of fund assessments by external agencies

The following issues may render the rating of limited partnership funds by external agencies problematic.

  • If an external agency cannot base its opinion on a sufficient number of objective criteria, it will be difficult to defend an assigned “rating”. Alternative investments are an appraised and speculative asset class. Therefore, the assessment of a fund will predominantly be based on qualitative factors which could make the rating highly subjective.
  • A rating usually does not imply any recommendation by an agency. As an external rating for a limited partnership fund would only be relevant pre-investment and there is no efficient risk-adjusted pricing, it implicitly forms an investment recommendation. Post-investment, the investor has access to far better in-depth information on the fund than any rating agency.
  • There are too few potential investors as customers to make an external rating service viable.12 As this is an unregulated industry, only qualified and experienced investors can become LPs, and they cannot invest without proper due diligence.
  • Committing capital to a fund is possible only during the fundraising period or through a secondary transaction. This is fundamentally different from mutual funds, where investors can continuously adjust their portfolios in response to an external rating.

13.3.2 Internal fund assessment approaches

Some private equity investment programmes use grading-like assessments to manage their portfolios. CalPERS, for instance, uses the categories listed in Table 13.1.

Table 13.1 CalPERS fund performance assessment

Exceeds expectations
As expected
Below expectations
Below expectations/with concern
Too early to tell
Source: CalPERS.

“Too early to tell” does not mean that CalPERS has no opinion on a fund before they invest. The underlying assumption is that the investment is made in a fund that meets the declared return expectations. Return expectations vary over the cycle and across asset classes. According to Braunschweig (2001), CalPERS based their commitment decisions for seed capital investments on an expected return of at least 30% at the beginning of the century. While the target return for early and late-stage VC was set at 25% at that time, buyout and mezzanine investments were subject to an expected return of 20% and 15%, respectively.

Table 13.2 provides an alternative example for an assessment framework developed for VC funds.

Table 13.2 Internal fund performance assessment

Assessment scale Description
A Clear evidence of X%+ rate of return over the life of the fund.
B An immature fund managed by a strong VC team or a fund set to generate a return in the low to high teens range.
C An underlying portfolio which may generate a return in the high single figure to low teen range, or an unproven or less talented management team.
D A fund set to produce a single figure return or major concerns about the management team.
E A fund expected to produce a negative return or minimal positive return.

Both examples in Tables 13.1 and 13.2 define static benchmarks for grades that, to some degree, take a specific market environment into consideration.13 Raschle and Jaeggi (2004) refer to another approach based on probabilities and quartiling; see Table 13.3.

Table 13.3 Adveq fund performance assessment

Manager quality Quality definition
Outstanding 50% probability of reaching top quartile
Solid 35% probability of reaching top quartile
Average 25% probability of reaching top quartile
Poor Less than 20% probability of reaching top quartile
Unproven Too young
Source: Adveq analysis 2002.

Such fund assessments are mainly used for internal purposes and are rarely published. Based on discussions with industry practitioners, it appears that fund “grading” approaches – whether published or internal – are often “unpopular”.14 One reason might be that a low grade would typically be interpreted as a failure of the initial investment selection method. The probability of making it into the first quartile is also time-dependent. A mature top-performing fund will most likely make it to the first quartile, while in its early years the same fund's probability of reaching this objective will certainly be lower. Consequently, in the Adveq scale a fund would go through different stages, although the fund's quality is essentially unchanged. That could make comparisons over several vintage years difficult.

13.4 USE OF RATING/GRADING AS INPUT FOR MODELS

The choice of a rating/grading system and the techniques to be applied depends critically on the decision maker's main objective. Is the rating/grading system supposed to support investment decisions as part of the due diligence process? Alternatively, is its main purpose seen in the area of portfolio management and risk budgeting? Or is such a system expected to support the monitoring of investment decisions? For example, to the extent that ratings/grades are primarily employed as a tool in investment decision making, the main interest is in picking superior investment proposals. Therefore, the ratings described before could be interpreted mainly as indicators for success.

It is well understood that there can be no excess return without incurring risks. In fact, there is often an expectation that this works in reverse, too: that in return for taking risk an investor will automatically be rewarded. This, however, requires a risk-adjusted pricing mechanism that establishes a link between the risk taken and the premium sought by the investor. Alternatively, as we will discuss below, investors can seek risk by exploring for opportunities that have so far remained undetected and unexploited by other market participants.

13.4.1 Assessing downside risk

The typical limited partnership structure does not allow for risk-adjusted pricing. All primary positions are bought at par (i.e., without premium or discount) and there is no predefined coupon payment but only an uncertain performance and a predefined cost structure. Only in the case of secondary transactions is it possible to translate a fund's underperformance into respective discounts (Mathonet and Meyer, 2007). As a consequence, the elimination of critical issues is usually tackled during the due diligence pre-investment phase. If critical issues remain (often called “deal-breakers”), an investment proposal is typically rejected. However, during the lifetime of a fund things can change, and a fund that was given a high rating may encounter unexpected problems. While ideally the rating should anticipate possible issues that may arise over the life of the fund, it is important to recall (Chapter 8) that default models for limited partnership funds are problematic as they ignore the upside potential compensating investors for the downside risk they accept.

13.4.2 Assessing upside potential

While a structured approach should result in a higher degree of consistency, there remain doubts as to whether rating techniques per se can actually lead to additional insights that allow superior investment decisions. A single rating or grading can be derived from a profile of different scores, but not vice versa. For investment decisions a profile offers more insights than an assignment to a single class. In any case, the value of any methodology for investment decision making rests on its ability to predict future outcomes compared to peers – a proposition that has yet to be proved by robust empirical evidence.

13.4.3 Is success repeatable?

Rating systems are explicitly or implicitly predicated on the assumption that returns are persistent. In fact, many practitioners subscribe to this view, which takes into account that success in private equity requires a special skill set, with fund managers typically going through a learning process. Kaplan and Schoar (2005) find support for the persistence hypothesis in private equity and Hendershott (2007) argues that there is at least an 80% probability of a fund being top-quartile, if its three predecessors were top-quartile as well. However, given the substantial data issues researchers are confronted with, more research is required to draw meaningful conclusions from the point of view of making investment decisions. Specifically, the following points should be taken into account.

  • Rouvinez (2006) points to the fact that more than a quarter of the funds in the market are being labelled “top quartile” and that there is about a 40% probability that managers with lower quartile funds do not come back to the market.15 As a consequence, investors tend to meet only top-quartile managers. The combination of a high attrition rate and repeatedly claimed upper-quartile performance seems to be a signature characteristic of the private equity asset class. This makes it difficult for investors to use top-quartile performance as an effective screening criterion.
  • Peer groups cannot be compared over different vintage years. Private equity firms raise funds at irregular intervals, and therefore the firms whose funds comprised the previous vintage year peer group may not all be back in the market looking for investors at the same time. With peer group compositions continuously changing, the persistence claim is difficult to verify.
  • Studies on return persistence are typically based on data for mature funds. At a minimum, funds need to be at least 6 years old to be included in a sample, as the performance of younger funds is still subject to considerable variation.16 However, a typical fundraising cycle is 4 years and at the peak of the last private equity boom it was not uncommon for funds to return to market after less than 3 years. In fact, in the dataset for venture capital funds used by Conner (2005), firms raised a successor fund on average after just 2.9 years. Nearly half of the firms in his sample raised a successor fund in years 2 or 3. This implies that the performance of a fund is not reliably visible at the time when investors have to take their re-up decision. In fact, at that time a non-trivial part of the fund's committed capital is still to be called, and the investments that have already been made are often too recent to draw conclusions with a sufficiently high degree of confidence.
  • To the extent that persistence exists, the question arises as to whether this is due to superior skills or other factors. Recent research by Chung (2010), for example, finds that market conditions are likely to play an important role, with outperforming funds operating in markets that have enjoyed particularly strong growth. However, this makes success less predictable as market conditions can change and new competitors may enter the market, affecting the incumbents' potential for achieving excess returns.
  • Finally, the performance of a fund manager may be undermined by his own success. As a fund outperforms its peers, its successor fund will find it easier to attract more capital. However, as the size of the fund (and its successor fund) grows, its performance might suffer. In fact, Kaplan and Schoar (2005) find a concave relationship between fund size and performance for VC funds, although not for buyout funds. Robinson and Sensoy (2011) find that PMEs for both buyout and VC funds are modestly concave in the log of fund size. Harris et al. (2012), finally, report a concave relation between PME and the log of fund size for both buyout and VC funds controlling for vintage year. However, the regression coefficients are significant only at the 12% level for buyouts and are not at all significant for VC funds.

13.5 ASSESSING THE DEGREE OF SIMILARITY WITH COMPARABLE FUNDS

Against this background, we suggest focusing on the degree of similarity of a fund with respect to its peer group as a reference point for quantification. The scoring aims to measure the deviation in relevant dimensions from this peer group. Using such a comparison for ex-ante assessment is based on the assumption that membership in a group of funds has significant performance implications (Porter, 1979). Our concept of a qualitative risk assessment is based on how closely a fund is aligned with best practices in a given market environment at the time when it is launched. A fund that is well adjusted today is assumed to perform in line with earlier funds that were well adjusted at the time when they were raised, even if the criteria of what constitutes best practice have changed since.

13.5.1 The AMH framework

The adaptive market hypothesis (AMH) reflects an evolutionary model of the alternative asset industry: market participants often make mistakes but they also learn. Competition drives adaptation and innovation, natural selection shapes market ecology and evolution determines market dynamics. Speculative opportunities do exist in the market, but appear and disappear over time, so innovation in the form of continuous search for new opportunities is critical for survival and growth. The AMH originated in the hedge fund world, where a significant number of funds focus on generating returns from arbitrage strategies, which should not be possible if the efficient market hypothesis (EMH) holds. The AMH is a relatively new framework developed by Lo, although the application of evolutionary ideas to economic behaviour is not new.17

The grading technique based on the idea of “similarity” is questionable in the context of the EMH: without a risk-adjusted pricing mechanism it would not seem to make sense to invest in a fund that has any apparent weaknesses or structural deviations from industry standards. However, as a tool, fund grading is consistent with the AMH: it measures the deviation of a fund's structure from market best practices representing the “average” population. Kukla (2011) finds that several successful strategies exist in the private equity sector. Strategic groups evolve, with successful firms monitoring other firms in their strategic group as reference points and converging to each other over time.

13.5.2 Strategic groups in alternative assets

Kukla (2011) identifies strategic centre points, i.e. “centroids”, such as “sector specialist”, “product specialist”, “sector-focused investment firm”, “multi-business investment firm” and “small cap generalist”. He interprets centroids as the mathematical equivalent of a strategic pattern of a group of firms and finds significant inter-group differences. He reports evidence which suggests that private equity firms affiliated with a more successful strategic group gravitate towards their centroids over time.

The LPA ratings tool developed by ILPA could be interpreted as falling into this category.18 Its purpose is to rate the degree to which a particular partnership agreement adheres to the ILPA's best practice approach, PE Principles V 2.0. These principles are meant to serve as a tool in connection with due diligence and to monitor and evaluate investments in private equity. While the rating tool is based on a ranking and weighting process that mirrors the more measurable aspects of a fund's governance, it is stressed that qualitative analysis is equally important – sometimes necessitating judgment calls by the LP. As every partnership agreement is different, it can also be subject to varying interpretations. Thus, the LPA ratings tool mainly forms a basis for comparing funds against peers, helping identify relative strengths and weaknesses. Although adherence to the ILPA principles plays an increasingly important role for LPs in the fund selection process,19 it is important for investors to recognize their limitations, given their qualitative nature.

13.5.3 Linking grading to quantification

Measuring deviations from the strategic group “centroids” could result in a grading as “standard”, “mainstream”, “niche” or “experiment”, depending on how strong the deviations are. Quotes from industry practitioners like “[w]e categorise managers from A to D, with A being the managers in our portfolio and D being managers that we regard as non-institutional quality. We target our resources towards the A's and B's, essentially, but we would also be meeting the C's on a regular basis”20 suggest that comparable methodologies for evaluating limited partnership funds are finding increasing acceptance.
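
A minimal sketch of such a centroid-based grading is given below; the features, centroid coordinates and distance thresholds are entirely hypothetical and serve only to illustrate how deviations from a strategic group could be mapped to grades.

```python
# A minimal sketch (hypothetical features, centroids and thresholds) of grading a
# fund by its deviation from the centroid of its closest strategic group, in the
# spirit of the "standard" / "mainstream" / "niche" / "experiment" grades above.
import math

# Illustrative centroids in a normalised feature space
# (e.g. sector focus, average deal size, leverage use); all values are made up.
CENTROIDS = {
    "sector specialist":    (0.9, 0.4, 0.3),
    "small cap generalist": (0.2, 0.1, 0.2),
    "multi-business firm":  (0.3, 0.8, 0.7),
}

# Distance thresholds for the grades; purely illustrative.
GRADES = [(0.15, "standard"), (0.30, "mainstream"), (0.50, "niche")]

def grade(fund_features):
    """Grade by the distance to the nearest strategic-group centroid."""
    distance = min(math.dist(fund_features, c) for c in CENTROIDS.values())
    for threshold, label in GRADES:
        if distance <= threshold:
            return label
    return "experiment"

print(grade((0.85, 0.45, 0.35)))  # close to "sector specialist" -> "standard"
```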

13.6 CONCLUSIONS

The line between the “classical” approach to due diligence and the various fund ratings discussed in this chapter is blurred. Generally, such ratings aim to predict investment success for individual funds, but there remains considerable scepticism and investment managers often feel that such an “algorithmic” approach is unlikely to work. At the same time, many investment managers view due diligence as a major – or even the only – risk management tool in alternative assets. However, Kahneman (2011) finds that in situations of high uncertainty and unpredictability, expert views are often inferior to relatively simple formulas. He concedes that intuition can lead to better results if the environment is sufficiently regular and the expert has had a chance to learn its regularities. This, however, is typically not the case for investments in limited partnership funds as the investment environment is continuously changing and the long life of a fund makes observations too infrequent for investment managers to identify performance-relevant patterns and learn from their experience. This debate goes beyond the scope of this book, but we conclude that “best practices” and “lessons learned” can usefully be formalized in an algorithm that produces grades for limited partnership funds as input for risk measurement purposes.

Consistent with the AMH, we therefore advocate an approach that aims to identify the closest similar benchmark population for the purpose of translating expected performance grades into quantifications. In Chapter 14 we discuss how, with this qualitative input, the index of comparable funds can be translated into a range of multiples.

1 For example, in the case of credit ratings, qualitative factors can have a weight of more than 50% of the total rating analysis. See O'Sullivan, B. and Weston, I. (2006) Challenges in Validating Rating Systems for IRB under Basel II. Standard & Poor's, October. Quoted in Rebonato (2007).

2 According to Nobel laureate Kenneth Arrow, the impossibility theorem states that no voting system based on the ranking of candidates can be converted into a community-wide ranking while also satisfying a particular set of four criteria – unrestricted domain, non-dictatorship, Pareto efficiency and independence of irrelevant alternatives.

3 See http://www.globalcustody.net/rcp-and-partners/?149 [accessed 10 February 2012].

4 This work should not be confused with his proposed approach of selecting funds, which is described in Gottschalg (2010).

5 See press release of 19 May 2011 (Professor Oliver Gottschalg publishes the Spring 2011 HEC-Dow Jones Private Equity Fitness Ranking™. http://www.hec.edu/var/fre/storage/original/application/b3034f561b8dc60a51887e9d6d7d849e.pdf, accessed 5 November 2012). See also Primack (2011).

6 See http://frr.feri.de/en/products-services/funds/closed-end-funds/ and http://ft.feri.de/en/investment-segments/private-equity/ [accessed 8 February 2012].

7 Business concept, management experience, management resources, past performance, deal and exit generation, manager risk, management participation.

8 Management fee, incentive fee, other costs, cash flow, fund risk.

9 Tax and legal structure, customer relationship management.

10 See http://www.fhpe.de/investors/vc-pe.htm [accessed 8 February 2012].

11 The dimensions of management team and experience (30%), track record (30%), structure and terms of fund (20%), investment strategy (10%) and investment process (10%) are assigned scores from 1 (very poor) to 5 (very good). See http://www.fhpe.de/investoren/FHP_Flyer_Rating%20internet.pdf [accessed 8 February 2012].

12 Mutual funds are more scalable in terms of number of investors (mainly retail investors) and investment volume: interest for rating services like Morningstar and the mutual fund managers is higher and no due diligence is necessary, as it is a regulated industry. The rating for alternatives assigned by Feri should rather be seen as a standardized due diligence; its results are, to our knowledge, only made available to Feri customers.

13 See Healy (2001): “Calpers [sic] may have gotten greedy after that ITV fund. The Silicon Valley fund's 1998 portfolio was up 69.9% through the end of last year – yet was ranked ‘below expectations.’ A Thomas H. Lee fund of the same year (a buyout fund, vs. a start-up tech fund) had gained 19.2% by year-end and was seen to be performing ‘as expected’… Still, over the long term, Calpers [sic] has been doing something right. As of March 31, its average annual return for 10 years of private equity investing was 17.5%. The Wilshire 2500 Index, a broad stock market benchmark, was up 13.9% in that period.”

14 See Healy (2001): “Even now, managers of venture funds and other private portfolios are talking about the posting, aghast that the numbers – good, bad, and ugly – are there for all to see. Said one private equity executive, ‘If you show up in the “below expectations” column, you’re done'.”

15 See Rouvinez (2006): “One reason is that except for the 25 percent ratio itself, nothing in the definition is cast in stone. Whether ‘best performance’ refers to total value or internal rate of return, net or gross, realised or not, is open to interpretation, as is the question of who are the ‘peers’.” Good fund managers can also be unlucky, e.g. backing a good company where an exceptional CEO suddenly died or where the entire sector then goes into a protracted downturn. Long-term exposure to market extremes can disproportionately favour one strategy over another even if fund managers are equally competent. Likewise, Hendershott (2007) calculated that for the best 250 of 1,000 private equity funds one would expect to find that 146 of them or 58.4% were managed by top-quartile managers. That still leaves 41.6% of the good funds managed by the 13.9% of ordinary managers who happened to be lucky.

16 See Burgel (2000) and Conner (2005).

17 See Lo (2005) or Lo and Mueller (2010). In fact, Thomas Malthus already used biological arguments to predict rather dire economic consequences. Vice versa, the evolutionary biologists Charles Darwin and Alfred Russel Wallace were strongly influenced by Malthus. His arguments became an intellectual stepping-stone to the idea of natural selection. Also, Schumpeter's notions of “creative destruction” and “burst of entrepreneurial activity” are consistent with the concept of evolution.

18 See http://ilpa.org/lpa-ratings-tool/ [accessed 7 February 2012].

19 According to a recent survey by Preqin, a data vendor, the majority of surveyed investors see non-adherence to the ILPA principles as a reason not to invest in a fund. To the extent that deviations exist, only those funds for which a case can be made – e.g., where there are strengths or differentiation compared to competing proposals – are likely to attract investors described as “increasingly terms and conditions-sensitive”. See http://www.cpifinancial.net [accessed 23 June 2011].

20 See Institutional Investor Profile, Colin Wimsett, Managing Partner, Pantheon Ventures at http://www.altassets.com/features/arc/2008/nz13106.php [accessed 3 July 2008].
