
UNDERSTANDING ASSET PRICES



Scientific Background on the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2013

compiled by the Economic Sciences Prize Committee of the Royal Swedish Academy of Sciences


I. Introduction

  The behavior of asset prices is essential for many important decisions, not only for professional investors but also for most people in their daily life. The choice between saving in the form of cash, bank deposits or stocks, or perhaps a single-family house, depends on what one thinks of the risks and returns associated with these different forms of saving. Asset prices are also of fundamental importance for the macroeconomy because they provide crucial information for key economic decisions regarding physical investments and consumption. While prices of financial assets often seem to reflect fundamental values, history provides striking examples to the contrary, in events commonly labeled bubbles and crashes. Mispricing of assets may contribute to financial crises and, as the recent recession illustrates, such crises can damage the overall economy. Given the fundamental role of asset prices in many decisions, what can be said about their determinants?

  This year's prize awards empirical work aimed at understanding how asset prices are determined. Eugene Fama, Lars Peter Hansen and Robert Shiller have developed methods toward this end and used these methods in their applied work. Although we do not yet have complete and generally accepted explanations for how financial markets function, the research of the Laureates has greatly improved our understanding of asset prices and revealed a number of important empirical regularities as well as plausible factors behind these regularities.

  The question of whether asset prices are predictable is as central as it is old. If it is possible to predict with a high degree of certainty that one asset will increase more in value than another one, there is money to be made. More important, such a situation would reflect a rather basic malfunctioning of the market mechanism. In practice, however, investments in assets involve risk, and predictability becomes a statistical concept. A particular asset-trading strategy may give a high return on average, but is it possible to infer excess returns from a limited set of historical data? Furthermore, a high average return might come at the cost of high risk, so predictability need not be a sign of market malfunction at all, but instead just a fair compensation for risk-taking. Hence, studies of asset prices necessarily involve studying risk and its determinants.

  Predictability can be approached in several ways. It may be investigated over different time horizons; arguably, compensation for risk may play less of a role over a short horizon, and thus looking at predictions days or weeks ahead simplifies the task. Another way to assess predictability is to examine whether prices have incorporated all publicly available information. In particular, researchers have studied instances when new information about assets became known in the marketplace, i.e., so-called event studies. If new information is made public but asset prices react only slowly and sluggishly to the news, there is clearly predictability: even if the news itself was impossible to predict, any subsequent movements would be. In a seminal event study from 1969, and in many other studies, Fama and his colleagues studied short-term predictability from different angles. They found that the amount of short-run predictability in stock markets is very limited. This empirical result has had a profound impact on the academic literature as well as on market practices.

  If prices are next to impossible to predict in the short run, would they not be even harder to predict over longer time horizons? Many believed so, but the empirical research would prove this conjecture incorrect. Shiller's 1981 paper on stock-price volatility and his later studies on longer-term predictability provided the key insights: stock prices are excessively volatile in the short run, and at a horizon of a few years the overall market is quite predictable. On average, the market tends to move downward following periods when prices (normalized, say, by firm earnings) are high and upward when prices are low.

  In the longer run, compensation for risk should play a more important role for returns, and predictability might reflect attitudes toward risk and variation in market risk over time. Consequently, interpretations of findings of predictability need to be based on theories of the relationship between risk and asset prices. Here, Hansen made fundamental contributions first by developing an econometric method – the Generalized Method of Moments (GMM), presented in a paper in 1982 – designed to make it possible to deal with the particular features of asset-price data, and then by applying it in a sequence of studies. His findings broadly supported Shiller's preliminary conclusions: asset prices fluctuate too much to be reconciled with standard theory, as represented by the so-called Consumption Capital Asset Pricing Model (CCAPM). This result has generated a large wave of new theory in asset pricing. One strand extends the CCAPM in richer models that maintain the rational-investor assumption. Another strand, commonly referred to as behavioral finance – a new field inspired by Shiller's early writings – puts behavioral biases, market frictions, and mispricing at center stage.

  A related issue is how to understand differences in returns across assets. Here, the classical Capital Asset Pricing Model (CAPM) – for which the 1990 prize was given to William Sharpe – for a long time provided a basic framework. It asserts that assets that correlate more strongly with the market as a whole carry more risk and thus require a higher return in compensation. In a large number of studies, researchers have attempted to test this proposition. Here, Fama provided seminal methodological insights and carried out a number of tests. It has been found that an extended model with three factors – adding a stock's market value and its ratio of book value to market value – greatly improves the explanatory power relative to the single-factor CAPM model. Other factors have been found to play a role as well in explaining return differences across assets. As in the case of studying the market as a whole, the cross-sectional literature has examined both rational-investor–based theory extensions and behavioral ones to interpret the new findings.

  This document is organized in nine sections. Section 2 lays out some basic asset-pricing theory as a background and a roadmap for the remainder of the text. Sections 3 and 4 discuss short- and longer-term predictability of asset prices, respectively. The following two sections discuss theories for interpreting the findings about predictability and tests of these theories, covering rational-investor–based theory in Section 5 and behavioral finance in Section 6. Section 7 treats empirical work on cross-sectional asset returns. Section 8 briefly summarizes the key empirical findings and discusses their impact on market practices. Section 9 concludes this scientific background.

II. Theoretical background

  In order to provide some background to the presentation of the Laureates’ contributions, this section will review some basic asset-pricing theory.

(1)Implications of competitive trading

  A set of fundamental insights, which go back to the mid-20th century, derive from a basic implication of competitive trading: the absence of arbitrage opportunities. An arbitrage opportunity is a “money pump,” which makes it possible to make arbitrary amounts of money without taking on any risk. To take a trivial example, suppose two assets pay safe rates of return $r_1$ and $r_2$, where $r_1 > r_2$. If each asset can be sold short, i.e., held in negative amounts, an arbitrage gain could be made by selling asset 2 short and investing the proceeds in asset 1: the result would be a safe rate of profit of $r_1 - r_2$. Because this money pump could be operated at any scale, it would clearly not be consistent with equilibrium; in a competitive market, $r_1$ and $r_2$ must be equal. Any safe asset must thus bear the same rate of return, $r$; this is the rate at which future payoffs of any safe asset are “discounted.”
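The money-pump logic above can be made concrete with a minimal numeric sketch; the rates and scale below are hypothetical numbers, not taken from the text.

```python
# Minimal sketch of the "money pump": short the low-rate safe asset and
# invest the proceeds at the higher safe rate. Rates here are made up.
def arbitrage_profit(r_high: float, r_low: float, scale: float) -> float:
    """Riskless one-period profit from shorting `scale` units at r_low
    and investing the proceeds at r_high."""
    invested = scale * (1 + r_high)   # proceeds grown at the high rate
    repaid = scale * (1 + r_low)      # short position repaid at the low rate
    return invested - repaid          # = scale * (r_high - r_low), risk-free

print(arbitrage_profit(0.05, 0.02, 100.0))  # ~3.0 per 100 units shorted
print(arbitrage_profit(0.05, 0.02, 1e6))    # profit scales without bound
```

Because the profit grows linearly in `scale` with zero risk, no equilibrium can sustain `r_high > r_low`; this is exactly why all safe assets must pay the same rate.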

  This simple reasoning can be generalized quite substantially and, in particular, can deal with uncertain asset payoffs. The absence of arbitrage opportunities can be shown to imply that the price of any traded asset can be written as a weighted, or discounted, sum of the payoffs of the asset in the different states of nature next period, with weights independent of the asset in question (see, e.g., Ross, 1978 and Harrison and Kreps, 1979). Thus, at any time $t$, the price of any given asset $i$ is given by

$$p_{i,t} = \sum_s \tilde m_{t+1,s}\, x_{i,t+1,s}.$$

  Here, $s$ indexes the possible states of nature in period $t+1$, $x_{i,t+1,s}$ is the payoff of asset $i$ in state $s$, and $\tilde m_{t+1,s}$ is the discounting weight attached to payoffs in state $s$.

  In general, all these items depend on the state of nature. Note that the discounting weights $\tilde m_{t+1,s}$ are the same for all assets. They matter for the price of an individual asset $i$ only because both $\tilde m_{t+1,s}$ and $x_{i,t+1,s}$ depend on $s$.

  For a safe asset $f$, the payoff $x_{f,t+1}$ does not depend on $s$, and the formula becomes

$$p_{f,t} = x_{f,t+1} \sum_s \tilde m_{t+1,s}.$$

  Thus, we can now interpret $\sum_s \tilde m_{t+1,s}$ as defining the time-$t$ risk-free discount rate for safe assets:

$$\sum_s \tilde m_{t+1,s} = \frac{1}{1+r_t}.$$

  More generally, though, the dependence of $\tilde m_{t+1,s}$ on the state of nature captures how the discounting may be stronger in some states of nature than in others: money is valued differently in different states. This allows us to capture how an asset's risk profile is valued by the market. If it pays off particularly well in states with low weights, it will command a lower price. The no-arbitrage pricing formula is often written more abstractly as

$$p_{i,t} = E_t\!\left[m_{t+1}\, x_{i,t+1}\right], \qquad (1)$$

where the expectation operator $E_t$ now subsumes the summation and probabilities: it is the expected (probability-weighted) value, with $m_{t+1,s} = \tilde m_{t+1,s}/\pi_{t+1,s}$ and $\pi_{t+1,s}$ the probability of state $s$. This formula can be viewed as an organizational tool for much of the empirical research on asset prices. With $x_{i,t+1} = p_{i,t+1} + d_{i,t+1}$, where $d_{i,t+1}$ is the dividend, equation (1) can be iterated forward to yield the price of a stock as the expected discounted value of future dividends.
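Written out, the forward iteration runs as follows (same notation as above; the final step additionally assumes that the expected discounted future price converges to zero, a transversality condition):

```latex
p_{i,t} = E_t\!\left[ m_{t+1}\,(d_{i,t+1} + p_{i,t+1}) \right]
        = E_t\!\left[ m_{t+1} d_{i,t+1} \right]
          + E_t\!\left[ m_{t+1} m_{t+2}\,(d_{i,t+2} + p_{i,t+2}) \right]
        = \dots
        = E_t\!\left[ \sum_{j=1}^{\infty} \Big( \prod_{k=1}^{j} m_{t+k} \Big) d_{i,t+j} \right].
```

The second equality uses the law of iterated expectations: the price next period is itself the expectation, given next period's information, of discounted payoffs one period further out.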

(i)Are asset prices predictable?

  Suppose, first, that we consider two points in time very close to each other. In this case, the safe interest rate is approximately zero. Moreover, over a short horizon, $m_{t+1}$ might be assumed not to vary much across states: risk is not an issue. These assumptions are tantamount to assuming that $m_{t+1}$ equals $1$. If the payoff is simply the asset's resale value $p_{i,t+1}$, then the absence of arbitrage implies that

$$p_{i,t} = E_t\!\left[p_{i,t+1}\right].$$

  In other words, the asset price may go up or down tomorrow, but any such movement is unpredictable: the price follows a martingale, which is a generalized form of a random walk. The unpredictability hypothesis has been the subject of an enormous empirical literature, to which Fama has been a key contributor. This research will be discussed in Section 3.
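A simulated illustration of the martingale property (this uses artificial random-walk data, not market prices): if the price is a martingale, successive price changes are uncorrelated, so the sample autocorrelation of changes should be near zero.

```python
import numpy as np

# Sketch: under the martingale hypothesis p_t = E_t[p_{t+1}], price *changes*
# are uncorrelated with the past. Simulated illustration, not market data.
rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.normal(0, 1, 10_000))  # random-walk price path
changes = np.diff(price)

# Sample first-order autocorrelation of price changes
autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 autocorrelation of price changes: {autocorr:.4f}")  # near 0
```

With 10,000 observations the sampling error of the autocorrelation is about 0.01, so values far from zero would indicate predictability.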

(ii)Risk and the longer run

  In general, discounting and risk cannot be disregarded, so tests of the basic implications of competitive trading need to account for the properties of the discount factor $m_{t+1}$: how large it is on average, how much it fluctuates, and more generally what its time-series properties are. Thus, a test of no-arbitrage theory also involves a test of a specific theory of how $m_{t+1}$ evolves, a point first emphasized by Fama (1970).

  Suppose we look at a riskless asset $f$ and a risky asset $i$. Then equation (1) allows us to write the asset's price as

$$p_{i,t} = \frac{E_t[x_{i,t+1}]}{1+r_t} + \mathrm{Cov}_t\!\left(m_{t+1},\, x_{i,t+1}\right).$$

  The discount factor $m_{t+1,s}$ can be regarded as the value of money in state $s$. The above pricing equation thus says that the asset's value depends on the covariance with the value of money. If the covariance is negative, i.e., if the asset's payoff is high when the value of money is low, and vice versa, then the asset is less valuable than the expected discounted value of the payoff. Moreover, the discrepancy term can be factorized into $\mathrm{Var}_t(m_{t+1})$, the "risk loading" (amount of risk), and $\beta_{i,t} = \mathrm{Cov}_t(m_{t+1}, x_{i,t+1})/\mathrm{Var}_t(m_{t+1})$, the "risk exposure" of the asset.
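The covariance decomposition above is an identity, $E[mx] = E[m]E[x] + \mathrm{Cov}(m,x)$ with $E[m] = 1/(1+r)$, which can be checked numerically. The states, probabilities, payoffs, and discount factors below are made-up numbers chosen so that the payoff is high when money is cheap (negative covariance).

```python
import numpy as np

# Sketch: verify p = E[x]/(1+r) + Cov(m, x), i.e. E[m x] = E[m]E[x] + Cov(m, x)
# with E[m] = 1/(1+r). All state-level numbers are hypothetical.
probs = np.array([0.3, 0.4, 0.3])          # state probabilities
m = np.array([1.10, 0.95, 0.85])           # SDF: money worth more in state 1
x = np.array([0.8, 1.0, 1.3])              # risky payoff, high when m is low

E = lambda z: np.sum(probs * z)            # expectation under `probs`
cov = lambda a, b: E(a * b) - E(a) * E(b)  # covariance under `probs`

price = E(m * x)                           # no-arbitrage price, eq. (1)
r = 1.0 / E(m) - 1.0                       # implied risk-free rate
decomposed = E(x) / (1.0 + r) + cov(m, x)
print(price, decomposed)                   # identical by construction
```

Since `cov(m, x)` is negative here, the price falls short of the expected payoff discounted at the risk-free rate, matching the verbal argument.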

  The pricing formula can alternatively be expressed in terms of expected excess returns over the risk-free asset:

$$E_t[R_{i,t+1}] - R_{f,t} = -R_{f,t}\,\mathrm{Cov}_t\!\left(m_{t+1},\, R_{i,t+1}\right),$$

where

$$R_{i,t+1} = \frac{x_{i,t+1}}{p_{i,t}} \quad\text{and}\quad R_{f,t} = 1 + r_t = \frac{1}{E_t[m_{t+1}]}.$$

This allows us to write

$$E_t[R_{i,t+1}] - R_{f,t} = \beta_{i,t}\,\lambda_t, \qquad \beta_{i,t} = \frac{\mathrm{Cov}_t(m_{t+1}, R_{i,t+1})}{\mathrm{Var}_t(m_{t+1})}, \quad \lambda_t = -\frac{\mathrm{Var}_t(m_{t+1})}{E_t[m_{t+1}]}.$$

  An asset whose return is low in periods when the stochastic discount factor is high (i.e., in periods where investors value payoffs more) must command a higher “risk premium” or excess return over the risk-free rate. How large are excess returns on average? How do they vary over time? How do they vary across different kinds of assets? These fundamental questions have been explored from various angles by Fama, Hansen and Shiller. Their findings on price predictability and the determinants and properties of risk premia have deepened our understanding of how asset prices are formed for the stock market as a whole, for other specific markets such as the bond market and the foreign exchange market, and for the cross-section of individual stocks. In Section 4, we will discuss the predictability of asset prices over time, whereas cross-sectional differences across individual assets will be treated in Section 7.
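The beta-lambda representation of the risk premium can also be verified numerically; the state-level numbers below are the same kind of hypothetical inputs as before, not data from the text.

```python
import numpy as np

# Sketch: expected excess return equals beta * lambda, with
# beta = Cov(m, R) / Var(m) and lambda = -Var(m) / E[m]. Numbers made up.
probs = np.array([0.3, 0.4, 0.3])
m = np.array([1.10, 0.95, 0.85])        # stochastic discount factor by state
x = np.array([0.8, 1.0, 1.3])           # risky payoff by state

E = lambda z: np.sum(probs * z)
cov = lambda a, b: E(a * b) - E(a) * E(b)

p = E(m * x)                            # price from eq. (1)
R = x / p                               # gross return by state
R_f = 1.0 / E(m)                        # gross risk-free return
beta = cov(m, R) / cov(m, m)            # risk exposure
lam = -cov(m, m) / E(m)                 # amount of risk (price of risk)
print(E(R) - R_f, beta * lam)           # equal: the risk premium
```

Because the asset pays off poorly exactly when money is most valuable, its beta with respect to $m$ is negative and the implied risk premium is positive.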

(2)Theories of the stochastic discount factor

  The basic theory, described above, is based on the absence of arbitrage. The obvious next step is to discuss the determinants of the stochastic discount factor $m_{t+1}$. Broadly speaking, there are two approaches: one based on rational investor behavior, but possibly involving institutional complications, investor heterogeneity, etc., and an alternative approach based on psychological models of investor behavior, often called "behavioral finance".

(i)Rational-investor theory

  Theory based on the assumption of rational investor behavior has a long tradition in asset pricing, as in other fields of economics. In essence, it links the stochastic discount factor to investor behavior through assumptions about preferences. By assuming that investors make portfolio decisions to obtain a desired time and risk profile of consumption, the theory provides a link between the asset prices investors face in market equilibrium and investor well-being. This link is expressed through $m_{t+1}$, which captures the aspects of utility that turn out to matter for valuing the asset. Typically, the key link comes from the time profile of consumption. A basic model that derives this link is the CCAPM. It extends the static CAPM theory of individual stock prices by providing a dynamic consumption-based theory of the determinants of the valuation of the market portfolio. CCAPM is based on crucial assumptions about investors' utility function and attitude toward risk, and much of the empirical work has aimed to make inferences about the properties of this utility function from asset prices.

  The most basic version of CCAPM involves a “representative investor” with time-additive preferences acting in market settings that are complete, i.e., where there is at least one independent asset per state of nature. This theory thus derives $m_{t+1}$ as a function of the consumption levels of the representative investor in periods $t$ and $t+1$. Crucially, this function is nonlinear, which has necessitated innovative steps forward in econometric theory in order to test CCAPM and related models. These steps were taken and first applied by Hansen.
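For reference, in the time-additive case just described, the stochastic discount factor takes the standard textbook form (the symbols $\delta$, $\gamma$, and $c_t$ are introduced here; they do not appear in the text above):

```latex
m_{t+1} \;=\; \delta\,\frac{u'(c_{t+1})}{u'(c_t)},
\qquad\text{e.g., with CRRA utility } u(c) = \frac{c^{1-\gamma}}{1-\gamma}:
\qquad m_{t+1} \;=\; \delta \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma},
```

where $\delta$ is the subjective discount factor and $\gamma$ the coefficient of relative risk aversion. The nonlinearity in consumption growth visible here is precisely what the econometric methods discussed in the text must accommodate.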

  In order to better conform with empirical findings, CCAPM has been extended to deal with more complex investor preferences (such as time non-separability, habit formation, ambiguity aversion and robustness), investor heterogeneity, incomplete markets and various forms of market constraints, such as borrowing restrictions and margin constraints. These extensions allow a more general view of how depends on consumption and other variables. The progress in this line of research will be discussed in Section 5.

(ii)Behavioral finance

  Another interpretation of the fluctuations of $m_{t+1}$ implied by the data is based on the view that investors are not fully rational. Research along these lines has developed very rapidly over the last few decades, following Shiller's original contributions beginning in the late 1970s. A number of specific departures from rationality have been explored. One type of departure involves replacing the traditional expected-utility representation with functions suggested in the literature on economic psychology.

  A prominent example is prospect theory, developed by the Laureate Daniel Kahneman and Amos Tversky. Another approach is based on market sentiment, i.e., consideration of the circumstances under which market expectations are irrationally optimistic or pessimistic.

  Such mispricing opens up the possibility, however, for rational investors to take advantage of arbitrage opportunities created by the misperceptions of irrational investors. Rational arbitrage trading would push prices back toward the levels predicted by non-behavioral theories. Often, therefore, behavioral-finance models also involve institutionally determined limits to arbitrage.

  Combining behavioral elements with limits to arbitrage may lead to behaviorally based stochastic discount factors, with different determinants than those derived from traditional theory. For example, if $m_{t+1}$ is estimated from data using equation (1) under the (incorrect) assumption of rational expectations, a high estimated value may be due to optimism and may not reflect movements in consumption. In other words, an equation like (1) is satisfied in the data, but since the expectations operator assigns unduly high weights to good outcomes, it makes the econometrician overestimate $m_{t+1}$. Behavioral-finance explanations will be further discussed in Section 6.
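The overestimation mechanism can be illustrated with a deliberately stripped-down sketch: one asset, a constant true discount factor, and made-up probabilities. Investors price with optimistic beliefs, while the econometrician infers the discount factor assuming rational expectations.

```python
import numpy as np

# Sketch: optimism inflates the econometrician's estimate of the discount
# factor. One asset, constant true discount factor; all numbers made up.
probs_true = np.array([0.5, 0.5])        # objective state probabilities
probs_opt = np.array([0.7, 0.3])         # investors' optimistic beliefs
x = np.array([1.3, 0.8])                 # payoff: state 1 is the good state
m_true = 0.95                            # constant true discount factor

price = m_true * np.sum(probs_opt * x)   # price set under optimistic beliefs
m_hat = price / np.sum(probs_true * x)   # inferred assuming rational expectations
print(m_true, round(m_hat, 4))           # m_hat exceeds m_true
```

The inferred value exceeds the true one because the price embeds probability weights tilted toward the good state, which the rational-expectations econometrician misattributes to heavier discounting.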

(iii)CAPM and the cross-section of asset returns

  Turning to the cross-section of assets, recall from above that an individual stock price can be written as the present value of its payoff in the next period discounted by the riskless interest rate, plus a risk-premium term consisting of the amount of risk of the asset times its risk exposure,

$$p_{i,t} = \frac{E_t[x_{i,t+1}]}{1+r_t} + \mathrm{Var}_t(m_{t+1})\,\beta_{i,t}.$$

The latter factor, $\beta_{i,t}$, is the “beta” of the particular asset, i.e., the slope coefficient from a regression that has the return on the asset as the dependent variable and $m_{t+1}$ as the independent variable.

  This expresses a key feature of the CAPM. An asset with a high beta commands a lower price (equivalently, it gives a higher expected return) because it is more risky, as defined by the covariance with $m_{t+1}$. The CAPM specifically represents $m_{t+1}$ by the return on the market portfolio. This model has been tested systematically by Fama and many others. More generally, several determinants of $m_{t+1}$ can be identified, and richer multi-factor models of the cross-section of asset returns can be specified, as stocks generally covary differently with different factors. This approach has been explored extensively by Fama and other researchers and will be discussed in Section 7.
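Estimating a beta as a regression slope can be sketched as follows; the return series are simulated, and the "true" beta of 1.3 is a made-up value used only to check the estimator.

```python
import numpy as np

# Sketch: estimate an asset's beta as the slope from a regression of its
# returns on market returns. Returns are simulated, not real data.
rng = np.random.default_rng(1)
r_market = rng.normal(0.005, 0.04, 500)    # market returns (monthly-style)
true_beta = 1.3
r_asset = 0.001 + true_beta * r_market + rng.normal(0, 0.02, 500)

# OLS slope: Cov(r_asset, r_market) / Var(r_market)
beta_hat = np.cov(r_asset, r_market)[0, 1] / np.var(r_market, ddof=1)
print(f"estimated beta: {beta_hat:.3f}")   # close to 1.3
```

In the CAPM itself the regressor is the market return standing in for $m_{t+1}$; in multi-factor extensions the same regression simply gains additional factor columns.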

III. Are returns predictable in the short term?

  A long history lies behind the idea that asset returns should be impossible to predict if asset prices reflect all relevant information. Its origin goes back to Bachelier (1900), and the idea was formalized by Mandelbrot (1966) and Samuelson (1965), who showed that asset prices in well-functioning markets with rational expectations should follow a generalized form of a random walk known as a submartingale. Early empirical studies by Kendall (1953), Osborne (1959), Roberts (1959), Alexander (1961, 1964), Cootner (1962, 1964), Fama (1963, 1965), Fama and Blume (1966), and others provided supportive evidence for this hypothesis.

  In an influential paper, Fama (1970) synthesized and interpreted the research that had been done so far, and outlined an agenda for future work. Fama emphasized a fundamental problem that had largely been ignored by the earlier literature: in order to test whether prices correctly incorporate all relevant available information, so that deviations from expected returns are unpredictable, the researcher needs to know what these expected returns are in the first place. In terms of the general pricing model outlined in Section 2, the researcher has to know how the stochastic discount factor is determined and how it varies over time. Postulating a specific model of asset prices as a maintained hypothesis allows further study of whether deviations from that model are random or systematic, i.e., whether the forecast errors implied by the model are predictable.

Finding that deviations are systematic, however, does not necessarily mean that prices fail to incorporate all relevant information; the asset-pricing model (the maintained hypothesis) might just as well be incorrectly specified. Thus, formulating and testing asset-pricing models becomes an integral part of the analysis. Conversely, an asset-pricing model cannot easily be tested without assuming that prices rationally incorporate all relevant available information and that forecast errors are unpredictable. Fama's survey provided the framework for a vast empirical literature that has confronted this joint-hypothesis problem and provided a body of relevant empirical evidence. Many of the most important early contributions to this literature were made by Fama himself. In Fama (1991), he assessed the state of the art two decades after the first survey.

  In his 1970 paper, Fama also discussed what “available” information might mean. Following a suggestion by Harry Roberts, Fama launched the trichotomy of

  1. weak-form informational efficiency, where it is impossible to systematically beat the market using historical asset prices;
  2. semi-strong–form informational efficiency, where it is impossible to systematically beat the market using publicly available information; and
  3. strong-form informational efficiency, where it is impossible to systematically beat the market using any information, public or private.

The last concept would seem unrealistic a priori and also hard to test, as it would require access to the private information of all insiders. So researchers focused on testing the first two types of informational efficiency.

(3.1)Short-term predictability

  Earlier studies of the random-walk hypothesis had essentially tested the first of the three informational efficiency concepts — whether past returns can predict future returns. This work had addressed whether past returns had any power in predicting returns over the immediate future, days or weeks.

  If the stochastic discount factor were constant over time, then the absence of arbitrage would imply that immediate future returns cannot be predicted from past returns. In general, the early studies found very little predictability; the hypothesis that stock prices follow a random walk could not be rejected. Over short horizons (such as day by day), the joint-hypothesis problem should be negligible, since the effect of different expected returns should be very small. Accordingly, the early studies could not reject the hypothesis of weak-form informational efficiency.

  In his PhD dissertation, published in 1965, Fama set out to test the random-walk hypothesis systematically by using three types of test:

  1. tests for serial correlation,
  2. runs tests (in other words, whether series of uninterrupted price increases or price decreases are more frequent than could be the result of chance),
  3. filter tests (mechanical trading rules that buy after price increases of a given percentage and sell after corresponding decreases).

These methods had been used by earlier researchers, but Fama's approach was more systematic and comprehensive, and it therefore had a strong impact on subsequent research. Fama (1965) reported that daily, weekly and monthly returns were somewhat predictable from past returns for a sample of large U.S. companies. Returns tended to be positively autocorrelated. The relationship was quite weak, however, and the fraction of the return variance explained by the variation in expected returns was less than 1% for individual stocks. Later, Fama and Blume (1966) found that the deviations from random-walk pricing were so small that any attempt to exploit them would be unlikely to survive trading costs. Although not exactly accurate, the basic no-arbitrage view combined with constant expected returns seemed like a reasonable working model. This was the consensus view in the 1970s.
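Two of the three test types above can be sketched in a few lines; the return series is simulated i.i.d. noise (so neither test should reject), and the runs test uses the standard normal approximation for the number of runs under independence.

```python
import numpy as np

# Sketch of two random-walk tests applied to a simulated i.i.d. return
# series (so neither test should reject). Not real market data.
rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, 2_000)

# 1. Serial correlation: lag-1 autocorrelation should be near zero.
rho1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]

# 2. Runs test: count runs of same-signed returns and compare with the
#    number expected under independence (normal approximation).
signs = returns > 0
runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
n_pos, n_neg = signs.sum(), (~signs).sum()
n = n_pos + n_neg
expected = 1 + 2 * n_pos * n_neg / n
variance = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / (n**2 * (n - 1))
z = (runs - expected) / np.sqrt(variance)
print(f"lag-1 autocorrelation: {rho1:.4f}, runs-test z: {z:.2f}")
```

On actual daily stock returns of the period, statistics like `rho1` came out slightly positive but economically tiny, which is exactly the pattern Fama reported.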

(3.2)Event studies

  If stock prices incorporate all publicly available information (i.e., if the stock market is “semi-strong” informationally efficient, in the sense used by Fama, 1970), then relevant news should have an immediate price impact when announced, but beyond the announcement date returns should remain unpredictable. This hypothesis was tested in a seminal paper by Fama, Fisher, Jensen and Roll, published in 1969. The team was also the first to use the CRSP data set of U.S. stock prices and dividends, which had recently been assembled at the University of Chicago under the leadership of James Lorie and Lawrence Fisher. Fama and his colleagues introduced what is nowadays called an event study. The particular event they considered was a stock split, but the methodology is applicable to any piece of new information that can be dated with reasonable precision, for example announcements of dividend changes, mergers and other corporate events.

  The idea of an event study is to look closely at price behavior just before and just after new information about a particular asset has hit the market (“the event”). In an arbitrage-free market, where prices incorporate all relevant public information, there would be no tendency for systematically positive or negative risk-adjusted returns after a news announcement. In this case, the price reaction at the time of the news announcement (after controlling for other events occurring at the same time) would also be an unbiased estimate of the change in the fundamental value of the asset implied by the new information.

  Empirical event studies are hampered by the noise in stock prices; many things affect stock markets at the same time, making the effects of a particular event difficult to isolate. In addition, due to the joint-hypothesis problem, there is a need to take a stand on the determinants of the expected returns of the stock, so that market reactions can be measured as deviations from this expected return. If the time period under study — “the event window” — is relatively short, the underlying risk exposures that affect the stock's expected return are unlikely to change much, and expected returns can be estimated using return data from before the event.

  Fama and his colleagues handle the joint-hypothesis problem by using the so-called “market model” to capture the variation in expected returns. In this model, expected returns are given by

$$E[R_{i,t}] = \alpha_i + \beta_i R_{m,t}.$$

Here $R_{m,t}$ is the contemporaneous overall market return, and $\alpha_i$ and $\beta_i$ are estimated coefficients from a regression of the realized returns on stock $i$ on the overall market returns, using data from before the event. Under the assumption that $\beta_i$ captures differences in expected return across assets, this procedure deals with the joint-hypothesis problem as well as isolates the price development of stock $i$ from the impact of general shocks to the market.

  For a time interval before and after the event, Fama and his colleagues then traced the rate of return on stock $i$ and calculated the residual

$$e_{i,t} = R_{i,t} - \left(\hat\alpha_i + \hat\beta_i R_{m,t}\right).$$

If an event contains relevant news, the accumulated residuals for the period around the event should be equal to the change in the stock's fundamental value due to this news, plus idiosyncratic noise with an expected value of zero. Since lack of predictability implies that the idiosyncratic noise should be uncorrelated across events, we can estimate the value impact by averaging the accumulated residuals across events. The event studied in the original paper was a stock split. The authors found that, indeed, stocks do not exhibit any abnormal returns after the announcement of a split once dividend changes are accounted for. This result is consistent with the price having fully adjusted to all available information. The result of an event study is typically presented in a pedagogical
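The mechanics just described can be sketched end to end on simulated data; the event count, window length, market-model coefficients, and the +2% value jump below are all hypothetical choices, not figures from the original paper.

```python
import numpy as np

# Sketch of event-study mechanics on simulated data: fit the market model
# before the event, then cumulate residuals ("abnormal returns") around the
# event and average across events. All parameters are made up.
rng = np.random.default_rng(3)
n_events, est_len, win = 200, 120, 21          # events, estimation days, window

car = np.zeros(win)                            # cumulative average residual
for _ in range(n_events):
    alpha, beta = 0.0002, 1.1                  # "true" market-model coefficients
    rm_est = rng.normal(0.0005, 0.01, est_len) # market returns, estimation period
    ri_est = alpha + beta * rm_est + rng.normal(0, 0.012, est_len)

    # Estimate the market model by OLS on the pre-event period
    b_hat = np.cov(ri_est, rm_est)[0, 1] / np.var(rm_est, ddof=1)
    a_hat = ri_est.mean() - b_hat * rm_est.mean()

    # Event window: a +2% value jump on the event day (day 10 of 21)
    rm_win = rng.normal(0.0005, 0.01, win)
    ri_win = alpha + beta * rm_win + rng.normal(0, 0.012, win)
    ri_win[10] += 0.02
    residuals = ri_win - (a_hat + b_hat * rm_win)
    car += np.cumsum(residuals)

car /= n_events
print(f"CAR before event day: {car[9]:.4f}, after: {car[-1]:.4f}")
```

The averaged CAR is flat before the event, jumps by roughly the injected 2% on the event day, and stays flat afterwards — the signature pattern of a price that reacts immediately and completely to news.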
