The Nature of the Business Cycle
Perhaps the most widely quoted and influential definition is that of Burns and Mitchell (1946, p. 1), who state that:
Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises:
- a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle;
- the sequence of changes is recurrent but not periodic;
- in duration cycles vary from more than one year to ten or twelve years;
- they are not divisible into shorter cycles of similar character with amplitudes approximating their own.
A number of features of this definition should be highlighted. Firstly, it stresses only two phases of the cycle, the expansionary and contractionary phases. It will be seen in section: The Monte Carlo Hypothesis that the peak or upper turning point and the trough or lower turning point are not analyzed as distinct phases but are merely used to identify business cycles in aggregate economic time series.
Many economists, however, regard the turning points as particular phases requiring separate explanations. This is especially evident in the discussion of the financial instability hypothesis, which stresses the role of financial crises in terminating the boom phase, in The Financial Instability Hypothesis.
The second main feature is the emphasis on the recurrent nature of the business cycle, rather than strict periodicity. Combined with the wide range of acceptable durations, encompassing both major and minor cycles (Hansen 1951), this means that cycles vary considerably in both duration and amplitude and that the phases are also likely to vary in length and intensity.
Minor cycles are often assumed to be the result of inventory cycles (Metzler 1941), but Burns and Mitchell reject treating these as separable events of the kind postulated by Schumpeter (1939), among others. Finally, and perhaps most importantly, they emphasize comovements, as evidenced by the clustering of peaks and troughs in many economic series. This is a feature stressed in numerous subsequent business cycle definitions, a sample of which is discussed below.
The original National Bureau of Economic Research (NBER) work of Burns and Mitchell concentrated on the analysis of non-detrended data. In the post-war period, such analysis has continued but the NBER has also analyzed detrended data in order to identify growth cycles, which tend to be more symmetric than the cycles identified in non-detrended data. The issue of asymmetry is an important one because it has implications for business cycle modeling procedures; it will be discussed further in section: Symmetric Business Cycles.
Concerning the existence of the business cycle, there remain bodies of atheists and agnostics. Fisher (1925, p. 191) is often quoted by doubters and disbelievers. He states:
I see no reason to believe in the Business Cycle. It is simply a fluctuation about its own mean. And yet the cycle idea is supposed to have more content than mere variability. It implies a regular succession of similar fluctuations constituting some sort of recurrence, so that, as in the case of the phases of the moon, the tides of the sea, wave motion or pendulum swing we can forecast the future on the basis of a pattern worked out from past experience, and which we have reason to believe will be copied in the future.
The work done at the NBER has subsequently attempted to show that there is indeed more to the business cycle than mere variability. Doubters remain, however, and tests of Fisher’s so-called Monte Carlo hypothesis will be discussed in the section: The Monte Carlo Hypothesis.
The NBER view that there is sufficient regularity, particularly in comovements, to make the business cycle concept useful is shared by two of the most distinguished students of cycle theory literature, Haberler (1958, pp. 454-9) and Hansen. Hansen (1951) notes that some would prefer to substitute ‘fluctuations’ for cycles but concludes that the usage of the term cycles in other sciences does not imply strict regularity. This point is also made by Zarnowitz and Moore (1986) in a recent review of the NBER methodology.
Lucas (1975) helped to rekindle interest in business cycle theory by reviving the idea of an equilibrium business cycle. The cycle had tended to be regarded as a disequilibrium phenomenon in the predominantly Keynesian contributions to the post-war cycle literature. Lucas (1977) discussed the cycle in more general terms and stressed the international generality of the business cycle phenomenon in decentralized market economies. He concluded (p. 10) that:
with respect to the qualitative behaviour of comovements among series, business cycles are all alike.
And that this:
suggests the possibility of a unified explanation of business cycles, grounded in the general laws governing market economies, rather than in political or institutional characteristics specific to particular countries or periods.
The intention here is not to deny that political or institutional characteristics can influence actual cycle realisations and help account for their variation between countries and periods. It is rather to stress the existence of general laws that ensure that a market economy subjected to shocks will evolve cyclically. Research that aims to gauge the extent to which the US business cycle has changed since the Second World War is reviewed in section: Has the Business Cycle Changed Since 1945?.
Sargent (1979, p. 254) attempts to formalise a definition of the business cycle using time series analysis. He first analyses individual aggregate economic time series and arrives at two definitions. Firstly:
A variable possesses a cycle of a given frequency if its covariogram displays damped oscillations of that frequency, which is equivalent to the condition that the non-stochastic part of the difference equation has a pair of complex roots with argument… equal to the frequency in question. A single series is said to contain a business cycle if the cycle in question has periodicity of from about two to four years (NBER minor cycles) or about eight years (NBER major cycles).
Secondly, Sargent argues that a cycle in a single series is marked by the occurrence of a peak in the spectral density of that series. Although not equivalent to the first definition, Sargent (1979, Ch. XI) shows that it usually leads to a definition of the cycle close to the first one.
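Sargent's first definition can be made concrete with a short sketch. Assuming an AR(2) specification y[t] = phi1*y[t-1] + phi2*y[t-2] + e[t] (the coefficient values below are purely illustrative, not estimates from any actual series), the non-stochastic part has a pair of complex roots, and hence a damped cycle, precisely when phi1**2 + 4*phi2 < 0; the argument of the roots gives the cycle frequency and their modulus the degree of dampening.

```python
import numpy as np

def ar2_cycle_period(phi1, phi2):
    """Return (has_cycle, period, modulus) for y[t] = phi1*y[t-1] + phi2*y[t-2] + e[t].

    The non-stochastic difference equation has complex roots iff phi1**2 + 4*phi2 < 0;
    the argument of the roots gives the cycle frequency and their modulus the dampening
    (a modulus below one means the oscillation is damped).
    """
    disc = phi1 ** 2 + 4.0 * phi2
    if disc >= 0:
        return False, None, None               # real roots: no oscillatory component
    modulus = np.sqrt(-phi2)
    freq = np.arccos(phi1 / (2.0 * modulus))   # argument of the complex roots (radians)
    return True, 2.0 * np.pi / freq, modulus   # period in the time units of the data

# Purely illustrative coefficients, not estimates from any actual series.
print(ar2_cycle_period(1.5, -0.6))             # -> (True, ~24.9, ~0.77)
```

With these illustrative values the implied period is roughly twenty-five periods, so with quarterly data the cycle would last around six years.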
Sargent (1979, p. 254) concludes that neither of these definitions captures the concept of the business cycle properly. Most aggregate economic time series actually have spectral densities that display no pronounced peaks in the range of frequencies associated with the business cycle; such peaks as do occur in that range tend not to be pronounced.
The dominant or ‘typical’ spectral shape of most economic time series, as dubbed by Granger (1966), is that of a spectrum which decreases rapidly as frequency increases, with most of the power in the low frequency, high periodicity bands. This is characteristic of series dominated by high, positive, low order serial correlation, and is probably symptomatic of seasonal influences on the quarterly data commonly used.
Sargent warns, however, that the absence of spectral peaks in business cycle frequencies does not imply that the series experienced no fluctuations associated with business cycles. He provides an example of a series that displays no peaks and yet appears to move in sympathy with general business conditions. In the light of this observation Sargent (1979, p. 256) offers the following, preferred, definition, which emphasizes comovements:
The business cycle is the phenomenon of a number of important economic aggregates (such as GNP, unemployment and lay offs) being characterized by high pairwise coherences at the low business cycle frequencies, the same frequencies at which most aggregates have most of their spectral power if they have ‘typical spectral shapes’.
This definition captures the main qualitative feature or ‘stylized fact’ to be explained by the cycle theories discussed in Business Cycle Theory.
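Sargent's preferred definition can be illustrated with a minimal sketch using scipy.signal.coherence. The two series below are artificial (a shared damped cyclical component plus independent noise), standing in for aggregates such as GNP and unemployment; high squared coherence at the low frequencies is the comovement the definition stresses.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n = 512                                        # quarters of artificial data
common = np.zeros(n)
for t in range(2, n):                          # shared damped cyclical component
    common[t] = 1.5 * common[t - 1] - 0.6 * common[t - 2] + rng.normal()

series_a = common + rng.normal(scale=2.0, size=n)          # stand-in for one aggregate
series_b = -0.5 * common + rng.normal(scale=2.0, size=n)   # stand-in for another

# Squared coherence by frequency; fs=4 puts frequency in cycles per year for quarterly data.
f, cxy = coherence(series_a, series_b, fs=4.0, nperseg=128)
low = f < 0.5                                  # slower than one cycle every two years
print("mean squared coherence at business cycle frequencies:",
      round(float(cxy[low].mean()), 2))
```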
The dominant methodology of business cycle analysis is based on the Frisch-Slutsky hypothesis discussed in section The Frisch-Slutsky Hypothesis. Low order linear deterministic difference or differential equation models cannot yield the irregular, non-damped, non-explosive cycles typically identified by the NBER, but low order linear stochastic models can yield a better approximation, as Frisch (1933) and Slutsky (1937) observed. Sargent (1979, pp. 218-19) observes that high order non-stochastic difference equations can, however, generate data that looks as irregular as typical aggregate economic time series.
By increasing the order of the equation, any sample of data can be modeled arbitrarily well with a linear non-stochastic difference equation. This approach is generally not adopted, however, because the order usually has to be so high that the model is not parsimonious in its parameterization (Box and Jenkins, 1970) and there will be insufficient degrees of freedom to allow efficient estimation. Further, it allocates no influence at all to shocks.
An alternative to high order linear models that can also produce an essentially endogenous cycle, in the sense that the shocks merely add irregularity to a cycle that would exist in their absence, is to use nonlinear models which can have stable limit cycle solutions. While it is generally accepted that stochastic models should be used, because economies are subjected to shocks, there is no general agreement over the relative importance of the shock-generating process and the economic propagation model in explaining the cycle, or on whether linear or nonlinear models should be used.
The dominant view, however, appears to be that linear propagation models with heavy dampening are probably correct and that we should look to shocks as the driving force of the (essentially exogenous) cycle. Blatt (1978), however, showed that the choice of a linear model, when a nonlinear one is appropriate, will bias the empirical analysis in favour of the importance of shocks. It is in the light of this finding that the empirical results discussed in the following chapters, which are invariably based on econometric and statistical techniques that assume linearity, should be viewed.
A related issue is the tendency to regard the business cycle as a deviation from a linear trend. Burns and Mitchell (1946) expressed concern about such a perspective and analysed non-detrended data as a consequence. In the post-war period, however, even the NBER has begun to analyse detrended data in order to identify growth cycles, although the trend used is not linear.
Nelson and Plosser (1982) warn of the danger of this approach, pointing out that much of the so-called cyclical variation in detrended data could be due to stochastic variation in the trend which has not in fact been removed. If the trend itself is nonlinear, linear detrending is likely to exaggerate the cyclical variation to be explained and introduce measurement errors. This and related issues will be discussed further in sections: The Monte Carlo Hypothesis and The Long Swing Hypothesis and the Growth Trend.
Despite the voluminous empirical work of the NBER and the work of other economists, a number of questions remain unresolved. Firstly, are there long cycles and/or nonlinear trends? This question will be considered further in section: Towards a Theory of Dynamic Economic Development. It is of crucial importance because the analysis of the business cycle requires that it must somehow first be separated from trend and seasonal influences on the time series.
The appropriate method of decomposition will not be the subtraction of a (log) linear trend from the deseasonalized series if the trend is not (log) linear. Secondly, to what extent is the cycle endogenously and exogenously generated? Most business cycle research assumes that linear models can be used to describe an economic system that is subjected to shocks.
The stochastic linear models employed can replicate observed macroeconomic time series reasonably well because the time series they produce possess the right degree of irregularity in period and amplitude to conform with actual realizations. Such models are based on the Frisch-Slutsky hypothesis, discussed in the section: The Frisch-Slutsky hypothesis.
The hypothesis assumes that linear models are sufficient to model economic relationships. Because the estimated linear econometric models display heavy dampening, cycle analysts have increasingly turned their attention to trying to identify the sources of the shocks that offset this dampening and produce a cycle. Business Cycle Theory reviews some recent work on the sources of shocks which drive cycles in the US economy. The current trend is, therefore, towards viewing the cycle as being driven by exogenous shocks rather than as an endogenous feature of the economy. However, nonlinear mathematical business cycle modelling provides the possibility that stable limit cycles, which are truly endogenous, might exist; recent literature on such models is reviewed in section Nonlinear Cycle Theory.
Mullineux (1984) discusses the work of Lucas (1975, 1977), who stimulated renewed interest in the equilibrium theory of the business cycle. Lucas’s cycle was driven by monetary shocks but subsequent work has emphasised real shocks; consequently, there has been a resurgence of the old debate over whether cycles are real or monetary in origin. Section Equilibrium Business Cycle (EBC) Modelling reviews the theoretical contributions to the debate, section 1.5 looks at work attempting to identify the main sources of shocks, and in The Financial Instability Hypothesis it is argued that monetary and financial factors are likely to play at least some role, alongside real factors, in cycle generation.
In the next section, the question of the business cycle’s very existence will be considered, while in section 1.3 the question of whether or not cycles are symmetric, which has a bearing on the appropriateness of the linearity assumption, will be explored.
The Monte Carlo Hypothesis
Fisher (1925) argued that business cycles could not be predicted because they resembled cycles observed by gamblers in an honest casino in that the periodicity, rhythm, or pattern of the past is of no help in predicting the future. Slutsky (1937) also believed that business cycles had the form of a chance function.
The Monte Carlo (MC) hypothesis, as formulated by McCulloch (1975), is that the probability of a reversal occurring in a given month is a constant which is independent of the length of time elapsed since the last turning point. The alternative (business cycle) hypothesis is that the probability of a reversal depends on the length of time since the last turning point.
The implication of the Monte Carlo hypothesis is that random shocks are sufficiently powerful to provide the dominant source of energy to an econometric model which would probably display heavy dampening in their absence. Simulations with large-scale econometric models in the early 1970s showed, however, that random shocks are normally not sufficient to overcome the heavy dampening typical of these models and to produce a realistic cycle.
Instead, serially correlated shocks are required. If shocks were in fact serially correlated, the gambler (forecaster) could exploit knowledge of the error process in forming predictions and we would move away from the honest MC casino. The need to use autocorrelated shocks could alternatively indicate that the propagation model is dynamically misspecified.
McCulloch (1975) notes that if the Monte Carlo hypothesis is true then the probability of a reversal in a given month is independent of the time elapsed since the last turning point. Using NBER reference cycle turning points as data, McCulloch tests whether the probability of termination is equal for ‘young’ and ‘old’ expansions (contractions). Burns and Mitchell (1946) did not record specific cycles lasting less than fifteen months, measured from peak to peak or trough to trough. The probability of reversal is therefore lower for very young expansions (contractions) than for median or old ones, and McCulloch (1975) disregards months in which the probability of reversal is reduced in this way.
A contingency table test, based on the asymptotic Chi-squared distribution of the likelihood ratio, with ‘young’ and ‘old’ expansions (contractions) as the two classes, is performed. Since the sample is not large, the total number of expansions being twenty-five, McCulloch feels that it is more appropriate to use a small sample distribution than the asymptotic Chi-squared distribution.
The small sample distribution is calculated subject to the number of old expansions equalling the number of young expansions. Results are reported for the United States, the United Kingdom, France and Germany. In order to facilitate a test of whether post-war government intervention had been successful in prolonging expansions and curtailing contractions, two periods are analyzed for the United States. In both periods the test statistic is insignificant, in both the small sample and asymptotic Chi-squared distribution cases.
The implication is thus that the probability of termination is the same for young and old phases, for both expansions and contractions, and that US government intervention had had no effect. For France, the null hypothesis cannot be rejected for expansions or contractions, and a similar result is derived for Germany. In the United Kingdom, however, it is not rejected for contractions but is rejected, at the 5 percent significance level, for expansions in both the asymptotic and small sample distribution cases.
The hypothesis would not have been rejected for the United Kingdom at the 2.5 percent significance level, and McCulloch suggests that the significant statistic can be ignored anyway, since it is to be expected under the random hypothesis. He concludes that the Monte Carlo hypothesis should be accepted.
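McCulloch's testing strategy can be sketched as a likelihood-ratio contingency table test. The 2x2 table below contains purely hypothetical counts (not McCulloch's data) of ‘young’ and ‘old’ expansions classified by whether they terminated; under the Monte Carlo hypothesis the termination probability is the same in both rows.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts, not McCulloch's: rows are 'young' and 'old' expansions,
# columns are the numbers that terminated and that survived.
table = [[4, 9],
         [6, 6]]

# lambda_="log-likelihood" requests the likelihood-ratio (G) statistic rather than
# Pearson's chi-squared; correction=False suppresses the 2x2 continuity correction.
stat, p_value, dof, expected = chi2_contingency(table, correction=False,
                                                lambda_="log-likelihood")
print(f"G = {stat:.2f}, p = {p_value:.2f}")
```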
McCulloch (1975) also notes that a lot of information is forfeited by working with NBER reference data rather than raw data, and that tests performed using actual series are consequently potentially more powerful. He assumes that economic time series follow a second order autoregressive process with a growth trend and fits such processes to logs of annual US real income, consumption and investment data for the period 1929-73, in order to see whether the resulting parameter values imply stable cycles. The required parameter ranges are well known for such processes (see Box and Jenkins 1970, for example).
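The check McCulloch performs on the raw series can be sketched as follows, assuming the log-level data are supplied as a simple array (the series below is artificial, not the 1929-73 data); the fitted AR(2) coefficients are then compared with the Box-Jenkins conditions for a stationary, oscillatory solution.

```python
import numpy as np

def fit_ar2_with_trend(log_y):
    """OLS fit of log_y[t] = c + g*t + phi1*log_y[t-1] + phi2*log_y[t-2] + e[t]."""
    y = np.asarray(log_y, dtype=float)
    t = np.arange(2, len(y))
    X = np.column_stack([np.ones(len(t)), t, y[1:-1], y[:-2]])
    (c, g, phi1, phi2), *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return phi1, phi2

# Artificial trending log-level series, standing in for the annual 1929-73 data.
rng = np.random.default_rng(1)
y = np.cumsum(0.03 + rng.normal(scale=0.05, size=60))

phi1, phi2 = fit_ar2_with_trend(y)
# Box-Jenkins conditions: stationarity requires phi1 + phi2 < 1, phi2 - phi1 < 1 and
# phi2 > -1; a stable *cycle* additionally requires complex roots, phi1**2 + 4*phi2 < 0.
print("stationary:", phi1 + phi2 < 1 and phi2 - phi1 < 1 and phi2 > -1)
print("complex roots (stable cycle):", phi1 ** 2 + 4 * phi2 < 0)
```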
McCulloch points out that one cannot discount the possibility of first order autocorrelation in his results, but the regressions do, in many cases, indicate that stable cycles exist. He concludes that, due to the potential bias from autocorrelation, no conclusions can be drawn from this approach with regard to cyclicality. The period is, however, calculated for each series whose point estimates indicate the presence of a stable cycle.
These series were log real income, the change in log real income, log real investment, the change in log real investment, and log real consumption. The required parameter values were not achieved for the change in log real consumption, or for quarterly log real income and the change in quarterly log real income. Further, a measure of dampening used in physics, the Q statistic, is also calculated, and it indicates that the cycles that have been discovered are so damped that they are of little practical consequence.
Finally, McCulloch notes that spectral analytic results, especially those of Howrey (1968), are at variance with his results. His conclusion is that the spectral approach is probably inappropriate for the analysis of economic time series due to their non-stationarity, the absence of large samples, and their sensitivity to seasonal smoothing and data adjustment.
Anderson (1977) also tested the Monte Carlo hypothesis. The method employed is to subdivide the series into expansionary and contractionary phases; analyze the density functions for duration times between troughs and peaks, and peaks and troughs; and then compare the theoretical distribution, associated with the Monte Carlo hypothesis, with the actual distributions generated by the time-spans observed.
The Monte Carlo hypothesis implies that the time durations of expansionary and contractionary phases will be distributed exponentially with constant parameters α and β respectively. A Chi-squared goodness of fit test is performed to see whether the actual (observed) distribution of phase durations follows the discrete analogue of the exponential distribution, the geometric distribution.
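Anderson's goodness-of-fit test can be sketched along the following lines, with purely hypothetical phase durations; the geometric parameter is estimated from the sample mean and one degree of freedom is deducted accordingly.

```python
import numpy as np
from scipy.stats import geom, chisquare

# Hypothetical phase durations in months, not Anderson's data.
durations = np.array([2, 3, 1, 5, 2, 8, 4, 1, 3, 6, 2, 4, 7, 3, 2, 5, 1, 4, 3, 9])

# Under the Monte Carlo hypothesis the monthly reversal probability p is constant,
# so durations are geometric; the maximum likelihood estimate of p is 1/mean.
p_hat = 1.0 / durations.mean()

# Bin durations as 1, 2, 3, 4 and 5+ months and compare observed with expected counts.
observed = [int(np.sum(durations == k)) for k in (1, 2, 3, 4)] + [int(np.sum(durations >= 5))]
probs = [geom.pmf(k, p_hat) for k in (1, 2, 3, 4)] + [geom.sf(4, p_hat)]
expected = np.array(probs) * len(durations)

# One degree of freedom is deducted for the estimated parameter (ddof=1).
stat, p_value = chisquare(observed, expected, ddof=1)
print(f"chi-squared = {stat:.2f}, p = {p_value:.2f}")
```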
Unlike McCulloch, Anderson does not follow Burns and Mitchell in ignoring expansions and contractions of less than fifteen months since, by definition, this precludes the most prevalent fluctuations under the Monte Carlo hypothesis, namely the short ones. The seasonally adjusted series used are total employment, total industrial production and the composite index of five leading indicators (NBER) for the period 1945-75 in the United States. The phase durations for each series are calculated by Anderson and are consistent with the Monte Carlo hypothesis: they are short. The difference in length between expansions and contractions is attributed to trend.
The null hypothesis that expansionary and contractionary phases are geometrically distributed with parameters α′ and β′ was tested against the alternative that the phases are not geometrically distributed. The null hypothesis, and hence the Monte Carlo hypothesis, could not be rejected. The hypothesis that the expansion and contraction phases were the same was also tested. The composite and employment series showed no significant difference between phases, but the hypothesis was rejected for the production series.
Savin (1977) argues that the McCulloch test based on NBER reference cycle data suffers from two defects. Firstly, because the variables constructed by McCulloch are not geometrically distributed, the test performed does not in fact test whether the parameters of two geometric distributions are equal, and the likelihood ratio used is not a true likelihood ratio. Secondly, the criterion for categorising old and young cycles is itself random.
The median may vary between samples and it is the median that forms the basis of the categorisation. An estimate of the population median is required in order to derive distinct populations of young and old expansions. Savin proposes to test the Monte Carlo hypothesis by a method free from these criticisms. Like Anderson, he uses a Chi-squared goodness of fit test but he works with the NBER data used by McCulloch and concentrates on expansions.
He too finds that the Monte Carlo hypothesis cannot be rejected. McCulloch (1977) replied to Savin (1977), arguing that his constructed variables were indeed geometrically distributed and that the contingency table tests he had employed were more efficient than the goodness of fit test used by Savin.
Two methods have, therefore, been used to test the Monte Carlo hypothesis: Chi-squared contingency table tests, as used by McCulloch, and Chi-squared goodness of fit tests, as used by Savin and Anderson. In both testing procedures there is some arbitrariness in the choice of categories, and although Savin uses rules such as ‘equal classes’ or ‘equal probabilities’ to select his classes, he ends up with an unreliable test.
In view of these findings on the Monte Carlo hypothesis, one might wonder whether further cycle analysis would be futile. The tests are, however, confined to hypotheses relating to the duration of the cycle alone. Most economists would also take account of the comovements that are stressed by both Burns and Mitchell (1946) and students of the cycle such as Lucas (1977) and Sargent (1979). There are, however, two sources of evidence that can stand against that of McCulloch, Savin and Anderson. Firstly, there are the findings from spectral analysis, the usefulness of which should be weighed in the light of the problems of applying spectral techniques to economic time series. Secondly, there are the findings of the NBER, which will be considered in the next section.
As noted in the previous section, the NBER defines the business cycle as recurrent but not periodic. The variation of cycle duration is a feature accepted by Burns and Mitchell (1946), who classify a business cycle as lasting from one to ten or twelve years. It seems to be this range of acceptable period lengths that has allowed the Monte Carlo hypothesis to survive the tests described above. The approach pioneered by Burns and Mitchell was described by Koopmans (1947) as measurement without theory.
It leaves us with a choice of accepting the Monte Carlo hypothesis or accounting for the variability in duration. However, the sheer volume of statistical evidence on specific and reference cycles produced by the NBER and, perhaps most strikingly, the interrelationships between phases and amplitudes of the cycle in different series (comovements) should make us happier about accepting the existence of cycles and encourage us to concentrate on explaining their variation.
Koopmans (1947) categorizes NBER business cycle measures into three groups. The first group of measures is concerned with the location in time and the duration of cycles. For each series turning points are determined along with the time intervals between them (expansion, contraction, and trough to trough duration of ‘specific cycles’). In addition, turning points and durations are determined for ‘reference cycles’.
These turning points are points around which the corresponding specific cycle turning points of a number of variables cluster. Leads and lags are found as differences between corresponding specific and reference cycle turning points. All turning points are found after elimination of seasonal variation but without prior trend elimination, using, as much as possible, monthly data and otherwise quarterly data. The second group of measures relates to the movements of a variable within a cycle specific to that variable or within a reference cycle.
The third group of measures expresses the conformity of the specific cycles of a variable to the business or reference cycle. These consist of ratios of the average reference cycle amplitudes to the average specific cycle amplitudes of the variable for expansions and contractions combined and indices of conformity.
Burns and Mitchell (1946) are well aware of the limitations of their approach which result from its heavy reliance on averages. In Chapter 12 of their book, they tackle the problem of disentangling the relative importance of stable and irregular features of cyclical behavior, analyzing the effects that long cycles may have had on their averages. In Chapter 11 they analyze the effects of secular changes. The point that comes out of these two investigations is that irregular changes in cyclical behavior are far larger than secular or cyclical changes (see also section 4.3).
They observe that this finding lends support to students who believe that it is futile to strive after a general theory of cycles. Such students, they argue, believe that each cycle is to be explained by a peculiar combination of conditions prevailing at the time and that these combinations of conditions differ endlessly from each other at different times. If these episodic factors are of prime importance, averaging will merely cancel the special features. Burns and Mitchell try to analyse the extent to which the averages they derive are subject to such criticisms, which are akin to a statement of the Monte Carlo hypothesis.
They accept that business activity is influenced by countless random factors and that these shocks may be very diverse in character and scope. Hence each specific and reference cycle is an individual, differing in countless ways from any other. But to measure and identify the peculiarities, they argue, a norm is required because even those who subscribe to the episodic theory cannot escape having notions of what is usual or unusual about a cycle.
Averages, therefore, supply the norm to which individual cycles can be compared. In addition to providing a benchmark for judging individual cycles, the averages indicate the cyclical behavior characteristic of different activities. Burns and Mitchell argue that the tendency for individual series to behave similarly in regard to one another in successive business cycles would not be found if the forces that produce business cycles had only slight regularity.
As a test of whether the series move together, the seven series chosen for their analysis are ranked according to durations and amplitudes, and a test for ranked distributions is used. Durations of expansions and contractions are also tested individually and correlation and variance analysis is applied. They find support for the concept of business cycles as roughly concurrent fluctuations in many activities. The tests demonstrate that although cyclical measures of individual series usually vary greatly from one cycle to another, there is a pronounced tendency towards repetition of relationships among movements of different activities in successive business cycles.
Given these findings, Burns and Mitchell argue that the tendency for averages to conceal episodic factors is a virtue. The predictive power of NBER leading indicators provides a measure of whether information gained from cycles can help to predict future cyclical evolution and consequently allows an indirect test of the Monte Carlo hypothesis.
Evans (1967) concluded that some valuable information could be gained from leading indicators since the economy had never turned down without ample warning from them and they had never predicted false upturns in the United States (between 1946 and 1966). For further discussion of the experience of forecasting with NBER indicators see Daly (1972).
Largely as a result of the work of the NBER a number of ‘stylized’ or qualitative facts about relationships between economic variables, particularly their pro-cyclicality or anti-(counter) cyclicality, have increasingly become accepted as the minimum that must be explained by any viable cycle theory prior to detailed econometric analysis. Lucas (1977), for example, reviews the main qualitative features of economic time series which are identified with the business cycle. He accepts that movements about trend in GNP, in any country, can be well described by a low order stochastic difference equation and that these movements do not exhibit a uniformity of period or amplitude. The regularities that are observed are in the comovements among different aggregate time series.
The principal comovements, according to Lucas, are as follows:
- Output changes across broadly defined sectors move together in the sense that they exhibit high conformity or coherence.
- Production of producer and consumer durables exhibits much more amplitude than does the production of non-durables.
- Production and prices of agricultural goods and natural resources have lower than average conformity.
- Business profits show high conformity and much greater amplitude than other series.
- Prices generally are pro-cyclical.
- Short-term interest rates are pro-cyclical while long-term rates are only slightly so.
- Monetary aggregates and velocities are pro-cyclical.
Lucas (1977) notes that these regularities appear to be common to all decentralized market economies, and concludes that business cycles are all alike and that a unified explanation of business cycles appears to be possible. Lucas also points out that the list of phenomena to be explained may need to be augmented in an open economy to take account of international trade effects on the cycle.
Finally, he draws attention to the general reduction in amplitude of all series in the post-war period. To this list of phenomena to be explained by a business cycle theory, Lucas and Sargent (1978) add the positive correlation between time series of prices (and/or wages) and measures of aggregate output or employment and between measures of aggregate demand, like the money stock, and aggregate output or employment, although these correlations are sensitive to the method of detrending.
Sargent (1979) also observes that ‘cycle’ in economic variables seems to be neither damped nor explosive, and there is no constant period from one cycle to the next. His definition of the ‘business cycle’ also stresses the comovements of important aggregate economic variables. Sargent (1979, Ch. XI) undertakes a spectrum analysis of seven US time series and discovers another ‘stylized fact’ to be explained by cycle theory, that output per man-hour is markedly pro-cyclical. This cannot be explained by the application of the law of diminishing returns since the employment/capital ratio is itself pro-cyclical.
Symmetric Business Cycles?
Blatt (1980) notes that the Frisch-type econometric modeling of business cycles (see section The Frisch-Slutsky Hypothesis) is dominant. Such models involve a linear econometric model which is basically stable but is driven into recurrent, but not precisely periodic, oscillations by shocks that appear as random disturbance terms in the econometric equations.
Blatt (1978) had demonstrated that the econometric evidence which appeared to lead to the acceptance of linear, as opposed to nonlinear, propagation models was invalid (section The Frisch-Slutsky Hypothesis). Blatt (1980) aims to show that all Frisch-type models are inconsistent with the observed facts as presented by Burns and Mitchell (1946). The qualitative feature or fact on which Blatt (1980) concentrates is the pronounced lack of symmetry between the ascending and descending phases of the business cycle.
Typically, and almost universally, Blatt observes, the ascending portion of the cycle is longer and has a lower average slope than the descending portion. Blatt claims that this is only partly due to the general, but not necessarily linear, long-term trend towards increasing production and consumption. Citing Burns and Mitchell’s evidence concerning data with the long-term trend removed, he notes that a great deal of asymmetry remains after detrending and argues that no one questions the existence of the asymmetry. De Long and Summers (1986a) subsequently do, however, as will be seen below.
Blatt (1980) points out that if the cyclical phases are indeed asymmetric, then the cycle cannot be explained by stochastic, Frisch-type, linear models. Linear deterministic models can only produce repeated sinusoidal cycles, which have completely symmetric ascending and descending phases, or damped or explosive, but essentially symmetric, cycles.
Cycles produced by linear stochastic models will be less regular but nevertheless will be essentially symmetric in the sense that there will be no systematic asymmetry (see also section 3.3). Frisch-type models consequently do not fit the data which demonstrates systematic asymmetry. To complement Burns and Mitchell’s findings, Blatt (1980) assesses the statistical significance of asymmetry in a detrended US pig-iron production series using a test implied by symmetry theorems in the paper and finds that the symmetry hypothesis can be rejected with a high degree of confidence.
He concludes that the asymmetry between the ascending and descending phases of the cycle is one of the most obvious and pervasive facts about the entire phenomenon and that one would have to be a statistician or someone very prejudiced in favor of Frisch-type modeling to demand explicit proof of the statistical significance of the obvious.
Neftci (1984) also examines the asymmetry of economic time series over the business cycle. Using the unemployment series, which have no marked trend, he adopts the statistical theory of finite-state Markov processes to investigate whether the correlation properties of the series differ across phases of the cycle. He notes that the proposition that econometric time series are asymmetric over different phases of the business cycle appears in a number of major works on business cycles.
Neftci presents a chart showing that the increases in US unemployment have been much sharper than the declines in the 1960s and 1970s and his statistical tests, which compare the sample evidence of consecutive declines and consecutive increases in the time series, offer evidence in favour of the asymmetric behaviour of the unemployment series analysed in the paper.
Neftci (1984) then discusses the implications of asymmetry in macroeconomic time series for econometric modelling. Firstly, in the presence of asymmetry the probabilistic structure of the series will be different during upswings and downswings and the models employed should reflect this by incorporating nonlinearities to allow ‘switches’ in optimising behaviour between phases.
Secondly, although the implication is that nonlinear econometric or time series models should be employed, it may be possible to approximate these models, which are cumbersome to estimate, with linear models in which the innovations have asymmetric densities. Further work is required to verify this conclusion, he notes.
De Long and Summers (1986a) also investigate the proposition of business cycle asymmetry. They note that neither the econometric models built in the spirit of the Cowles Commission nor the modern time series vector autoregressive (VAR) models are entirely able to capture cyclical asymmetries. Consequently, they argue, if asymmetry is fundamentally important then standard linear stochastic techniques are deficient and the NBER-type traditional business cycle analysis may be a necessary component of empirical business cycle analysis. The question of asymmetry is therefore one of substantial methodological importance.
De Long and Summers undertake a more comprehensive study than Neftci (1984) using pre- and post-war US data and post-war data from five other OECD nations. They find no evidence of asymmetry in the GNP and industrial production series. For the United States only, like Neftci (1984) they find some asymmetry in the unemployment series. They conclude that asymmetry is probably not a phenomenon of first order importance in understanding business cycles.
De Long and Summers observe that the asymmetry proposition amounts to the assertion that downturns are brief and severe relative to trend and upturns are larger and more gradual. This implies that there should be significant skewness in a frequency distribution of periodic growth rates of output. They therefore calculate the coefficient of skewness, which should be zero for symmetric series, for the various time series.
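The calculation is straightforward to sketch; the series below is artificial rather than actual GNP data, and the standard error shown is only the rough large-sample approximation sqrt(6/n).

```python
import numpy as np
from scipy.stats import skew

def growth_rate_skewness(output):
    """Skewness of period-to-period growth rates; zero for a symmetric cycle.

    Negative skewness would indicate the proposed asymmetry: contractions that are
    brief and severe relative to the longer, more gradual expansions.
    """
    growth = np.diff(np.log(np.asarray(output, dtype=float)))
    return skew(growth), np.sqrt(6.0 / len(growth))   # skewness and a rough standard error

# Artificial output series, not actual GNP data.
rng = np.random.default_rng(2)
output = np.exp(np.cumsum(0.008 + rng.normal(scale=0.01, size=160)))
g, se = growth_rate_skewness(output)
print(f"skewness = {g:.2f} (approximate s.e. {se:.2f})")
```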
Overall they find little evidence of skewness in the US data. In the pre-war period they find slight positive skewness, which implies a rapid upswing and a slow downswing, the opposite of what is normally proposed. In the post-war period there is some evidence of the proposed negative skewness and in the case of annual GNP the negative skewness approaches statistical significance. Turning to data from other OECD countries, they find that skewness is only notably negative in Canada and Japan. There is no significant evidence of asymmetry in the United Kingdom, France or Germany.
De Long and Summers argue that the picture of recessions as short violent interruptions of the process of economic growth is the result of the way in which economic data is frequently analysed. The fact that NBER reference cycles display contractions that are shorter than expansions is a statistical artifact, they assert, resulting from the superposition of the business cycle upon an economic growth trend.
The result is that only the most severe portions of the declines relative to trend will appear as absolute declines and thus as reference cycle contractions. Consequently, they argue, even a symmetric cycle superimposed upon a rising trend would generate reference cycles with recessions that were short and severe relative to trend, even though the growth cycles (the cycles in detrended series) would be symmetric. Comparing the differences in length of expansions and contractions for nine post-war US NBER growth cycles, they find them not to be statistically significant, in contrast to a similar comparison of seven NBER reference cycles.
They conclude that once one has taken proper account of trend, using either a skewness-based approach or the NBER growth cycle dating procedure, little evidence remains of cyclical asymmetry in the behaviour of output. This of course assumes that detrending does not distort the cycle so derived and that the trend and cycle are separable phenomena.
De Long and Summers finally turn to Neftci’s (1984) findings for US unemployment series, which contradict their results. They argue that Neftci’s statistical procedure is inadequate and proceed to estimate the skewness in US post-war unemployment data. They discover significant negative skewness and are unable to accept the null hypothesis of symmetry. None of the unemployment series from other OECD countries displayed significant negative skewness, however. They are therefore able to argue that it reflects special features of the US labour market and is not a strong general feature of business cycles.
De Long and Summers are, as a result, able to conclude that it is a reasonable first approximation to model business cycles as symmetrical oscillations around a rising trend and that the linear stochastic econometric and time series models are an appropriate tool for empirical analysis.
They consequently call into question at least one possible justification for using NBER reference cycles to study macroeconomic fluctuations. They note that an alternative justification for the reference cycle approach stresses the commonality of the patterns of comovements (section 1.1) in variables across different cycles and that Blanchard and Watson (1986) challenge this proposition.
Within the context of an assessment of NBER methodology, Neftci (1986) considers whether there is a well-defined average or reference cycle and whether or not it is asymmetric. His approach is to confront the main assertions of the NBER methodology, discussed in the previous section, with the tools of time series analysis; these imply that NBER methodology will have nothing to offer beyond conventional time series analysis if covariance-stationarity is approximately valid and if (log) linear models are considered. If covariance-stationarity and/or linearity does not hold, the NBER methodology may have something to contribute if it indirectly captures any nonlinear behaviour in the economic time series.
From each time series under consideration Neftci derives the local maxima and minima of each cycle, which measure implied amplitudes, and the lengths of the expansionary and contractionary phases. These data, he argues, should contain all the information required for a quantitative assessment of the NBER methodology. Neftci first examines correlations between the phase lengths and the maxima and minima, and then between these variables and major macroeconomic variables.
If the length of a stage is important in explaining the length of subsequent stages then the phase processes should be autocorrelated and the NBER methodology would, by implication, potentially capture aspects of cyclical phenomena that conventional econometrics does not account for. To investigate such propositions Neftci uses an updated version of Burns and Mitchell’s (1946) pig-iron series.
Neftci finds that the length of the upturn does affect the length of the subsequent downturn significantly but that the length of past downturns does not affect the length of subsequent upturns. Using the series for local maxima and minima, Neftci examines the relationship between the size of the drop during a downswing and the size of the increase during the subsequent upswing, and finds a significant relationship between the two. Again the result is unidirectional, because he finds that the size of the upswing has no effect on the subsequent drop.
Introducing the paper, Brunner and Meltzer (1986) note that the latter result confirms the important finding described by Milton Friedman in the 44th Annual Report of the NBER and that the unidirectional correlations run in opposite directions for the lengths series and the drop and increase series (which might imply stationarity; see Rotemberg 1986).
Neftci regards the results as tentative, given the small numbers of observations employed, but nevertheless concludes that sufficient information apparently exists in the series derived to represent NBER methodology to warrant investigating the information more systematically. To do this Neftci defines a new variable that can express the state of the current business cycle without prior processing of the data.
This is done to avoid the possibility that selecting the turning points after observing the realisation of a time series will bias any estimation procedures in favour of the hypothesis that the reference cycle contains useful additional information not reflected in the time series, or, as Neftci puts it, a cyclical time unit exists separately and independently of calendar time.
The variable introduced is a counting process whose value at any time indicates the number of periods elapsed since the last turning point if the time series exhibits strong cyclicality but no trend. When a positive trend is present, however, the variable will be a forty-five degree line, and when the series is strongly asymmetric, with large jumps being followed by gradual declines, the variable will have a negative trend with occasional upward movements.
It can, therefore, capture some of the nonlinear characteristics of the series. Counting variables were derived from various macroeconomic time series and included in vector and univariate autoregressions. The major findings from the vector autoregressions were as follows. The counting variable significantly affects the rate of unemployment in all cases. It shows little feedback into nominal variables such as prices and the money supply. The fact that the counting variable helps explain the variation in unemployment, which has no trend, implies that the variable, which reflects the stage of the cycle in the absence of trend, carries useful additional information.
Since the counting variable is a nonlinear transformation of the unemployment series, the implication is that the NBER methodology may capture some nonlinear stochastic properties of the economic time series which are unexploited in the standard linear stochastic framework. The univariate autoregressions for major macroeconomic time series included lagged values of the counting variable and a time trend. For most of the macroeconomic variables the counting variable was significant and in many cases strongly so.
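The counting variable just described can be sketched directly, taking the turning point dates as given; the dates below are hypothetical, not NBER reference dates.

```python
import numpy as np

def counting_variable(n_periods, turning_points):
    """Neftci-style counting variable: periods elapsed since the last turning point.

    `turning_points` is a set of turning point dates given as integer indices;
    before the first turning point the count simply runs on from zero.
    """
    count = np.zeros(n_periods, dtype=int)
    last = 0
    for t in range(n_periods):
        if t in turning_points:
            last = t
        count[t] = t - last
    return count

# Hypothetical turning point dates, not NBER reference dates.
print(counting_variable(12, {3, 8}))   # -> [0 1 2 0 1 2 3 4 0 1 2 3]
```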
Neftci then considers the reasons for the significance of his findings that cyclical time units carry useful additional information. The first possibility he identifies is that turning points may occur suddenly and it may be important for economic agents to discover these sudden occurrences (Neftci 1982). The second is that the derivative of the observed processes has different (absolute) magnitudes before and after turning points. In other words, there is asymmetry as discovered by Neftci (1984) but disputed by De Long and Summers (1986a) (see discussion above).
Thirdly, the notion of trend may be more complex than usually assumed in econometric analysis. It may for example be non-deterministic (see section 4.3); consequently it may be useful to work with cyclical time units rather than standard calendar time. From a different perspective, one could argue that the stage of the business cycle may explicitly enter into a firm’s or even a consumer’s decision-making process. If a cyclical time unit, or average or reference cycle, can be consistently defined and successfully detected, then macroeconomic time series can be transformed to eliminate business cycles and highlight any remaining periodicity, or long cycles, in the trend component.
The phase-averaging of data employed by Friedman and Schwartz (1982) and criticised by Hendry and Ericsson (1983) is a procedure that uses a cyclical time unit. Phase-averaging entails splitting a time series into a number of consecutive business cycles after a visual inspection of a chart of the series.
The time series are then averaged over the selected phases of the cycle and the behaviour of the process during a phase is replaced by the average. Usually only the expansionary and contractionary phases are selected; consequently the whole cycle will be replaced by two points of observation. (See Neftci 1986, p.40, for a formal discussion.) The procedure effectively converts calendar time data into cyclical time unit observations.
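A crude sketch of the procedure, with hypothetical phase boundaries rather than phases selected by inspecting an actual chart, is given below; each phase is collapsed into a single ‘cyclical time unit’ observation.

```python
import numpy as np

def phase_average(series, phase_starts):
    """Replace each phase of a series by its average over that phase.

    `phase_starts` lists the index at which each phase begins; the final phase
    runs to the end of the series. Each calendar-time phase is thereby collapsed
    into a single 'cyclical time unit' observation.
    """
    series = np.asarray(series, dtype=float)
    ends = list(phase_starts[1:]) + [len(series)]
    return np.array([series[a:b].mean() for a, b in zip(phase_starts, ends)])

# Hypothetical phase boundaries, not dates chosen by inspecting an actual chart.
y = np.array([1.0, 1.2, 1.5, 1.4, 1.1, 0.9, 1.0, 1.3, 1.6, 1.5, 1.2])
print(phase_average(y, [0, 3, 6, 9]))   # one observation per phase
```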
Following Hendry and Ericsson (1983), Neftci concurs that if a traditional linear stochastic econometric model with a possibly nonlinear trend is the correct model, then the application of phase-averaging, which is like applying two complicated nonlinear filters that eliminate data points and entail a loss of information, would be inappropriate, even if there were a cyclical time unit. Consequently, phase-averaging can be justified only if a linear econometric model misses aspects of the cyclical phenomena that phase-averaging captures. Neftci (1986) notes that users of phase-averaging would reject the insertion of deterministic, rather than stochastic, trends in linear econometric models. In fact, Neftci argues, phase-averaging can be seen as a method of using the cyclical time unit to isolate a stochastic trend in economic time series.
Neftci concludes that the introduction of the counting variable, which effectively involves a nonlinear transformation of the data, improves explanatory power and indicates that this was the result of the presence of (stochastic) nonlinearities. It therefore appears that nonlinear time series analysis will contribute to future analysis of the business cycle.
Commenting on Neftci (1986), Rotemberg (1986) expresses concern about the general applicability of Neftci’s procedure for identifying the stochastic trend. In series with trends where a growth cycle is present it is difficult to date local maxima and minima without first detrending, as the NBER has discovered in the post-war period. One possible way round the problem, he suggests, is to use series without trends, such as unemployment, to date the peaks and troughs and then use these dates to obtain phase-averages in other series. Since the timing of peaks and troughs in different series will vary stochastically, it would be important to analyse ‘clusters’ of peaks and troughs in detrended series to arrive at appropriate dates.
The Frisch-Slutsky Hypothesis
Econometric analysis of business cycles has tended to concentrate on testing various versions of the hypothesis arising out of the work of Frisch (1933) and Slutsky (1937). Frisch (1933) postulated that the majority of oscillations were free oscillations - the structure of the system determining the length and dampening characteristics of the cycle and external (random) impulses determining the amplitude. As noted in section 1.2, such systems can produce regular fluctuations from an irregular (random) cause.
If Frisch is correct then cycle analysis can proceed to tackle two separate problems: the propagation problem, which involves modelling the dynamics of the system; and the impulse problem, which involves the identification of the sources and effects of shocks and modelling the shock-generating process. Frisch believed that the solution of the propagation problem would be a system providing cyclical oscillations, in response to shocks, which converge on a new equilibrium.
As an approximation to the solution of the ‘propagation problem’, Frisch derives a macrodynamic system of mixed difference and differential equations based on the theory of Aftalion (1927). The model’s solutions have the properties sought by Frisch, namely a primary, a secondary and a tertiary cycle together with a trend; most importantly, the cycles are damped.
Frisch’s approach is clearly a useful one but unfortunately many students of economic cycles have forgotten that he tried to solve the ‘propagation problem’ prior to tackling the ‘impulse problem’. The testing of the Frisch hypothesis often involves deriving a shock-generating mechanism with sufficient energy to produce cycles from an econometric model and thus gives undue attention to the solution of the impulse problem and inadequate attention to the solution of the propagation problem, i.e. dynamic specification.
Frisch regarded his model as a first approximation, pointing to the work of Fisher (1925) and Keynes (1936) as sources of ideas for improvement. A systematic testing of various solutions to the propagation problem is noticeably lacking in the literature.
Frisch’s hypothesis that the propagation model should have damped, rather than self-sustaining, cycles has not been adequately tested. Questions that remain unanswered include the following. What degree of dampening, if any, should be expected? What are the relative roles of endogenous cycles and external shocks? Or, alternatively, to what extent is the cycle free or forced? It is to be noted that even if self-sustaining (endogenous) cycles are postulated, shocks will have a role to play in that they will add irregularity; so a solution to the impulse problem is still required. The role of the impulse model will of course differ in such cases from that attributed to it by Frisch, which was the excitement of free (damped) oscillations generated by the propagation model.
Frisch proposed two types of solution to the impulse problem. First, expose the system to a stream of erratic shocks to provide energy; second, following Schumpeter (1934), use innovations as a source of energy. The result of the former, Frisch finds, is a cycle that varies within acceptable limits in its period and amplitude. The dynamic system thus provides a weighting system that allows the effects of random shocks to persist.
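The Frisch I mechanism can be illustrated with a minimal sketch: a damped second-order propagation model that, left to itself, returns to equilibrium, but which produces recurrent, irregular oscillations when fed a stream of erratic shocks (all coefficient values below are illustrative, not taken from Frisch's model).

```python
import numpy as np

def damped_ar2(n, phi1=1.5, phi2=-0.6, shock_sd=1.0, seed=0):
    """Second-order propagation model, optionally fed a stream of erratic shocks.

    With shock_sd = 0 the free oscillation dies away (the system is damped);
    with shocks the same system produces recurrent but irregular cycles.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    y[0] = y[1] = 1.0                              # arbitrary initial displacement
    for t in range(2, n):
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + shock_sd * rng.normal()
    return y

free = damped_ar2(200, shock_sd=0.0)               # damped: amplitude shrinks towards zero
maintained = damped_ar2(200, shock_sd=1.0)         # shocks keep the oscillation alive
print(float(np.abs(free[-50:]).max()), float(np.abs(maintained[-50:]).max()))
```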
Frisch suggests that erratic shocks may not provide the complete solution to the impulse problem and assumes that inventions accumulate continuously but are put into practical use (as innovations) on a large scale only during certain phases of the cycle, thus providing the energy to maintain oscillations. The resulting cycle he calls an automaintained cycle. Frisch illustrates with a description of a pendulum and a water tank, with water representing inventions.
A valve releases the water for practical use at certain points in the swing of the pendulum (economy), thus providing energy. Frisch notes that the model could lead to continuous swings or even increasing oscillations, in which case a dampening mechanism would be needed. He seemed to have in mind here something that reduces and slows movement, such as automatic stabilisers, rather than Hicksian ceilings and floors (Hicks 1950). Frisch regarded these two types of solution as possibly representing equally important aspects of the cycle.
Frisch (1933), therefore, provides two possible solutions to the impulse problem: the Frisch I hypothesis that exogenous, purely random, shocks provide energy to a system (propagation model), with a damped cyclical solution, to produce the cycles observed in the economy; and the Frisch II hypothesis that the shocks are provided by the movement of the economic system and these shocks supply the necessary energy to keep the otherwise damped oscillations from dying out.
These shocks are released systematically, but whether they are regarded as exogenous or endogenous depends on whether or not a theory of innovations is included in the model. It should be noted that the Frisch I hypothesis is a bit loose in the sense that the random shocks could apply to equation error terms, exogenous variables or parameters; and shocks to each have different rationalisations and, therefore, imply subhypotheses. Further, these various types of shock are not mutually exclusive.
Slutsky’s (1937) work (see also Yule 1927) largely overlaps with that of Frisch (1933) and tends to confirm some of its major propositions, but there are some useful additional points made. Slutsky considers the possibility that a definite structure of connection of random fluctuations could form them into a system of more or less regular waves.
Frisch (1933) demonstrated that this was possible. Slutsky distinguishes two types of chance series: those where probabilities are conditional on previous or subsequent values, i.e. autocorrelation within the series but not cross-correlation between series, which he calls coherent series; and those with independence of values in the sequence (i.e. no autocorrelation), which he calls incoherent series.
Slutsky derives a number of random series which are transformed by moving summation. We shall call the resulting series type I series. Slutsky then forms type II series by taking moving sums of type I series. Analysis of type I series shows that cyclical processes can be derived from the (moving) summation of random causes. Type II series display waves of a different order to those in type I series and, Slutsky notes, a similar degree of regularity to economic series.
The type II series are subjected to Fourier analysis which reveals a regular long cycle. Slutsky also finds evidence of dampening and suggests the system consists of two parts: vibrations determined by initial conditions; and vibrations generated by disturbances. The disturbances, he suggests, accumulate enough energy to counter the dampening, and the vibrations ultimately have the character of a chance function, the process being described solely by the summation of random causes. Tests of whether the business cycle is adequately described as a summation of random causes, rather than by a complicated weighting of such random shocks through a ‘propagation model’ derived from economic theoretic considerations, were discussed in section 1.2.
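The essence of Slutsky's experiment can be illustrated with a short simulation: an incoherent (serially independent) random series is transformed by moving summation into a type I series, and a second moving summation yields a type II series. The window lengths below are illustrative assumptions rather than Slutsky's original choices; the average spacing between upward crossings of the mean is used as a rough measure of the wavelength that successive summations impart.

```python
import numpy as np

# Sketch of Slutsky's experiment: waves from the moving summation of random causes.
# Window lengths are illustrative, not Slutsky's original choices.
rng = np.random.default_rng(1)
incoherent = rng.normal(size=5000)              # serially independent ("incoherent") series

def moving_sum(x, window):
    return np.convolve(x, np.ones(window), mode="valid")

type_I = moving_sum(incoherent, 10)             # summation of random causes
type_II = moving_sum(type_I, 10)                # summation of a type I series

def mean_wavelength(x):
    # Average spacing between upward crossings of the mean: a crude cycle length.
    x = x - x.mean()
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    return float(np.diff(up).mean())

for name, series in [("incoherent", incoherent), ("type I", type_I), ("type II", type_II)]:
    print(f"{name:10s}: average wave length ~{mean_wavelength(series):.1f} periods")
```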
It is common in stochastic simulations of econometric models to feed in autocorrelated shocks. Since this is essentially what has been done by Slutsky to yield type II series, which provide his best results, we may regard these as tests of the Slutsky hypothesis. In terms of Frisch’s analysis, Slutsky hardly considered the propagation problem, using instead purely mechanical moving sums. His work is best regarded as a contribution to the solution of the impulse problem.
One further point arises from the work of Slutsky, and related work by Yule (1927). This has become known as the ‘Slutsky-Yule’ effect (Sargent 1979, pp.248-51): smoothing data with moving averages can itself generate quasi-periodic fluctuations in the smoothed series. It follows that a number of series, smoothed by the same moving average process, are likely to show similar cycles.
The Slutsky-Yule effect does not mean that cycles do not exist in economic series, but it does imply the need to be careful in dealing with series that have been smoothed or filtered, perhaps to eliminate trend or seasonal effects, since spurious cycles may be introduced. This problem is particularly relevant when tests of the ‘long swing hypothesis’ are considered, since it should be borne in mind that smoothing the series to eliminate shorter cycles could well have created longer cycles in the smoothed data. This likelihood is demonstrated by Slutsky’s finding that type II series had clearly identifiable long cycles whereas type I series did not.
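The warning can be illustrated directly. The sketch below applies the same moving-average filter to two completely independent white-noise series and compares their sample autocorrelations with the autocorrelation the filter itself implies (for an n-term simple moving average of white noise the implied autocorrelation at lag k is 1 - k/n for k < n). Any apparent ‘cycle’ shared by the two smoothed series is a property of the filter, not of the data; the window length is an illustrative assumption.

```python
import numpy as np

# Slutsky-Yule sketch: the same moving-average filter imposes the same
# quasi-cycle on two completely independent white-noise series.
rng = np.random.default_rng(2)
n = 12                                          # illustrative smoothing window
kernel = np.ones(n) / n

def smooth(x):
    return np.convolve(x, kernel, mode="valid")

def autocorr(x, k):
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

a = smooth(rng.normal(size=3000))
b = smooth(rng.normal(size=3000))               # independent of series a

# An n-term moving average of white noise has implied autocorrelation
# 1 - k/n at lag k < n: the "cycle" comes from the filter, not the data.
for k in (1, 4, 8):
    print(f"lag {k}: series A {autocorr(a, k):+.2f}, series B {autocorr(b, k):+.2f}, "
          f"filter-implied {1 - k / n:+.2f}")
```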
It should be noted that the Frisch I hypothesis implies that economic oscillations are free (although damped), whereas Slutsky’s hypothesis, that the cycle is formed by the summation of autocorrelated shocks, implies that oscillations are more likely to be forced. It is also possible that the method of summation or weighting implicit in the propagation model could, in the Slutsky case, impart significant cyclical features in addition to those ‘forced’ by the autocorrelated shocks.
The greater the dampening factor, the larger the shocks needed to produce a regular cycle. The problem is that it is always possible to produce random shocks that produce cycles if they are of the right size and occur with the required frequency. What is needed is an indication of a reasonable magnitude of shocks and the frequency with which they occur. If this ‘reasonable’ random shock series cannot produce acceptably realistic cycles then something is wrong.
Kalecki (1952) illustrates the point that, with heavier dampening, a cycle that was regular becomes irregular and of the same order of magnitude as the shock series itself. The erratic shocks used by Kalecki in his demonstration were drawn from an even frequency distribution, i.e. shocks with large or small deviations from the mean occurring with equal frequency.
Frisch (1933) and Slutsky (1937) also worked with shocks of even frequency. Random errors are, however, usually assumed to be subject to the normal frequency distribution, in accordance with the hypothesis that they themselves are sums of numerous elementary errors and such sums conform to the normal frequency distribution.
Kalecki observes that, whether or not random shocks in economic phenomena can be considered as sums of numerous elemental errors (random shocks), it seems reasonable to assume that large shocks have a smaller frequency than small shocks. Hence a normal frequency distribution of shocks will be more realistic than an even frequency distribution.
Kalecki finds that the cycle generated by normally distributed shocks shows considerable stability with respect to changes in the basic equation which involve a substantial increase in dampening and, even with fairly heavy dampening, normally distributed shocks can generate fairly regular cycles from a linear equation.
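A sketch of the kind of experiment Kalecki describes is given below: the same damped linear propagation equation is driven by evenly distributed and by normally distributed shocks of equal variance, at light and at heavy dampening, with the regularity of the resulting cycle judged by the dispersion of the intervals between successive peaks. The coefficients, sample length and regularity measure are all illustrative assumptions rather than Kalecki's own.

```python
import numpy as np

# Kalecki-style experiment: a damped linear propagation equation driven by
# uniform ("even frequency") versus normal shocks of equal variance, at light
# and heavy dampening.  All settings are illustrative only.
rng = np.random.default_rng(3)
T = 4000

def simulate(rho, shocks):
    # AR(2) with complex roots of modulus rho and a nominal 12-period cycle.
    theta = 2 * np.pi / 12
    a1, a2 = 2 * rho * np.cos(theta), -rho**2
    y = np.zeros(T)
    for t in range(2, T):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + shocks[t]
    return y

def peak_spacing_cv(y):
    # Coefficient of variation of the intervals between local peaks:
    # lower values indicate a more regular cycle.
    peaks = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0]
    gaps = np.diff(peaks)
    return float(gaps.std() / gaps.mean())

normal = rng.normal(0.0, 1.0, T)
uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), T)      # same variance as the normal shocks

for rho in (0.95, 0.5):                                # light versus heavy dampening
    for name, e in [("normal", normal), ("uniform", uniform)]:
        print(f"modulus {rho:4.2f}, {name:7s} shocks: "
              f"peak-spacing CV = {peak_spacing_cv(simulate(rho, e)):.2f}")
```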
The Frisch-Slutsky hypothesis, that the business cycle is the result of a series of shocks to a linear economic model, which imparts dampening effects, has formed the basis of post-war business cycle modeling. It is implicit in the Keynesian approach, as demonstrated by the simulation analysis of the large scale econometric models in the 1970s,28 as well as in the New Classical approach. Lucas and Sargent (1978) have explicitly observed that their equilibrium theory of the cycle (Mullineux 1984, Ch. 3) is also based on the Frisch-Slutsky hypothesis.
In the New Classical models, the ‘impulse problem’ is solved by the real and monetary shocks that result in unanticipated price changes. The shocks are assumed to be random and non-autocorrelated with constant mean and variance. In order to explain the persistence of the effects of the shocks and to provide a model of the cycle, the impulse model must be supplemented with a propagation model.
The ‘Lucas supply hypothesis’ (Lucas 1972, 1973) introduces a positive (negative) supply response to unanticipated price increases (decreases), so that a random, non-autocorrelated series for output would be expected to result from random shocks feeding through to prices (Mullineux 1984, Ch. 4). Lucas (1975, 1977) explains why these output effects might persist and thereby provides a solution to the propagation problem for these models, allowing them to explain the observed autocorrelation in output series.
In Lucas’s (1975) model, persistence is introduced by employing a modified accelerator hypothesis. The positive supply response leads to an increase in capital stock which cannot instantly be reversed if the supply response was incorrect, in the sense that it was a response to a monetary rather than a real shock. It must be reduced over time, at the rate of depreciation.
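A highly stylised sketch of this persistence mechanism (not Lucas's model itself) is given below: a purely transitory price surprise raises output and hence investment, and because the extra capital can only be run down at the depreciation rate the output effect dies away slowly rather than vanishing with the shock. All parameter values and functional forms are illustrative assumptions.

```python
import numpy as np

# Stylised persistence-through-capital sketch (not Lucas's 1975 model itself).
# All variables are deviations from their normal levels; parameters illustrative.
T, delta, alpha, s = 40, 0.1, 0.3, 0.3
k = np.zeros(T)                 # capital stock deviation
y = np.zeros(T)                 # output deviation
surprise = np.zeros(T)
surprise[5] = 1.0               # one-off unanticipated (transitory) monetary shock

for t in range(1, T):
    y[t] = alpha * k[t - 1] + surprise[t]          # supply response plus capital effect
    k[t] = (1 - delta) * k[t - 1] + s * y[t]       # part of the extra output is invested

print("output deviation after a purely transitory shock at t = 5:")
print(np.round(y[5:15], 3))
```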
When simulated, most econometric models, which are essentially linear or log linear, display stable growth rather than damped oscillations. Thus these models cannot explain cycles, in the Frischian manner, when bombarded with random shocks and certainly cannot explain the cycle endogenously. Serially correlated shocks are usually required to simulate the economy to any degree of accuracy.
Blatt (1978) calls this the modified Frisch-Slutsky theory. One interpretation of serial correlation in the shocks is that it indicates dynamic misspecification and, in particular, insufficient lags. The success of autoregressive integrated moving average (ARIMA) models, whose strength is lag specification, in forecasting economic time series also points towards the conclusion that the weakness of large scale econometric models was in their lag structure.30
Attempts to improve models by refining their lag structures could, however, lead to further misspecification if it is to nonlinearities that we should be looking to solve the propagation problem. Further, if the nonlinear approach is correct, then it may be necessary to replace the traditional trend (growth) and deviation from trend (cycle) analysis with an integrated theory of the dynamic development of the economy.31
One of the first problems to be resolved is whether a linear system can provide a reasonable approximation to the economy. (See also section: Are Business Cycles Symmetric? and Towards a Theory of Dynamic Economic Development.)
If it can, then efforts should be made to improve dynamic specification, and the Frischian approach of seeking the solution to the propagation and impulse problems should be pursued. In the case of explosive rather than the damped cycles usually associated with the Frischian approach, it would also be necessary to consider ‘billiard table’ or type I nonlinearities,32 such as ceilings and floors.
In specifying a model for testing a theory of the business cycle, it is necessary to consider the shock-generation mechanism or to solve the impulse problem because the dynamic path of the stochastic form of the model will differ from that of the deterministic form. The importance of the shock-generating mechanism will depend on the type of model being considered. It will be less important for a nonlinear model with a stable limit cycle solution than for a monotonically stable system.
To construct a cycle model one must first decide whether the principal active forces are endogenous or exogenous to the model. Haavelmo (1940) called the exogenous case an open model and the endogenous case a closed model. The choice between an open or closed model should ideally be made after a priori theory has allowed full dynamic specification of the model, which involves specification of nonlinearities and lags.
The model would then be analysed using simulation and/or analytical techniques in order to determine whether maintained, damped or explosive oscillations were present. In the case of damped cycles, or monotonic dampening, it is necessary to assume an open model in order to simulate observed cycles, whereas for the maintained or explosive cycle cases shocks would accentuate the explosiveness and add irregularity. Type I nonlinearities would be required to contain the cycle, and the model would be essentially closed. A nonlinear model with a stable limit cycle, in which shocks simply add irregularity, is clearly a closed model.
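The distinction can be illustrated numerically. In the sketch below, a damped linear model (an ‘open’ model) ceases to oscillate once its initial displacement has died away, so shocks are needed to reproduce observed cycles, whereas a nonlinear system with a stable limit cycle (here the textbook van der Pol oscillator, used purely as an illustration and not as an economic model) settles into self-sustaining oscillations to which shocks would merely add irregularity. All coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Open model: damped AR(2), oscillations die out without shocks.
y = [1.0, 1.0]
for t in range(2, 300):
    y.append(1.4 * y[-1] - 0.6 * y[-2])
print("damped linear model, amplitude of last 50 periods: %.6f" % np.std(y[-50:]))

# Closed model: van der Pol oscillator (Euler-discretised), converging to a
# stable limit cycle from a small initial displacement.
mu, dt = 1.0, 0.01
x, v = 0.1, 0.0
amplitude = []
for step in range(100_000):
    a = mu * (1 - x**2) * v - x
    x, v = x + dt * v, v + dt * a
    if step > 50_000:
        amplitude.append(abs(x))
print("van der Pol limit cycle, typical amplitude: %.2f" % np.max(amplitude))
```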
In the open model case it is also necessary to consider whether the driving force is itself cyclical, resulting in ‘forced oscillations’, or whether the cycle is the result of the way the system responds to non-oscillatory stimulating forces, i.e. ‘free oscillations’. In the cases of damped cycles or monotonic convergence, it is clear from the previous discussion of the Slutsky-Yule effect that an open model can generate business cycles.
For many of the large scale macroeconometric models, random shock simulations proved inferior to autocorrelated shock simulations. The resulting cycles were consequently forced oscillations, the driving force coming from the imposed error structure. In view of the fact that the presence of autocorrelation can be viewed as indicative of dynamic misspecification, it is not clear whether these oscillations should really be viewed as ‘forced’, on an otherwise monotonically stable system, or whether the system, with correct dynamic specification, would produce its own, perhaps damped, cycles that would be stimulated by random shocks to produce ‘free’ oscillations.
It is also to be noted that forced oscillations could arise from the exogenous variable generating process, a possibility largely ignored in the model simulation exercises of the early 1970s. With multiplicative errors, the exogenous variable generating process might even produce free oscillations, another virtually untested hypothesis.
Thus, especially where it is believed that the oscillations are forced, but also in the Frischian case, where random shocks solve the impulse problem and the model the propagation problem, it is essential to have a model of shock generation and also a model of exogenous variable generation. Further, to achieve a degree of realism, closed cycle models need to be analysed in stochastic form so that in this case too shock generation should be considered.
In order to formulate a theory in the forced oscillation case, it is essential to decide where the source of energy originates. Once the probable source of energy is located, and if we believe the dynamic specification of the model to be correct, it is not safe to assume that we can simply choose a (possibly ARIMA) process to generate the energy (impulses) to our propagation model that best simulates observed cycles. We ought to have a rationalisation for the forcing elements. In other words an impulse model is required.
This is difficult to derive because the errors could represent omitted variables - which are omitted because they are unobservable, not believed to be relevant, or due to considerations of model size. If a propagation model cannot generate an acceptable cycle when hit by random shocks we should look at it critically, unless we have good a priori reasons to expect ARIMA generated shocks of a particular degree, given the risk that the autocorrelated error shocks could represent misspecification.
It is necessary to decide whether the shocks are to be applied via the error term or the exogenous variables. If they are applied via the exogenous variables, then more attention must be paid to the prediction of those variables. It seems desirable that, instead of trend predictions for exogenous variables in simulation experiments, ARIMA processes should be used to derive optimal linear forecasts based on past observations of the exogenous variables. The only relevant information in forecasting a truly exogenous variable should be its own past history.
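The idea can be sketched as follows: a hypothetical exogenous variable (simulated here purely for illustration) is forecast over a hold-out period both by extrapolating a fitted linear trend and by iterating a first-order autoregression fitted by least squares, the latter standing in for a full ARIMA specification. Comparing the forecast errors indicates how much information the variable's own past history carries.

```python
import numpy as np

# Forecasting a (hypothetical) exogenous variable from its own past history
# versus extrapolating a linear trend.  The series is simulated for illustration.
rng = np.random.default_rng(5)
T = 120
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = 0.05 + 0.8 * x[t - 1] + rng.normal(0.0, 0.5)

train, test = x[:100], x[100:]

# Linear trend extrapolation.
t_idx = np.arange(100)
slope, intercept = np.polyfit(t_idx, train, 1)
trend_fcast = intercept + slope * np.arange(100, T)

# AR(1) fitted by least squares, iterated forward for multi-step forecasts.
phi, const = np.polyfit(train[:-1], train[1:], 1)
ar_fcast = []
last = train[-1]
for _ in range(T - 100):
    last = const + phi * last
    ar_fcast.append(last)

def rmse(f):
    # Root mean squared forecast error over the hold-out period.
    return float(np.sqrt(np.mean((np.asarray(f) - test) ** 2)))

print(f"trend extrapolation RMSE: {rmse(trend_fcast):.3f}")
print(f"AR(1) forecast RMSE:      {rmse(ar_fcast):.3f}")
```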
Sims (1980) opened up the whole question of the exogeneity and endogeneity of variables. He challenged the a priori approach to this choice and suggested that the division of variables, between exogenous and endogenous, should be based on causality tests and, at minimum, the a priori choice should be checked in this way. There seems to be some ambiguity in the choice of endogenous and exogenous variables which results from the size of model to be considered. For example, government policy variables have often been treated as exogenous because no government objective function is included in standard econometric models.
If a government objective function is included, however, policy variables become endogenous.33 Further, in concentrating on economic factors, it is common to treat non-economic factors such as weather and demographic trends as exogenous. There are, however, scientists who regard these variables as endogenous to their models. Thus in some cases it may be possible to utilise satellite models, for weather or population prediction for example, into which information can be fed to generate exogenous (to the economic model) variable processes.
Once these ‘forecasts’ for exogenous variables have been made they can be fed into the model, in place of linear trend predictions, in order to see how much energy is provided. The expected amount of energy can be gauged by comparing ex post simulations using true exogenous variables with those using trend generated exogenous variables instead.
Further, the use of ARIMA forecasted exogenous variables for ex ante forecasting will introduce systematic or random shocks to the exogenous variables which are not provided by the trend extrapolation of exogenous variables. The size and nature of the random errors can be gauged by comparing ex post simulations, using true exogenous variables, with ARIMA generated exogenous variables. Once the propagation model, with its ARIMA forecasts of exogenous variables, has been simulated it may be possible to decide how much additional energy is required to solve the impulse problem. It is then necessary to identify realistic sources of energy rather than impose on the equation errors the form that produces representative time paths.
Haitovsky and Wallace (1972) suggested adding error terms to exogenous variables and to parameters in simulation experiments. The latter introduces multiplicative errors if we assume the errors on the stochastic coefficients follow the same process, or have a common factor. The aim is to prevent overstating the error in the equation residuals. One rationale for parameter shocks is the introduction of errors to account for irrationality or erratic behaviour in decision-making by economic agents.
Multiplicative, as well as additive, errors and therefore stochastic parameters should also be considered as part of the impulse model. The additive residual errors on the equations of a model are usually assumed to represent one of the following: measurement errors, aggregation errors, omitted variables or specification errors.
It is clear from the previous discussion that, if errors are applied to the exogenous variable generating process and parameter values (multiplicative errors), a clearer picture emerges of what energy is required from the equation errors to solve the impulse problem. It has been noted above that an estimate of the error process on exogenous variables can be generated using simulations. It is not clear how the error process on parameters is to be determined.
The simplest solution is to assume errors are random. If the propagation model, with its exogenous variable generating process and stochastic parameters, still requires autocorrelated shocks to the equations in order to generate realistic simulations, then misspecification is a strong possibility and, in the absence of strong a priori reasons to expect autocorrelated shocks, an attempt should be made to identify it. It is not too difficult, using simulation experiments, to get a good idea of the ARIMA error process required to solve the impulse problem.
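The accounting exercise can be illustrated with a toy one-equation model (the equation and all numbers are purely illustrative): stochastic simulations are run first with additive equation errors only, then adding shocks to the exogenous variable, and finally adding a multiplicative (stochastic parameter) error, so that the incremental contribution of each source of energy to the spread of simulated paths can be gauged.

```python
import numpy as np

# Toy one-equation illustration of how shocking the exogenous variable and the
# parameter (multiplicative error), as well as the equation residual, changes
# the amount of "energy" in stochastic simulations.  All numbers illustrative.
rng = np.random.default_rng(6)
T, beta, rho_y = 200, 0.5, 0.7
x_base = 1.0 + 0.01 * np.arange(T)            # hypothetical exogenous variable path

def simulate(shock_x=0.0, shock_beta=0.0, shock_eq=0.0):
    y = np.zeros(T)
    for t in range(1, T):
        x_t = x_base[t] + rng.normal(0.0, shock_x)          # exogenous-variable shock
        b_t = beta * (1.0 + rng.normal(0.0, shock_beta))    # multiplicative parameter shock
        y[t] = rho_y * y[t - 1] + b_t * x_t + rng.normal(0.0, shock_eq)  # additive residual
    return y

for label, kwargs in [
    ("equation errors only", dict(shock_eq=0.3)),
    ("+ exogenous-variable shocks", dict(shock_eq=0.3, shock_x=0.3)),
    ("+ stochastic parameter", dict(shock_eq=0.3, shock_x=0.3, shock_beta=0.1)),
]:
    sims = np.array([simulate(**kwargs) for _ in range(100)])
    print(f"{label:30s} mean simulation s.d.: {sims.std(axis=0).mean():.3f}")
```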
This could be used if misspecification is not identifiable; if the error process is believed to represent common factors of lag polynomials so that the model is specified in its most efficient estimation form; or if it is believed to be due to the omission of unmeasurable variables.
Zarnowitz (1972) suggests that we might expect some autoregression (AR) in the errors as a result of structural change and that the cyclical aspects of the simulations would probably be strengthened by application of autocorrelated shocks not only to the equations with endogenous variables, but also to exogenous variables. He notes that wars, policy actions and technical change (innovations), inter alia, would frequently result in autocorrelated ‘autonomous’ shocks to the economy.
The simulations in Hickman (ed.) (1972), for example, reveal a neglect of exogenous variable generation and shocks. Exceptions are the work of the Adelmans (1959) and of the OBE group.36 The OBE group found that cycles were increased in amplitude and showed absolute declines in GNP lasting three to five quarters when shocks were applied to the exogenous variables. This result suggests that movements commonly considered exogenous in large scale models may play a crucial role in the determination of business cycles.
The review in section: Equilibrium Business Cycle (EBC) Modelling will also show that AR shocks to equations have been used with some success and analysis of the forecasting performance of a number of models suggests that the errors may be AR due to dynamic misspecification of the lag structure.
Hickman (1972) points out that in broadening the class of shocks to include perturbations to exogenous variables and autocorrelated errors in the equations, the role of the model as a cycle maker is diminished. If the real roots dominate the cyclical ones and the lag structure does not propagate cycles from serially independent impulses, the model becomes simply a multiplier mechanism for amplifying the various shocks. There is still an impulse response mechanism but the cycles are inherent in the impulses rather than the responses.
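Whether a given propagation model is itself a ‘cycle maker’ can be checked from the characteristic roots of its lag structure, as in the sketch below: complex roots imply a damped cycle with an identifiable period, while dominant real roots imply that the model merely smooths and amplifies whatever cycles are already in the impulses. The two coefficient vectors are hypothetical examples, not estimates from any actual model.

```python
import numpy as np

# Inspect the characteristic roots of an AR lag polynomial to see whether the
# propagation mechanism itself cycles (complex roots) or only smooths shocks.
def describe_roots(ar_coeffs):
    # y_t = a1*y_{t-1} + ... + ap*y_{t-p} + e_t has characteristic polynomial
    # z^p - a1*z^{p-1} - ... - ap = 0.
    poly = np.concatenate(([1.0], -np.asarray(ar_coeffs, dtype=float)))
    for r in np.roots(poly):
        if r.imag < -1e-8:
            continue                                   # report each conjugate pair once
        if abs(r.imag) > 1e-8:
            period = 2 * np.pi / abs(np.angle(r))
            print(f"  complex roots, modulus {abs(r):.2f}, implied cycle ~{period:.1f} periods")
        else:
            print(f"  real root {r.real:+.2f}: pure smoothing/decay, no endogenous cycle")

print("cyclical propagation mechanism:")
describe_roots([1.4, -0.6])        # hypothetical coefficients: complex roots
print("multiplier-only mechanism:")
describe_roots([0.9, -0.08])       # hypothetical coefficients: two real roots
```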
This could be the correct position, in which case we should model, as accurately as possible, the shock-generating process by analysing carefully the effects of innovations and other sources of shocks in order to solve the impulse problem. Alternatively, if the autocorrelation in the errors does in fact represent misspecification of the structural model, more attention should be paid to the solution of the propagation problem, but a model of shock generation will still be required.
If the propagation model is believed to be ‘correct’ in that it forecasts well when subjected to AR errors, then omitted variables are likely to be the source of the AR errors. By definition large scale econometric models will be misspecified representations of the real world economy because the aim of a model is to explain the main features of the real world without being unmanageably large.
To prevent the model becoming too large certain variables must be omitted by choice; yet other potential influences on the chosen endogenous variables are probably omitted as a result of our view of the world through the narrow blinkers of economic analysis, which prevent consideration of political, sociological, demographic and other factors. As a first approximation an ARIMA process can be used to generate the errors but identification of the likely sources of the errors is essential.
For example, the period after a major war could be treated as a special period, and the time series could be divided into policy periods, technological periods and so on. De Leeuw (1972) suggested a systematic historical investigation of the role of identifiable exogenous influences. Further, to the extent that some external events (e.g. wars and oil crises) have a general impact on the economy, allowance should be made, in simulation experiments, for covariation between disturbance terms on exogenous variables and stochastic equations.
In order to test competing cycle theories each one has to be put into a testable form. This involves careful specification of the deterministic part of the model to solve the propagation problem. If the model is linear then careful attention must be paid to lag specification at this stage, and a priori theoretical considerations based on microeconomic foundations should be utilised, as far as possible, in order to avoid ad hoc dynamic specification.
A fully specified cycle model should also have well specified exogenous variable generating and shock-generating functions, representing a solution to the impulse problem. For nonlinear models with stable limit cycles, the role of the shock-generating mechanism will be different because the shocks merely introduce the required irregularity to simulated cycles. In this case the role of the deterministic model is not really one of propagating exogenous shocks. It is one of producing endogenous cycles.
In order to test competing cycle hypotheses it will be necessary to consider various alternative cycle-generating or propagation models, exogenous variable generating models and shock-generating models; all of which would be linked by an overall model.
Has the Business Cycle Changed Since 1945?
The sustained growth of the United States and most other industrialised economies in the 1950s and 1960s raised the question: ‘is the business cycle obsolete?’ A Social Science Research Council (SSRC) conference addressed this question in 1967 and the resulting papers are published in Bronfenbrenner (1969). Further light on the problem was shed by the NBER colloquium conference in 1970, the papers from which are published in Zarnowitz (ed.) (1972). Following the experiences of the 1970s, with its two oil price shocks, and the deep recession of the 1980s, a new perspective emerged from an NBER conference on the US business cycle in 1984, the papers from which were published in Gordon (ed.) (1986).
The 1967 conference was designed to be a successor to the 1952 conference on the business cycle in the post-war world (papers published in Lundberg (ed.) (1955)). It considered a number of papers outlining the post-Korean War economic experience of a number of countries, including the United States, United Kingdom and various European countries.
The general conclusion was that the business cycle still existed, albeit without strict periodicity, but that its character had changed. The period and amplitude seemed to be decreasing, although neither was clearly smaller than in the fifteen to twenty year period prior to the 1914-18 war. The cycle seemed to be taking the form of a growth cycle, with alternating rates of growth rather than the expansions and contractions, involving negative growth, of the classical cycle.
In addition some interest was shown in the possibility of a political business cycle (Mullineux 1984, Ch. 3) because of the observed alternation between government policies designed to reduce inflation and unemployment. R.C.O. Matthews expressed concern about the lags in policy and the possibility that the government might act out of phase with the cycle it was trying to cure, thus exacerbating it, and also about the severity of policy reactions in the United Kingdom to economic events.
An interesting by-product of the conference was the comparison of cycles in socialist and capitalist economies. The conference agreed that socialist planning in the USSR had reduced economic fluctuations to those due to random shocks emanating largely from political circumstances (e.g. Stalin’s death) and natural phenomena, such as bad weather.
Some doubts about the reliability of the Russian data were expressed and the conference conclusion was not unanimously supported. It was, however, felt that maintenance of a high degree of stability was compatible with capitalist organisation and that the catastrophe of 1929-33 was very unlikely to be repeated.
The comparison of socialist and capitalist economic experience could prove a fruitful avenue for further cycle research. If it could be shown that cycles in the Soviet economy result largely from exogenous shocks, then some measure of the size of the cycle to be expected from exogenous shocks could be imputed to other economies, with the difference between these shock-attributable cycles and the observed cycles being attributed to factors endogenous to the capitalist system. More generally, one could analyse the major differences between socialist and capitalist economies in the hope of isolating probable areas of cycle generation.
The 1970 NBER colloquium concluded that the business cycle, while not obsolete, had undergone important changes and that the evaluation of the economic system and its institutions required new tools of analysis. The papers by Mintz, Fabricant and Moore (all in Zarnowitz (ed.) 1972) considered various methods of analysing ‘growth cycles’.
It was argued that cumulative changes in the organisation of the economy can affect the nature of economic motion over time and that government attempts to reduce instability might alter the structure of the economy and change the character of economic fluctuations as a consequence.
Although cycles were believed to have attenuated since the War, it was felt that they were still potentially dangerous and, as a further motivation for the continued study of the business cycle, Zarnowitz noted that good forecasting requires knowledge of business cycles.
In support of the hypothesis that cycles had become milder and shorter, Zarnowitz drew on results from NBER studies. The four recessions in the United States between 1948 and 1961 had an average duration of ten months, whereas the twenty-two recessions between 1854 and 1948 averaged twenty-two months and were more than ten months in all but three cases.
Expansions had also become longer. Between 1949 and 1961 the average duration was thirty-six months and between 1961 and 1969 it was forty-nine months, whereas for the period 1854-1945 the twenty-two expansions had an average duration of twenty-nine months. This shortening of contractions and lengthening of expansions is clearly consistent with the growth cycle hypothesis.
Romer (1986), however, finds that the methods used to construct the conventional US industrial production figures have exaggerated the fluctuations in the series, especially pre-First World War. This was consistent with Romer’s previous findings that historical US unemployment and GNP series are excessively volatile, and suggests that the apparent stabilisation of the post-war economy might be a statistical artefact not warranting the status of a ‘stylised fact’.
Zarnowitz postulates that the observed changes could originate from a number of sources. Firstly, the intensity of external shocks could have been reduced since the strongest shocks are probably caused by major wars. Zarnowitz looks at years excluding wars and still finds that the more recent cycles had more moderate contractions and longer expansions, though the expansions are perhaps a little less vigorous.
As if to verify this view the 1970s brought larger shocks, particularly the oil price shocks of 1973 and 1979; these seem to have caused another structural change in the cycle, growth trends becoming less pronounced and zero and negative growth again being recorded in recessions. In addition, in the 1950s and 1960s the economies could still have been feeling the benefits of the major shock provided by the Second World War.
In this connection the long swing hypothesis would suggest that the 1950s and 1960s represented an upswing of the long wave, with typically strong growth trends, and the 1970s brought a downswing of the wave, with weaker, and possibly zero or negative, growth trends. Advocates of this view include Mandel (1978a, 1978b, 1980) and Van Duijn (1983).
Secondly, the system could perhaps have become less vulnerable to shocks in the 1950s and 1960s as a result of the stabilising influences of structural, institutional and policy changes. The wider application of built-in stabilisers through the tax system and in transfer payments probably had such an effect. The role of government policy intervention also needed to be examined.
In 1970 the belief seemed to be that although government policy reactions could sometimes get out of phase with the cycle - and had sometimes entailed overreaction and, therefore, had destabilising effects - government policy intervention, in the form of demand management, had contributed to the reduction in the amplitude of the cycle and to its conversion to a growth cycle in the post-war period. Alternative views of the role of the government in the business cycle are examined in Mullineux (1984, Ch. 3).
In the 1970s, scepticism about the potential for the government to stabilise the economy using demand management grew. The disenchantment with demand management policies and the preoccupation with supply side policies in the late 1970s and early 1980s was prompted, in part, by the seemingly markedly different experience of the 1970s, in which inflation and unemployment were significantly higher and growth was lower than in the 1960s.
The general findings of the colloquium, as outlined by Zarnowitz (1972), were as follows. First, economic fluctuations had become milder in the post-Second World War period in the United States and other developed countries, slowdowns in growth largely replacing declines in economic activity. Many features of these growth cycles are similar, though perhaps in modified form, to the classical business cycle.
The Mintz paper shows that leading indicators are still useful for predicting declines and accelerations in growth. The Moore paper shows that rates of change of prices have a close correspondence with the US cycle. Fabricant finds that the 1969-70 diffusion indices resemble the patterns of past recessions. Second, structural changes were given much of the credit for the greater stability. The whole question of interactions between endogenous and exogenous forces, however, was judged to require further study, especially with reference to major historical changes.
Research needed to be extended in three directions:
- To examine the effects of fluctuations in and disturbances to exogenous variables.
- To learn about the specification errors of existing models, in order to decide how much of the serial correlation is due to misspecification.
- To include a greater variety of model, since most of the models analysed were Keynesian-dominated.
Further, whatever their causes, the moderation and modification of the business cycle in the 1950s and 1960s required a more complete reference chronology, ideally integrating classical and growth cycles. Finally, the 1969-70 US recession disclosed both important differences and similarities when compared with earlier recessions - the major difference being the persistence of inflation in the face of declining production and rising unemployment.
As noted above, this last point posed major problems for Keynesians, and the continuing experience of stagflation in the 1970s provoked speculation that there had been a further alteration in the structure of the business cycle. One of the major questions raised was the extent to which the essentially Keynesian, large-scale econometric models were misspecified, especially with regard to the monetary sector and inflation forecasting.
The purpose of the 1984 conference was to consider whether the US business cycle had changed since the War. Gordon (ed.) (1986) drew attention to the revival of interest in the business cycle, following the severe recessions of 1974-5 and 1981-2 and the intellectual ferment caused by the Lucas (1975) and subsequent equilibrium business cycle contributions.
He suggested that the stage had been reached where the terms macroeconomic theory and business cycle theory were virtually interchangeable and that another peak in the cycle of interest in business cycles had been reached following the trough in the 1960s. Seven of the twelve papers published in Gordon (ed.) (1986) consider specific components of economic activity, while the remaining five focus on aggregate economic activity. Of the latter, the papers by Eckstein and Sinai (1986) and Blanchard and Watson (1986) attempt to identify the shocks or impulses that generate business cycles, and the papers by De Long and Summers (1986) and Zarnowitz and Moore (1986) concentrate on changes in cyclical behaviour.
Gordon notes that, following the debates between Keynesians and the Friedmanite monetarists in the late 1960s and early 1970s, the oil shocks of 1973-4 and 1979-80 restimulated interest in the Frischian view that external impulses or real shocks, rather than the money supply or its rate of change, were a major source of business cycle fluctuation (Friedman and Schwartz 1963a, b).
Because the oil price shocks were of a supply side nature, it has now become common to distinguish between three types of shock: monetary, real demand and real supply shocks. The recognition of the importance of supply side shocks, Gordon notes, meant the government could not be regarded as the sole source of mainly monetary shocks.
The overall picture to be gleaned from the 1984 conference is that the propagation model may change over time due to structural and institutional changes and changes in government policy perspectives. Such changes are likely to be slow and may take years to detect; nevertheless, it may not be appropriate to treat the post-war period as a whole any more.
The 1970s and 1980s appear to be different from the 1950s and 1960s. In the 1970s it became evident that supply shocks could have a major impact, along with aggregate demand shocks. Thus the combinations of shocks hitting an economy may change over time. Although cycles may be all alike in the sense of being generated by the same, but possibly slowly changing, propagation model, they will still lend themselves to historical analysis since each cycle is caused by a unique combination of shocks, given the Frisch-inspired view of cycle generation. Historical analysis is also necessary to assess the impact of structural, institutional and policy changes.
Further, the institutional and related differences between countries can be used to explain differences in cyclical behaviour between them. In the conference it was, however, noted that the business cycle might be in the process of becoming a fluctuation in world economic output due to the increased synchronisation of cycles in the OECD countries and growing international interdependence. But the evolution of the Third World debt problem39 has demonstrated that the interdependence is increasingly North-South as well as intra-OECD.
It may therefore make little sense to confine analysis to business cycles in one country, even one as large and important as the United States, in future. The conference included remarkably little discussion, beyond the oil shocks, of open economy influences on the US business cycle and its post-war changes in behaviour. With hindsight, this was the beginning of the period now referred to as globalisation.
Following the declaration that the cycle was alive and well in the early 1980s, one of the longest sustained recoveries in the post-war period has been witnessed, particularly in the United States and the United Kingdom, since the recession of the early 1980s, which hit economies on both sides of the Atlantic very badly. Although sustained, the average rate of growth has been considerably below that of the 1960s. In the summer of 1988, signs of a buildup of inflationary pressure were beginning to emerge.
These are commonly associated with boom conditions and in the past all booms have eventually bust. Nevertheless, this experience of sustained growth, in countries that have experienced ‘Reaganomics’ and ‘Thatcherism’ respectively, has again raised the question of the cycle’s obsolescence. Others point to the possibility that the changes wrought by the Reagan and Thatcher governments have increased the risk of a future depression, probably on a worldwide scale, and financial crises - the 1982 Mexican debt crisis and the October 1987 worldwide stock market price collapse merely being a presage of worse to come. These changes, which included the weakening of automatic stabilisers and deregulation, run counter to the positive influences on increased post-war stability identified in the 1984 conference.
US, UK and other OECD governments are adapting their tax systems to achieve ‘fiscal neutrality’ and increasing the proportion of expenditure-related tax revenue relative to income-related tax revenue. These changes are reducing the degree of progressiveness in the tax structure. At the same time, particularly in the United Kingdom, unemployment benefit is declining in relation to average wages and being made harder to qualify for.
The automatic stabilisers are, therefore, likely to be less effective. Further, a number of OECD governments have succeeded in reducing the rate of growth of their expenditure. In the United Kingdom, the government’s share of total GNP has been declining and the government was able to announce a fiscal surplus in 1988. Working in the opposite direction, the financial liberalisation and innovation since the mid-1970s in the United States and in the 1980s worldwide have increased the access of consumers to credit and therefore their ability to smooth consumption flows in the face of income fluctuations. (See Mullineux 1987b, c for further discussion.)
The result may, however, be that financial crises will have a greater impact in the future unless central banks can avert them. Central banks were apparently successful in curtailing the economic impact of the October 1987 stock market crashes by adding liquidity to the financial, and especially the banking, system and/or reducing interest rates. In hindsight, they may even have overdone it since, by the summer of 1988, world growth projections were being increased and talk of recession had been replaced by that of inflation.
A co-ordinated interest rate increase was engineered by the central banks of the major OECD countries in August 1988 to dampen inflationary expectations. From a post 2007-9 Global Financial Crisis perspective, it is clear that continued financial innovation and liberalisation, supported by crisis-averting loose monetary policy, was storing up problems for the future (Rajan 2010).
The financial liberalisation has also included the removal of capital controls.40 This has allowed international capital flows to react to interest rate differentials and other factors very rapidly and has further increased international economic interdependence and accelerated globalisation. The implications of this are discussed further by Eichengreen and Portes (1987), who note various parallels between the 1970s and 1980s and the 1920s, when financial liberalisation was also a prominent feature. The other major change in the 1980s was the decline in the inflation rate, following the post-war peaks of the 1970s, and the rise in the real interest rate to post-war high positive levels.
This has been attributed, inter alia, to the high, by historical standards, US budget deficit at a time when the savings ratio was low in the United States, perhaps due to falling inflation. Whatever their cause, the high positive real interest rates may account for the lower investment and slower average growth in many of the OECD countries in the 1980s compared with the 1960s.
But their effect on the nature and shape of the business cycle warrants further investigation if, rather than the real wage, they are indeed the most important price in the whole economy, as De Long and Summers (1986) assert. In the event, growing capital inflows into the US in the 1990s and 2000s lowered real long-term interest rates significantly (Rajan 2010).