Hendry II Forecasting

Econometric Modelling

David F. Hendry*
Nuffield College, Oxford University

July 18, 2000

Abstract

The theory of reduction explains the origins of empirical models, by delineating all the steps involved in mapping from the actual data generation process (DGP) in the economy – far too complicated and high dimensional ever to be completely modeled – to an empirical model thereof. Each reduction step involves a potential loss of information from: aggregating, marginalizing, conditioning, approximating, and truncating, leading to a 'local' DGP which is the actual generating process in the space of variables under analysis. Tests of losses from many of the reduction steps are feasible. Models that show no losses are deemed congruent; those that explain rival models are called encompassing. The main reductions correspond to well-established econometric concepts (causality, exogeneity, invariance, innovations, etc.) which are the null hypotheses of the mis-specification tests, so the theory has considerable excess content.

General-to-specific (Gets) modelling seeks to mimic reduction by commencing from a general congruent specification that is simplified to a minimal representation consistent with the desired criteria and the data evidence (essentially represented by the local DGP). However, in small data samples, model selection is difficult. We reconsider model selection from a computer-automation perspective, focusing on general-to-specific reductions, embodied in PcGets, an Ox package for implementing this modelling strategy for linear, dynamic regression models. We present an econometric theory that explains the remarkable properties of PcGets. Starting from a general congruent model, standard testing procedures eliminate statistically-insignificant variables, with diagnostic tests checking the validity of reductions, ensuring a congruent final selection. Path searches in PcGets terminate when no variable meets the pre-set criteria, or any diagnostic test becomes significant. Non-rejected models are tested by encompassing: if several are acceptable, the reduction recommences from their union; if they re-appear, the search is terminated using the Schwarz criterion.

Since model selection with diagnostic testing has eluded theoretical analysis, we study modelling strategies by simulation. The Monte Carlo experiments show that PcGets recovers the DGP specification from a general model with size and power close to commencing from the DGP itself, so model selection can be relatively non-distortionary even when the mechanism is unknown. Empirical illustrations for consumers' expenditure and money demand will be shown live.

Next, we discuss sample-selection effects on forecast failure, with a Monte Carlo study of their impact. This leads to a discussion of the role of selection when testing theories, and the problems inherent in 'conventional' approaches. Finally, we show that selecting policy-analysis models by forecast accuracy is not generally appropriate. We anticipate that Gets will perform well in selecting models for policy.

* Financial support from the UK Economic and Social Research Council under grants Modelling Non-stationary Economic Time Series (R000237500) and Forecasting and Policy in the Evolving Macro-economy (L138251009) is gratefully acknowledged. The research is based on joint work with Hans-Martin Krolzig of Oxford University.
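As a concrete illustration of the selection mechanics described in the abstract, the sketch below writes one reduction path as a loop that deletes the least significant regressor so long as a diagnostic test on the simplified model stays insignificant. It is only an illustration under stated assumptions: PcGets is an Ox package, and the function name, the t-value threshold, and the use of a single Breusch-Godfrey autocorrelation diagnostic are expository assumptions, not the actual PcGets interface.

```python
# A minimal sketch of one general-to-specific reduction path, assuming
# numpy/statsmodels; an illustration, not the PcGets implementation.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def gets_one_path(y, X, keep, t_crit=2.0, diag_level=0.05):
    """Repeatedly delete the least significant regressor in X[:, keep],
    accepting each deletion only if a residual-autocorrelation diagnostic
    on the simplified model does not reject.  Returns retained indices."""
    keep = list(keep)
    while len(keep) > 1:
        res = sm.OLS(y, X[:, keep]).fit()
        tvals = np.abs(res.tvalues)
        weakest = int(np.argmin(tvals))
        if tvals[weakest] >= t_crit:
            break  # every remaining variable is significant: path terminates
        trial = keep[:weakest] + keep[weakest + 1:]
        trial_res = sm.OLS(y, X[:, trial]).fit()
        # Congruence check on the reduction: Breusch-Godfrey LM test for
        # residual autocorrelation (PcGets applies a whole battery of
        # diagnostics; one test stands in for them here).
        if acorr_breusch_godfrey(trial_res, nlags=4)[1] < diag_level:
            break  # diagnostic rejects: keep the current model
        keep = trial
    return keep
```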
Contents

1 Introduction
2 Theory of reduction
   2.1 Empirical models
   2.2 DGP
   2.3 Data transformations and aggregation
   2.4 Parameters of interest
   2.5 Data partition
   2.6 Marginalization
   2.7 Sequential factorization
      2.7.1 Sequential factorization of W_1^T
      2.7.2 Marginalizing with respect to V_1^T
   2.8 Mapping to I(0)
   2.9 Conditional factorization
   2.10 Constancy
   2.11 Lag truncation
   2.12 Functional form
   2.13 The derived model
   2.14 Dominance
   2.15 Econometric concepts as measures of no information loss
   2.16 Implicit model design
   2.17 Explicit model design
   2.18 A taxonomy of evaluation information
3 General-to-specific modelling
   3.1 Pre-search reductions
   3.2 Additional paths
   3.3 Encompassing
   3.4 Information criteria
   3.5 Sub-sample reliability
   3.6 Significant mis-specification tests
4 The econometrics of model selection
   4.1 Search costs
   4.2 Selection probabilities
   4.3 Deletion probabilities
   4.4 Path selection probabilities
   4.5 Improved inference procedures
5 PcGets
   5.1 The multi-path reduction process of PcGets
   5.2 Settings in PcGets
   5.3 Limits to PcGets
      5.3.1 'Collinearity'
   5.4 Integrated variables
6 Some Monte Carlo results
   6.1 Aim of the Monte Carlo
   6.2 Design of the Monte Carlo
   6.3 Evaluation of the Monte Carlo
   6.4 Diagnostic tests
   6.5 Size and power of variable selection
   6.6 Test size analysis
7 Empirical Illustrations
   7.1 DHSY
   7.2 UK Money Demand
8 Model selection in forecasting, testing, and policy analysis
   8.1 Model selection for forecasting
      8.1.1 Sources of forecast errors
      8.1.2 Sample selection experiments
   8.2 Model selection for theory testing
   8.3 Model selection for policy analysis
      8.3.1 Congruent modelling
9 Conclusions
10 Appendix: encompassing
References

1 Introduction

The economy is a complicated, dynamic, non-linear, simultaneous, high-dimensional, and evolving entity; social systems alter over time; laws change; and technological innovations occur. Time-series data samples are short, highly aggregated, heterogeneous, non-stationary, time-dependent and inter-dependent. Economic magnitudes are inaccurately measured, subject to revision, and important variables are not observable. Economic theories are highly abstract and simplified, with suspect aggregation assumptions; they change over time, and often rival, conflicting explanations co-exist. In the face of this welter of problems, econometric modelling of economic time series seeks to discover sustainable and interpretable relationships between observed economic variables.

However, the situation is not as bleak as it may seem, provided some general scientific notions are understood. The first key is that knowledge accumulation is progressive: one does not need to know all the answers at the start (otherwise, no science could have advanced). Although the best empirical model at any point will later be supplanted, it can provide a springboard for further discovery. Thus, model selection problems (e.g., data mining) are not a serious concern: this is established below, by the actual behaviour of model-selection algorithms.

The second key is that determining inconsistencies between the implications of any conjectured model and the observed data is easy. Indeed, the ease of rejection worries some economists about econometric models, yet it is a powerful advantage. Conversely, constructive progress is difficult, because we do not know what we don't know, so cannot know how to find out. The dichotomy between construction and destruction is an old one in the philosophy of science: critically evaluating empirical evidence is a destructive use of econometrics, but can establish a legitimate basis for models.

To understand modelling, one must begin by assuming a probability structure and conjecturing the data generation process.
However, the relevant probability basis is unclear, since the economic mechanism is unknown. Consequently, one must proceed iteratively: conjecture the process, develop the associated probability theory, use that for modelling, and revise the starting point when the results do not match consistently. This can be seen in the gradual progress from stationarity assumptions, through integrated-cointegrated systems, to general non-stationary, mixing processes: further developments will undoubtedly occur, leading to a more useful probability basis for empirical modelling. These notes first review the theory of reduction in §2 to explain the origins of empirical models, then discuss some methodological issues that concern many economists.

Despite the controversy surrounding econometric methodology, the 'LSE' approach (see Hendry, 1993, for an overview) has emerged as a leading approach to empirical modelling. One of its main tenets is the concept of general-to-specific modelling (Gets – general-to-specific): starting from a general dynamic statistical model, which captures the essential characteristics of the underlying data set, standard testing procedures are used to reduce its complexity by eliminating statistically-insignificant variables, checking the validity of the reductions at every stage to ensure the congruence of the selected model. Section 3 discusses Gets, and relates it to the empirical analogue of reduction.

Recently, econometric model selection has been automated in a program called PcGets, an Ox package (see Doornik, 1999, and Hendry and Krolzig, 1999a) designed for Gets modelling, currently focusing on reduction approaches for linear, dynamic, regression models. The development of PcGets has been stimulated by Hoover and Perez (1999), who sought to evaluate the performance of Gets. To implement a 'general-to-specific' approach in a computer algorithm, all decisions must be 'mechanized'. In doing so, Hoover and Perez made some important advances in practical modelling, and our approach builds on these by introducing further improvements. Given an initial general model, many reduction paths could be considered, and different selection strategies adopted for each path. Some of
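The multi-path idea can be sketched on top of the single-path routine given after the abstract. Again this is a hypothetical simplification, not the PcGets algorithm: as the abstract notes, the program also compares non-rejected terminal models by encompassing tests and restarts the reduction from their union, whereas this fragment only starts one path per initially insignificant variable and breaks ties among terminal models with the Schwarz criterion (BIC).

```python
# Sketch of the multi-path search, reusing gets_one_path from the sketch
# after the abstract; a hypothetical simplification of the PcGets strategy.
import numpy as np
import statsmodels.api as sm

def gets_multi_path(y, X, names, t_crit=2.0):
    """Start one reduction path per initially insignificant variable and
    choose among the terminal models by the Schwarz criterion (BIC)."""
    k = X.shape[1]
    general = sm.OLS(y, X).fit()
    insignificant = [i for i, t in enumerate(np.abs(general.tvalues))
                     if t < t_crit]
    terminals = {tuple(range(k))}  # the general model is the fallback
    for first in insignificant:
        start = [i for i in range(k) if i != first]  # delete one variable first
        terminals.add(tuple(sorted(gets_one_path(y, X, start, t_crit))))
    # Tie-break among terminal models by BIC; PcGets would first test them
    # against each other by encompassing, restarting from their union.
    best = min(terminals, key=lambda m: sm.OLS(y, X[:, list(m)]).fit().bic)
    return [names[i] for i in best]
```

Applied to data simulated from a known DGP with a few relevant and many irrelevant regressors, a routine of this kind permits the sort of retention-frequency (size and power) study of selection strategies that the Monte Carlo section of the paper reports for PcGets.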