Correcting misconceptions about markets, economics, asset prices, derivatives, equities, debt and finance
Tuesday, July 19, 2022
Understanding Productivity Growth: CBO’s Most Recent 10-Year Projections Of Potential Output And Total Factor Productivity
Posted By Milton Recht
From CBO, "CBO’s Economic Forecast: Understanding Productivity Growth," a July 19, 2022, presentation by Aaron Betz, an analyst in CBO’s Macroeconomic Analysis Division, at the NABE Foundation's 19th Annual Economic Measurement Seminar:
CBO regularly publishes economic projections that are consistent with current law—providing a basis for its estimates of federal revenues, outlays, deficits, and debt. A key element in CBO’s projections is its forecast of potential (maximum sustainable) output, which is based mainly on estimates of the potential labor force, the flow of services from the capital stock, and potential total factor productivity in the nonfarm business sector. This presentation describes CBO’s most recent 10-year projections of potential output, highlighting the importance of potential total factor productivity. It discusses the historic slowdown of growth in total factor productivity, as well as changes in total factor productivity during the 2020–2021 coronavirus pandemic, and it explores possible explanations for the slowdown and implications for the future.
Wednesday, July 6, 2022
Limits On Modeling The Macro-Economy: Reprint Of My 12 Year Old Blog Post
Posted By Milton Recht
A reprint of my Feb 18, 2010 blog post about the Limits Of Econometric Models:
Thursday, February 18, 2010
Limits Of Econometric Models Of The Macro-Economy
Posted By Milton Recht
The following is a comment I posted on Econlog in response to "Macroeconometrics and Science" by Arnold Kling, about the limits of econometric models of the macro-economy.
There is also the "Lucas Critique." Lucas argued that economic models derived from historical data cannot be used to recommend effective changes to government policies, because the relationships in the historical data depend on the policies that were in place when the data were generated. Predictions based on past data will miss the effects of new policies and be wrong unless the model's relationships are recalibrated for the new policy. One needs to build a model whose internal relationships (coefficients) vary with the policy being recommended, which requires an understanding (or at least an assumption) of how policy affects economic outcomes. That can become a tautology: the model predicts what it is set up to predict. If a model is calibrated so that policies A, B, and C move the economy to its long-term trend, then a recommendation to use policies A, B, and C will show, in the model, the economy moving toward its long-term trend. The model becomes the basis for a recommendation that was assumed into it when it was built.
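A minimal numerical sketch of this point, in Python. The consumption function, tax rates, and coefficients below are invented purely for illustration and come from no actual model; the point is only that a coefficient estimated under one policy need not hold under another.

# Illustrative sketch of the Lucas Critique (all numbers invented):
# the "structural" marginal propensity to consume depends on the tax rate
# in force, so a slope estimated from data generated under the old policy
# mispredicts outcomes under a new policy unless the model lets the
# coefficient vary with policy.
import numpy as np

rng = np.random.default_rng(0)

def simulate(tax_rate, n=500):
    # Households consume out of after-tax income, so the reduced-form
    # slope on pre-tax income embeds the tax rate.
    income = rng.uniform(50, 150, n)
    true_slope = 0.9 * (1 - tax_rate)      # policy-dependent coefficient
    consumption = true_slope * income + rng.normal(0, 2, n)
    return income, consumption

# Estimate the slope from "historical" data generated under the old policy.
old_tax = 0.20
x, y = simulate(old_tax)
estimated_slope = np.polyfit(x, y, 1)[0]

# Use that historical coefficient to predict outcomes under a new policy.
new_tax = 0.40
x_new, y_new = simulate(new_tax)
prediction_error = np.mean(estimated_slope * x_new - y_new)

print(f"slope estimated under old policy: {estimated_slope:.3f}")
print(f"true slope under new policy:      {0.9 * (1 - new_tax):.3f}")
print(f"mean prediction error under new policy: {prediction_error:.2f}")

Running the sketch shows the slope estimated under the old tax rate systematically overpredicting consumption once the tax rate changes, because the estimated coefficient was never policy-invariant.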
Additionally, when models are not validated against out-of-sample data, there is the problem of data mining and spurious results. If 100 economists each run 100 different, independent models on historical data looking for economic and theoretical explanatory relationships, then even at a 90 percent statistical significance level, about 1,000 of the 10,000 models (100 × 100 × 0.1) will meet the acceptance criteria by chance alone.
Out-of-sample tests would drastically reduce the number of acceptable models from 1,000 to a much smaller number. But those that survive may again be spurious: at the 90 percent significance level, roughly 100 of the 1,000 will pass the out-of-sample test and look meaningful purely by luck, with no economic meaning to their internal relationships. They can even be inconsistent with one another and with known economic theories.
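A minimal Monte Carlo sketch of that arithmetic, in Python. The "models" below are simple correlations between pure-noise series, standing in for the 10,000 independent specifications; the sample size and random seed are arbitrary choices for illustration.

# Illustrative sketch of the data-mining arithmetic above: test 10,000
# pure-noise "models" at the 90 percent significance level, then re-test
# the survivors on fresh (out-of-sample) noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_models = 10_000      # 100 economists x 100 models each
n_obs = 120            # "historical" sample, e.g. 30 years of quarterly data
alpha = 0.10           # 90 percent significance level

in_sample_hits = 0
out_of_sample_hits = 0
for _ in range(n_models):
    y = rng.normal(size=n_obs)        # outcome is pure noise
    x = rng.normal(size=n_obs)        # candidate "explanatory" series
    _, p = stats.pearsonr(x, y)
    if p < alpha:                     # model "accepted" in sample
        in_sample_hits += 1
        # Re-test on fresh noise; a spurious model passes again only by chance.
        y2 = rng.normal(size=n_obs)
        x2 = rng.normal(size=n_obs)
        _, p2 = stats.pearsonr(x2, y2)
        if p2 < alpha:
            out_of_sample_hits += 1

print(f"accepted in sample:      {in_sample_hits}  (~10% of {n_models})")
print(f"also pass out of sample: {out_of_sample_hits}  (~10% of survivors)")

On the order of 1,000 of the 10,000 noise models pass the in-sample test, and roughly 10 percent of those survivors also pass the out-of-sample test, despite having no economic content at all.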
Economists then derive explanations to match the model results instead of vice versa. Data mining and spurious models can lead to inconsistent policy recommendations among economists.
It is like a gambler's hot streak at the roulette table, where the bettor develops superstitions about why he is winning, such as the color of his shirt. Economic policies based on the surviving models are equivalent to the gambler's idiosyncratic behaviors that he thinks change the odds at the roulette table and let him win. Each gambler has a different explanation for his winning streak, much like the different schools of economists.
The scientific method rests on testing hypotheses against data and on reproducible results. Model building is the reverse: the data are used to derive the hypothesis, and the hypothesis is not tested against new (out-of-sample) data to see whether its results are reproducible.
Both the Ptolemaic (Earth-centric) and Copernican (Sun-centric) views of planetary motion were internally consistent with the data on planetary motion known at the time. Both were accurate in predicting future planetary positions (in fact, Ptolemy's Earth-centered method was initially more accurate than the Copernican method).
However, it is extremely unlikely that Sir Isaac Newton could have developed his theory of gravity under the Ptolemaic system. Newtonian gravity requires orbits around the more massive body, the Sun, not the Earth. While an equivalent gravitational system could probably have been constructed mathematically within a Ptolemaic framework, it would not be as simple to comprehend or visualize as the Newtonian system.
Expectation theory is in many ways equivalent to Copernican theory, but that is another long and controversial discussion. Suffice it to say, many economic models are inconsistent with expectation theory.
The above is a comment I posted on Econlog in response to "Macroeconometrics and Science" by Arnold Kling.
[See my March 16, 2010 addendum post: "GDP Bond Addition To My February 18 Post 'Limits Of Econometric Models ...'"]