ECONOMIC FORECASTS BASED ON ECONOMETRIC MODELS USING EViews 5

Forecasting the evolution of economic phenomena is, more often than not, the final objective of econometric modelling. It is also a real test of the validity of the model that has been built. Unlike forecasts based on the study of time series, whose inertial character is well known, the predictions generated by a simultaneous-equation econometric model aim to anticipate the future of important economic variables in relation to the direct and indirect influences exerted on them by the exogenous variables. To ease the computations involved in producing forecasts based on econometric models, the use of specialized software is advisable. One such program is EViews, whose use, besides significantly reducing the time spent on econometric analyses, also ensures highly accurate calculations and makes the interpretation of results straightforward.
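
The abstract refers to EViews 5; as a hedged illustration of what a forecast conditional on exogenous variables looks like, the sketch below estimates a single-equation model with statsmodels and forecasts from assumed future values of the exogenous variable. The data and variable names are hypothetical, and this is not the EViews simultaneous-equation workflow.

```python
# Minimal sketch: estimate a single-equation econometric model and forecast
# from assumed future values of the exogenous variable. Illustrative only;
# it does not reproduce EViews' simultaneous-equation estimators.
import numpy as np
import statsmodels.api as sm

# Hypothetical annual data: consumption (endogenous) and income (exogenous)
income = np.array([100.0, 104.0, 109.0, 115.0, 121.0, 128.0])
consumption = np.array([82.0, 85.0, 88.5, 93.0, 97.5, 103.0])

X = sm.add_constant(income)           # add the intercept column
model = sm.OLS(consumption, X).fit()  # ordinary least squares estimation

# Forecast conditional on an assumed path of the exogenous variable
income_future = sm.add_constant(np.array([135.0, 142.0]))
print(model.params)                   # estimated intercept and slope
print(model.predict(income_future))   # point forecasts for the next two periods
```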

Can model-based forecasts predict stock market volatility using range-based and implied volatility as proxies?

Based on Table II, the results for forecasting the range estimator are rather mixed; in general, asymmetric GARCH-type (1,1) models such as APARCH(1,1) seem to capture the dynamics of future volatility adequately across 7 of the 10 indices. This coincides with Poon and Granger's (2003) conclusion that models accounting for volatility asymmetry generally perform well. Within these indices, when distributional variations are taken into account, the normal distribution explains 5 of the 7 indices, while Student-t innovations explain the other two, namely NKY (Japan) and SMI (Switzerland). This result is in line with research by Brownlees et al. (2011), where the authors found that Student-t innovations generally did not yield any improvement in the
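
To make the model class concrete, here is a minimal sketch with the Python arch package, fitting an asymmetric GJR-GARCH(1,1) (used here as a stand-in for the APARCH(1,1) named in the abstract) under normal and Student-t innovations on simulated returns; it illustrates the estimation step only, not the paper's forecast evaluation.

```python
# Sketch: fit an asymmetric GARCH(1,1)-type model under two error distributions.
# GJR-GARCH(1,1) is used here as a stand-in for the APARCH(1,1) named above.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=6, size=2000)  # simulated heavy-tailed daily returns

for dist in ("normal", "t"):
    # p=1, o=1, q=1 gives the asymmetric (GJR) variant of GARCH(1,1)
    am = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist=dist)
    res = am.fit(disp="off")
    print(dist, res.aic)  # compare fits across innovation distributions
```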

Hypothesis testing in econometric models

The chapter is organized as follows: Section 1.2 introduces the model with one endogenous regressor variable, multiple exogenous regressor variables, and multiple IVs. This section determines sufficient statistics for this model with normal errors and reduced-form covariance matrix. Section 1.3 introduces one-sided invariant similar tests. Section 1.4 focuses on the one-sided conditional t-tests. Section 1.5 finds the power envelope for similar and nonsimilar one-sided tests. Section 1.6 adjusts the tests to allow for an estimated error covariance matrix and analyzes their asymptotic properties under weak IVs. Section 1.7 obtains consistency and asymptotic efficiency for one-sided tests. Section 1.8 numerically compares the power of the tests considered in earlier chapters under WIV asymptotics. Section 1.9 introduces novel unbiased two-sided tests. Section 1.10 shows that the one-sided conditional t-tests based on the 2SLS, LIML and Fuller estimators are asymptotically similar in a uniform sense. Section 1.11 presents confidence intervals for returns to schooling using the data of Angrist and Krueger (1991). An appendix at the end of the thesis contains proofs of the results. The supplement presents power comparisons for different one-sided and two-sided tests, and similar and non-similar power envelopes that are numerically very close (a fact that further strengthens our optimality results).
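
As background for the estimators named above, here is a minimal NumPy sketch of two-stage least squares with one endogenous regressor and a single instrument on simulated data; it illustrates the estimator only, not the conditional tests developed in the chapter.

```python
# Sketch: two-stage least squares with one endogenous regressor and one IV.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # structural error
x = 0.5 * z + 0.8 * u + rng.normal(size=n)  # endogenous regressor (correlated with u)
y = 1.0 + 2.0 * x + u                       # structural equation, true slope = 2

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
# First stage: project the endogenous regressor on the instrument (and constant)
x_hat = np.column_stack([const, z]) @ ols(x, np.column_stack([const, z]))
# Second stage: regress y on the first-stage fitted values
beta_2sls = ols(y, np.column_stack([const, x_hat]))
print(beta_2sls)  # approximately [1.0, 2.0]
```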

Forecast-based financing: an approach for catalyzing humanitarian action based on extreme weather and climate forecasts

in index insurance programs (Leblois and Quirion, 2013; Hellmuth et al., 2011; Barnett and Mahul, 2007). In fact, forecast-based financing is informed by precedents that integrate seasonal forecasts into index insurance products. For example, Osgood et al. (2008) propose a mechanism to influence the amount of high-yield agricultural inputs given to farmers according to whether favourable or unfavourable rainfall conditions are expected for the season. An El Niño contingent insurance product was developed for the region of Piura (northern Peru): a business interruption insurance policy was designed to compensate for lost profits or extra costs likely to occur as a result of the catastrophic floods predicted by a specific indicator of El Niño (known as “ENSO 1.2”). Indemnities were based on sea surface temperatures measured in November and December, which were taken as a forecast of flood losses that would occur a few months into the future (February to April). The insured entity chooses the amount to insure (which must not be larger than a maximum amount determined by an estimation of the largest plausible flood losses). Designers of this instrument specifically targeted risk aggregators: firms that provide services to numerous households or businesses exposed to El Niño and related floods, such as loan providers and the fertilizer sector. This is likely the first “forecast index insurance” product to receive regulatory approval (GlobalAgRisk Inc., 2010). For a comprehensive analysis of insurance-related instruments for disaster risk reduction, see Suarez and Linnerooth-Bayer (2011).
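
To make the forecast-index mechanism concrete, the sketch below implements a generic linear index-insurance payout triggered by a sea-surface-temperature index; the trigger, exit point and sum insured are hypothetical placeholders, not the terms of the Piura contract.

```python
# Sketch of a linear index-insurance payout keyed to an ENSO-style SST index.
# Trigger/exit values and the sum insured are hypothetical placeholders.
def payout(sst_index: float, trigger: float = 24.0, exit_: float = 27.0,
           sum_insured: float = 100_000.0) -> float:
    """Pay nothing below the trigger, the full sum insured at or above the exit,
    and a proportional amount in between."""
    if sst_index <= trigger:
        return 0.0
    if sst_index >= exit_:
        return sum_insured
    return sum_insured * (sst_index - trigger) / (exit_ - trigger)

print(payout(23.5), payout(25.5), payout(28.0))  # 0.0, 50000.0, 100000.0
```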

Likelihood-based Inference for Multivariate Regression Models using Synthetic Data

Little [24] and Rubin [36], in 1993, first supported the use of synthetic data for SDC, within the framework of multiple imputation [35]. Rubin claimed that synthetic data created this way do not correspond to any actual sampling unit, thus preserving the confidentiality of respondents. Rubin also proposed that one could use fitted models to generate random and independent samples of the original survey data and release these synthetic versions of the microdata publicly, called fully synthetic datasets. The quality of this approach depends on the model used to impute the values; therefore, all the relationships between variables must be included and their joint distribution has to be specified, so that the synthetic data do not give biased results [5, 6]. Later that year, Little [24] proposed to replace with imputed values only the observed values that could contain sensitive information, leaving the rest unchanged, a solution proposed to overcome the problems inherent in the creation of fully synthetic datasets. This approach is called the generation of partially synthetic datasets, and it is the context of the present work. In 1997, Kennickell [14] was the first to use multiply-imputed partially synthetic data to protect the confidentiality of respondents in the Survey of Consumer Finances. Only in 2003 were inferential methods for fully synthetic data developed by Raghunathan et al. [27], while, at the same time, Reiter [30] presented the first methods for drawing inferences from partially synthetic data.
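
A schematic sketch of partial synthesis in the multiple-imputation framework discussed above: a sensitive variable is replaced by draws from a fitted normal linear regression model, repeated m times, while non-sensitive variables are released unchanged. The variables and the Gaussian model are illustrative assumptions, and a fully proper implementation would also draw the model parameters from their posterior.

```python
# Sketch: generate m partially synthetic copies of a dataset by replacing a
# sensitive variable with draws from a fitted normal linear regression model.
import numpy as np

rng = np.random.default_rng(2)
n, m = 1000, 5
age = rng.uniform(20, 65, size=n)                      # non-sensitive, released as-is
income = 1000 + 50 * age + rng.normal(0, 200, size=n)  # sensitive, to be synthesized

X = np.column_stack([np.ones(n), age])
beta, res_ss, *_ = np.linalg.lstsq(X, income, rcond=None)
sigma = np.sqrt(res_ss[0] / (n - X.shape[1]))          # residual standard deviation

synthetic_copies = []
for _ in range(m):
    # Plug-in draws; a fully proper MI would also draw (beta, sigma) from the posterior
    income_syn = X @ beta + rng.normal(0, sigma, size=n)
    synthetic_copies.append(np.column_stack([age, income_syn]))

print(len(synthetic_copies), synthetic_copies[0].shape)  # 5 copies of shape (1000, 2)
```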

Valuation of a Portuguese company: Novabase, SGPS

Relative valuation determines the company's implied value through the average of common variables (multiples) of comparable firms, a peer group within the same industry and with similar risk and growth potential. This model can serve as a complement to other models, such as the DCF model (Fernández, 2015; Damodaran, 2005).
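
A back-of-the-envelope illustration of the multiples approach: the implied enterprise value is the peer-group average EV/EBITDA applied to the target's EBITDA. All figures below are invented and do not refer to Novabase.

```python
# Sketch: relative valuation with an average peer EV/EBITDA multiple.
peer_ev_ebitda = [7.2, 8.4, 7.0, 9.0]   # hypothetical comparable firms
target_ebitda = 25.0                    # hypothetical EBITDA (EUR millions)

avg_multiple = sum(peer_ev_ebitda) / len(peer_ev_ebitda)
implied_ev = avg_multiple * target_ebitda
print(round(avg_multiple, 2), round(implied_ev, 1))  # 7.9, 197.5
```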


Evaluation of the Plant–Craig stochastic convection scheme in an ensemble forecasting system

it is not certain whether this is due to its stochasticity, or to different underlying assumptions between it and the standard convection scheme. In order to make a clean distinction, further studies could be performed in which the performance of the Plant–Craig scheme is compared against its own non-stochastic counterpart, which can be constructed by using the full cloud distribution and appropriately normalizing, instead


Validation of the Martin Method for Estimating Low-Density Lipoprotein Cholesterol Levels in Korean Adults: Findings from the Korea National Health and Nutrition Examination Survey, 2009-2011.

Because low-density lipoprotein cholesterol (LDL-C) is a major modifiable risk factor for cardiovascular disease (CVD) [1], its accurate assessment is important for therapeutic decisions. In routine clinical practice worldwide, it is typically calculated using the Friedewald formula [2]. In the Korea National Health Screening Program (KNHSP), LDL-C is calculated but not directly measured when triglyceride levels are lower than 400 mg/dL. Of the 11,380,246 participants whose triglyceride levels were examined in the 2013 KNHSP, 11,143,810 (98%) had triglyceride levels under 400 mg/dL [3], implying that LDL-C was directly measured for only 2% of the participants. From the outset, the formula's inaccuracies at triglyceride levels of 400 mg/dL and above were recognized by Friedewald et al. [4]. However, even when triglyceride levels are under 400 mg/dL, a number of studies have suggested that LDL-C estimates by the formula (LDL-C_F) underestimate LDL-C and thus misclassify CVD risk [5–8], particularly
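
For reference, the Friedewald calculation mentioned above is sketched below (all inputs in mg/dL); the Martin method it is compared against replaces the fixed divisor of 5 with an adjustable triglyceride-to-VLDL-C factor taken from a published lookup table, which is not reproduced here.

```python
# Friedewald estimate of LDL-C in mg/dL: LDL-C = TC - HDL-C - TG/5.
# The formula is not considered valid when triglycerides are >= 400 mg/dL.
def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    if triglycerides >= 400:
        raise ValueError("Friedewald formula not valid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(200, 50, 150))  # 120.0
```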

Proton pump inhibitors inhibit metformin uptake by organic cation transporters (OCTs).

Metformin, an oral insulin-sensitizing drug, is actively transported into cells by organic cation transporters (OCT) 1, 2, and 3 (encoded by SLC22A1, SLC22A2, and SLC22A3), which are expressed in a tissue-specific manner at significant levels in various organs such as the liver, muscle, and kidney. Because metformin does not undergo hepatic metabolism, drug-drug interactions through inhibition of OCT transporters may be important. So far, comprehensive data on the interaction of proton pump inhibitors (PPIs) with OCTs are missing, although PPIs are frequently used in metformin-treated patients. Using in silico modeling and computational analyses, we derived pharmacophore models indicating that PPIs (i.e. omeprazole, pantoprazole, lansoprazole, rabeprazole, and tenatoprazole) are potent OCT inhibitors. We then established stably transfected cell lines expressing the human uptake transporters OCT1, OCT2, or OCT3 and tested whether these PPIs inhibit OCT-mediated metformin uptake in vitro. All tested PPIs significantly inhibited metformin uptake by OCT1, OCT2, and OCT3 in a concentration-dependent manner. Half-maximal inhibitory concentration (IC50) values were in the low micromolar range (3–
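
As a purely illustrative aside on what concentration-dependent inhibition and IC50 mean, the sketch below evaluates a standard one-site inhibition curve; the IC50 used is an arbitrary placeholder, not one of the values reported in the study.

```python
# Sketch: remaining transporter-mediated uptake under a one-site inhibition model,
# uptake fraction = 1 / (1 + [I]/IC50). The IC50 here is a placeholder value.
def uptake_fraction(inhibitor_conc_um: float, ic50_um: float = 5.0) -> float:
    return 1.0 / (1.0 + inhibitor_conc_um / ic50_um)

for conc in (0.0, 1.0, 5.0, 50.0):   # micromolar inhibitor concentrations
    print(conc, round(uptake_fraction(conc), 2))  # 1.0, 0.83, 0.5, 0.09
```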

Cross-Country Analyses of Economic Growth: an Econometric Survey

The key assumption of the Solow model is that there are diminishing marginal returns to capital. Countries with a higher capital stock will grow at a progressively lower rate until they reach their steady-state level of income. In the steady state, output per capita, the capital stock and consumption grow at the same constant rate, equal to the exogenous growth rate of technological progress. The model is based on the assumption of a Cobb-Douglas production function with labor-augmenting technological progress, Y = K^α (AL)^(1−α).
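
A small worked sketch of the steady-state property described above, using the textbook closed form for output per effective worker, y* = (s/(n+g+δ))^(α/(1−α)); the parameter values below are illustrative, not estimates from the survey.

```python
# Sketch: steady-state capital and output per effective worker in the Solow model
# with Cobb-Douglas technology Y = K^alpha (A L)^(1-alpha).
alpha, s, n, g, delta = 0.33, 0.25, 0.01, 0.02, 0.05   # illustrative parameters

k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))    # capital per effective worker
y_star = k_star ** alpha                               # output per effective worker
print(round(k_star, 3), round(y_star, 3))
```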


Generating automatically test cases based on models

The term automated software testing can have multiple meanings for members of the software development and testing community. To some, the term may mean test-driven development and/or unit testing; to others it may mean using a capture-and-record tool to automate testing. It can even mean custom-developing test scripts using a scripting language such as Perl, Python or Ruby. Generally, all tests that are currently run as part of a manual testing program - functional, performance, concurrency, stress, and more - can be automated. How is manual software testing different from AST? First of all, AST enhances manual testing efforts by focusing on automating tests that manual testing can hardly accomplish. It does not replace the need for manual testers' analytical skills, test strategy know-how, and understanding of testing techniques. This manual-tester expertise serves as the blueprint for AST. AST also cannot be separated from manual testing; instead, AST and manual testing are intertwined and complement each other. AST refers to automation efforts across the entire STL, with a focus on automating the integration and system testing efforts. The overall objective of AST is to design, develop, and deliver an automated test and retest capability that increases testing efficiency; if implemented successfully, it can result in a substantial reduction in the cost, time and resources associated with traditional test and evaluation methods and processes for software-intensive systems.

Using Raster Based Solutions to Identify Spatial Economic Agglomerations

Considering the terminology used by the ArcObjects framework, two coclasses have been defined (RS_Densitate_Linii and RS_Densitate_Puncte), together with an abstract class (RS_Densitate) whose role is to close the hierarchy. The RS_Densitate class contains the working elements used by the two coclasses. There is an inheritance relationship between the classes, in the sense that the coclasses inherit the attributes and methods of the abstract base class. Raster-type symbology can be generated from spatial data of specific types, namely lines or points. The RS_Densitate class has been defined as an abstract class because it cannot be directly instantiated. The Generare_Raster method, used to generate the raster, has been described as a pure virtual method because it cannot be concretized inside the abstract class, but it makes sense for it to be defined in the concrete classes derived from it. The Generare_Raster method aims to generate the raster using the input parameters, some of which are common to both types of spatial data and others specific to each type. Once obtained, the raster can be appropriately symbolized, so that the studied geographic areas can be displayed as suggestively as possible, in terms of
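
The class relationship described above (an abstract base class with a pure virtual raster-generation method, specialised by line- and point-density coclasses) can be sketched in any object-oriented language; the Python sketch below mirrors only the structure, not the ArcObjects implementation.

```python
# Structural sketch of the described hierarchy: RS_Densitate is abstract and
# declares Generare_Raster as a pure virtual method; the two concrete classes
# (line and point density) provide their own implementations.
from abc import ABC, abstractmethod

class RS_Densitate(ABC):
    @abstractmethod
    def Generare_Raster(self, input_features, cell_size):
        """Generate a density raster from the input spatial data."""

class RS_Densitate_Linii(RS_Densitate):
    def Generare_Raster(self, input_features, cell_size):
        return f"line-density raster from {len(input_features)} lines at {cell_size} m"

class RS_Densitate_Puncte(RS_Densitate):
    def Generare_Raster(self, input_features, cell_size):
        return f"point-density raster from {len(input_features)} points at {cell_size} m"

print(RS_Densitate_Puncte().Generare_Raster(["p1", "p2"], cell_size=50))
# RS_Densitate() itself would raise TypeError: abstract classes cannot be instantiated
```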

FORECASTING LONG-TERM GOVERNMENT BOND YIELDS: AN APPLICATION OF STATISTICAL AND AI MODELS

The economic situation is important for interest rates. When the economy is booming and there is high demand for funds, the price of borrowing money goes up, leading to increasing interest rates. Conversely, in economic recessions, everything else being equal, there is downward pressure on interest rates. The most important economic indicator for the output of goods and services produced in a country is gross domestic product (GDP). However, this indicator is published only on a quarterly and annual basis. The PMI, published monthly by the ISM (Institute for Supply Management), appears to be a good proxy for GDP, as it generally shows a high correlation with the overall economy. For example, according to ISM analysis, a PMI in excess of 42.7 percent over a period of time indicates an expansion of the economy. This month-to-month indicator is a composite index based on the following five indicators for the manufacturing sector of the U.S. economy: new orders, production, employment, supplier deliveries and inventories.
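
As a simple illustration of how a composite diffusion index such as the PMI is formed, the sketch below averages the five sub-indices listed above; equal weighting is assumed here, and the sub-index values are invented.

```python
# Sketch: a composite PMI as the (assumed equally weighted) average of five
# diffusion sub-indices. Sub-index values are invented for illustration.
subindices = {
    "new_orders": 54.0,
    "production": 52.5,
    "employment": 49.0,
    "supplier_deliveries": 51.0,
    "inventories": 47.5,
}

pmi = sum(subindices.values()) / len(subindices)
print(pmi, "above 50: expansionary" if pmi > 50 else "below 50: contractionary")
```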

Scenario-based approach to the restructuring of enterprises on the basis of the economic and mathematical models

Abstract. The article presents an approach to the restructuring of engineering enterprises based on a complex of economic and mathematical models, together with scenarios for its implementation. The structure and features of the formation of organizational and economic support for restructuring are described. The components of the economic and mathematical methods of restructuring are considered: a general model of restructuring that determines the design of its planning, development and implementation; the overall business model of the enterprise as a set of interrelated models of different levels of detail, describing the key relationships between the parameters of the enterprise's state, its potential and the external business environment; a set of models and methods that describe the transformation processes and an approach to making decisions on restructuring options; and a set of scenarios and appropriate means of restructuring. Approaches to solving the methodological problems of restructuring at three levels are considered and proposed. Problems of the first level are associated with the selection and evaluation of basic economic indicators that characterize particular aspects of the business, the possibility of bankruptcy, as well as development. Choosing a system of indicators that provides an adequate assessment of the enterprise, and solutions that prevent insolvency and increase efficiency, is important for managing the restructuring. Problems of the second level are related to the development or choice of methods for the integral evaluation of the enterprise, the threat of bankruptcy and the company's potential. Problems of the third level are associated with developing an integrated methodology for planning, evaluation and restructuring. The principles of a holistic approach to the design, specification and implementation of an enterprise restructuring strategy that ensures the success of restructuring are identified. A set of restructuring implementation scenarios is proposed. Areas of practical application of the proposed approach are identified.

A comparison of linear and non linear models to forecast the tourism demand in the North of Portugal

In this respect, and given the substantial growth of this sector in the North of Portugal, the development of models that can be used to make reliable forecasts of tourism demand will be highly useful, as such forecasting assumes an important role in the process of planning and decision-making in both the public and the private sector. At present, in the field of forecasting, a large variety of techniques and models is available, emerging to meet the most varied situations, with different characteristics and methodologies that range from simple to more complex approaches (Thawornwong & Enke, 2004; Fernandes, 2005; Yu & Schwartz, 2006).

INFLATION FORECASTS USING THE TIPS YIELD CURVE

In Table 2 we present the relative RMSFE of month-on-month inflation forecasts from the simple autoregressive methods and from the augmented autoregressive forecasts that use the breakeven inflation series with a maturity of 10 years as an additional regressor. For each horizon within each evaluation period (defined by the SO), the lowest relative RMSFE is highlighted in bold. From Table 2 we conclude that, despite not having a lower RMSFE than the benchmark IAR (FL) for any of the horizons and evaluation periods used, the DAAR forecasts improve on the simple univariate forecasts (AR benchmark) for horizons of 2, 4 and 8 periods, and have RMSFE very similar to the benchmark method IAR (FL) for forecast horizons of 2, 4, 6 and 8 periods in the two main evaluation periods (SO=36 and SO=48), unlike the No-change method in the first evaluation period (SO=36). Taking into account these consistently similar results to those of the benchmark method IAR (FL) for these horizons, and the fact that these simple autoregressive benchmarks were artificially created by choosing, for each horizon, the forecasts with the lowest RMSFE among several forecasts that differ in the number of lags, we decided that the DAAR method using inflation expectations extracted from the TIPS yield curve can provide worthy forecasts of month-on-month inflation for forecasting quarterly inflation with the method explained in section 4.2.2.
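
The comparison metric used above, the relative RMSFE, is the ratio of a candidate model's root mean squared forecast error to the benchmark's; a minimal sketch with invented forecast errors is shown below.

```python
# Sketch: relative RMSFE of a candidate forecast against a benchmark.
import numpy as np

def rmsfe(actual, forecast):
    errors = np.asarray(actual) - np.asarray(forecast)
    return np.sqrt(np.mean(errors ** 2))

# Invented month-on-month inflation rates and two sets of forecasts
actual    = [0.20, 0.30, 0.10, 0.40, 0.20]
benchmark = [0.30, 0.20, 0.20, 0.30, 0.30]
candidate = [0.25, 0.28, 0.12, 0.35, 0.22]

relative_rmsfe = rmsfe(actual, candidate) / rmsfe(actual, benchmark)
print(round(relative_rmsfe, 3))  # values below 1 mean the candidate beats the benchmark
```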

Forestry trial data can be used to evaluate climate-based species distribution models in predicting tree invasions

Australian Acacia species that are not yet widespread are likely to spread into currently unoccupied, climatically suitable ranges, as the SDM predictions indicate that a large portion of Lesotho, South Africa and Swaziland is suitable for invasion by 13 of the 17 species currently introduced or naturalized (Fig. 3). We believe this is a major invasion debt and not simply an over-prediction by the SDMs, because there is a strong correlation between extent of usage and invasive distributions for Australian acacias in South Africa (Wilson et al. 2011). The widespread invaders are those species that have been planted for forestry, dune stabilization or ornamental purposes. However, many other introduced species were only ever planted in forestry trials or arboreta. As such, their currently restricted distribution is the result of low propagule pressure, but given opportunities and time, these species can and do spread (Kaplan et al. 2012; Zenni et al. 2009; Kaplan et al. in press). Species that have a large potential range and are invasive elsewhere (Fig. 3 and Supplementary material Fig. S1) should be prioritised for management and, where possible, eradicated, e.g. Acacia implexa, A. paradoxa and A. stricta in South Africa (Kaplan et al. 2012; Zenni et al. 2009; Kaplan et al. in press). Commercial forestry is one of the major pathways to tree invasions, and the availability of introduction data can be useful for screening potential invaders when coupled to SDMs. SDMs provide useful information that can influence management decisions on early detection, prioritization and more targeted research. SDMs also provide information for the rapid assessment of the potential distributions of alien species based on climate, even before introduction.

IMPACT OF ALIFE SIMULATION OF DARWINIAN AND LAMARCKIAN EVOLUTIONARY THEORIES

To this day, the scientific community has firmly rejected the Theory of Inheritance of Acquired Characteristics, a theory mostly associated with the name of Jean-Baptiste Lamarck (1744-1829). Though largely dismissed when applied to biological organisms, this theory found its place in a young discipline called Artificial Life. Based on two abstract models of the Darwinian and Lamarckian evolutionary theories built using neural networks and genetic algorithms, this research aims to present a notion of the potential impact of implementing Lamarckian knowledge inheritance across disciplines. In order to obtain our results, we conducted a focus group discussion between experts in biology, computer science and philosophy, and used their opinions as qualitative data in our research. As a result of completing the above procedure, we have found some implications of such an implementation in each of the mentioned disciplines. In synthetic biology, it means that we could engineer organisms precisely to our specific needs. At the moment, we can think of better drugs, greener fuels and dramatic changes in the chemical industry. In computer science, Lamarckian evolutionary algorithms have been used for quite some years, and quite successfully. However, their application in strong ALife can only be approximated based on the existing roadmaps of futurists. In philosophy, creating artificial life seems consistent with nature and even God, if there is one. At the same time, this implementation may contradict the concept of free will, which is defined as the capacity of an agent to make choices whose outcome has not been determined by past events. This study has certain limitations, which means that a larger focus group and better-prepared participants would provide more precise results.
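
A toy sketch of the distinction drawn by the two abstract models: in a Darwinian loop only the genotype is inherited, whereas a Lamarckian loop writes the traits acquired during an individual's lifetime (here, one crude hill-climbing step) back into the genome before reproduction. The fitness function and learning rule are invented and do not reproduce the neural-network models used in the study.

```python
# Toy sketch: Darwinian vs Lamarckian inheritance in a genetic algorithm.
# Fitness: maximize -(x - 3)^2; "learning" nudges an individual toward the optimum.
import random

def fitness(x):
    return -(x - 3.0) ** 2

def learn(x, step=0.2):
    """Lifetime adaptation: one crude hill-climbing step (an acquired trait)."""
    return x + step if fitness(x + step) > fitness(x) else x - step

def evolve(lamarckian, generations=50, pop_size=20):
    random.seed(0)
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        learned = [learn(x) for x in pop]         # phenotype after lifetime learning
        genomes = learned if lamarckian else pop  # Lamarck: acquired traits are inherited
        # select on the learned phenotype, reproduce with a small mutation
        ranked = sorted(genomes, key=lambda g: fitness(learn(g)), reverse=True)
        parents = ranked[: pop_size // 2]
        pop = [p + random.gauss(0, 0.1) for p in parents for _ in range(2)]
    return max(fitness(x) for x in pop)

print("Darwinian:", round(evolve(False), 3), "Lamarckian:", round(evolve(True), 3))
```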

Adaptive correction of deterministic models to produce probabilistic forecasts

Operationally, the flood forecasting model is run with a 15-minute time step and evaluated twice daily. At midnight it is run in a continuous mode using observed inputs and output data assimilation to generate a set of “warm states” which are used to initialise the forecasts. The first set of forecasts, issued at midnight (00:00), gives up to 36 h lead time using forecast precipitation. The second set of forecasts is issued at midday (12:00). These forecasts are initialised by evolving the “warm states” using the observed meteorological variables between midnight and midday. Forecast precipitation is then used to evaluate the flood forecasting model, giving forecasts of the hydrological variables with up to 36 h lead time. Further details can be found in Weerts et al. (2011).

Forecasting human entrances at a commercial store using facial recognition data

Considering the fact that more and more people access information through mobile devices, many retailers in the United Kingdom have installed free Wi-Fi devices in order to attract and retain more customers inside their stores. One such example was the supermarket chain Tesco, which equipped hundreds of its stores with Wi-Fi devices. As this trend started to emerge, an Indoor Location Intelligence program was launched in the United States of America using several technologies, such as Bluetooth, radio-frequency identification (RFID) and video cameras, among many other devices, in order to track customers' movement patterns. One disadvantage of this tracking method is that the Wi-Fi signal must be near the shoppers' smartphones in order to collect the MAC address, so the Wi-Fi emitters need to be placed at key points throughout the facility in order to keep collecting the relevant data.
