TransWikia.com

Exchange rate trend-stationarity

Quantitative Finance Asked by Aron_t on December 14, 2021

I am fairly new to time-series analysis. I want to model CEE exchange rates (EUR/HUF, EUR/PLN, EUR/CZK, EUR/CHF) with ARIMA. I understand that, following Box-Jenkins modeling, I should first check whether my dataset is stationary. I ran the ADF and KPSS tests on the exchange rates. With a drift term and no trend, the ADF test does not reject the unit-root null, but with both drift and trend the null hypothesis is rejected. As far as I understand, this means there is a deterministic trend in my data, i.e. it is trend-stationary. The KPSS test in most cases does not reject stationarity (p-value around 0.1); however, for EUR/HUF (2013-2020) the ADF test with drift and trend suggests the series is stationary, while the KPSS test with the same terms (drift and trend) gives p < 5%, meaning non-stationary. The EUR/PLN exchange rate is also interesting, because the ADF test (both with trend and drift, and with drift but no trend) suggests the data are stationary (p < 0.01).

Also, this changes with the amount of data I use: on the 2000-2020 dataset the tests point to non-stationarity for almost all of the exchange rates, but for EUR/HUF they still suggest a deterministic trend (as before, with trend and constant the null hypothesis is rejected).

My question is: even if there is a deterministic trend, can I still use log-differencing (i.e. log-returns) to make the series stationary, or do I need to fit a linear model to the original series and use the residuals (i.e. the detrending method, in R as follows):

  • trend <- lm(as.ts(eurhufadf.xts) ~ seq_along(eurhufadf.xts))
  • detrend <- residuals(trend)
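The two candidate transforms can be illustrated outside R as well. Below is a minimal Python sketch on a synthetic series (the drift, volatility, and series length are illustrative assumptions, not the actual EUR/HUF data); Option 2 mirrors the lm-residuals detrending above.

```python
import math
import random

random.seed(0)

# Synthetic "exchange rate": a deterministic trend in logs plus noise
# (illustrative only -- not the actual EUR/HUF series).
n = 500
rate = [100.0 * math.exp(0.0002 * t + random.gauss(0.0, 0.005)) for t in range(n)]
log_rate = [math.log(p) for p in rate]

# Option 1: log-differencing (log-returns). Removes a deterministic drift
# and a stochastic (unit-root) trend alike, at the cost of one observation.
log_returns = [b - a for a, b in zip(log_rate, log_rate[1:])]

# Option 2: OLS detrending. Regress the log level on time and keep the
# residuals; this only removes a *deterministic* trend.
t_bar = (n - 1) / 2.0
y_bar = sum(log_rate) / n
beta = (sum((t - t_bar) * (y - y_bar) for t, y in enumerate(log_rate))
        / sum((t - t_bar) ** 2 for t in range(n)))
alpha = y_bar - beta * t_bar
detrended = [y - (alpha + beta * t) for t, y in enumerate(log_rate)]

print(len(log_returns), len(detrended))  # 499 500
```

The practical difference matters here: if the trend is actually stochastic rather than deterministic, the OLS residuals remain non-stationary, which is consistent with KPSS still rejecting after detrending, whereas differencing handles both cases.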

Even after detrending EUR/HUF with the code above, the KPSS test with drift and trend still rejects stationarity.

Also, if I build the model on less data (2013-2020), does that mean the stationarity checks on the longer dataset (2000-2020) don't apply? I would rather not use moving-average smoothing, since my data are daily and I don't want to lose observations.

With first differencing, of course, every test says the data are stationary; my worry is that it would be misleading to skip detrending and fit the ARIMA model to such data.

2 Answers

Both @Con and @markleeds give good advice. Please don't worry - ADF is famously headache-inducing ;-)

The core problem here is that drifts and trends look horribly alike, so approaches like ADF will struggle to distinguish between them in finite samples. Suppose you test at the 5% level for trend and for drift: one p-value comes in at 4%, the other at 6%. So one is rejected while the other is not, but your confidence that the non-stationarity is the one and not the other should not be that high!

There is an additional fundamental problem if one is looking at spot FX rates: one should not expect these to be stationary in the first place, given the interest rate differentials between the ECB and your CEE central banks. One does not even need to believe in Covered Interest Parity (CIP) here. The carry trade might still be profitable, because the CIP embedded in forwards need not be 100% reflected in the future evolution of spot; the carry trade remains profitable even if only 50% of it is. But that 50% will still make spot significantly non-stationary. Moreover, this monetary-policy differential is theoretically neither a trend nor a drift, because it is itself time-varying (interest rate differentials fluctuate over time). It's hard to win!

Generally speaking, log-differencing is usually good practice with most real-world financial market time series. Whether the underlying feature is a trend or a drift, differencing will produce a much more stationary series either way. Any trend or drift then shows up in the model's intercept; its significance is easy to assess (and correct for) via the t-statistic of that intercept, and the cleanest way to de-drift/de-trend (if you felt you had to) would be to subtract the intercept from the fitted values.
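The intercept-significance check above can be sketched quickly. This is an illustrative Python snippet on synthetic returns (the drift and volatility are made-up assumptions, not estimates from the actual data); the t-stat of the sample mean equals the intercept t-stat of a constant-only regression on the returns.

```python
import math
import random
import statistics

random.seed(1)

# Synthetic daily log-returns with a small positive drift
# (illustrative only -- not the actual exchange-rate data).
returns = [0.0003 + random.gauss(0.0, 0.006) for _ in range(2000)]

# Significance of the drift: t-statistic of the sample mean,
# i.e. mean over its standard error.
mean = statistics.fmean(returns)
se = statistics.stdev(returns) / math.sqrt(len(returns))
t_stat = mean / se

# "De-drifting" the differenced series is then just subtracting the
# estimated intercept from every observation.
de_drifted = [r - mean for r in returns]
print(round(t_stat, 2))
```

If |t_stat| is below the usual ~2 threshold, the drift is statistically indistinguishable from zero and there is little point in removing it.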

Answered by demully on December 14, 2021

This is done simply in R with Rob Hyndman's forecast package: you need to run the ACF and PACF, and the package also provides an automatic algorithm (auto.arima()) for selecting a best-fit model, which takes most of the difficulty out; from there you can modify the model. The Gretl econometrics package has a GUI for automatically estimating ARIMA, and you can estimate it manually as well. Gretl is not part of the R environment but a standalone program like Stata and EViews for econometrics (similar to SPSS and SAS for psychology and science), though unlike those it is free and open source like R. I do not know how you are implementing it in R, but read the documentation of Rob Hyndman's forecast package; he has some brief lectures on YouTube, his forecasting book is published free on his site (https://otexts.com/fpp2/), and there is a course on Time Series Analysis on DataCamp (subscription fee).
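The ACF that such tools plot is just the sequence of sample autocorrelations, which can be computed by hand. A minimal Python sketch on synthetic white noise (the data and lag count are illustrative assumptions, not the questioner's series):

```python
import math
import random

random.seed(2)

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1..r_max_lag -- the quantities an ACF plot shows."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)  # lag-0 autocovariance (times n)
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / c0
        for k in range(1, max_lag + 1)
    ]

# For white noise the autocorrelations should sit near zero at every lag,
# roughly within the usual +/- 1.96/sqrt(n) confidence band.
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
acf = sample_acf(noise, 10)
band = 1.96 / math.sqrt(len(noise))
print(round(band, 4), max(abs(r) for r in acf))
```

In Box-Jenkins terms, significant spikes in the ACF/PACF of the (differenced) series are what guide the choice of the MA and AR orders that an automatic selector then searches over.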

Answered by Con Fluentsy on December 14, 2021
