Thread: Why "Walk Forward Analysis" is still unreliable and useless!


  1. #1

    Why "Walk Forward Analysis" is still unreliable and useless!

    Hello, I am Darwin, and today I want to talk about the limitations that walk forward analysis suffers from. This is my third article, so if you do not know how WFA works, please read the other two (you can find them here on the forum).

    The target audience of this article is everybody who works with Expert Advisors and backtesting / walk forward analysis.

    Some of you might already have seen a few of my posts where I talk about research I am doing in the field of trading system analysis (in the course of writing a meta-algorithm that can build, analyse and trade strategies on its own). The goal is to write an algorithm so powerful that it can take any EA and, through in-depth analysis, tell you how and when to trade it in order to make a profit, no matter how good or bad the underlying EA is.

    So here is a new article in which I would like to lay down some insights I gained in the process of writing this algorithm (DATFRA - Darwin's Algorithmic Trading Framework).


    Well, let's begin. My first concern is that the design of Walk Forward Analysis is, by its very nature, unrewarding and not the kind of analysis a trader wants.

    Also, I claim that the results of a WFA are more or less random, and if a system works well after a successful WFA, it is not because the test was successful, but because the trader designing the system did a good job.

    In this article I do not yet want to show how these problems can be solved, I just want to demonstrate that they exist. In my next article I will explain how I think all of this can be solved in an elegant way.
  2. #2


    So all of this has to be pre-determined by the trader, out of intuition, and not based on real facts and data. But god, these are the most important decisions: how is one supposed to "guess" them?!

    And then, WFA will only be able to tell you whether this construct would have worked in the past or not, and that's it.

    So in order to find the best trading construct, you have to use trial and error and repeat the WFA step multiple times. Step by step, this leads to the worst case: your "unseen" out-of-sample tests slowly become "known" in-sample data, and the whole advantage of WFA over backtesting fades away completely.
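    To make the leakage concrete, here is a minimal sketch of how rolling walk-forward windows are laid out (the function name and window sizes are my own illustration, not taken from any real WFA tool):

```python
# Hypothetical sketch of rolling walk-forward windows (names and sizes are illustrative).
def walk_forward_windows(n_bars, in_sample, out_of_sample):
    """Yield (in-sample, out-of-sample) index ranges, rolling forward one OOS block at a time."""
    start = 0
    while start + in_sample + out_of_sample <= n_bars:
        yield ((start, start + in_sample),
               (start + in_sample, start + in_sample + out_of_sample))
        start += out_of_sample

windows = list(walk_forward_windows(n_bars=1000, in_sample=400, out_of_sample=100))
# The out-of-sample blocks are fixed by the data: every time you tweak the
# construct and re-run the WFA, you evaluate the SAME blocks again, so they
# gradually stop being "unseen".
```

    The key point: the out-of-sample segments never change between runs, so each trial-and-error iteration peeks at them once more.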


    These design-related problems already show that WFA cannot be the end of the road in terms of system analysis.

    In a perfect world, you would give the analysis algorithm only the trading system and the market/timeframe, no other parameters. The algorithm would then tell you the best choices for all the other parts of the trading construct, based on data and facts, not the other way around.

    Side note: it should NOT just tell you how to trade your systems, it should also give you the possibility to look into a system's characteristics on your own. You should never be forced to trust an algorithm without the ability to check its findings!
  3. #3


    Let's say the "fast" moving average period can be 10-50 and the "slow" one 50-250, the RSI threshold can be 1-100 and the stop loss 50-150 pips (this is not a real system, just an example!).

    So this system can already be traded in 40 * 200 * 100 * 100 different ways. That is 80 million (80,000,000), which is a huge number.

    One might question my exact example strategy, but one cannot question the millions or billions of possible parameter combinations, even for small systems.

    But thankfully, if we take into account that a lot of these parameter combinations behave very similarly, we do not need to evaluate them all; we do, however, need at least a meaningful sample of them, like a few hundred thousand or a few million.
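    As a sketch of what such a sample could look like (purely illustrative, using the parameter ranges from the example system above):

```python
import random

random.seed(42)  # make the sample reproducible

# Draw random parameter combinations from the example ranges instead of
# enumerating the full grid.
def sample_parameter_combinations(n):
    return [
        {
            "fast_ma": random.randint(10, 50),
            "slow_ma": random.randint(50, 250),
            "rsi_threshold": random.randint(1, 100),
            "stop_loss_pips": random.randint(50, 150),
        }
        for _ in range(n)
    ]

sample = sample_parameter_combinations(100_000)  # a "meaningful sample"
```

    In practice one would probably prefer a stratified or low-discrepancy sample over pure random draws, but the point stands: a few hundred thousand draws are tractable where the full grid is not.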

    So keep this huge number in mind, even for small systems, because with every new dimension of our optimisation problem's solution space (every new parameter), the number of possible parameter combinations grows exponentially.
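    The arithmetic behind these grid sizes is trivial to reproduce; here is a sketch counting the four parameters of the example system (one value per step, counts approximate):

```python
import math

# Approximate grid sizes for the four example parameters
param_grid_sizes = {
    "fast_ma": 40,          # periods 10-50
    "slow_ma": 200,         # periods 50-250
    "rsi_threshold": 100,   # 1-100
    "stop_loss_pips": 100,  # 50-150 pips
}

total = math.prod(param_grid_sizes.values())
print(f"{total:,}")  # 80,000,000 combinations from just four parameters

# Every additional parameter multiplies the total again, which is why the
# grid grows exponentially with the number of dimensions.
```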
  4. #4


    And then, remember that WFA relies on a trial-and-error principle, so you will most likely have to do this a few times.


    You see? Evaluating the real picture would take very, very long. Therefore, most WFA implementations are forced to evaluate only a heavily cropped fraction of the actual parameter space, because it is not possible to evaluate the whole parameter space (or a meaningful sample of it) in a reasonably small timespan when optimisation has to be done in every single WF window.

    This means WFA most likely does not evaluate 500,000 parameter combinations per window, but only 10,000 or 50,000 or something like that. So we already lose about 90% of all data in this step.


    This is a problem that could be solved if the trader had lots of time for his/her analysis (which is not likely, especially with the trial-and-error method), or with a more efficient design of these algorithms. Nevertheless, in practice, this problem is ever-present.

    For comparison: DATFRA, which is my private research project, only has to run one single simulation per parameter combination, no matter how many WF windows it analyses. In the above example, that alone would decrease the computing time by a factor of 240.
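    As a back-of-the-envelope comparison (the 240-window count and per-window combination count are my own illustrative assumptions, chosen to match the factor mentioned above, not measurements):

```python
# Illustrative back-of-the-envelope numbers, not measurements.
wf_windows = 240             # assumed number of walk-forward windows
combos_per_window = 50_000   # sampled parameter combinations per window

# Classic WFA: re-optimise (re-simulate) inside every single window.
wfa_simulations = wf_windows * combos_per_window

# One-simulation-per-combination approach: simulate each combination
# once over the whole history.
single_pass_simulations = combos_per_window

speedup = wfa_simulations // single_pass_simulations
print(speedup)  # 240
```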
