Hi

I am not sure if this can work out, but I am writing it here as an idea to avoid the peeking bias introduced by the OptimalF parameters:
The point is to calculate the OptimalF parameters for each test period after each training period of each WFO cycle. We would then get 1.fac, 2.fac, and so on, just as there are 1.par, 2.par files for the parameters of each training cycle. The test period after each WFO cycle could be split into a sub-period used to calculate the OptimalF values, followed by a pure out-of-sample sub-period where those calculated OptimalF values are applied. So each training cycle would in reality have two test periods: one to calculate the OptimalF values and another to actually test the strategy.
In the end, the whole test would be covered by different out-of-sample periods, each using different OptimalF parameters.
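To make the idea concrete, here is a minimal sketch (plain Python, not Zorro script; the function name and window layout are my own assumptions) of how each WFO cycle could be partitioned into a training window, an OptimalF calibration window, and a pure out-of-sample window:

```python
# Hypothetical illustration of the proposed split; not part of the Zorro API.
def wfo_cycles(n_bars, train_len, test_len, calib_frac=0.5):
    """Yield (train, calib, oos) half-open index ranges for each WFO cycle.

    calib_frac: share of each test period used to compute the OptimalF
    values (the would-be n.fac file); the remainder is the pure
    out-of-sample window where those factors are applied.
    """
    cycles = []
    start = 0
    calib_len = int(test_len * calib_frac)
    while start + train_len + test_len <= n_bars:
        train = (start, start + train_len)
        calib = (train[1], train[1] + calib_len)   # compute OptimalF here
        oos = (calib[1], train[1] + test_len)      # apply OptimalF here
        cycles.append((train, calib, oos))
        start += test_len  # roll the whole window forward by one test period
    return cycles
```

For example, `wfo_cycles(1000, 300, 100)` yields seven cycles; the first trains on bars 0-300, calibrates OptimalF on 300-350, and tests purely out-of-sample on 350-400.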


It may also be possible to test the robustness of the strategy by slightly varying the OptimalF parameters: something like NumSampleCycles applied to the OptimalF values, where in each cycle the OptimalF parameters differ slightly from the previous cycle. Since the strategy will behave slightly differently during live trading, the OptimalF parameters would change if that live period were also included in the calculation; but that difference should not turn a strategy that looks profitable in the first place into an unprofitable one.
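The perturbation test above could be sketched like this (again plain Python for illustration, not Zorro script; the function names, the jitter scheme, and the `backtest` callback are all my own assumptions):

```python
import random

# Hypothetical sketch: jitter the OptimalF values and re-run the backtest,
# in the spirit of NumSampleCycles, to see whether profitability survives.
def perturb_factors(factors, rel_jitter=0.1, seed=0):
    """Return a copy of the per-asset OptimalF values, each scaled by a
    random amount within +/- rel_jitter and clipped to stay non-negative."""
    rng = random.Random(seed)
    return {asset: max(0.0, f * (1 + rng.uniform(-rel_jitter, rel_jitter)))
            for asset, f in factors.items()}

def robustness_test(factors, backtest, n_cycles=20, rel_jitter=0.1):
    """Run the backtest with slightly perturbed factors in each cycle and
    report the fraction of cycles that remain profitable."""
    profitable = 0
    for cycle in range(n_cycles):
        pnl = backtest(perturb_factors(factors, rel_jitter, seed=cycle))
        if pnl > 0:
            profitable += 1
    return profitable / n_cycles
```

A robust strategy should keep that fraction near 1.0 for small jitter values; a strategy whose profit collapses under a few percent of factor variation is probably overfitted to the exact OptimalF numbers.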

Last edited by Nanitek; 05/24/16 12:11.