HMA_200STRAT8


Algorithmic Momentum Trading System
The Limitations of Lagging Indicators
Price Momentum Shifts are Observed Too Late
Many momentum-based trading strategies rely on moving average (MA) crossovers to signal changes in overall price direction. While effective at smoothing price data, these indicators inherently lag and mute the underlying market dynamics: shorter moving averages are sensitive to noise, while longer moving averages introduce delayed responses that reduce signal efficiency.
To address this limitation, this project uses the Hull Moving Average (HMA), whose calculation places greater emphasis on the second-order behaviour of price, and treats local extrema of the HMA curve, rather than MA crossovers, as trade entry signals. Second-order price behaviour reflects the acceleration and deceleration of a price series. Together, these two techniques reveal momentum shifts earlier than conventional MA crossover strategies.
System Structure
Data Collection → Signal Generation → Position Management → Trade Execution → Data Logging
This system fetches real-time and historical market data through the Interactive Brokers API in 15-minute bar intervals, aligning signal generation with each bar close. With seamless handling of missing or incomplete data, the algorithm calculates three different MAs for signal generation.
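The HMA itself can be sketched as follows. The standard Hull construction is HMA(n) = WMA(2·WMA(n/2) − WMA(n), √n), where WMA is a linearly weighted moving average; the function names and NumPy implementation below are illustrative, not taken from the project's code.

```python
import numpy as np

def wma(x, n):
    """Linearly weighted moving average over a trailing window of n bars
    (most recent bar gets the largest weight)."""
    w = np.arange(1, n + 1, dtype=float)
    out = np.full(len(x), np.nan)
    for i in range(n - 1, len(x)):
        out[i] = np.dot(x[i - n + 1:i + 1], w) / w.sum()
    return out

def hma(x, n):
    """Hull Moving Average: HMA(n) = WMA(2*WMA(n/2) - WMA(n), sqrt(n))."""
    half = max(1, n // 2)
    root = max(1, int(round(np.sqrt(n))))
    diff = 2 * wma(x, half) - wma(x, n)
    # NaNs from the warm-up window propagate through the final WMA.
    return wma(diff, root)
```

On a perfectly linear price series the HMA tracks price almost without lag, which is the property the strategy exploits.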
Four distinct trade entry signals are generated from the HMA by detecting changes in curvature rather than trend direction. By identifying local extrema in the HMA curve, the system captures price acceleration and deceleration while remaining robust to short-term noise.
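The extremum-detection idea can be sketched as below, assuming a simple sign-change rule on the HMA's first difference; the helper name and the one-bar confirmation are illustrative assumptions, not the project's exact logic.

```python
import numpy as np

def extrema_signals(h):
    """Detect local extrema in an HMA series (hypothetical helper).

    Returns +1 at a confirmed local minimum (long entry), -1 at a local
    maximum (short entry), and 0 otherwise. A bar is flagged when the
    first difference of the HMA changes sign, so the signal is confirmed
    one bar after the turning point."""
    d = np.diff(h)
    sig = np.zeros(len(h), dtype=int)
    for i in range(2, len(h)):
        prev, curr = d[i - 2], d[i - 1]
        if np.isnan(prev) or np.isnan(curr):
            continue
        if prev < 0 <= curr:
            sig[i] = 1    # HMA turned upward: local minimum
        elif prev > 0 >= curr:
            sig[i] = -1   # HMA turned downward: local maximum
    return sig
```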
To reduce overexposure, the system uses the Kelly criterion to determine the fraction of total equity to commit to each trade. The winning probability (p) is held fixed at the value initially estimated during the backtesting phase.
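A minimal sketch of the sizing rule, using the standard Kelly formula f* = p − (1 − p)/b with b the average win/loss ratio; the cap (fractional Kelly) is an illustrative safeguard, not a value stated in the original system.

```python
def kelly_fraction(p, b, cap=0.25):
    """Kelly criterion position size: f* = p - (1 - p) / b.

    p   -- win probability (fixed from backtesting in this system)
    b   -- average win / average loss ratio
    cap -- illustrative upper bound on the equity fraction
    """
    f = p - (1.0 - p) / b
    return max(0.0, min(f, cap))
```

A negative Kelly fraction means the edge is insufficient, so the size is floored at zero rather than taking the trade.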
Orders are executed through an adaptive execution layer that accounts for slippage, partial fills, and delayed confirmations to simulate real trading conditions. For added robustness, a broker reconciliation function cross-checks the brokerage's reported positions against the system's trading log to ensure consistency.
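The reconciliation step might look like the following sketch, assuming both sides can be reduced to symbol-to-signed-quantity mappings; the function and argument names are hypothetical.

```python
def reconcile(broker_positions, local_positions):
    """Cross-check broker-reported positions against the local trade log.

    Both arguments are {symbol: signed quantity} dicts (illustrative
    representation). Returns a dict of mismatched symbols mapped to
    (broker, local) quantities so they can be flagged for review."""
    symbols = set(broker_positions) | set(local_positions)
    return {
        s: (broker_positions.get(s, 0), local_positions.get(s, 0))
        for s in symbols
        if broker_positions.get(s, 0) != local_positions.get(s, 0)
    }
```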
Trade records are logged locally after each position is closed.
Backtesting Methodologies
Backtesting Objectives
To evaluate the robustness of the strategy, multiple backtesting methodologies were employed to build a realistic and defensible case for live deployment. Rather than optimizing over a favourable market window, the objective was to assess strategy behaviour over extended time horizons and, consequently, varying market conditions. This approach mitigates time-specific bias and better reflects real-world uncertainty.
A central focus of the analysis involved balancing optimization with overfitting risk. Parameter selection and signal logic were not only evaluated for return generation, but also for stability and consistency over changing regimes.

The panel above displays two graphs: the first shows trade positions (red = short, green = long) plotted on the NASDAQ: NVDA price chart from 2023-04-16 to 2025-04-15; the second shows a normalized comparison of an NVDA buy-and-hold scenario (blue) against the strategy's equity curve (orange) over the same period.
The primary performance metrics were total equity growth, risk-adjusted return (Sharpe ratio), and mean maximum drawdown, providing a view of both upside potential and downside risk exposure. As shown in the panel above, the strategy achieved a Sharpe ratio of 1.77 over this period, with longer periods showing consistent, if not better, results. A Sharpe ratio above 1 is generally considered indicative of a stable risk-adjusted return.
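For reference, a Sharpe ratio of this kind can be computed from per-bar returns as sketched below. The zero risk-free rate and the annualization factor for 15-minute bars (roughly 252 trading days × 26 bars per regular session) are assumptions, not values stated in the report.

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252 * 26):
    """Annualized Sharpe ratio from per-bar strategy returns,
    assuming a zero risk-free rate."""
    r = np.asarray(returns, dtype=float)
    return r.mean() / r.std(ddof=1) * np.sqrt(periods_per_year)
```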
Methodologies
1. Monte Carlo Simulation (Sampling With Replacement)


Panel A shows the bootstrap Monte Carlo simulation: trade returns were resampled with replacement to generate 10,000 equity trajectories.
Panel B shows the distribution of those 10,000 final equity values. The distribution is right-skewed and exhibits excess kurtosis, implying that extreme outcomes are more probable than under a standard Gaussian distribution.
It is also worth noting that every final equity value in this sample set finished above the starting equity of $1,000.
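The resampling procedure can be sketched as follows, assuming the backtest yields a vector of fractional per-trade returns; the function name, seed, and starting equity of $1,000 (taken from the text) frame an illustrative implementation, not the project's code.

```python
import numpy as np

def bootstrap_final_equities(trade_returns, n_sims=10_000, start=1000.0, seed=0):
    """Bootstrap Monte Carlo: resample per-trade returns with replacement
    and compound each resampled sequence into a final equity value."""
    rng = np.random.default_rng(seed)
    n = len(trade_returns)
    # Draw all resampled trade sequences at once: shape (n_sims, n).
    samples = rng.choice(trade_returns, size=(n_sims, n), replace=True)
    return start * np.prod(1.0 + samples, axis=1)
```

Plotting a histogram of the returned array reproduces the kind of final-equity distribution shown in Panel B.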
2. Perturbation Injection

To further analyze strategy stability, controlled perturbations were injected into the price series by adding small randomized deviations to the closing prices. This simulates price inaccuracies, slippage, price-pattern distortions, and other effects not captured by basic backtesting. While the injections reduced the Sharpe ratio by roughly 12%, overall risk-adjusted performance remained strong. This sensitivity analysis reinforced confidence in profitability under noisy conditions.
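A minimal version of the injection step, assuming multiplicative Gaussian noise on closes; the 0.1% noise magnitude is an illustrative assumption, as the report does not state the deviation size used.

```python
import numpy as np

def perturb_closes(close, sigma=0.001, seed=0):
    """Inject controlled perturbations into closing prices: each close
    is scaled by (1 + eps) with eps ~ N(0, sigma). sigma=0.001 (0.1%)
    is an illustrative magnitude, not the study's actual value."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=len(close))
    return close * (1.0 + eps)
```

Re-running the backtest on many perturbed series and comparing Sharpe ratios gives the sensitivity figure quoted above.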
3. Parameter Optimization
To evaluate parameter stability and guard against overfitting, the strategy's parameters were systematically varied and analyzed for overall performance. Rather than optimizing for a single parameter set, the objective was to identify broad patterns and regions of stability.

As shown in the heat maps of Sharpe ratio and returns, strong results were not confined to a single parameter set but formed a broader trend, increasing confidence in the strategy's robustness.
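The sweep behind such heat maps can be sketched as a simple grid evaluation; `run_backtest` and the fast/slow length parameters are hypothetical stand-ins for the strategy's actual parameters.

```python
import numpy as np

def sharpe_grid(run_backtest, fast_lengths, slow_lengths):
    """Evaluate a backtest over a grid of parameter pairs and return a
    Sharpe-ratio matrix suitable for a heat map.

    run_backtest -- hypothetical callable (fast, slow) -> Sharpe ratio
    """
    grid = np.full((len(fast_lengths), len(slow_lengths)), np.nan)
    for i, f in enumerate(fast_lengths):
        for j, s in enumerate(slow_lengths):
            if f < s:                      # skip ill-ordered pairs
                grid[i, j] = run_backtest(f, s)
    return grid
```

Stability then shows up as a contiguous region of good cells rather than one isolated maximum.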
Assumptions
This backtesting assumed frictionless order execution at closing prices, with no explicit modelling of slippage, commissions, or latency. All capital is allocated to each subsequent trade, implying fractional position sizing is available.
Additionally, it was assumed that both long and short positions were possible. Order signalling and execution always occur at the closing price, so no intra-bar signalling or stop-losses are modelled. The process also assumed data integrity and completeness, implying no survivorship bias, as well as stationarity of the return distribution.
Risks and Limitations

To obtain a realistic estimate of maximum drawdown, a batched Monte Carlo framework was employed. A maximum drawdown is the largest aggregate loss incurred over a sequence of trades before equity recovers to a previous high. Since the individual drawdowns from each simulation did not follow a normal distribution, the Central Limit Theorem (CLT) was applied to the distribution of batched maximum drawdowns, enabling a probabilistic estimate. The mean maximum drawdown came out to 11.66%.
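The batching step can be sketched as follows: compute a max drawdown per simulated equity curve, average them in fixed-size batches so the batch means are approximately normal by the CLT, and report the mean with a standard error. The batch size of 50 is an illustrative assumption, as the report does not specify one.

```python
import numpy as np

def max_drawdown(equity):
    """Maximum peak-to-trough decline of an equity curve, as a fraction."""
    peaks = np.maximum.accumulate(equity)
    return np.max((peaks - equity) / peaks)

def batched_mean_drawdown(equity_curves, batch_size=50):
    """Estimate the mean max drawdown and its standard error via
    batch means (batch size is illustrative)."""
    dds = np.array([max_drawdown(e) for e in equity_curves])
    n_batches = len(dds) // batch_size
    means = dds[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    return means.mean(), means.std(ddof=1) / np.sqrt(n_batches)
```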
To reduce risk, a blanket exit condition serves as a stop-loss proxy for all four position types. Recent historical analysis of NVDA 15-minute bars indicated that 95 percent of bars closed within ±1.2% of the open price. Any move exceeding 1.2% against the current position causes the system to exit that position immediately. This threshold was kept static throughout backtesting, and consequently may not adapt adequately to cyclical volatility conditions observed in the market.
It is also important to note that the historical backtesting scope was constrained by the data available from the market data subscription. Consequently, all backtesting was conducted over a relatively narrow time horizon, so larger macroeconomic events and regime changes may not be accounted for.
Too Long; Didn't Read
This project implements an end-to-end quantitative trading pipeline spanning data intake, signal generation, position management, execution logic, and trade logging. The backtesting methodologies evaluated strategy robustness rather than maximizing in-sample returns. Evaluation techniques included parameter sensitivity analysis, Monte Carlo resampling, perturbation stress testing, and drawdown distribution modelling. Assumptions such as frictionless execution and finite data windows are acknowledged, and statistical tools were applied to quantify realistic downside expectations. The result is not merely a high-performing backtest, but a systematically stress-tested framework designed with deployment and model risk in mind.