Dilip Kumar Astik
Independent AI Investment Risk Assessor

"My mission is to equip boards and fiduciaries with a new lens of capital rationality where technical metrics no longer suffice as evidence of value."

Forged in the Toughest Terrain of AI and Finance

For over three decades, my professional life was shaped by one discipline above all else: risk-aware decision making.

I began my career as a Chartered Accountant, leading finance functions in manufacturing and software companies, and later advising startups and listed enterprises. As CFO and senior finance leader, I was responsible for financial planning, budgeting, statutory reporting, and governance. I worked with boards, audit committees, and leadership teams where numbers were not abstractions but carried consequences.

Along the way, I conducted IFRS training programs across India, Mauritius, Dubai, and Sharjah, and helped multiple listed companies navigate complex IFRS implementation journeys—translating standards into operational reality.

For most of my career, finance rewarded skepticism, discipline, and respect for uncertainty.

When Finance Became a Spectator

Around 2020, I observed a troubling shift.

Artificial intelligence was rapidly transforming trading and decision-making. Quantitative teams were building increasingly sophisticated models—but often without deep financial risk intuition embedded in their design. Systems looked compelling in presentations and backtests, yet behaved in ways that raised serious concerns under real market conditions.

Executives were being asked to approve AI investments based on metrics that appeared rigorous but often failed to answer the most important question:

Will this system behave sensibly when reality diverges from assumptions?

Rather than critique this shift from the sidelines, I decided to understand it deeply enough to challenge it responsibly.

I returned to structured learning through MIT Professional Programs—studying machine learning with Python, time-series analysis, and foundational computer science. My objective was not to become another data scientist, but to build a bridge between financial judgment and AI implementation.

That bridge would soon be tested—hard.

The 98% Win Rate That Revealed the Truth

We began developing a reinforcement learning system for NIFTY options trading, one of the most complex and unforgiving environments an AI system can face.

At one stage, the system achieved a 98.2% win rate during training. Convergence was clean. Metrics were textbook-perfect.

Then we examined the parameters the agent had learned.

It had converged on a 6% stop loss—a value that would trigger repeatedly in a high-volatility options market, exiting trades on noise rather than risk. The agent had optimized perfectly for the training environment while learning nothing about real options trading.
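
To make that failure mode concrete, here is a short, deliberately simplified simulation. The numbers are illustrative assumptions, not the production system's statistics: the option premium is modelled as a driftless random walk, so any drop to the stop level is noise by construction, and a 6% stop is still crossed in most paths.

```python
import random

# Illustrative sketch only (assumed volatility, not real market data): the
# premium follows a driftless random walk with 4% noise per bar, so hitting
# the stop here reflects ordinary noise, not genuine adverse risk.
random.seed(7)

def stop_hit(stop_pct: float, bar_vol: float = 0.04, bars: int = 20) -> bool:
    price = 1.0
    for _ in range(bars):
        price *= 1.0 + random.gauss(0.0, bar_vol)  # pure noise, zero drift
        if price <= 1.0 - stop_pct:                # stop-loss level breached
            return True
    return False

trials = 10_000
for stop in (0.06, 0.20, 0.40):
    hit_rate = sum(stop_hit(stop) for _ in range(trials)) / trials
    print(f"{stop:.0%} stop: triggered by noise alone in {hit_rate:.0%} of paths")
```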

The system wasn't broken. It had exposed a flaw in our problem definition.

That moment launched a journey across seven versions, each revealing a different failure mode that AI systems encounter when financial reality is insufficiently encoded.

V1–V3: Adding more features (97 of them) amplifies overfitting, not learning.
V4–V5: Unconstrained action spaces lead to mathematically optimal but financially dangerous behavior (see the sketch after this list).
V6–V6.2: No amount of tuning can compensate for flawed state representation. Result: 42% win rate—worse than random.
V7: Success came only after reframing the question—not "Can AI trade?" but "Can AI optimize how we trade?"
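
One way to encode the V4–V5 lesson is to constrain the action space itself rather than hope the reward function discourages dangerous choices. The sketch below is illustrative only; the bounds, names, and numbers are assumptions for exposition, not the parameters of the actual system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionBounds:
    # Assumed, illustrative limits, set by risk judgment before any training run.
    stop_loss: tuple = (0.20, 0.60)      # stop loss as a fraction of premium
    position_frac: tuple = (0.00, 0.02)  # fraction of capital risked per trade

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def constrain(raw_stop: float, raw_frac: float, b: ActionBounds) -> tuple:
    """Project whatever the policy emits into the financially admissible region."""
    return clamp(raw_stop, *b.stop_loss), clamp(raw_frac, *b.position_frac)

# The 6% stop an early agent might "prefer" is lifted to the 20% floor,
# and an oversized position is capped at 2% of capital:
print(constrain(raw_stop=0.06, raw_frac=0.05, b=ActionBounds()))  # -> (0.2, 0.02)
```

The particular numbers matter less than the principle: the admissible region is fixed by financial judgment before optimization begins, so the agent cannot discover its way into behavior no risk committee would defend.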

The outcome was not just a validated system where out-of-sample performance exceeded training results. It was something far more valuable: a map of the AI failure terrain—earned in the toughest possible environment.

Why This Experience Generalizes

Reinforcement learning in options trading represents a worst-case testing ground for AI systems:

  • Non-stationarity — Markets shift; models trained on history face different futures
  • Adversarial dynamics — Markets adapt and react to participants
  • Path dependence — Sequence of decisions matters, not just final outcomes
  • Asymmetric risk — Losses compound faster than gains accumulate
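
The last point is worth one line of arithmetic, because it is the mechanism behind the asymmetry: the gain required to recover a loss grows faster than the loss itself.

```python
# Break-even arithmetic behind "losses compound faster than gains accumulate".
for loss in (0.10, 0.30, 0.50):
    print(f"a {loss:.0%} loss needs a {loss / (1 - loss):.0%} gain to get back to even")
# 10% -> 11%, 30% -> 43%, 50% -> 100%
```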

If governance, metrics, incentives, or assumptions are weak, this environment exposes them quickly.

The assessment frameworks I apply today were not designed in theory. They were forged under stress.

Failure as Curriculum

I document what most teams quietly bury:

  • Metrics that misled
  • Versions that regressed
  • Evaluation methods that hid reality for weeks

In finance, failures are rarely discussed. In AI research, failed experiments don't get published.

This silence is expensive.

Treating failure as curriculum provides foresight—exposing the patterns that quietly destroy AI investments before they surface in production.

Risk as a First Principle

To a Chartered Accountant, risk is not abstract; managing it is what keeps institutions solvent.

I apply the same discipline to AI:

  • Independent validation gates
  • Explicit success and termination criteria
  • Monitoring frameworks that detect drift (a minimal example follows this list)
  • Rollback and containment planning
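
As one concrete illustration of the drift item, with bucket edges, sample values, and thresholds that are assumptions rather than a prescribed standard, a check as simple as the Population Stability Index compares the live distribution of a model input or score against the distribution it was validated on:

```python
import math

# Minimal sketch of one drift check, not a full monitoring framework.
# Bucket edges, dummy data, and thresholds are illustrative assumptions.

def psi(reference: list, live: list, edges: list) -> float:
    """Population Stability Index between a reference sample and a live sample."""
    def shares(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1            # bucket index for v
        return [max(c / len(values), 1e-6) for c in counts]   # floor avoids log(0)

    return sum((l - r) * math.log(l / r)
               for r, l in zip(shares(reference), shares(live)))

# Dummy data: scores the model was validated on versus a shifted live sample.
validated = [0.2, 0.4, 0.5, 0.6, 0.8] * 200
observed = [0.5, 0.7, 0.8, 0.9, 0.95] * 200
print(f"PSI = {psi(validated, observed, edges=[0.25, 0.5, 0.75]):.2f}")
# Common rule of thumb: < 0.10 stable, 0.10-0.25 investigate, > 0.25 escalate.
```

The statistic itself matters less than the governance around it: the threshold, the owner, and the predefined response are agreed before deployment, not after drift appears.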

I approach AI systems the way I approach financial governance: assume errors exist, design for detection, and never confuse elegance with reliability.