Dilip Kumar Astik
Independent AI Investment Risk Assessor
Chartered Accountant · MIT Professional Courses in Data Science

An Independent Perspective on AI Capital Risk

My role is not to promote AI adoption, accelerate implementation, or validate technical ambition. It is to independently assess whether AI initiatives deserve capital—before and during execution.

Why AI Requires a Different Kind of Oversight

Traditional governance models evolved for deterministic systems—software that behaves predictably under defined rules. AI systems, by contrast, are probabilistic, adaptive, and opaque by design.

This creates a governance gap:

  • Technical teams report progress through proxy metrics
  • Capital exposure accumulates silently
  • Boards are asked to approve continuation without decision-grade evidence

The result is not technical failure, but capital erosion masked as progress.

My Professional Foundation

I bring together three perspectives that rarely coexist:

Financial Governance
Chartered Accountant with over three decades of experience in capital allocation, board governance, audit committees, and risk oversight. I have worked alongside boards and leadership teams where numbers were not abstractions; they carried consequences.

Technical Literacy
Formal training in machine learning and AI through MIT Professional Programs—not to build systems, but to evaluate their claims.

Failure-Grounded Experience
An 18-month reinforcement learning implementation in options trading, documented across seven materially different system versions.

This combination allows me to assess AI initiatives as capital decisions, not technical experiments.

What One Extreme Case Revealed

The reinforcement learning project that anchors my work achieved headline success—including a 98.2% win rate during training.

It also revealed something more important:

Systems can converge perfectly while learning behavior that is financially irrational in reality.

The agent had optimized for a 6% stop loss—a value that would trigger repeatedly in a high-volatility options market, exiting trades on noise rather than risk. The system wasn't broken. It had exposed a flaw in our problem definition.
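Why a 6% stop sits at noise level in a volatile options book can be shown with a minimal simulation. All numbers below (5% daily volatility, ten-day horizon, the drift) are illustrative assumptions for the sketch, not parameters from the actual system:

```python
import numpy as np

rng = np.random.default_rng(42)

def stop_hit_rate(stop_pct, daily_vol, drift, horizon, n_trials):
    """Fraction of simulated trades that breach the stop loss within
    the holding horizon, modeled as a geometric random walk."""
    rets = rng.normal(drift, daily_vol, size=(n_trials, horizon))
    paths = np.exp(np.cumsum(rets, axis=1))      # price relative to entry
    return (paths.min(axis=1) <= 1 - stop_pct).mean()

# Hypothetical setting: a position that routinely swings ~5% a day,
# with a mildly positive expected drift.
tight = stop_hit_rate(stop_pct=0.06, daily_vol=0.05, drift=0.002,
                      horizon=10, n_trials=100_000)
wide  = stop_hit_rate(stop_pct=0.20, daily_vol=0.05, drift=0.002,
                      horizon=10, n_trials=100_000)
print(f"6% stop hit rate:  {tight:.0%}")   # a large majority of trades
print(f"20% stop hit rate: {wide:.0%}")    # far fewer
```

Under these assumed numbers, the tight stop fires on most trades regardless of whether the thesis was right: the exit decision is dominated by noise, exactly the failure mode described above.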

Across seven versions, we observed:

  • Metrics improving while real-world validity deteriorated
  • Confidence increasing even as capital risk escalated
  • Stopping becoming harder precisely when it became necessary

  • V1–V3: More features (97 of them) feed overfitting, not learning.
  • V4–V5: Unconstrained action spaces lead to mathematically optimal but financially dangerous behavior.
  • V6–V6.2: No amount of tuning can compensate for flawed state representation. Result: a 42% win rate, worse than random.
  • V7: Success came only after reframing the question—not "Can AI trade?" but "Can AI optimize how we trade?"
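The V4–V5 lesson can be made concrete with a hard constraint layer applied outside the learning loop, so that no agent output, however "optimal" under the reward, can leave the financially permissible region. The field names and bounds here are hypothetical illustrations, not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class ActionBounds:
    """Hard financial constraints enforced outside the learning loop."""
    max_position_pct: float = 0.10   # never risk >10% of capital on one trade
    min_stop_pct: float = 0.10       # stop must sit outside routine volatility
    max_stop_pct: float = 0.30

def constrain(action, bounds=ActionBounds()):
    """Clamp a raw agent action into the permissible region.
    `action` is a dict such as {'position_pct': ..., 'stop_pct': ...}."""
    return {
        "position_pct": min(max(action["position_pct"], 0.0),
                            bounds.max_position_pct),
        "stop_pct": min(max(action["stop_pct"], bounds.min_stop_pct),
                        bounds.max_stop_pct),
    }

raw = {"position_pct": 0.45, "stop_pct": 0.06}   # reward-optimal, capital-dangerous
print(constrain(raw))   # {'position_pct': 0.1, 'stop_pct': 0.1}
```

The design point is that these bounds are governance decisions, not learned values: the agent may optimize freely inside them, but it cannot negotiate them away.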

The system was not out of control. It was perfectly controlled by the wrong evaluative lenses.

Independence Is the Product

I do not design AI systems.
I do not sell implementation.
I do not benefit from continuation.

This structural independence is not a disclaimer—it is the mechanism that preserves credibility.

Engagements are accepted only when:

  • Capital exposure is material
  • Decision authority exists at board or C-suite level
  • The objective is decision clarity, not validation

This allows difficult conclusions to be reached before momentum and sunk cost remove the option to stop.

What Boards Can Decide Earlier

The practical outcome of independent assessment is not prediction—it is decision clarity.

Boards gain the ability to:

  • Distinguish technical progress from economic validity
  • Identify when proxy metrics are masking structural risk
  • Decide not only when to start, but when to stop

These decisions are rarely dramatic. They are simply made earlier—when capital is still preservable.

AI does not fail because it is complex.

It fails because governance treats learning systems as if they were finished products.

My work exists to ensure that capital decisions in AI are made with the same rigor, skepticism, and independence that boards apply to every other material investment.