Dilip Kumar Astik
Independent AI Investment Risk Assessor

The Intellectual Foundation

These books are not thought leadership. They are documented evidence of how AI systems actually fail under pressure.

Book 1: The Narrative

Forthcoming

The 98% Win Rate That Failed

What Seven Versions of Wrong Turns Taught Us About RL for Trading

This book explains why AI systems that look successful on paper often collapse in reality—and how leadership teams can recognize those risks before capital is committed.

The Core Paradox

We celebrated when our reinforcement learning system achieved a 98.2% win rate in training. Then we discovered the fatal flaw: it had learned a 6% stop loss that exited constantly on noise. The system was mathematically impressive—and financially useless.
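
To make the paradox concrete, here is a minimal sketch using hypothetical numbers, invented purely for illustration and not taken from the book's actual trade statistics: a strategy can win roughly 98% of its trades and still lose money once the size of the rare losses and transaction costs are taken into account.

```python
# Hypothetical illustration: a high win rate does not by itself imply profitability.
# All figures below are invented for this sketch, not the system described in the book.

win_rate = 0.98          # fraction of trades that close with a small gain
avg_win = 0.001          # +0.1% return on a typical winning trade
avg_loss = 0.06          # -6% return on a typical losing trade (stopped out)
cost_per_trade = 0.0002  # assumed 2 bps round-trip commission and slippage

# Expectancy: the average return per trade, before and after costs
expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
print(f"Expectancy per trade (gross): {expectancy:+.4%}")
print(f"Expectancy per trade (net):   {expectancy - cost_per_trade:+.4%}")

# Compounding the average net outcome over many trades
trades = 1000
equity = (1 + expectancy - cost_per_trade) ** trades
print(f"Equity multiple after {trades} trades: {equity:.2f}x")
```

With these assumed numbers the headline win rate is real, yet the expectancy per trade is negative and the account shrinks over a thousand trades. The governance question is therefore never the win rate alone, but the payoff asymmetry and costs hiding behind it.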

What This Book Documents

  • The Seven-Version Journey: From over-engineering (V1–V3), to false confidence (V4–V5), to catastrophic regression (V6.2 at 42%), to validated learning (V7)
  • Hidden Failure Modes: Feature leakage, state aliasing, reward hacking, and evaluation bugs that survived weeks of review
  • Counterintuitive Discoveries: Why wide stops, more trades, and flat scaling beat conventional intuition
  • Leadership Lessons: When to invest in RL, how to structure skeptical reviews, and when stopping is the correct decision

This is not a success story. It is a map of dead ends, written so others don't have to discover them the hard way.

Written for: Executives & Board Members · CFOs, CROs, and Risk Leaders · Technology Leaders overseeing complex AI initiatives

Book 2: The Principles

In Preparation

Principles of AI Capital Risk

How Boards Should Think About Go/No-Go and Continue/Stop Decisions

This book addresses a different question: not how AI systems fail technically, but how boards and investment committees should reason about AI capital allocation under uncertainty.

The Problem It Addresses

AI initiatives often appear rigorous—detailed architectures, impressive metrics, confident projections. Yet boards lack frameworks for evaluating whether these investments deserve continued capital, or whether stopping is the more prudent decision.

What This Book Provides

  • Mental Models: How to think about AI risk at the governance level—without requiring technical depth
  • Decision Logic: Frameworks for Go/No-Go decisions before capital is committed, and Continue/Stop decisions when projects stall
  • Governance Failures: Patterns of oversight breakdown observed in real AI initiatives
  • Capital Consequences: How AI failures translate to financial exposure and organizational risk

This book defines decision logic, not technical methodology. It is written for those accountable for capital allocation, not system implementation.

Written for: Board Members · Independent Directors · CEOs and CFOs · Investment Committees · Risk & Governance Professionals

How These Books Relate

Book 1 provides the empirical foundation—the documented failures, the specific versions, the lessons extracted under pressure. It establishes credibility through transparency.

Book 2 transforms those technical experiences into governance principles. It abstracts from the specifics of reinforcement learning to address the universal challenge: how boards should think about AI investments when evidence is ambiguous and momentum is strong.

Together, they represent two levels of the same insight: AI systems can optimize metrics perfectly while guaranteeing capital destruction—and governance must be designed to detect this before it becomes irreversible.

Supplemental Work

Case Studies

Focused analyses examining publicly documented AI failures across sectors, each mapped to specific assessment dimensions.

Articles & Essays

Short analytical pieces examining discrete elements of AI investment risk, including metric-driven misjudgment, sunk-cost dynamics, and early termination as a governance outcome.

"In AI, the most valuable asset is not the model that worked—it's the record of what failed, why it failed, and how early it could have been detected."