Dilip Kumar Astik
Independent AI Investment Risk Assessor

Questions Boards Commonly Ask

Boards are accountable for capital allocation under uncertainty—not for technical elegance.

1. Is this a technical audit of the AI system?

No. This is an investment risk assessment, not a code or architecture review. The assessment examines whether the capital committed to an AI initiative is justified, based on problem formulation, evaluation discipline, governance structures, and decision clarity.

2. How is this different from hiring consultants?

Consultants are typically engaged to make the project succeed. This assessment answers a different question:

Should capital continue to be committed at all?

The role is independent of delivery timelines, implementation ownership, and vendor success.

3. Does this assessment decide Go / No-Go?

No. The assessment provides structured findings, risk classification, and evidence-based recommendations. Final decisions remain with the board and management.

4. What kinds of decisions does the assessment support?

  • Go / No-Go — before committing capital to an AI initiative
  • Continue / Stop — when an ongoing project shows ambiguous or stalled progress

5. What evidence is reviewed?

The assessment relies on existing artifacts: project documentation, evaluation reports, governance records, and interviews with technical and business stakeholders. No new system development is required.

6. How early can this be used?

As early as concept approval. Early assessments often catch mis-scoped initiatives, unrealistic success criteria, and evaluation designs that mask risk before capital is committed.

Late assessments tend to be more decisive—but more costly.

7. What if the assessment recommends stopping?

A Stop recommendation is not a failure judgment. It signals that the current approach is not economically viable, and that continued capital deployment increases risk without commensurate learning. Stop decisions often preserve capital and enable reframing under clearer assumptions.

8. How is independence maintained?

  • No implementation services
  • No performance incentives
  • No success-based fees
  • No downstream delivery roles

The assessment concludes with recommendations only—execution remains separate.

9. Can this apply outside financial markets?

Yes. The core elements address universal AI investment risks: problem formulation, evaluation validity, governance clarity, and incentive alignment. The framework stands on mechanisms, not domain-specific anecdotes.

10. How does one case study support a broader framework?

The framework derives from failure mechanisms observed repeatedly across seven system versions in an adversarial environment under strict capital constraints.

These mechanisms—misaligned incentives, invalid metrics, weak termination criteria—recur across AI initiatives regardless of industry.

11. How long does an assessment take?

Most assessments are completed within a few weeks, depending on project complexity and documentation maturity. The objective is decision clarity, not extended engagement.

12. What does the board receive?

A structured report covering identified risk patterns, evidence gaps, governance concerns, and decision-ready recommendations—designed to support board-level discussion, not technical debate.

13. Is this relevant if the project seems to be progressing?

Yes. Many AI initiatives show surface-level progress while accumulating hidden risks. The assessment examines whether progress reflects real learning and whether continued investment remains justified.

14. One-time or ongoing?

Both are possible. Boards often engage once for a major Go / No-Go decision, then again when a Continue / Stop inflection point arises. The framework is designed for decision moments, not continuous oversight.


This assessment supports capital stewardship—systematically, independently, and transparently.