Why AI Requires a Different Kind of Oversight
Traditional governance models evolved for deterministic systems—software that behaves predictably under defined rules. AI systems, by contrast, are probabilistic, adaptive, and often opaque even to the teams that build them.
This creates a governance gap:
- Technical teams report progress through proxy metrics
- Capital exposure accumulates silently
- Boards are asked to approve continuation without decision-grade evidence
The result is not technical failure, but capital erosion masked as progress.
My Professional Foundation
I bring together three perspectives that rarely coexist:
Financial Governance
Chartered Accountant with over three decades of experience in capital allocation, board governance, audit committees, and risk oversight. In those settings, numbers were not abstractions; they carried consequences.
Technical Literacy
Formal training in machine learning and AI through MIT Professional Programs—not to build systems, but to evaluate their claims.
Failure-Grounded Experience
An 18-month reinforcement learning implementation in options trading, documented across seven materially different system versions.
This combination allows me to assess AI initiatives as capital decisions, not technical experiments.
What One Extreme Case Revealed
The reinforcement learning project that anchors my work achieved headline success—including a 98.2% win rate during training.
It also revealed something more important:
Systems can converge perfectly while learning behavior that is financially irrational in reality.
The agent had converged on a 6% stop loss, a threshold that would trigger repeatedly in a high-volatility options market, exiting trades on noise rather than genuine risk. The system wasn't broken. It had exposed a flaw in our problem definition.
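The failure mode can be sketched numerically. The short Monte Carlo simulation below is an illustration, not data from the actual system: it assumes a zero-drift random walk with 3% daily volatility over a 10-day hold (hypothetical parameters chosen for illustration) and estimates how often a fixed 6% stop is hit by pure noise.

```python
import math
import random

def fraction_stopped(stop_pct=0.06, daily_vol=0.03, days=10,
                     n_paths=20_000, seed=7):
    """Estimate how often a fixed stop loss triggers on pure noise.

    The simulated log-price path has zero drift, so there is no real
    adverse information in it: every stop-out counted here is a
    reaction to volatility alone, not to risk.
    """
    rng = random.Random(seed)
    stopped = 0
    for _ in range(n_paths):
        price = 1.0
        for _ in range(days):
            # One day's zero-drift lognormal move.
            price *= math.exp(daily_vol * rng.gauss(0.0, 1.0))
            if price <= 1.0 - stop_pct:
                stopped += 1
                break
    return stopped / n_paths

print(f"{fraction_stopped():.0%} of zero-drift paths hit the 6% stop")
```

Because the simulated path carries no adverse drift, every stop-out is volatility masquerading as risk—the same behavior the training metrics were rewarding.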
Across seven versions, we observed:
- Metrics improving while real-world validity deteriorated
- Confidence increasing even as capital risk escalated
- The option to stop becoming harder to exercise precisely when it became necessary
The system was not out of control. It was perfectly controlled by the wrong evaluative lenses.
Independence Is the Product
I do not design AI systems.
I do not sell implementation.
I do not benefit from continuation.
This structural independence is not a disclaimer—it is the mechanism that preserves credibility.
Engagements are accepted only when:
- Capital exposure is material
- Decision authority exists at board or C-suite level
- The objective is decision clarity, not validation
This allows difficult conclusions to be reached before momentum and sunk cost remove the option to stop.
What Boards Can Decide Earlier
The practical outcome of independent assessment is not prediction—it is decision clarity.
Boards gain the ability to:
- Distinguish technical progress from economic validity
- Identify when proxy metrics are masking structural risk
- Decide not only when to start, but when to stop
These decisions are rarely dramatic. They are simply made earlier—when capital is still preservable.