August 14, 2025
I have been having lots of conversations about the powers – and limitations – of AI when it comes to legal document analysis. One camp believes AI is the solution to their legal analysis needs – a panacea offering the holy grail of "upload the document and get all the answers."
The other camp sees AI as a tool for document analysis: yes, it can dramatically speed up the process, but only if it knows what it's doing first. AI without conceptual context is making us dumber, as The Telegraph's Andrew Orlowski explains in his recent article, "Why AI heralds a new age of stupidity."
AI can make us lazy, and excessive reliance on these tools absent a conceptual framework can cost us big time. The thing about AI, as I explained to James Crombie on Bloomberg's The Credit Edge podcast, is that it presents whatever it tells you as fact. It is designed to be persuasive in its answers – yet when pressed, it will readily admit that it was wrong.
You can't interrogate AI's answers if you don't have the conceptual framework to do so. And you will get far better answers from an AI that shares your conceptual framework, as I proved in a recent experiment inspired by a close friend and colleague's assertion that the newest Claude model might just be good enough to put FLT out of business.
After all, why would someone invest in training for a skill that AI can perform with ease? It's a great question – so I decided to put AI to the test.
Using the model my friend referenced – Claude Sonnet 4 – I provided it with a Description of Notes and asked it to highlight the key covenant risks. The answer it gave would look legitimate to a layperson and might even appear right on point. To an expert, however, it was plainly too surface-level to be useful: the AI identified basic covenant categories and flagged obvious structural issues, but missed the sophisticated gaming tactics and calculation vulnerabilities that create real risk.
So I trained the AI on Fox Legal Training's Leveraged Finance Covenant Training course and asked the question again. The second answer was dramatically different – far more thorough, nuanced and clear – and the risk assessments were, for the most part, spot-on. That is a judgment I could only make because my own conceptual framework was engaged to interpret the response.
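For anyone curious about the mechanics, here is a minimal sketch of the two-pass setup using the Anthropic Python SDK. The file paths, prompt wording and model ID are illustrative assumptions rather than my exact workflow, and "training" here simply means placing the course material in the model's context before asking the question:

```python
# Two-pass covenant review sketch using the Anthropic Python SDK
# (pip install anthropic). File names below are hypothetical placeholders;
# ANTHROPIC_API_KEY is read from the environment.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # Claude Sonnet 4 model ID at time of writing

QUESTION = "Review this Description of Notes and highlight the key covenant risks."

notes = Path("description_of_notes.txt").read_text()
training = Path("flt_covenant_course_notes.txt").read_text()

# The only variable between the two passes is the context the model sees.
baseline_ctx = f"Document under review:\n\n{notes}"
informed_ctx = f"Training material:\n\n{training}\n\nDocument under review:\n\n{notes}"


def ask(context: str) -> str:
    """Ask the covenant question against the supplied context."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        system=context,
        messages=[{"role": "user", "content": QUESTION}],
    )
    return reply.content[0].text


baseline = ask(baseline_ctx)  # pass 1: the document alone, the generalist read
informed = ask(informed_ctx)  # pass 2: the document plus the training material

print("BASELINE:\n", baseline)
print("\nINFORMED:\n", informed)
```

Holding the question constant and varying only the context isolates the effect of the conceptual framework – exactly the comparison described below.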
Next, I asked Claude: How would you compare what you were able to do in the first review, before I added the files, to what you were able to achieve after having the additional training?
Here is its answer – excerpted, as the full reply was considerably longer:

That's an excellent question that highlights a dramatic difference in analytical capability and sophistication. […]

4. From Reactive to Proactive […]

5. Modern Covenant Erosion Recognition

"This deal has structural subordination risks because it's issued by holding companies. The leverage ratio of 60% seems reasonable. There are some carve-outs that could allow dividends." […]

The modules essentially gave me specialist-level expertise that transformed me from a generalist who could spot obvious issues into a covenant gaming detective. It's the difference between knowing about covenants and thinking like a covenant specialist who has seen every trick in the book. The case studies and technical details created a mental library of patterns and vulnerabilities that completely changed the analytical sophistication I could bring to the deal.
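Mechanically, a follow-up like this simply continues the same conversation. With the SDK, that means replaying the earlier turns before the new question – a sketch that reuses the client, MODEL, QUESTION, informed_ctx and informed values assumed in the earlier snippet:

```python
# Continue the conversation: replay the prior turns, then add the follow-up.
# Assumes client, MODEL, QUESTION, informed_ctx and informed from the sketch above.
follow_up = (
    "How would you compare what you were able to do in the first review, "
    "before I added the files, to what you were able to achieve after "
    "having the additional training?"
)

reply = client.messages.create(
    model=MODEL,
    max_tokens=4096,
    system=informed_ctx,  # keep the training material in context
    messages=[
        {"role": "user", "content": QUESTION},
        {"role": "assistant", "content": informed},  # the model's earlier answer
        {"role": "user", "content": follow_up},
    ],
)
print(reply.content[0].text)
```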
This experiment validates the approach FLT has taken with our AI Covenant Tutor, Foxy – rather than expecting AI to magically understand complex covenant analysis, we've pre-trained it with the same conceptual framework that we teach our clients.
But I didn't stop there. I wanted to know how the AI's enhanced analysis would be interpreted by different audiences. What would the difference in comprehension be between myself as an expert and someone without the conceptual framework that I have, and that FLT imparts to our clients?
I asked this follow-up question: What would the difference be in comprehension of your informed answer by two different people - one who had also been trained on the conceptual framework that I provided, and one who had not been trained and was a layperson in these matters?
Here's its reply – again excerpted:
Excellent question! This highlights the critical importance of shared conceptual frameworks in technical communication. Let me break down how each audience would interpret my covenant analysis:
Risk Assessment Framework: A trained professional would instantly validate my HIGH risk rating because they understand: […]
My Analysis: "€171.7 million RP basket with no default blocker creates significant value leakage risk"
Trained Professional Reads: "Even if the company is struggling financially, management can still pay €171.7M to shareholders instead of preserving cash for debt service - that's a major red flag"
Layperson Reads: "There's some kind of €171.7 million thing that's risky, but I don't understand what 'RP basket' or 'default blocker' means, so I can't assess if this is actually concerning"
My Statement: "Reclassification mechanics allow optimization between debt baskets"
Trained: "Classic Monkey Business tactic - borrower can strategically move debt between categories to maximize capacity"
Layperson: "Something about moving debt around? Why would they want to do that? Is this legal?"
My Statement: "Development financing creates unlimited asset-level leverage risk"
Trained: "Aha – individual properties can be over-leveraged to 60%+ LTV, potentially priming the note collateral and reducing recovery values"
Layperson: "Unlimited sounds bad, but I don't understand what 'asset-level leverage' means or why it affects me as a bondholder"
My Statement: "Track restricted payments capacity utilization"
Trained: "Need to monitor how much dividend capacity is being consumed and whether they're approaching the dangerous thresholds"
Layperson: "I should track something about payments, but I don't know what data to look for or what numbers would be concerning"
What They'd Need: […]

Foundation Building: […]
This highlights a fundamental challenge in specialized fields:
The Reality: My covenant analysis is essentially written in a specialized language that requires the conceptual framework training to decode. Without that training, it's like reading a technical manual in a foreign language - you might catch some general themes, but you miss the critical details that drive the risk assessment.
This is why shared educational frameworks are so valuable in professional contexts - they create a common language that enables sophisticated analysis and communication.
And there you have it – whether we like it or not, finance contracts are written in a specialized language, and so are the analyses of these contracts. Here's the critical insight: training the AI was only half the battle. The human user still needs the same conceptual framework to properly interpret and act on the AI's enhanced analysis.
Without the proper conceptual framework, you're not just wasting time trying to decode AI's technical outputs; you're making decisions based on information you can't properly evaluate. In finance, that's not just inefficient – it's dangerous.
The choice isn't between human expertise and AI capability; it's between informed AI assistance and uninformed guesswork. The same training that makes AI more effective also makes humans better equipped to interpret and act on AI's analysis.
For those ready to bridge this knowledge gap, FLT's Leveraged Finance Covenant Training provides both the human conceptual framework and access to our AI Covenant Tutor, Foxy – ensuring that both you and your AI assistant speak the same sophisticated language of modern finance.