Before You Sign That AI Contract…
Glittery features. Hidden liability. Here’s what smart CFOs ask before someone clicks “agree.”
Today’s Sponsor: NetSuite
Building the Finance Org: Building a world-class finance org starts with the people. I collaborated with NetSuite on best practices for building the finance org, including my framework for when and who to hire on the finance/accounting team.
Let’s be honest. AI is now showing up in your contracts like glitter after a toddler’s birthday party. It’s everywhere, it wasn’t invited, and nobody’s quite sure who brought it.
Most of the time, Legal isn’t even the one pushing it. It’s product. Or marketing. Or a vendor who swears their AI feature is “just an enhancement.” Everyone wants to integrate AI. No one wants to read the fine print.
If you’re a CFO, that’s a problem. Because AI-related contract language isn’t just legal noise. It’s quiet, cumulative risk. The kind that creeps into your IP rights, your data strategy, your compliance obligations, and eventually your bottom line.
These clauses don’t just shape legal exposure; they determine whether you can scale cleanly, pass diligence, or avoid being blindsided by your own tooling.
Here are four AI contract clauses worth asking Legal about before someone clicks “accept.”
💡This is a guest post from a new newsletter, OnlyLawyer. The writer is one of the best tech lawyers you can find. If you manage the legal department or are a lawyer, you should subscribe.
1. Training Rights
Sure, go ahead and use our data to improve your billion-dollar model. We’re cool with that.
This clause is usually disguised as something friendly and helpful, like:
“Customer grants Provider a royalty-free license to use Customer Data to improve the Services, including through model training and analytics.”
Translation: Your data is about to become part of someone else’s product. Permanently.
Let’s break that down:
You’re giving away value without compensation.
The data being “used for training” might include proprietary info, customer records, or internal metrics.
You might be on the hook for privacy or data transfer obligations you didn’t even know you triggered.
And the best part? Once the model is trained on your data, you can’t untrain it.
Real example: A startup signed a cheap SaaS tool with a “training rights” clause buried in the TOS. Six months later, the vendor’s pitch deck bragged about model performance using datasets “sourced from real customers.” Guess who didn’t see that coming?
Ask Legal:
Can we restrict model training to fully anonymized, aggregated data only?
Can we opt out of this clause entirely?
Are we sure this aligns with our privacy policy, customer commitments, and regulator expectations?
It’s one thing to get value from software. It’s another to be the value. Don’t be the value.
2. IP Ownership of Model Outputs
You used the tool, but we own what it created. Thanks for playing.
Inputs are yours. Outputs…not necessarily.
Some AI vendors sneak in language that gives them rights over anything the model generates, even if it’s based on your data, your use case, and your business.
Why does that matter?
Because your team is probably using model outputs for:
Internal dashboards
Investor decks
Product decisions
Public-facing content
Customer recommendations
If you don’t own it (or worse, if they do), you’ve just handed over commercial leverage and possibly even tainted your own IP.
Real example: A growth-stage company used an AI research tool to generate custom market reports. They presented the data to investors. A week later, a competitor posted an eerily similar report with the same vendor attribution. Turns out the vendor retained reuse rights…Oops.
Ask Legal:
Do we own all model outputs, exclusively and without limitation?
Can the vendor reuse, publish, or claim rights over anything generated from our usage?
Are we safe using this in commercial materials or deliverables?
Translation: You get the flashlight. They keep the gold.
3. AI-Specific Indemnities
If this thing breaks something or gets you sued…good luck.
Most indemnity clauses cover:
Third-party IP claims
Breaches of confidentiality
Maybe data protection
What they don’t usually cover? The weird, unpredictable, sometimes completely bananas things AI models do.
Here’s what that includes:
Hallucinations (i.e., confidently wrong information)
Copyrighted content pulled from training data
Biased or discriminatory output
Regulatory violations from misclassification or misuse
The vendor will almost always say: “That’s not our fault. You used it.”
Real example: A sales team used AI-generated outbound messaging to target customers. One email referenced a customer’s prior purchases…except they’d never bought anything.
Turns out the model made it up. The company received a formal complaint under consumer deception laws. The vendor shrugged.
Ask Legal:
Are we indemnified for any legal or financial harm caused by model outputs?
Are there exclusions we should be worried about (e.g., “your use of the Services”)?
Is there a cap on their liability that would make this clause meaningless?
Ask what happens if the AI says something dumb and you get sued. If Legal’s response starts with “technically…”, start running.
4. Audit & Transparency Rights
We’re not going to tell you how the sausage is made. But please trust the sausage.
AI tools are rarely built from scratch. They’re stacks. Layers. Franken-models of third-party APIs, subprocessors, and open-source tools duct-taped together with a UI.
That means when something breaks (ethically, legally, or otherwise) you need to know:
Who built what
What data went in
How the model was tuned
Whether they can show any of it
And if the vendor says: “We can’t give you that information, it’s proprietary”…
…you’re going to love explaining that to your auditors, board, or the FTC.
Real example: A customer filed a deletion request under GDPR. The legal team discovered a third-party tool had already used their data to generate insights stored in downstream systems.
No audit rights. No recourse. Just legal scramble mode.
Ask Legal:
Do we have the right to request model documentation, data lineage, and third-party subprocessors?
Is there an audit mechanism, especially for compliance-related issues?
What happens if the vendor updates the model in a way that introduces new risk?
Sneaky clause to look for: “We reserve the right to update or modify the Services at any time without notice.”
Translation: Good luck keeping track of what’s changed.
To Recap…
What Smart CFOs Actually Do
AI in contracts isn’t coming. It’s already here.
And it’s buried inside vendor agreements that nobody’s reviewing closely, until it’s too late. If you’re a CFO, you don’t need to become the General Counsel. But you do need to know when your signature means:
Giving away proprietary data
Losing ownership
Accepting compliance risk
Or flying blind
The best CFOs don’t try to solve this alone. They pull Legal in early. They ask the hard questions before someone clicks “accept.” They treat AI contracts like operational risk, because that’s exactly what they are.
Footnotes:
Sign up for the OnlyLawyer Newsletter!! Whether you own legal or you’re a lawyer, you should subscribe. All things legal that tech companies should know.
The Finance Org Chart (by OnlyCFO): Get the deep dive I did with NetSuite