In the first installment of our four-part series introducing CloudQuant Data Liberator, we showed how companies that struggle to access and manage large datasets can save time, money, and resources by using Liberator's zero-ETL architecture, which provides instant, secure access to new datasets, ready for AI queries.
In this post, we're talking about talking. Specifically, talking to Claude, or your AI agent of choice.
Picture this scenario:
A new analyst joins your team. They need to find customer purchase data. They ask around.
Three days later, they're still hunting for the right data. And they haven't even started analyzing it yet.
We built an MCP (Model Context Protocol) server for Data Liberator. This lets AI agents like Claude directly query your data while respecting all your security, entitlements, and access controls.
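Under the hood, an MCP server exposes named tools that the agent invokes over JSON-RPC. As a rough illustration of the shape of that surface (the tool names, catalog, and handler below are hypothetical sketches, not the actual Data Liberator API), here is a dependency-free version in Python:

```python
# Hypothetical sketch of an MCP-style tool surface for dataset discovery.
# Tool names and the catalog contents are illustrative, not the product API.
import json

# Stand-in for the real catalog, already filtered by the caller's entitlements.
CATALOG = {
    "transaction_history": "Customer purchase transactions",
    "customer_profiles": "Demographic and account data",
}

def handle_tool_call(request: str) -> str:
    """Dispatch a JSON-RPC-style MCP tool call to the matching handler."""
    req = json.loads(request)
    name = req["params"]["name"]
    if name == "list_datasets":
        result = sorted(CATALOG)                                  # dataset names
    elif name == "describe_dataset":
        result = CATALOG[req["params"]["arguments"]["dataset"]]   # description
    else:
        raise ValueError(f"unknown tool: {name}")
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The agent never sees SQL engines or connection strings; it sees a small, described tool surface and composes calls to it on its own.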
Here's what that conversation looks like now:
Analyst: "Show me all datasets I have access to."
Claude: You have access to 47 datasets. [Lists the ones related to customers and purchases]
Analyst: "What's in the transaction_history dataset?"
Claude: [Describes the contents of the transaction_history dataset]
Analyst: "Show me total purchases by category for the last month."
Claude: [Executes query, returns formatted results with a chart]
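Behind a request like that last one, the agent issues an MCP `tools/call` message to the server. A hypothetical example of what that message might look like (the `run_query` tool name, dataset name, and SQL are illustrative assumptions, not the actual protocol payload Data Liberator uses):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "run_query",
    "arguments": {
      "dataset": "transaction_history",
      "sql": "SELECT category, SUM(amount) FROM transactions WHERE ts >= date('now', '-1 month') GROUP BY category"
    }
  }
}
```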
Three days of hunting became three minutes of conversation.
This is fundamentally different from hard-coded chatbots or SQL generators: there are no pre-programmed queries. Claude figures out what you need and how to get it.
The MCP server architecture provides identity-aware access and complete auditability.
Full integration with enterprise identity providers means Claude authenticates as you. Your existing security policies, user entitlements, and access controls all apply. Every query Claude makes carries your authenticated identity.
Every query is logged with user context. You know exactly who queried what, when, through which interface—whether it was Claude, a direct API call, or the web UI.
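Concretely, identity propagation plus audit logging amounts to a wrapper around every query. The sketch below is a simplified illustration under assumed names (`run_query`, `AUDIT_LOG`, an in-memory entitlement set); in practice these checks would be delegated to your identity provider and policy engine:

```python
# Hypothetical sketch: every query runs as the authenticated user and
# leaves an audit record. Names and fields are illustrative.
import datetime

AUDIT_LOG: list[dict] = []

def run_query(sql: str, user: str, entitlements: set[str], dataset: str) -> None:
    """Record an audit entry, then execute only if the user is entitled."""
    allowed = dataset in entitlements
    AUDIT_LOG.append({
        "user": user,
        "dataset": dataset,
        "query": sql,
        "interface": "mcp",          # vs. direct API call or web UI
        "allowed": allowed,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"{user} is not entitled to {dataset}")
    # ... hand the SQL off to the query engine here (stubbed in this sketch) ...
```

Because the identity rides along with every call, a query denied to you in the web UI is denied to Claude acting on your behalf, too.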
We tuned the MCP server specifically for AI consumption.
We've been using this internally. Here's what changed:
New team members explore datasets conversationally. Claude reads the descriptions and explains what's available in plain language. No more reading documentation or asking around.
Analysts ask questions without knowing exact column names or table schemas. Claude understands the descriptions and translates natural language into proper queries.
Questions like "compare volatility across these three datasets last month" become simple conversations instead of writing complex SQL across multiple systems.
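For a sense of what such a conversation replaces, here is roughly the computation a "compare volatility across these datasets" request boils down to, sketched in plain Python with made-up dataset names and prices: volatility as the standard deviation of day-over-day returns.

```python
# Illustrative sketch only: dataset names and price series are invented.
import statistics

def volatility(prices: list[float]) -> float:
    """Standard deviation of day-over-day returns for a price series."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

datasets = {
    "equities": [100.0, 101.0, 99.5, 102.0],
    "fx": [1.10, 1.11, 1.09, 1.12],
    "rates": [3.50, 3.52, 3.51, 3.49],
}

# Rank the datasets from most to least volatile over the window.
ranked = sorted(datasets, key=lambda k: volatility(datasets[k]), reverse=True)
```

The point is not that this code is hard to write; it's that the analyst no longer writes it at all, or chases down which system holds each series first.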
Researchers test hypotheses in natural language before writing production code. The feedback loop goes from hours to minutes.
One engineer described it as "having a senior analyst who never sleeps and has perfect memory of every dataset."
The most interesting part? Claude can query multiple datasets and identify patterns humans might miss.
"Are there any unusual correlations between customer demographics and product preferences in the last quarter?" Claude can explore this across datasets, combine results, and present insights you weren't specifically looking for.
While we built this in finance, the pattern applies universally.
The all-too-common scenario at the top of this post plays out every day in organizations sitting on more data than they can actually use. Data Liberator and Claude change that. Your data stays where it lives, your security stays intact, and your team stops waiting. Part 3 of the series is coming soon, but if you're tired of waiting for your data, you don't have to wait for a demo: talk to us.