Security

Veille MCP: secure assistants with tools, not guesses

Josselin Liebe

AI assistants are great for explaining a stack or suggesting code, but when it comes to whether an email is disposable, whether an IP is risky, or whether an IBAN is valid, the model alone is a black box: it can sound plausible and still be wrong on the facts. The Model Context Protocol (MCP) connects the assistant to official tools—in this case the same Veille API your backends use—so answers rest on verified data instead of improvisation.

MCP in two sentences: what is it?

MCP is an open protocol: the client (Cursor, Claude Code, Gemini CLI, OpenAI developer mode, and so on) exposes tool functions with typed parameters to the assistant. The assistant decides when to call them; the MCP server returns structured output. For Veille, each tool maps to a REST API operation: same behavior and billing as ordinary HTTP calls, as described in the MCP documentation.
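As a concrete sketch, a tool declaration carries a name, a typed parameter schema, and a handler that returns structured output. The tool name, schema, and handler below are illustrative, not Veille's actual tool definitions:

```python
# Illustrative shape of an MCP tool: a name, a JSON Schema for its
# parameters, and a handler returning structured output. These names
# are assumptions for the sketch, not Veille's API.
TOOL = {
    "name": "validate_email",
    "inputSchema": {
        "type": "object",
        "properties": {"email": {"type": "string"}},
        "required": ["email"],
    },
}

def handle_validate_email(args: dict) -> dict:
    # In a real server this would call the Veille REST endpoint;
    # here we only show the structured-output shape.
    return {"email": args["email"], "disposable": None}
```

The point is that the assistant sees typed parameters and gets back structured fields, so its answer is constrained by the schema rather than free-form prose.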

Why it helps security

Fewer hallucinated risk calls. Without a tool, a model may “guess” that a domain is suspicious or label an address as risky with no supporting evidence. With the Veille MCP server (https://mcp.veille.io/), the assistant queries the same intelligence as your pipelines (email validation, IP reputation, IBAN checks, and so on). The decision rests on an API response, not generated prose.

Alignment across dev, support, and prod. If product already uses Veille server-side, wiring the same service through MCP avoids drift: no chat “rule of thumb” that contradicts what your production code does.

API keys handled like any serious integration. The recommended setup uses an x-api-key header (see API Keys), environment variables, or placeholders in READMEs—never committed secrets. Veille best practices also recommend separate keys per environment (dev, staging, prod), regular rotation, and not exposing the key in the browser: sensitive calls stay on the server or in a local tool context, not in a public web page.
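For instance, a server-side caller can pull the key from the environment and attach it as the x-api-key header. A minimal sketch, where the VEILLE_API_KEY variable name and the example URL are placeholders rather than documented values:

```python
import os
import urllib.request

def build_request(url: str) -> urllib.request.Request:
    """Attach the x-api-key header from the environment.

    VEILLE_API_KEY is a placeholder name; the point is that the key
    is never hard-coded or committed to the repo.
    """
    key = os.environ.get("VEILLE_API_KEY", "")
    return urllib.request.Request(url, headers={"x-api-key": key})

# Server-side or local-tool use only; never ship this key to a browser.
# resp = urllib.request.urlopen(build_request("https://..."))
```

Using a per-environment variable also makes key separation (dev, staging, prod) and rotation a deployment concern rather than a code change.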

Traceability. As with the REST API, you can correlate usage and credits; the assistant does not “invent” a source—it calls an endpoint whose policy and billing you control.

Reliability: why it beats a prompt alone

| Aspect | Assistant without Veille tools | Assistant with Veille MCP |
| --- | --- | --- |
| Source of truth | Model text | Structured API response |
| Consistency with your apps | Variable | Same endpooints |
| Maintainability | Hand-maintained prompts | Tools updated with the product |
| Errors | Hard to tell from plausibility | Explicit HTTP codes and schemas (see error handling) |

The server uses Streamable HTTP at the documented URL; it is not a web page to open in a browser, but an endpoint for MCP clients. That sets clear expectations: explicit integration, not scraping or fragile copy-paste.
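Because failures surface as explicit HTTP status codes rather than plausible prose, a caller can branch on them directly. A minimal classification sketch, with categories that are illustrative rather than Veille's documented taxonomy:

```python
def classify_status(code: int) -> str:
    """Map an HTTP status to a coarse category for triage.

    The categories are a sketch; consult the Veille error-handling
    docs for the actual codes and response schemas.
    """
    if 200 <= code < 300:
        return "ok"
    if code in (401, 403):
        return "auth"          # key missing, invalid, or not allowed
    if code == 429:
        return "rate_limit"    # back off and retry later
    if 400 <= code < 500:
        return "client_error"  # bad parameters: fix the request
    if code >= 500:
        return "server_error"  # transient: retrying may help
    return "other"
```

This is exactly what a prompt-only assistant cannot give you: a deterministic branch point instead of a judgment call.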

Practical use cases

  • Ticket or log review: the assistant enriches an address or IP with Veille signals before suggesting an action.
  • Developer onboarding: ask about fraud or validation using the same tools as the API docs.
  • Product scoping: test scenarios (risk thresholds, disposable email behavior) with real responses instead of made-up examples.

Setup (reminder)

The official docs cover Claude Code, Cursor (mcp.json with a url ending in / and headers for the key), Gemini CLI, and OpenAI developer mode. You need the Veille MCP server URL, a valid key, and the usual security habits (no secrets in the repo, one key per environment).
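As an illustration, a Cursor-style mcp.json entry might look like the sketch below. The server name is arbitrary, YOUR_API_KEY is a placeholder (keep the real key in an environment variable or secret store, never in the repo), and the authoritative schema is the one in the official docs:

```json
{
  "mcpServers": {
    "veille": {
      "url": "https://mcp.veille.io/",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
```

Note the trailing slash on the URL, which the docs call out explicitly.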

In short: MCP does not replace your security policy, but it shrinks the error surface, human and model alike, by giving the assistant the same levers as your API: more reliable analysis, better alignment with production, and more control through keys and usage tracking, as documented for Veille.