Sooner or later, someone in management asks: "What's our data loss prevention situation? Can I see a report?"

If you have an enterprise DLP suite, it generates reports automatically — pages of charts showing how many emails were blocked, how many USB transfers were intercepted, how many policy violations were detected. These reports look impressive but rarely answer the question management is actually asking.

What they want to know is: where are we exposed, and what should we do about it?

That requires a different kind of report.

What a useful DLP report contains

A data loss prevention report that actually informs decisions should cover five areas:

1. Executive summary

A high-level overview of the organisation's data landscape:

  • How many services process data
  • How many users access those services
  • How many data items are documented
  • What the overall risk profile looks like (how many services at each risk level)
  • Whether there are any critical or high-risk items that need immediate attention

This section is for people who won't read the rest. Make it one page.

2. Data flow and access analysis

A detailed look at how data moves through the organisation:

  • User-to-service access map. Who accesses what? Where are the concentration risks? Are there users with access to far more than they need?
  • Device security assessment. What devices are used to access data? Are they encrypted? Managed? Up to date? Is remote wipe available?
  • Data transfers. Where does data flow between services? Are there cross-border transfers? Are backup transfers properly configured?
  • Access review status. When was the last access review? Were there revocations? Are there overdue reviews?
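The access-map part of this analysis is easy to automate once you have user-to-service mappings. Below is a minimal sketch that flags concentration risks, assuming the mappings have been exported as simple (user, service) pairs; all names and the "twice the median" threshold are illustrative, not a standard rule.

```python
from collections import Counter

# Hypothetical access mappings exported from admin panels or a
# directory service: one (user, service) pair per granted access.
access = [
    ("alice", "crm"), ("alice", "billing"), ("alice", "hr-portal"),
    ("alice", "warehouse"), ("alice", "finance"),
    ("bob", "crm"),
    ("carol", "crm"), ("carol", "billing"),
]

# Count how many services each user can reach.
per_user = Counter(user for user, _ in access)

# Flag users whose access breadth is more than twice the median —
# a rough proxy for "access to far more than they need".
median = sorted(per_user.values())[len(per_user) // 2]
flagged = [u for u, n in per_user.items() if n > 2 * median]

print(per_user)   # access breadth per user
print(flagged)    # concentration-risk candidates for review
```

The flagged users are candidates for the access review in the last bullet, not automatic violations; the threshold should be tuned to the organisation.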

3. Data sensitivity assessment

An analysis of the data itself:

  • Which data items have the highest classification (special category, restricted, financial)?
  • Which items are missing a legal basis for processing? (This is a GDPR gap.)
  • Which items have no defined retention period? (This means data accumulates indefinitely.)
  • Are there special category data items accessible from unsecured devices?
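The first three checks above are mechanical once data items are documented in a structured form. A minimal sketch, assuming hypothetical record fields (`classification`, `legal_basis`, `retention_days`) that stand in for whatever your documentation actually captures:

```python
# Hypothetical data-item records; the field names are illustrative.
items = [
    {"name": "customer emails", "classification": "personal",
     "legal_basis": "contract", "retention_days": 730},
    {"name": "health declarations", "classification": "special category",
     "legal_basis": None, "retention_days": None},
]

# GDPR gap: items with no documented legal basis for processing.
missing_basis = [i["name"] for i in items if not i["legal_basis"]]

# Items with no retention period accumulate indefinitely.
no_retention = [i["name"] for i in items if i["retention_days"] is None]

print("No legal basis:", missing_basis)
print("No retention period:", no_retention)
```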

4. Terms and policy review

For each service that processes data, what do its terms say?


  • Does the service provider have a data processing agreement (DPA)?
  • When does it expire?
  • What do their privacy policies say about sub-processors, data retention, and international transfers?
  • Are there any concerning clauses or conflicts with your own data handling policies?

5. Prioritised recommendations

The most valuable part of any DLP report: what should we do?

Recommendations should be:

  • Specific — reference actual services, users, and data items by name
  • Prioritised — critical and high-risk items first
  • Actionable — "Enable disk encryption on warehouse tablets" is actionable; "Improve security posture" is not

Group recommendations by priority level:

  • Critical: Fix immediately (e.g., special category data on unencrypted devices)
  • High: Fix this month (e.g., missing DPA for a processor handling customer data)
  • Medium: Fix this quarter (e.g., undefined retention periods, overdue access reviews)
  • Low: Nice to have (e.g., document exit strategies for low-risk services)
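The grouping above maps directly onto a sorted report section. A small sketch, with the recommendation texts taken from the examples in this article and the structure otherwise assumed:

```python
from collections import defaultdict

# Recommendations tagged with a priority level (examples from above).
recs = [
    ("high", "Sign a DPA with the processor handling customer data"),
    ("critical", "Enable disk encryption on warehouse tablets"),
    ("medium", "Define retention periods for items that lack one"),
    ("low", "Document exit strategies for low-risk services"),
]

order = ["critical", "high", "medium", "low"]
grouped = defaultdict(list)
for level, action in recs:
    grouped[level].append(action)

# Emit critical first, low last.
for level in order:
    for action in grouped[level]:
        print(f"[{level.upper()}] {action}")
```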

How to gather the data

The data for a DLP report comes from documentation — not from a DLP tool. You need:

Data needed              Source
Service inventory        IT team, department heads, shadow IT discovery
User access mappings     Active Directory, service admin panels, manual review
Device inventory         MDM, IT asset management, manual survey
Data classifications     Data protection officer, department heads
Data transfers           Integration documentation, API logs, backup configurations
DPAs and contracts       Legal/procurement department
Risk scores              Calculated from the above

Gathering this manually takes weeks. Service templates, automated risk scoring, and structured forms reduce this to days.
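The "calculated from the above" row deserves a concrete shape. One common approach is an additive score over flagged risk factors; the factors, weights, and level thresholds below are illustrative assumptions, not a standard formula:

```python
# Illustrative risk factors and weights — tune to your organisation.
WEIGHTS = {
    "special_category_data": 4,
    "unencrypted_devices": 4,
    "missing_dpa": 3,
    "missing_legal_basis": 3,
    "no_retention_period": 2,
    "cross_border_transfer": 1,
}

def risk_score(service: dict) -> int:
    """Sum the weights of every risk factor flagged on the service."""
    return sum(w for factor, w in WEIGHTS.items() if service.get(factor))

def risk_level(score: int) -> str:
    """Bucket a raw score into the report's priority levels."""
    if score >= 8:
        return "critical"
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

svc = {"name": "hr-portal", "special_category_data": True,
       "unencrypted_devices": True}
score = risk_score(svc)
print(svc["name"], score, risk_level(score))  # hr-portal 8 critical
```

Because the score is derived entirely from documented facts, it updates automatically whenever the underlying documentation changes.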

Generating the report

There are three approaches:

Manual (spreadsheet-based)

Create a spreadsheet with tabs for services, users, data items, transfers, and devices. Manually fill in classifications, legal bases, and retention periods. Write the analysis in a Word document.

Pros: No tools needed. Cons: Extremely time-consuming, error-prone, hard to maintain, no automated risk scoring.

Semi-automated (documentation tool)

Use a structured documentation tool to capture the data. The tool calculates risk scores, flags gaps, and exports standardised reports.

Pros: Consistent, maintainable, automated risk scoring. Cons: Requires initial setup effort.

AI-assisted

Feed the structured data to an AI that generates the analysis narrative — the executive summary, the gap analysis, the recommendations. The human reviews and adjusts.

Pros: Fast, comprehensive, identifies patterns a human might miss. Cons: Requires review for accuracy.

Putting it together with Readmodel®

Readmodel® combines all three approaches:

  1. Structured documentation — enter your services (200+ templates available), users, data items, devices, and transfers into a guided workflow.
  2. Automated risk scoring — each service is scored based on data sensitivity, access patterns, legal basis gaps, retention gaps, and backup compliance.
  3. Risk register — a prioritised view of all services with their risk levels and actionable items.
  4. AI-generated analysis — one click generates a comprehensive report covering all five sections: executive summary, data flow analysis, sensitivity assessment, terms review, and prioritised recommendations.
  5. ROPA export — the GDPR-required Record of Processing Activities is generated automatically from the same data.

The first DLP report takes about an hour to set up. After that, regenerating it takes seconds — and the underlying data stays current because it's part of your ongoing documentation process.

Management doesn't need a dashboard showing how many USB transfers were blocked last Tuesday. They need to know where the organisation is exposed and what to do about it. That's what a data loss prevention report should deliver.