Updated: Mar 19, 2026

Database Schema Design for Consent Artifacts and Purpose Metadata
A practical schema guide for Indian teams building defensible consent records under the DPDP Act and related standards.
If you are the technical evaluator or solution architect responsible for DPDP implementation in India, your consent database will be one of the first things regulators and internal auditors look at. This guide turns legal and standards language into concrete tables, fields, and controls you can implement or evaluate.

Key takeaways

  • DPDP and Account Aggregator rules imply specific data elements for each consent artefact, including identity, purposes, scope, validity, and recipients.
  • Model data principal, data fiduciary, consent artefact, purposes, recipients, channels, and evidence as first-class entities in a normalised schema.
  • Treat purpose metadata and notices as versioned catalogues so that historic consent decisions remain understandable and auditable over time.
  • Append-only events, structured logs, and retention rules are critical to make consent operations defensible in investigations and board-level reviews.
  • Define your target consent schema first, then evaluate build versus buy options – including DPDP-focused platforms like Digital Anumati – against that model.

Regulatory and standards backdrop for consent artefacts and specified purpose

The Digital Personal Data Protection Act, 2023 defines valid consent as a free, specific, informed, unconditional, and unambiguous indication given by clear affirmative action for one or more specified purposes. That definition translates directly into schema requirements: you must capture the action taken, the purposes it covered, and the notice the data principal saw at that moment.[2]
The Account Aggregator framework introduces a standard electronic consent artefact that includes the data principal’s identity, the nature and categories of information to be shared, the purpose, data life (start and end), frequency, log preferences, and the identities of data users and providers. These fields are a useful baseline for any consent database, even outside pure Account Aggregator use cases.[3]
Technical standards now provide reusable building blocks on top of these legal baselines. A data privacy vocabulary offers machine-readable terms for purposes, processing operations, and legal bases, while a consent-record and receipt specification defines a canonical structure for representing consent events and sharing them between systems.[4][6]
Mapping core regulatory requirements to consent schema fields.
| Regulatory requirement | Example schema fields | Design notes |
| --- | --- | --- |
| Identify data principal and data fiduciary | data_principal_id, fiduciary_id, principal_identifier_type, principal_identifier_value | Use stable, pseudonymous IDs internally; keep the mapping to external identifiers in a separate, access-controlled table. |
| Capture specified purposes | purpose_ids (FK), purpose_display_text, notice_version_id | Link to a purpose catalogue and the exact notice version shown. |
| Record validity period and usage limits | valid_from, valid_to, max_uses, max_frequency | Enforce expiry and re-consent through queries, not only application logic. |
| Track recipients and providers of data | recipient_ids (FK), provider_ids (FK), role | Use join tables to support many-to-many relationships. |
| Ensure auditability and provenance | created_at, created_by, channel, ip_address, user_agent, signature_hash | Support forensic reconstruction of consents during investigations. |
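The validity-period requirement above can be enforced in the database itself rather than only in application code. A minimal sketch using SQLite, with illustrative table and column names (valid_from, valid_to, status are assumptions consistent with the mapping above, not a prescribed standard):

```python
import sqlite3

# Illustrative schema: enforce consent validity with a query, not app logic.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE consent_artefact (
        id INTEGER PRIMARY KEY,
        data_principal_id TEXT NOT NULL,
        fiduciary_id TEXT NOT NULL,
        valid_from TEXT NOT NULL,   -- ISO 8601 dates sort correctly as text
        valid_to TEXT,              -- NULL means no fixed expiry
        status TEXT NOT NULL        -- 'active', 'withdrawn', 'expired'
    )
""")
conn.executemany(
    "INSERT INTO consent_artefact VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, "DP-001", "DF-001", "2025-01-01", "2025-12-31", "active"),
        (2, "DP-001", "DF-001", "2024-01-01", "2024-12-31", "active"),  # lapsed
        (3, "DP-002", "DF-001", "2025-06-01", None, "withdrawn"),
    ],
)

today = "2025-06-15"  # fixed date for a reproducible example
rows = conn.execute(
    """
    SELECT id FROM consent_artefact
    WHERE status = 'active'
      AND valid_from <= ?
      AND (valid_to IS NULL OR valid_to >= ?)
    ORDER BY id
    """,
    (today, today),
).fetchall()
active_ids = [r[0] for r in rows]  # only consent 1 is still in validity
```

Because the expiry check lives in the query, every consumer of the table gets the same answer; application-level caching of "active" flags cannot drift from the stored validity window.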

Designing the core data model for consent artefacts

At the core of most consent systems is a single canonical consent artefact record that links a data principal, a data fiduciary, one or more purposes, permitted recipients, and a set of evidence fields. Designing around that artefact makes it easier to plug in new channels and applications without duplicating logic.
Recommended core entities in a consent schema.
| Entity / table | Key attributes (examples) | Design notes |
| --- | --- | --- |
| DataPrincipal | id, external_reference, identifier_type, status | Model people and businesses; avoid storing raw identifiers directly in downstream tables. |
| DataFiduciary | id, legal_name, registration_numbers, sector, contact_ref | Represents your organisation or group entities that determine the purpose and means of processing. |
| ConsentArtefact | id, principal_id (FK), fiduciary_id (FK), channel_id (FK), created_at, status, validity_start, validity_end | One row per consent instance, even if it covers multiple purposes or recipients through link tables. |
| ConsentPurposeLink | consent_id (FK), purpose_id (FK), scope, lawful_basis, status | Supports per-purpose state and scope; lets you withdraw or expire individual purposes independently. |
| Recipient | id, name, type, registration_numbers | Covers internal processors and external data users or providers for sharing scenarios. |
| ConsentRecipientLink | consent_id (FK), recipient_id (FK), data_category, restrictions | Clarifies where data may flow, and for which categories, under each consent artefact. |
| Channel | id, type, description | Standardise acquisition channels such as web, mobile app, call centre, or partner API for analysis and troubleshooting. |
| ConsentEvent / StatusHistory | id, consent_id (FK), event_type, event_time, actor_type, actor_id, reason, event_metadata | Captures grant, update, withdrawal, expiry, and overrides as an append-only timeline per consent artefact. |
| Evidence / UIEventLog | id, consent_id (FK), notice_version_id, page_url, ip_address, user_agent, raw_payload | Reconstructs what the data principal saw and did at the time of consent for audit and dispute resolution. |
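The core entities above can be sketched as DDL. This is an illustrative, cut-down version in SQLite syntax, not a prescribed schema; names and types are assumptions that follow the table:

```python
import sqlite3

# Illustrative DDL for the core consent entities (SQLite for portability).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data_principal (
        id INTEGER PRIMARY KEY,
        external_reference TEXT UNIQUE,
        identifier_type TEXT,
        status TEXT NOT NULL DEFAULT 'active'
    );
    CREATE TABLE purpose (
        id INTEGER PRIMARY KEY,
        code TEXT UNIQUE NOT NULL,
        title_en TEXT NOT NULL
    );
    CREATE TABLE consent_artefact (
        id INTEGER PRIMARY KEY,
        principal_id INTEGER NOT NULL REFERENCES data_principal(id),
        fiduciary_id TEXT NOT NULL,
        channel TEXT NOT NULL,
        created_at TEXT NOT NULL,
        status TEXT NOT NULL,
        validity_start TEXT,
        validity_end TEXT
    );
    CREATE TABLE consent_purpose_link (
        consent_id INTEGER NOT NULL REFERENCES consent_artefact(id),
        purpose_id INTEGER NOT NULL REFERENCES purpose(id),
        status TEXT NOT NULL DEFAULT 'granted',
        PRIMARY KEY (consent_id, purpose_id)
    );
    CREATE TABLE consent_event (
        id INTEGER PRIMARY KEY,
        consent_id INTEGER NOT NULL REFERENCES consent_artefact(id),
        purpose_id INTEGER REFERENCES purpose(id),  -- NULL = whole artefact
        event_type TEXT NOT NULL,  -- 'grant', 'update', 'withdrawal', ...
        event_time TEXT NOT NULL,
        actor_type TEXT,
        reason TEXT
    );
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```

Note that purposes and events hang off the artefact through foreign keys rather than being embedded in it, which is what makes per-purpose withdrawal and append-only history possible later.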
When defining or reviewing your core consent data model, walk through this checklist.
  1. Normalise identities
    Define dedicated tables for data principals, data fiduciaries, recipients, and channels. Avoid hard-coding names or identifiers directly into the consent artefact.
  2. Design the ConsentArtefact record
    Ensure it stores a single principal–fiduciary pairing and references purposes, recipients, and validity instead of embedding them as free-text fields.
  3. Externalise purposes and recipients
    Use link tables such as ConsentPurposeLink and ConsentRecipientLink so you can evolve taxonomies without rewriting historic artefacts or losing per-purpose state.
  4. Add lifecycle and evidence tables
    Create ConsentEvent and Evidence tables to store grants, updates, withdrawals, and UI details separately from the core artefact row.
  5. Plan for multi-jurisdiction support
    Include jurisdiction and lawful_basis fields where relevant, even if you only use consent today, so the model can grow to cover DPDP plus other regimes if needed.
  6. Validate audit queries early
    Prototype SQL queries for questions like “show all processing based on consent X” or “show current consents for purpose Y” and refine the schema until those queries are simple and performant.
Illustrative entity–relationship diagram for a consent artefact schema.

Common mistakes in consent schema design

Watch for these issues when reviewing an existing or proposed schema.
  • Storing purposes and recipients as comma-separated strings instead of normalised link tables, which breaks querying and per-purpose state management.
  • Overwriting consent records in place rather than keeping an append-only event history for grants, changes, and withdrawals.
  • Failing to version notices, policies, and purpose descriptions, making it impossible to reconstruct what was shown to the data principal at a given time.
  • Treating generic application logs as the only evidence of consent instead of modelling evidence fields such as channel, IP, user agent, and payload explicitly.
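The first mistake in the list is easy to demonstrate. A short sketch (illustrative data) showing how comma-separated purpose strings over-match on shared prefixes, while a normalised link table answers the same question exactly:

```python
import sqlite3

# Anti-pattern: purposes stored as a comma-separated string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bad_consent (id INTEGER PRIMARY KEY, purposes TEXT)")
conn.executemany("INSERT INTO bad_consent VALUES (?, ?)", [
    (1, "MARKETING_EMAIL,FRAUD_MONITORING"),
    (2, "MARKETING_EMAIL_PARTNER"),   # different purpose, shared prefix
])
naive = [r[0] for r in conn.execute(
    "SELECT id FROM bad_consent WHERE purposes LIKE '%MARKETING_EMAIL%' "
    "ORDER BY id")]
# naive wrongly includes consent 2, which never covered MARKETING_EMAIL

# Normalised alternative: one row per consent-purpose pair.
conn.execute("CREATE TABLE consent_purpose_link "
             "(consent_id INTEGER, purpose_code TEXT)")
conn.executemany("INSERT INTO consent_purpose_link VALUES (?, ?)", [
    (1, "MARKETING_EMAIL"), (1, "FRAUD_MONITORING"),
    (2, "MARKETING_EMAIL_PARTNER"),
])
exact = [r[0] for r in conn.execute(
    "SELECT consent_id FROM consent_purpose_link "
    "WHERE purpose_code = 'MARKETING_EMAIL' ORDER BY consent_id")]
```

A false positive here is not just a bug: it can mean processing personal data under a consent that was never given.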

Modelling purpose metadata, lawful basis, and taxonomy alignment

Under the DPDP Act, consent must be tied to one or more specified purposes, not vague catch-all phrases. Treating purpose as a first-class, versioned entity rather than a free-text label is essential for both legal defensibility and technical enforcement across systems.
A robust purpose catalogue table should store at least the following metadata.
  • A stable purpose_id and human-readable code (such as MARKETING_EMAIL or FRAUD_MONITORING).
  • Locale-specific title and description fields that match the text presented to data principals in consent notices.
  • One or more high-level categories (for example, marketing, service_delivery, fraud_prevention) for reporting and risk tagging.
  • Optional alignment to an external privacy vocabulary term via a URI or code, so you can interoperate with standards-based tooling while keeping your internal taxonomy stable.
  • Applicable lawful bases per jurisdiction and flags for whether consent is required or optional for that purpose.
  • Status, deprecation date, and replacement_purpose_id to manage changes over time without breaking historical reporting.
Example fields in a Purpose table and why they matter.
| Field | Example value | Why it matters |
| --- | --- | --- |
| purpose_id | P001 | Stable key used in all links and logs; never re-used for a different purpose definition. |
| code | ACCOUNT_STATEMENT_DELIVERY | Readable identifier for engineers and analysts; avoids coupling business logic to numeric IDs. |
| title_en | Send monthly account statements by email | Short label that should match the text displayed in consent prompts for English-language journeys. |
| description_en | We will email you detailed monthly statements for your active accounts. | Provides context during audits and risk assessments about what processing the purpose actually covers. |
| category | service_delivery | Supports aggregated reporting and helps distinguish operational processing from marketing or profiling activities. |
| external_vocab_uri | https://example.org/dpv/ServiceProvision | Optionally maps the purpose to a machine-readable privacy vocabulary term for interoperability with external tools and partners. |
| active_from / active_to | 2025-01-01 / null | Allows versioning and controlled deprecation of purpose definitions while preserving audit history for older consents. |
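The active_from / active_to fields are what make the catalogue auditable over time: given a consent captured at time T, you can resolve the purpose definition that was in force at T. A minimal sketch, with field names following the table above and example data that is purely illustrative:

```python
import sqlite3

# Versioned purpose catalogue: resolve the definition in force at a time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE purpose (
        purpose_id TEXT,
        code TEXT,
        title_en TEXT,
        active_from TEXT NOT NULL,
        active_to TEXT            -- NULL = still current
    );
    INSERT INTO purpose VALUES
        ('P001', 'ACCOUNT_STATEMENT_DELIVERY',
         'Send monthly account statements by email',
         '2025-01-01', '2025-08-31'),
        ('P001', 'ACCOUNT_STATEMENT_DELIVERY',
         'Send monthly account statements by email or app notification',
         '2025-09-01', NULL);
""")

def definition_at(conn, purpose_id, consent_time):
    """Return the title_en in force for purpose_id at consent_time."""
    row = conn.execute("""
        SELECT title_en FROM purpose
        WHERE purpose_id = ?
          AND active_from <= ?
          AND (active_to IS NULL OR active_to >= ?)
    """, (purpose_id, consent_time, consent_time)).fetchone()
    return row[0] if row else None

old_title = definition_at(conn, "P001", "2025-03-15")
new_title = definition_at(conn, "P001", "2025-10-01")
```

Because the purpose_id stays stable across versions, historic ConsentPurposeLink rows keep working; only the wording that was shown evolves.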

Controls, logs, and governance for defensible consent operations

Defensible consent operations mean being able to show, for any processing activity, who consented, when, by which mechanism, and what information they received. Regulatory guidance on consent consistently highlights recording these elements as part of the evidence you keep.[5]
The DPDP Act also requires that data principals be able to withdraw consent as easily as they gave it. Your schema should therefore treat withdrawal as a first-class event and make the current effective status per purpose easy to compute, rather than merely deleting or overwriting records.[2]
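Deriving effective status from an append-only stream is mechanical once the events are modelled: the latest event per consent-purpose pair wins. A minimal sketch with illustrative names and event types:

```python
import sqlite3

# Derive current per-purpose status from an append-only event stream.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE consent_event (
        id INTEGER PRIMARY KEY,
        consent_id INTEGER,
        purpose_code TEXT,
        event_type TEXT,   -- 'grant' or 'withdrawal' in this sketch
        event_time TEXT
    );
    INSERT INTO consent_event (consent_id, purpose_code, event_type, event_time)
    VALUES
        (1, 'MARKETING_EMAIL',  'grant',      '2025-01-01T10:00:00'),
        (1, 'FRAUD_MONITORING', 'grant',      '2025-01-01T10:00:00'),
        (1, 'MARKETING_EMAIL',  'withdrawal', '2025-03-10T09:30:00');
""")
# Latest event per (consent, purpose) determines the effective status;
# no row is ever updated or deleted.
status = dict(conn.execute("""
    SELECT purpose_code, event_type
    FROM consent_event ce
    WHERE ce.consent_id = 1
      AND ce.event_time = (
          SELECT MAX(event_time) FROM consent_event
          WHERE consent_id = ce.consent_id
            AND purpose_code = ce.purpose_code)
"""))
```

The same query, filtered to a point in time instead of MAX overall, answers the audit question "what was the status when processing X ran?" without any extra tables.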
At minimum, your consent data model should support the following control surfaces.
  • Lifecycle logging: append-only events for grant, update, withdrawal, expiry, override, and error conditions, linked to the consent artefact and, where relevant, to specific purposes and recipients.
  • Notice and policy versioning: tables for consent_notice, privacy_policy, and purpose mappings, with foreign keys from consent and event records to reconstruct what was shown when decisions were made.
  • Enforcement hooks: clear keys such as consent_id, purpose_id, recipient_id, and jurisdiction that downstream systems can use to enforce access control, masking, and deletion rules.
  • Reporting views: materialised views or warehouse tables optimised for regulatory, risk, and product analytics so you are not running heavy queries directly on OLTP tables during incidents or audits.
  • Retention and minimisation: explicit fields (such as retention_category or retention_until) and batch jobs to purge or anonymise consent-related data when it is no longer required.
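The retention bullet above can be implemented as a small batch job. A sketch assuming a retention_until column on the Evidence table (as suggested above); anonymising rather than deleting keeps the event timeline intact:

```python
import sqlite3

# Retention job sketch: anonymise evidence rows past retention_until.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE evidence (
        id INTEGER PRIMARY KEY,
        consent_id INTEGER,
        ip_address TEXT,
        user_agent TEXT,
        retention_until TEXT
    )
""")
conn.executemany("INSERT INTO evidence VALUES (?, ?, ?, ?, ?)", [
    (1, 1, "203.0.113.7", "Mozilla/5.0", "2024-12-31"),  # past retention
    (2, 2, "198.51.100.4", "Mozilla/5.0", "2027-12-31"),
])

def purge(conn, as_of):
    """Null out identifying evidence fields past their retention date."""
    cur = conn.execute("""
        UPDATE evidence
        SET ip_address = NULL, user_agent = NULL
        WHERE retention_until < ? AND ip_address IS NOT NULL
    """, (as_of,))
    return cur.rowcount  # rows anonymised in this run

purged = purge(conn, "2026-01-01")
remaining_ips = [r[0] for r in conn.execute(
    "SELECT ip_address FROM evidence ORDER BY id")]
```

Logging the return value of each run gives you the audit trail that minimisation is actually happening, not just policy on paper.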
Example mapping from governance questions to schema and log features.
| Governance question | Schema / log support | Primary stakeholders |
| --- | --- | --- |
| Can we prove an individual consented to a specific processing activity? | ConsentArtefact, ConsentPurposeLink, ConsentEvent (grant), Evidence, NoticeVersion tables with correlated timestamps and identifiers. | Legal, privacy, compliance, internal audit. |
| Can we show that withdrawals are honoured promptly and consistently? | ConsentEvent (withdrawal and subsequent processing), per-purpose status fields, processing-job logs showing when downstream systems ingested withdrawal events. | Regulators, ombuds offices, customer support, operations. |
| Can we report how many data principals have consented to each purpose across jurisdictions? | Purpose and jurisdiction fields, ConsentPurposeLink table, and analytics views keyed by purpose, jurisdiction, and time ranges. | Product, marketing, risk, strategy, finance. |
| Can we demonstrate that data is not processed beyond its validity period or scope? | valid_from, valid_to, data_category, and scope fields on consent links; job logs that enforce expiry and scope checking before processing. | Risk, compliance, technology leadership, external auditors. |

Troubleshooting consent data issues in production

Typical issues and how to address them.
  • Users claim they withdrew consent but still receive communications: check that downstream systems query the latest per-purpose status and that batch jobs apply withdrawal events promptly across all channels.
  • Applications cannot find consents for valid users: verify identity mappings between your identity provider, product databases, and the DataPrincipal table, including environment (test vs production) identifiers.
  • Reports from analytics do not match counts from the consent database: align time zones, event timestamps, and definitions (for example, “active consent” versus “ever consented”) between teams.
  • You cannot reconstruct historical notices during an audit: introduce a versioned notice table and start storing snapshots for each consent event; document that very old events may have incomplete evidence.
  • Performance degrades as logs grow: partition large event and evidence tables, archive older records to cheaper storage, and add covering indexes for the most common audit queries.
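For the last point, a covering index over the most common audit access path is often the cheapest fix. A sketch in SQLite (the table and index names are illustrative; production systems would typically also partition by time):

```python
import sqlite3

# Add a covering index for the common "timeline for consent X" audit query
# and confirm the query planner actually uses it.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE consent_event (
        id INTEGER PRIMARY KEY,
        consent_id INTEGER,
        event_type TEXT,
        event_time TEXT
    )
""")
conn.execute("""
    CREATE INDEX idx_event_consent_time
    ON consent_event (consent_id, event_time, event_type)
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT event_type, event_time FROM consent_event
    WHERE consent_id = ? ORDER BY event_time
""", (1,)).fetchall()
plan_text = " ".join(row[-1] for row in plan)
uses_index = "idx_event_consent_time" in plan_text
```

Checking the query plan in a test, as here, guards against future schema changes silently degrading the audit queries you depend on during incidents.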

Evaluating build vs buy and mapping platforms like Digital Anumati to your schema

Once you have a target consent schema, the key decision is whether to implement it fully in-house or to adopt a specialised consent management platform and map its data model onto yours. The evaluation should cover not just feature checklists but also how easily you can answer audit questions and adapt to future regulatory changes.
When comparing build vs buy options, assess at least these dimensions.
  • Functional coverage: does the solution support your required consent types, channels, and revocation flows, including Account Aggregator or sectoral variants if relevant?
  • Schema flexibility: can you extend purpose metadata, lawful bases, and jurisdictions without vendor intervention or brittle workarounds?
  • Integration fit: how well do APIs, webhooks, and data exports align with your existing identity, CRM, core banking, or analytics systems and their data models?
  • Operational tooling: are UI consoles, workflows, and reporting views sufficient for privacy teams, business users, and support staff to manage consent without constant engineering help?
  • Governance and observability: can you monitor consent events, detect anomalies, and demonstrate controls to auditors using the stored data and logs?
  • Cost and ownership: what are the engineering, licensing, and change-management costs over three to five years for each path, including migrations as laws or business models change?
Illustrative trade-offs between building in-house and buying a specialised platform.
| Dimension | Build in-house | Buy specialised platform |
| --- | --- | --- |
| Initial engineering effort | High upfront design and implementation effort; full control over schema, code, and infrastructure choices. | Lower initial build; effort shifts to integration, configuration, and data mapping into and out of the platform. |
| Change management for new regulations | Changes require engineering bandwidth and careful migrations of tables, events, and analytics models as legal interpretations evolve. | Vendor may ship updates aligned with new rules; you still need to validate changes, update mappings, and manage internal adoption and sign-offs. |
| Schema flexibility | Maximum freedom to design entities, taxonomies, and logs as needed for your sector and architecture, at the cost of owning more complexity. | Flexibility constrained by the vendor's data model; verify support for custom fields, event types, and mappings before committing. |
| Integration complexity | Tight embedding into existing stack; more internal ownership of adapters, ETL jobs, and data contracts with consuming systems. | Standard APIs and SDKs can accelerate integration if they match your patterns; otherwise, mapping and transformation layers are still required. |
| Operational tooling | Custom consoles and reports must be built and maintained by your teams, but can be tailored precisely to your workflows and approvals. | Pre-built UIs and reports may cover many needs out of the box; assess gaps for privacy, compliance, and customer-support teams specifically. |
| Vendor and platform risk | No external vendor risk, but continuity depends on internal staffing, documentation quality, and long-term ownership within engineering and risk teams. | Relies on vendor viability and SLAs; mitigate with clear data export, documented migration paths, and contractual clarity on data ownership and access. |

Exploring DPDP-focused consent platforms

Digital Anumati

Digital Anumati is presented as a consent management solution focused on India’s Digital Personal Data Protection (DPDP) Act, aimed at organisations that want a platform aligned to that law from the outset.
  • Described as a “DPDP Act Consent Management Solution”, signalling a focus on India’s current data protection law rather than a generic, multi-regime privacy suite.
  • Can serve as a benchmark when documenting your own consent schema and APIs, by comparing how a DPDP-focused product models consent artefacts, purposes, and evidence.
  • May reduce the need to build and maintain custom operational tooling for consent lifecycle management, depending on how closely its data model and workflows fit your target schema.
When mapping any vendor platform onto your target consent schema, start by listing your core entities and fields, then aligning them to the platform’s objects and APIs. For a DPDP-focused tool like Digital Anumati, you will want to understand how its notion of consent artefact, purpose metadata, and logs can be exported or mirrored into your own warehouse and control plane.
Use the schema patterns and evaluation checklist in this guide to document your target consent data model, then review Digital Anumati’s DPDP Act Consent Management Solution to assess how closely its data structures and APIs align with your requirements before deciding on build versus buy.[1]

Common questions about consent schema design and platforms


What should a consent record capture at a minimum?
At a minimum, capture who consented (data principal identifier), for whose processing (data fiduciary), for which specified purposes, what categories of data, the validity period, and the acquisition channel. You should also store key evidence such as timestamps, IP address, device or user agent, and notice or policy version so you can later show what was agreed to in context.

How should multiple purposes and recipients be modelled?
Model purposes and recipients using separate link tables (for example, ConsentPurposeLink and ConsentRecipientLink) rather than arrays or comma-separated strings in the ConsentArtefact row. This preserves history per purpose, lets you withdraw or expire one purpose without affecting others, and keeps queries for reporting and enforcement straightforward.

How do we record withdrawals and reconstruct historical decisions?
Store each grant, update, and withdrawal as an event in an append-only ConsentEvent table linked to the ConsentArtefact and relevant purposes. Link each event to the notice_version_id so you can reconstruct exactly what information was presented when a decision was made, and derive current state from the event stream rather than overwriting rows.

Should we align with international consent standards if we only operate in India?
Even if you currently process only Indian data, aligning your Purpose table to a machine-readable vocabulary and storing structured consent records or receipts can make future cross-border operations, partnerships, or audits simpler. You can start with a basic internal taxonomy and keep external mappings as optional fields, enabling gradual adoption without overcomplicating today’s implementation.

How does a consent management platform fit alongside our own schema?
Treat the platform as the system of record for consent events, and integrate it with your identity provider, customer databases, and data pipelines via APIs or streaming. Map the platform’s consent objects onto your own ConsentArtefact, Purpose, and Recipient tables so downstream systems can continue to rely on familiar schemas while benefitting from centralised consent orchestration.

Does this guide constitute legal advice?
No. This article provides technical design guidance for consent schemas and operations, not legal advice. Always work with qualified legal counsel to interpret DPDP obligations, sectoral rules, and how they apply to your organisation’s specific data processing activities.


Sources

  1. Digital Anumati DPDP Act Consent Management Solution - Digital Anumati
  2. The Digital Personal Data Protection Act, 2023 - Ministry of Electronics and Information Technology, Government of India
  3. Master Direction – Non-Banking Financial Company – Account Aggregator (Reserve Bank) Directions, 2016 - Reserve Bank of India
  4. Data Privacy Vocabulary (DPV) - W3C Data Privacy Vocabularies and Controls Community Group
  5. How should we obtain, record and manage consent? - Information Commissioner’s Office (ICO), UK
  6. Implementing ISO/IEC TS 27560:2023 Consent Records and Receipts for GDPR and DGA - arXiv.org