Responsible AI Data Architecture: Embedding GDPR and PII Compliance into MLOps Pipelines at Enterprise Scale
Keywords:
Artificial Intelligence; Data Pipelines; Data Discovery; Data Protection; Data Security; Data Governance; Enterprise Architecture; MLOps; Engineering; People; Processes; Technology; Policy; Regulation; GDPR; Personally Identifiable Information; Compliance; Responsible AI; Privacy; Auditing; Notifications; PII; PCI; CCPA; PIPEDA; ISO; IEC; ML; Machine Learning; Enterprise MLOps; Development; Operations; Implementation; Design; Management; Automation; Research; Modelling; Testing; Training; Auditability; Explainability; Visualisation; Monitoring; Risk Management; Assessment; Legal; Requirements; Procedures; Guarantee; Cloud; Standards; CIF; CII; CIA

Abstract
Data retention, usage, and protection requirements are often at odds with the speed, agility, and scale of machine learning operations (MLOps) systems, exposing organizations to risks, penalties, and security breaches. To resolve this contradiction in the context of enterprise-scale MLOps at a global financial services institution, a data architecture design is proposed that embeds General Data Protection Regulation (GDPR) and personally identifiable information (PII) compliance into the design and governance of data pipelines and of the data used in model training and inference. The recommendations are grounded in GDPR principles and requirements, as well as PII management best practices. Closely integrated with the data lifecycle and backed by clearly assigned accountability, these recommendations enable compliance mandates to become a natural by-product of MLOps, guardrails for ethical production AI, and ultimately a competitive differentiator. AI is increasingly used in critical social functions—in finance, healthcare, law enforcement, and the provision of essential services—often with significant real-time consequences for individuals. Misuse, malfunction, and bias in these systems can therefore have catastrophic consequences, and these concerns have spurred the introduction of regulations and proposed laws that demand adherence to foundational principles of transparency, equity, accountability, and social benefit. With the United Nations projecting that at least 3 billion people will be covered by AI regulations by 2024, it is essential that organizations adopt these principles consistently and at scale in the training and operation of machine-learned models.
License
Copyright (c) 2026 Canadian Journal of Marketing Research

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

