In one of my earlier posts, I wrote about shaping a TDM strategy using DAMA-DMBOK. It made me realise how much of test data management is really about structure and ownership in large organisations—not just masking scripts or tools. Since then, I've been reading more about data privacy, and it has given me a new angle on how privacy actually plays out when we deal with test data.
So here’s a post—not from a trainer’s view, but from someone trying to make TDM work while also doing it responsibly in an organisation.
Just Because It’s Masked Doesn’t Mean It’s Private…
Let’s be honest—most TDM setups start with masking and end with “job completed.” We hide names, change account numbers, scramble emails, and assume we’re safe. But reading about how privacy risks go beyond exposure, into inference and misuse, made me look at masking differently.
Sometimes, you can still figure things out from what’s left behind. A date pattern, a transaction trend, or linked references across tables—all of that can still reveal things even if names are gone.
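To make that concrete, here is a minimal sketch of the kind of linkage attack the paragraph above describes. Everything in it is made up for illustration: the tables, the names, and the columns are hypothetical, but the mechanic, joining a "masked" table to outside knowledge on quasi-identifiers like date and amount, is the real risk.

```python
# Hypothetical illustration: a "masked" transactions table still leaks
# identity through quasi-identifiers (date + amount), which can be joined
# against an external dataset an attacker already has.

masked_txns = [
    {"cust_id": "X1", "date": "2024-03-01", "amount": 1999.00},
    {"cust_id": "X2", "date": "2024-03-01", "amount": 45.50},
]

# Outside knowledge (a receipt, a social-media post, another leak)
external = [
    {"name": "Priya", "date": "2024-03-01", "amount": 1999.00},
]

# Re-identify by matching on the quasi-identifier pair (date, amount)
relinked = [
    (m["cust_id"], e["name"])
    for m in masked_txns
    for e in external
    if (m["date"], m["amount"]) == (e["date"], e["amount"])
]
print(relinked)  # the masked ID "X1" is re-linked to "Priya"
```

The names were scrambled, yet one distinctive transaction was enough to undo the masking. That is why masking alone is not a finish line.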
As Daniel Solove puts it in his Taxonomy of Privacy, privacy violations can happen through activities like information processing, dissemination, or invasion, not just disclosure. That stuck with me, because in TDM, we often move data around, share it, transform it—thinking we’ve protected it—when we might have just moved the risk elsewhere.
Where TDM Quietly Breaks Privacy Rules
Most orgs don’t intentionally break privacy principles. But TDM moves fast. One day you’re refreshing UAT, the next day you’re pushing masked data into SIT and nobody remembers where the source was or how long it’s been sitting there.
The Fair Information Practice Principles (FIPPs) remind us of key ideas like:
Purpose Specification – Data should only be used for the purpose for which it was collected.
Data Minimization – Only collect or retain what’s needed.
Accountability – There must be someone responsible for how that data is handled.
Now, in real-life TDM, we copy everything “just in case QA needs it.” We keep it forever because no one knows who owns cleanup. And access is often granted based on whoever shouts the loudest.
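One way to picture the gap between the FIPPs and day-to-day TDM is a simple gate that every refresh request has to pass. This is only a sketch: the request shape, the allowed purposes, and the "minimal column set" are all assumptions standing in for whatever policy an organisation actually has.

```python
from dataclasses import dataclass, field

# Sketch of a policy gate for a test-data refresh request, mirroring the
# FIPPs ideas above: purpose specification, data minimization, accountability.
# All names and policy values here are hypothetical.

@dataclass
class RefreshRequest:
    source: str
    target: str
    purpose: str                                   # why QA needs this data
    columns: list = field(default_factory=list)    # what actually moves
    owner: str = ""                                # who owns cleanup

ALLOWED_PURPOSES = {"uat_regression", "sit_integration"}   # assumed policy
NEEDED_COLUMNS = {"order_id", "status", "amount"}          # assumed minimal set

def gate(req: RefreshRequest) -> list:
    """Return a list of policy issues; empty means the refresh may proceed."""
    issues = []
    if req.purpose not in ALLOWED_PURPOSES:
        issues.append("purpose not specified/approved")
    extra = set(req.columns) - NEEDED_COLUMNS
    if extra:
        issues.append(f"minimization: drop {sorted(extra)}")
    if not req.owner:
        issues.append("no accountable owner for cleanup")
    return issues

req = RefreshRequest("PROD", "SIT", "sit_integration",
                     columns=["order_id", "status", "amount", "email"],
                     owner="")
print(gate(req))  # flags the extra 'email' column and the missing owner
```

The point is not the code, it is that each FIPP turns into a question you can ask before data moves, instead of after.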
What I Took Away from CIPT So Far
Reading CIPT didn’t give me all the answers, but it did give me better questions. Now, when planning TDM:
I think about purpose before pushing data across environments.
I double-check access rights, not just masking logic.
I try to minimise what moves around, not just scramble it.
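The third point above, minimising what moves rather than just scrambling it, can be as simple as projecting rows onto an explicit allow-list before anything is copied to a lower environment. The field names below are invented for illustration.

```python
# Minimal sketch of "minimise what moves": project each row onto an
# explicit allow-list instead of copying full production rows downstream.
# Table contents and field names are hypothetical.

ALLOW = ("order_id", "status")

def minimise(rows, allow=ALLOW):
    """Keep only allow-listed fields from each row."""
    return [{k: r[k] for k in allow if k in r} for r in rows]

prod_rows = [
    {"order_id": 1, "status": "SHIPPED",
     "email": "a@example.com", "national_id": "REDACTED-IN-PROD-COPY"},
]
print(minimise(prod_rows))  # [{'order_id': 1, 'status': 'SHIPPED'}]
```

Fields that never leave the source never need masking, retention tracking, or access reviews in the target environment at all.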
The privacy engineering material in Chapter 2 drove a point home: TDM isn’t just about hiding data. It’s about designing the process to avoid problems in the first place. It’s slower, yes—but more solid.
One line that stayed with me from the book:
“Privacy risk is not limited to what data is collected, but includes how it is processed, transferred, stored, and shared.”
That’s the TDM challenge right there.
Wrapping Up
TDM is where data privacy gets tested in real time. Not on a whiteboard, but in deployments, refreshes, and approvals. And it’s where small changes—like thinking about why we carry certain data forward—can make a big difference.
I’ll keep digging into the CIPT topics as I go, and try to map what fits into our day-to-day TDM practices. Hopefully, we’ll find more ways to make test data useful and private.
More on that soon…