Privacy

Differential Privacy

Implement data-level privacy at every stage

At A Glance

  • AI tools find and extract any sensitive data
  • Ensure tokenization and privacy at the data level
  • Derive insight, abstracted from individual customer data

Problem

Big data analytics today requires working with potentially identifiable data that could be used to re-identify customers, so anonymization is a must. But how do you protect behavioral data so that it can’t be tied back to an individual customer, while still deriving insight at scale and at speed?

What we solve

  • Individual customers no longer tied to raw personally-identifiable data
  • Reduce customer concerns about tracking
  • Move away from rigid, rules-based approaches

With this feature

  • Ensure sensitive data categories are stripped before processing
  • Create ‘digital twins’ that allow anonymized behavioral analysis
  • Privacy clusters and randomized data prevent identification

What is Differential Privacy?

Making sure privacy is embedded at the data level throughout the business


Ensuring privacy at the data level requires multiple steps. To protect the rights of customers, anything sensitive must be extracted before data can be used. It’s also essential that data is tokenized in such a way that customers cannot be identified. Here’s how we do this. 

Ensuring anonymization

The first step is tokenization, so the data does not include personal identifiers. This isn’t just about securing information in the event of a data breach; it also assures your team that privacy is controlled from the start.
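To make this concrete, here is a minimal illustrative sketch of identifier tokenization using a keyed hash over direct identifiers. The field names, key handling and helper function are assumptions made for illustration, not Intent HQ’s actual implementation.

    import hashlib
    import hmac
    import secrets

    # Hypothetical secret key, held separately from the analytics environment.
    SECRET_KEY = secrets.token_bytes(32)

    def tokenize(identifier: str) -> str:
        # Replace a direct identifier (e.g. phone number, email) with an
        # irreversible keyed-hash token.
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"msisdn": "+15551234567", "email": "alice@example.com", "visits": 42}

    # Personal identifiers become opaque tokens; behavioral fields are kept for
    # the later privacy steps (extraction, noise, clustering).
    tokenized = {"customer_token": tokenize(record["msisdn"]), "visits": record["visits"]}
    print(tokenized)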

When preparing data for processing in a privacy-secure way, we ensure all personal data is extracted before any analytics work is carried out. This is vital to ensure that protected categories of data, such as sexuality and ethnicity, are not inadvertently included.
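As a simplified illustration of this extraction step (the field names and category lists below are assumptions for the sketch, not the production schema), personal and protected-category fields can be dropped before any record reaches the analytics layer:

    # Illustrative only: field names and category lists are assumed for this sketch.
    DIRECT_IDENTIFIERS = {"name", "msisdn", "email", "address"}
    PROTECTED_CATEGORIES = {"ethnicity", "sexuality", "religion", "health"}

    def strip_personal_data(record: dict) -> dict:
        # Remove direct identifiers and protected-category fields before analytics.
        blocked = DIRECT_IDENTIFIERS | PROTECTED_CATEGORIES
        return {key: value for key, value in record.items() if key not in blocked}

    raw = {
        "name": "Alice",
        "email": "alice@example.com",
        "ethnicity": "redacted-before-analytics",
        "top_interest": "tennis",
    }
    print(strip_personal_data(raw))  # {'top_interest': 'tennis'}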

We use our proprietary Intent HQ graph structure to find associations between data points and identify potentially sensitive data that must be removed. Because this uses AI rather than a static, rules-based approach, it is customizable and achieves a far more nuanced and inclusive view of the customer than simple blocklists and allowlists can.

Reassuring customers their rights are being respected

This is especially vital when it comes to handling behavioral data, such as which websites someone visits. This data is hugely valuable because it gives a wealth of insight into a person’s interests: it can tell a company which sports team they support, the music they enjoy, or how they like to spend their free time. But if not handled correctly, it can be used to track someone’s activities in a way that most customers will naturally find highly uncomfortable.

We solve this by transforming all of this data into a format that is unreadable by humans and impossible to link back to an individual customer. We then inject random noise by substituting some URLs with random data, and finally aggregate the data into clusters of similar sites, breaking the link to raw URLs and deriving insights from the clusters instead. The raw data can then be discarded.
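A minimal sketch of the noise-and-clustering idea follows; the cluster map, noise rate, and function names are assumptions made for illustration rather than a description of the production pipeline.

    import random

    # Hypothetical mapping from visited domains to interest clusters; in practice
    # this would be far richer and derived from data rather than hard-coded.
    CLUSTERS = {
        "espn.com": "sports",
        "nba.com": "sports",
        "pitchfork.com": "music",
        "bandcamp.com": "music",
    }

    NOISE_RATE = 0.1  # assumed probability of swapping a real URL for a random one

    def add_noise(urls):
        # Substitute a fraction of URLs with randomly drawn domains.
        pool = list(CLUSTERS)
        return [random.choice(pool) if random.random() < NOISE_RATE else url for url in urls]

    def to_clusters(urls):
        # Aggregate noisy URLs into interest clusters; raw URLs can then be discarded.
        counts = {}
        for url in urls:
            cluster = CLUSTERS.get(url)
            if cluster:
                counts[cluster] = counts.get(cluster, 0) + 1
        return counts

    visits = ["espn.com", "nba.com", "pitchfork.com"]
    print(to_clusters(add_noise(visits)))  # e.g. {'sports': 2, 'music': 1}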

The direct link between an individual customer and their data is therefore broken; instead, telcos work with an anonymized ‘digital twin’ that is not tied to the real customer and cannot be traced back. This delivers privacy at the data level, which is vital for ethical marketing and for building customer trust and loyalty.

Want to see what Intent HQ can do for your business?

Contact Us