Blog

Cint Trust Score – Inside the thinking AI brain

Security, safety, and privacy are at the heart of our organization. As the market research industry at large adapts to changes and challenges within the sector, so do we. 

Poor sample quality is one of the biggest problems facing the market research industry today. Our sector is up against organized fraud networks with the capability to devise and deploy highly sophisticated script-fraud technologies that allow them to emulate real people and real devices. If that weren’t enough, the industry is also susceptible to large-scale, high-velocity attacks perpetrated by click farms and automated bots.

Fraudsters are constantly attempting to evade security checks. As such, Cint is committed to developing products that ensure our customers can be safe in the knowledge that their data is as secure as possible.

One of our primary safety resources is the Cint Trust Score, a proprietary machine learning/AI service that proactively predicts when a session may result in a negative reconciliation. Vigilant and adaptive, it is ready to consume new data signals and adjust to changing patterns and conditions.

The model grades survey participants using machine learning and artificial intelligence, ensuring that potentially fraudulent behavior is flagged promptly and the survey session in question is terminated immediately.

The benefits of Trust Score are manifold. As a Cint customer, knowing that your data is high quality gives you increased confidence in your insights, allowing you to make effective business decisions.

That’s all well and good, but how does the Trust Score actually work? We spoke with Jimmy Snyder, VP Trust and Safety, and Alex Namzoff, Principal Product Manager, to get inside the brain of the model that gives Cint the confidence to give our customers confidence.

The man-machine

‘Machine learning’ is an oft-used term in 2024, but what does it actually mean? And how is it applicable to the way the Cint Trust Score model detects potential fraud?

“Imagine that we had a hundred survey entries and ten of them got reversed and 90 of them were successful, as in they completed and they were approved by the buyer,” Namzoff says. “The idea is that we want to find a way to figure out the patterns associated with the ten that were reversed, so in the future we can identify something in a new entry which fits that pattern of behavior and block it, meaning we get a hundred successful events the next time around.”

In essence, as Namzoff puts it, “Machine learning is basically that process of training. It’s feeding the algorithm all of the information so that it can figure out the patterns. The model is essentially the tool that’s used, in the future, to review the current data and see if it matches the metadata of that pattern.”

Snyder puts it even more simply. “Trust Score is our machine learning service. It’s predicting what sessions might result in a negative reconciliation.”
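The training step Namzoff describes — learn the patterns shared by the ten reversed entries so that similar new entries can be blocked — can be sketched as a simple supervised-learning toy. Everything below (the feature names, the data, the scoring scheme) is an illustrative assumption for the sake of the example, not Cint’s actual model or features:

```python
# Toy sketch of training on historical outcomes: count how often each
# (feature, value) pair appears in reversed vs. approved sessions, then
# score new entries by how "reversed-like" their features look.
# All field names and values here are hypothetical, not Cint's.
from collections import Counter

def train(sessions):
    """Build per-class counts from (features, outcome) history."""
    reversed_counts, approved_counts = Counter(), Counter()
    n_reversed = n_approved = 0
    for features, outcome in sessions:
        target = reversed_counts if outcome == "reversed" else approved_counts
        for pair in features.items():
            target[pair] += 1
        if outcome == "reversed":
            n_reversed += 1
        else:
            n_approved += 1
    return reversed_counts, approved_counts, n_reversed, n_approved

def risk_score(model, features):
    """Higher score = more similar to past reversed sessions (0..1)."""
    reversed_counts, approved_counts, n_rev, n_app = model
    score = 0.0
    for pair in features.items():
        # Laplace-smoothed rate of this feature value in each class
        p_rev = (reversed_counts[pair] + 1) / (n_rev + 2)
        p_app = (approved_counts[pair] + 1) / (n_app + 2)
        score += p_rev / (p_rev + p_app)
    return score / max(len(features), 1)

# 100 historical entries: 10 reversed, 90 approved, as in the example.
history = (
    [({"device": "emulator", "speed": "fast"}, "reversed")] * 10
    + [({"device": "phone", "speed": "normal"}, "approved")] * 90
)
model = train(history)
risky = risk_score(model, {"device": "emulator", "speed": "fast"})
normal = risk_score(model, {"device": "phone", "speed": "normal"})
```

A new entry whose features match the reversed pattern scores close to 1, while an ordinary entry scores close to 0 — that gap is what lets a real system flag and block the former while waving the latter through.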

The ins and outs of Cint Trust Score

New models are based on the previous three months of scoring data. “Once a respondent has engaged in a single session, we can perform an evaluation based on a seven-day session history,” says Snyder. 

“Let’s say that today, we trained a model on the last three months’ worth of history. So now we have a process that we can run any new attempt to get into a survey through in order to see if it fits this past pattern,” Namzoff says. “Let’s also say I’m going to take a survey today. We have to figure out what information from me is going to be compared to the past history model pattern that we’ve developed. What we do is grab the last seven days of history from me and compare it to the patterns that we see from that three-month chunk of data.”

Why seven days?

“Seven days is sort of a sweet spot in terms of how much information we need to gather to be able to compare against the model. Any more than that doesn’t give us any better information or more accuracy, and any less gives us too little,” Namzoff says. 

These timeframes reflect the fact that the model’s processing happens in real time, and that the experience for the customer needs to be as seamless and quick as possible.

“So we can’t go and grab the last three months’ worth of history and make a respondent wait for three minutes while we gather the information and run it through the model to check if we can let them in to take the survey,” Namzoff says. “It’s kind of like if you were going to the store and they made you wait for five minutes while they checked your background history.”
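The seven-day window Namzoff describes can be sketched as a small feature-extraction step: at entry time, pull only the respondent’s last week of sessions and summarize them for the trained model, rather than scanning three months of history. The session-log shape and the two summary features below are hypothetical, chosen just to make the windowing concrete:

```python
# Hypothetical sketch of the real-time check: keep only sessions from the
# last seven days and summarize them into a small feature vector that can
# be compared against the trained model quickly.
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)

def recent_features(session_log, now):
    """Summarize a respondent's last seven days of sessions."""
    window = [s for s in session_log if now - s["at"] <= WINDOW]
    if not window:
        return {"sessions_7d": 0, "completion_rate_7d": 0.0}
    completed = sum(1 for s in window if s["completed"])
    return {
        "sessions_7d": len(window),
        "completion_rate_7d": completed / len(window),
    }

now = datetime(2024, 6, 1)
log = [
    {"at": now - timedelta(days=1), "completed": True},
    {"at": now - timedelta(days=3), "completed": False},
    {"at": now - timedelta(days=30), "completed": True},  # outside the window, ignored
]
feats = recent_features(log, now)
```

Because only a week of data is touched per check, the lookup stays fast enough to run while the respondent waits, which is exactly the latency constraint the “store background check” analogy is getting at.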

It is important to note that checks and balances like the Trust Score model are in place to protect customers and survey participants alike, and have been designed, developed and deployed with that in mind.

As Namzoff puts it, “We are constantly balancing the need to fight fraud with the fact that the vast majority of participants coming into surveys are legitimate, good respondents and we don’t want to over-impact them.”

Spy versus Spy

“Fraud has grown significantly in the last five years,” Snyder says. “We have seen organized fraud groups grow, coordinate, and attack our industry at an alarming rate.”

He adds, “It’s grown in the last five years as organized groups are sharing and even monetizing ways to share how you can then make your own money. They’re almost productizing ways to profit from us.”

That growth likely stems from increasingly easy access to the technology that enables fraud of this kind, with products like generative chat tools becoming ever more prevalent in day-to-day digital life.

Fraudsters deploy various methods. One of these is the use of so-called ‘click farms’: the practice of employing (often poorly paid) workers to click through surveys to harvest the rewards on offer.

“Click farms can be massive groups of people, a whole building full of people using VPNs and taking surveys. Or it could be one guy with 85 monitors running a bot and attacking that way. Since we don’t get to see the fraud in live action, we don’t know.”

Whatever attempted infiltration method has been used, one thing our customers, participants and partners can rely on is the knowledge that once the Trust Score model has picked up on a bad actor, they’ll be excluded from Cint’s survey ecosystem. 

You can read more about Cint’s approach to Trust and Safety here.
