The Market Research Industry Isn’t Focusing Enough on Data Ethics: How to Solve It


In recent years, data privacy has become a prominent issue across the globe as more and more countries pass legislation to protect their citizens from privacy breaches.

Italy’s data protection authority warned OpenAI that its platform was violating the EU’s data protection rules, and the U.S. has moved to ban TikTok unless its parent company, ByteDance, sells the app.

There is a growing concern regarding the use of personal data by tech companies.

This raises important questions: Are market research companies adapting to the changing times and recognizing both the value of and the risks associated with handling consumers’ personal information? And what more can be done to ensure that personal data is sourced and used ethically?

The Rise of AI 

There is no escaping AI.

Businesses are eager to integrate it into their everyday practices, and rightfully so: it can streamline routine tasks and leave more room for creativity in the workplace.

Despite the excitement around companies adopting AI, the privacy of clients and consumers should be at the forefront of all innovation. AI algorithms are still in their early stages and are trained on the data we provide in order to improve their performance.

According to a Cint data privacy survey conducted in January 2024, 58% of UK respondents expressed concern about how their information is used to train AI services. Clients are concerned, and addressing that concern should be the top priority for industries looking to evolve with AI.

The Risks

Market research companies are using AI to analyze audiences quickly and to generate synthetic responses where data is lacking.

What used to be a tedious and time-consuming process can now be done in seconds and checked for accuracy by humans. As impressive as this is, it’s critical to keep in mind that uploading customer data sets to external AI services without proper protections is a breach of data ethics.
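
To make “proper protections” concrete, here is a minimal, purely illustrative sketch (not a description of Cint’s or any vendor’s actual pipeline) of one basic safeguard: removing direct identifiers and pseudonymizing respondent IDs before any survey records leave your environment for an external AI service. The field names, the salt, and the pseudonymize helper are assumptions introduced for this example.

```python
import hashlib

# Hypothetical example: strip direct identifiers and pseudonymize the
# respondent ID before sharing survey records with an external AI service.
# Field names and the salt below are assumptions for illustration only.

DIRECT_IDENTIFIERS = {"name", "email", "phone", "ip_address"}
SALT = "replace-with-a-secret-salt"  # keep out of source control

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and replace the respondent ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["respondent_id"] = hashlib.sha256(
        (SALT + str(record["respondent_id"])).encode()
    ).hexdigest()[:16]
    return cleaned

if __name__ == "__main__":
    raw = {
        "respondent_id": 10234,
        "name": "Jane Doe",
        "email": "jane@example.com",
        "age_band": "25-34",
        "answer": "I mostly shop online.",
    }
    print(pseudonymize(raw))  # only age_band, answer, and a hashed ID remain
```

Pseudonymization alone will not satisfy every regulation or contract, and what counts as adequate protection depends on the jurisdiction and the data involved; the point is simply that some review of what is being uploaded, and in what form, should happen before any data set reaches an external service.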

It can be easy to get swept up in the excitement of this innovation, but as market researchers and data collectors, we are responsible for keeping client safety in mind.

AI is not perfect. The information it provides can often be biased.

Different phrasings of the same question can elicit different responses, making outputs inconsistent and unreliable. And depending on how a model was trained, it can misrepresent the thoughts, beliefs, and behaviors of specific demographics. When using AI for synthetic paneling in market research, it is important to keep this in mind before providing AI-generated findings to a client.

Before diving into a new AI product, get answers to the following questions: How was the algorithm trained? How will our data be used once it is uploaded?

Solutions

Data protection officers currently face the task of balancing data protection with learning how to incorporate AI into their daily activities.

At Cint, we make it an ongoing priority to monitor various laws and cases around the world, read specialized literature, and share what we learn with other professionals. 

With new technologies emerging rapidly, carefully weigh the risks and benefits before introducing them into your day-to-day work.

Market research organizations need to take the extra step of understanding the technology being integrated into their tech stack, and of educating their employees on it, before it is used for client work. Also remember: it’s always best to be transparent with clients and employees about how you use and store data.

In light of the varying laws across states and countries, companies must ensure compliance with the regulations in the jurisdictions where they operate.

The European Union recently adopted the AI Act, which has set a precedent as the first legal framework specifically designed to regulate the use of artificial intelligence within the region.

This Act is a prime example of how laws can adapt and evolve in response to new technologies, providing a structured approach to ensuring responsible and ethical AI practices. The General Data Protection Regulation (GDPR) is also an excellent resource for staying informed about the laws and best practices around new technologies, updated cookie banners, and cross-border data privacy.

When it comes to AI, clear parameters need to be set on how it may be used in client work. Make it a point to be on the same page as your employees by stressing the responsibility to protect the company’s and clients’ online safety. To ensure that all employees understand these policies and guidelines, Cint has reworked its Authorized Use Policy for AI tools and made it a mandatory part of training for all employees.

AI has great potential to arm the industry with superpowers: to work more efficiently and to create more valuable insights and recommendations for its customers.

However, it also comes with risks, so it is important to do your own research and evaluate where AI makes the most sense in your company. With new technology being introduced every day, we must learn both how to adopt it and how to adapt the rules.