Q&A with Steven Millman, Global Head of Research & Data Science, Dynata

22nd March 2024
Neutronian Q&A Series

Thanks for stopping by! This mini-interview series features industry thought leaders who share their perspectives, opinions, and predictions. Today’s interview is with Steven Millman, currently the Global Head of Research & Data Science at Dynata. Steven is also an award-winning researcher, a Chair for the Advertising Research Foundation (ARF), and a valued advisor to Neutronian.

Steven recently published a piece, titled De-Risking the Use of Artificial Intelligence (GENAI) and Large Language Models (LLM) Without Stifling Innovation, for an ARF research initiative. This interview draws on a few excerpts from that publication.

Q: How can organizations use generative AI and large language models while adhering to principles of data privacy?

A: A big piece of this is training. Ensure that staff understand and commit to not entering sensitive or confidential information into any external LLM, just as they would not share such information externally in other contexts. As part of this process, clearly articulate to staff which types of information present risk and make sure they understand why those types must not be used. Examples include an organization’s proprietary data, client information, and partner data to which you do not hold proprietary rights. Always involve corporate legal counsel to ensure practices remain in regulatory compliance.

In addition to training, there needs to be oversight of how staff use these tools. Employees should have appropriate supervision when using GenAI/LLMs, and junior team members in particular should have their use of these tools overseen by more senior staff. In all cases, use of these tools in an organization’s work products should be disclosed to group leadership, both for risk abatement and for organizational learning. All uses of LLMs and GenAI tied to product development or client delivery should be described and submitted to the relevant team to avoid duplication of effort.

Q: We know generative AI and LLMs have limitations – can you share a little about what you know about the limitations, and do you have an example of the consequences of ignoring the limitations?

A: These technologies can be effective tools, but they are not authoritative and do not understand the concept of facts. It is vital that the outputs of LLMs be critically evaluated by individuals with relevant expertise. For example, GenAI/LLMs are known to have limitations that include, but are not limited to:

  • They lack an understanding of context
  • They make factual errors with great confidence that are not always obvious (i.e., hallucinations)
  • They are not natively good at math, especially multi-step problems
  • Their training sets are not current and can be a year or more out of date
  • They may produce highly toxic and biased results, especially regarding minority groups who are not well represented in the training sets

“One doesn’t need to look far for examples of GenAI/LLMs’ limitations, but the most well-known recent one is Google’s Gemini LLM producing a racially diverse image of the Founding Fathers and, separately, an image of German soldiers in Nazi uniforms that included a Black woman.”

Q: What are some good safeguards for using generative AI/LLMs?

A: Staff should thoroughly document any insights, conclusions, or recommendations generated by GenAI/LLMs and identify where these were used in the work products to which they contributed. Including the prompt that was used to elicit the result, as well as the tool used, is also good practice. This makes the decision-making process traceable, facilitates transparency, and allows staff to reproduce or troubleshoot results more effectively.
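As a concrete illustration of that documentation practice, here is a minimal sketch of how a team might log each GenAI/LLM use alongside its work product. The function, field names, and file path below are illustrative assumptions for this post, not a standard that Steven or Dynata prescribes.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log helper: field names and the log path are
# illustrative assumptions, not a prescribed standard.
def log_genai_usage(tool: str, prompt: str, output_summary: str,
                    work_product: str, reviewer: str,
                    log_path: str = "genai_usage_log.jsonl") -> None:
    """Append one GenAI/LLM usage record to a JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                       # which GenAI/LLM was used
        "prompt": prompt,                   # the prompt that elicited the result
        "output_summary": output_summary,   # brief description of the output
        "work_product": work_product,       # deliverable the output fed into
        "reviewer": reviewer,               # senior staffer who reviewed the use
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage (all values are hypothetical):
log_genai_usage(
    tool="(example LLM)",
    prompt="Summarize survey open-ends about ad recall.",
    output_summary="Three-theme summary; verified against raw responses.",
    work_product="Q1 client readout deck",
    reviewer="J. Doe",
)
```

A simple append-only log like this captures the prompt, the tool, and the human reviewer in one place, which supports the traceability and transparency goals described above.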

Ongoing review and testing are vital for both the consistency and the integrity of the results. Human review is essential to this process. In addition, continuous review of the emerging legal landscape is also necessary.

GenAI and LLMs are exciting and can drive innovation and efficiency, but they need to be managed properly to mitigate risk.

We hope you enjoyed this post, and we look forward to sharing more perspectives, opinions, and discussions with other industry thought leaders. If you missed the previous interviews in this series, you can access them below.

Arielle Garcia, Founder, ASG Solutions, shares privacy-related challenges she’s seeing in the industry as well as her opinion about the industry’s readiness for the looming cookie apocalypse. 

Lynn Tornabene, CMO, Anteriad, discusses challenges she’s seeing in the industry and Anteriad’s data-driven, tech-enabled, and growth-obsessed mindset.
