So I will go ahead and hand it over to the first panel. It's gonna be moderated by Lisa Cortado. I'm gonna hand it over to Lisa now, and I'll be back for Q&A. Hello, and welcome. I am so excited to be with you here today and to introduce you to this amazing panel of practitioners who are really changing the game in market research. I'm not gonna spend a lot of time reading through everybody's bios; I know you have the links to those. But I am gonna ask each of our participants to introduce themselves and their role, and give you just one of the many examples they each have of how they've been able to combine artificial intelligence and human intelligence to drive meaningful business impact for their organizations. So I'm gonna start us off with Theresa. Thanks, Lisa. I'm Theresa Correa Pavlak, leading insights at Haleon on brand and incubator work, and I've been testing and learning with AI for a little over a year and a half. I think one of the most exciting pieces is being able to enhance messaging opportunities: you feed in what you know about your consumer and get to tonality and linguistic phrasings that seem even more consumer-friendly and approachable. So I really see the integration of the two as an enhancement and an inspiration. Fantastic. We're gonna move now to Tripp, and I love that we have some different industries represented here. Tripp, introduce us to yourself and some of the work you've been doing. Thanks, Lisa. I'm Tripp Hughes. I work with Organic Valley, the CROPP Cooperative. We're a farmer-owned cooperative here in the US, and I lead our consumer strategy and brand strategy work, really trying to get inside the hearts and minds of our consumers. AI, as we all know, is happening so fast. We've had it in the background for a long time; now having it front and center in tool sets is pretty amazing.
I'm gonna talk about a few examples, but our group is starting to dive heavily into Microsoft Copilot, right? The ability to go back and recap meetings and notes when I'm talking with different consultants or different folks I'm working with is crazy, and so is its ability to summarize. So we're doing a lot of summarization with it, but we're also doing a lot of generation for thought space in a really rapid way, and I'll talk about that a little bit too. Fantastic. I'm gonna shift over to another regulated industry. Anna? Hi, everyone. Anna Eslin. I'm a senior director of insights and strategy at Pernod Ricard. We're a spirits and wine company with brands like Jameson and Absolut. We're experimenting with AI constantly, and I'll quickly give you a good example of scaling: product testing. The man-meets-machine here is that we take a small sample of human consumers who give us evaluations of products, and then AI uses that small sample to predict at scale. Essentially, it takes a few different models of product attributes and demographic translation to connect that across countries, and then generates predictions of everything from product liking, to what the polarization risk might be on that product test, to optimizations; it can even go all the way to white-space opportunities for innovation, which we've done in categories such as whiskey. What it's really giving us is cost savings and time savings, across use cases from places where we used to just rely on team judgment, which was thought to be prone to bias, to bigger, more robust product tests, and it's much cheaper and much faster. Wow. Now, Olaf, you're in a really interesting position as a provider of services to clients such as these.
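The approach Anna describes, training on a small human sample and predicting liking at scale, can be sketched very roughly as follows. This is an illustrative sketch only, not Pernod Ricard's actual pipeline: the attribute names, ratings, and the simple nearest-neighbour predictor are all hypothetical stand-ins for whatever models they actually use.

```python
import statistics

# Hypothetical small human sample: (sweetness, smokiness, price_tier) -> mean liking (1-9).
# All numbers are invented for illustration.
human_sample = [
    ((0.8, 0.1, 2), 7.2),
    ((0.2, 0.9, 3), 5.1),
    ((0.5, 0.5, 2), 6.4),
    ((0.9, 0.2, 1), 7.8),
    ((0.1, 0.8, 3), 4.6),
    ((0.6, 0.3, 2), 6.9),
]

def predict(candidate, k=3):
    """Predict liking for an untested product profile from its k nearest
    human-rated neighbours; the neighbour spread serves as a crude
    polarization-risk flag."""
    nearest = sorted(
        human_sample,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], candidate)),
    )[:k]
    scores = [liking for _, liking in nearest]
    return statistics.mean(scores), statistics.stdev(scores)

# Score a new, untested product profile.
liking, polarization = predict((0.7, 0.2, 2))
```

The same loop could then be run over thousands of candidate profiles to scan for white space, which is the scaling benefit Anna points to.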
So you get to see a wide range of applications. Where do you see AI plus HI delivering the strongest impact? Yeah, thanks. My name is Olaf Lenzman. I'm a cofounder of Market Logic and also responsible for product innovation here. We provide our customers with a software system, which of course is AI-enabled and AI-infused, where they can bring together all the insights, research, and knowledge they have in house or procure externally, from third parties, from syndicated providers, etcetera, and then help them navigate it, make sense of it, connect the dots, and apply it to questions and business cases. And as you said, it's really amazing to see how customers apply that and what use cases there are. One example that is very much front and center for me, because we also recently spoke about it, is with our customer Novartis in the pharma space. Using our solution, they've been able to reduce their research spend by more than fifteen million euros last year, by using AI to make sense of what they know already. They've even A/B tested different teams who had similar research questions to answer, like what packaging to use for a certain drug: the traditional way of spending fifty K on a research study and waiting three months, versus literally zero incremental cost using AI to make sense of what you know already and having a robust recommendation in a matter of one or two weeks. That's a huge difference in terms of cost savings, time to insights, and actually agility in the market as well. And on the other hand, those are still the kind of mundane benefits. There is a lot more exciting stuff to come, I believe, when it comes to new ways of applying AI, like Anna said, for example, for innovation topics, and I hope we can touch on that a little bit. Oh, absolutely. I wanna come back to that.
Anna, you were describing how the work is changing. AI hasn't taken the human out of the loop, or taken primary market research out of the loop entirely either; it's been more of an augmentation. As you think about that, in the context of how work happens, how have you seen the role and the skills required of the researcher changing as well? I think one of the things we're all wrestling with is how to go beyond the black box with some of our research partners. And I would argue that's not necessarily a brand-new skill. If you think back to when you first came into marketing research and were learning your vast array of tools, the standard concept testing and product testing and comms testing and so on, you had to dig deep with your vendors to understand them, right? The difference here is that things are changing so rapidly, and people aren't always super transparent about the models they're offering. You have to really understand how to ask the questions, probe the models, and go beyond the sales teams of your vendor partners to the technical teams; that would be my biggest piece of advice to my team. So I think that interrogation has really changed. But it's a skill we all learned as we came into the function, so I would almost say it harkens back to some of that interrogation and curiosity: learning the vast array of what's available so you can experiment and pilot with an eye towards scaling. That would be the other nuance. Especially as a leader, there are a lot of different pilots happening across my team, which is a great thing. So as a leader, it's that eye towards what can scale, what the broad use cases are, who we wanna invest in, and how we make something more systematic and take it beyond pilot. Oh, that's fantastic. And I can see Olaf really leaning into that.
So how would you respond to that, Olaf? How are you seeing your customers, their roles, and their ways of interacting with you change over the time you've been working in this space? Yeah, there are a couple of interesting angles. On the one hand, from the user's perspective on the client side, I believe what will always remain front and center is being a researcher, being deeply immersed in insights. You need to understand people, and that remains the core of it. Now you have different tools to do it. But of course it also requires a bit of curiosity, as you said, Anna, to think about how you can use and deploy those tools to elevate your expertise, get more leverage, and be more productive by delegating to them. That's a challenge we all have, even outside professional life, as we work more with AI: how can you make more of it? On the other hand, an interesting thing we observe is that more and more, we're not only talking with the insights folks on the client side; AI teams come in from the customer and say, oh, we have our corporate AI initiatives, we also drive these things. And of course that is laudable and correct, but then there's this disconnect: most of the time, those folks are not necessarily insights experts, I would say, and they struggle a little to understand the nuance here, what's different, and why it's really worthwhile to look at this in a different way. They tend to put everything into one and the same box and say, we have enterprise ChatGPT; why not just stick everything in there and be done with it? So that also requires a bit of upskilling and better understanding, for example on the side of the IT teams, to wrap their heads around the use cases and the special vertical challenge of insights. Terrific.
Now I'm gonna come back to Tripp. I know when we were talking before the call, you gave lots of examples of how AI is helping the humans work smarter, faster, and better. What do you see as the role of the researcher? What skills do they need more of, and which ones do they maybe need a little less of, in this environment? I think this is such an important question as we think about our current and future talent base in terms of how AI changes the job. Great question. First and foremost, I am not a researcher, right? I came over from the brand management and strategy side and still sit in between, but I realize the power of research to unleash insights; that's how we drive our strategy. I don't have in-house researchers, so I rely on my partners. Knowing who your partner is, developing a relationship, making sure they understand who you are and what your capabilities are is great. I rely heavily on the network I've built through TMRE, asking them questions. You know, Anna and I were on that call before; I need to follow back with her. I wanna hear about what she was experiencing, because that might be something I wanna try and pilot. So that's one of the ways. And it comes so fast. We've been working with Mintel in their concept innovation platform; they've got Leap and Spark now. What used to take my team a month, and then a couple of weeks, now happens with us sitting in a room together, one person at the keyboard, just riffing and coming up with hundreds of ideas that we're able to take instantly. We've now taken that further: if we're doing any type of focus group work, we can move in between focus groups, adapt the innovation concept, and bring it into the next group.
I mean, including with at least rough imaging, and that's crazy, right? That is what we dreamed of. I remember hearing that a big brand was doing that two or three years ago and thinking, oh man, it would be so fantastic if we could harness that, and now we can. Among the other platforms, there's a small group, Net, that we've been working with. Net is kind of a qual shop, but what they've got is a really powerful AI tool in the background that allows me to do these qual interviews, scale them up, and efficiently analyze, summarize, and create quick show reels. That's another example. What I'm getting at is that almost everything we're doing requires our human input, and it requires our human assessment once we get the data, information, or idea back. But the speed at which this is happening is game-changing. I think I've literally doubled the number of projects I'm gonna run this year without any more dollars or time put to it, and that's fascinating. Thank you, and lots of good callouts there. Theresa, coming back to you, I know you had shared that you're doing a lot of test-and-learn, a lot of experimentation, in your role. Is that requiring your researchers to show up with a different mindset or skill set? Well, some of it is the same, but the other piece is getting comfortable with knowing that the tools do evolve and get smarter. For example, an online qual tool that used AI to moderate a year and a half or two years ago may have evolved completely, to where now the AI model is much more sophisticated and knows the consumer much better. So one lesson is that you may have tried something before, and perhaps it wasn't quite there yet, but don't give up on it.
Always circle back; maybe the model has evolved and gotten more precise to where it now is a value-add. The other piece is working iteratively. Certainly there are best practices in innovation, where ultimately you want to get to a viable proposition and its in-market potential, but I think where AI really helps is accelerating that discovery stage and helping you get to a minimum viable product much faster. So another piece is getting comfortable with, as Tripp said, shortening and accelerating that early iterative process by integrating not only the feedback you get from AI but also what you know from consumers, and then determining at what points you need that live touch base with consumers, discerning that versus where AI feedback is enough to get you to the next stage. That's an important concept, I think, all the way across the board with AI and everything happening in our space: the test and learn, the fail fast, the iterate. I've learned the most from some of the errors I've made in the past, but I prefer to learn from other people's mistakes so I can avoid them. So I'm gonna throw this out to whoever wants to jump on it first: could you offer a cautionary tale, or some guidance, on where you learned something important about getting the most out of AI, where maybe it didn't deliver quite what you had hoped, but there was a learning in it? I can start, Lisa. Thanks. First, the big-picture guidance would be to ground in the business case first. The reason I say that is, as Olaf touched on, in bigger companies you increasingly see structures and systems being built to assess the AI strategy and investment for the company.
So I'm increasingly seeing that you almost have to get on that prioritization or road map list if you wanna do really big things. Yes, we're experimenting and piloting with many different research vendors, but if you wanna talk big picture and how your systems work together, you really have to think about the business case and what the cost savings, time savings, etcetera, are gonna look like. The question I always hear is, hey, let's do AI, or let's do something with this AI tool. And it's like, to what purpose, exactly? And supply chain is asking the same. Finance is asking the same. So to get on IT's road map, you really have to be buttoned up with your case. The quick cautionary tale I would offer is that AI is still not great at human emotion and nuance. We've actually done a lot of AI-only pilots, and I think this is where AI plus human intelligence becomes so important. AI alone, I've seen be good at high-level buckets or narrowing, but it will miss some of the diagnostics underneath. It can get you to "is it great, average, or bad," but the stuff underneath isn't very good. It's not very good at some of those nuances of emotion; outliers I've seen in our pilots are that it can't pick up celebrities, it can't pick up humor, it can't pick up the cultural relevance of a timely issue. To Theresa's point, we're still going back now, a couple of years later, to see if it's gotten better. But in a lot of the pilots we've done, the caution is that it will revert to the mean and force an average bell curve. In our normal day-to-day testing, pre-AI, we'd get a lot of stuff in the middle too, which is true for AI as well, but usually you then have enough diagnostics underneath to split and tease stuff apart, and it's not great at that yet. That'd be my caution. Fantastic. I'll chime in on that.
I think that's a great insight. What we've done is forcibly test high, middle, and low to get a series of responses and then aggregate back in, because, to your point, it's gonna cluster around where you ask it. If you're asking something up the middle, you're not gonna get the outliers. So we've purposely been creating outliers to see, okay, what does it look like over here? What does it look like over there? That's a great point to keep in mind. Again, the other piece I always go back to is that the human factor is not gonna go away; it's more critical than ever. I was just talking to a partner of mine who got a piece of work back and said, it feels kind of like the person who did this just asked AI. I'm like, well, ask them. If your gut's telling you that, it's probably true. Make sure you're asking questions and not just accepting what comes back to you. But that's always been what we've had to do, right? Usually it's with a partner, and you're kinda like, really? You got that response? How'd you get that? Thank you. Moving on: you talked about this a bit, Anna, in starting with the business case, which I think is critical. There's gotta be a reason for doing AI, just as there's gotta be a business reason for doing a study in the first place. As you look to determine which AI projects, capabilities, or technologies to bring into your companies, or, Olaf, as you sit in the position of deciding what to develop next, are there certain criteria or a framework you apply to understand what might be a good application for AI versus maybe not such a great one?
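Tripp's forced high/middle/low probing can be sketched in a few lines. This is a hypothetical illustration, not Organic Valley's actual setup: the `ask_model` stub stands in for a real LLM call, and the fixed numbers it returns are invented. The point is the structure, anchoring prompts at deliberately extreme positions and then aggregating, rather than accepting one mean-reverting middle answer.

```python
import statistics

def ask_model(concept, anchor):
    """Stub standing in for an LLM call; a real implementation would send an
    anchored prompt (e.g. 'score this as a skeptic would') to a model API and
    parse a numeric score from the response."""
    simulated = {"low": 3.0, "middle": 6.5, "high": 8.5}  # invented scores
    return simulated[anchor]

def probe(concept):
    """Collect responses forced from low, middle, and high anchor points,
    then aggregate, so the spread between anchors is preserved instead of
    collapsing into a single average bell-curve answer."""
    scores = [ask_model(concept, a) for a in ("low", "middle", "high")]
    return {
        "mean": statistics.mean(scores),
        "spread": max(scores) - min(scores),  # crude polarization signal
    }

result = probe("hypothetical new product concept")
```

A wide `spread` with a middling `mean` is exactly the case Anna warned about: the middle-only answer would have hidden the disagreement underneath.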
Maybe I'll start with you, Olaf, looking at it through the lens of what you wanna create, and then I'll ask our other panelists how they decide what to bring on next. Sure, happy to try and answer that. Of course, our perspective is more of an aggregated view, as we're trying to build products that hopefully many of our customers will wanna leverage. The way we think about it at the moment is that AI finally gives us the opportunity to elevate a lot of the insights much closer to the actual job to be done in the business. What do you actually wanna accomplish? What's the outcome you wanna generate? Not just surface insights and provide insights, but really have in mind and contextualize what it is we're trying to accomplish. Then we think about the problem in this framework: okay, what's the value stream our customers have? What are the tasks that need to be done in there? What are the jobs to be done that, internally, the brand managers and the innovation people have to work on, and how can we best support them? We use that framework to find where the biggest impact is on the demand side and where the best fit is technology-wise, given what we can bring to the table, to drive that forward. Of course, there are other aspects, like: is the AI really up to the job yet? As we heard earlier, it's maybe not always up to the task yet when it comes to the last bit of nuance and interpretation, so that also plays a role. But here too, going back to what we said earlier, we really have to keep in mind that things are changing and improving so quickly. I personally have that experience all the time: you form a perception and a learning on what works and what doesn't, and you come back three months later.
Technology has probably already advanced two steps, and you need to reassess and reshape your expectations. So that's a little bit of the way we approach it: we look at what the business use case really is that we can support, and what technology can credibly do at this point. Thank you for that. So from a client perspective, there are hundreds, if not thousands, of agencies with AI solutions showing up on your doorstep and in your inbox every single day. How do you go about determining what you want to explore deeper? What kind of criteria do you apply? Theresa, do you wanna start us? Yeah. Some of the things I look for are efficiencies, quality, and speed. To your point, there are many agencies integrating AI tools into their platforms. So: are there capabilities such as helping to synthesize and look at learnings across a range of studies that have been done, so it's easy for anyone to go into the tool and understand a consumer or a topic, or, from a survey generation standpoint, to accelerate that process further? Those are some of the criteria. But I also look at other tools. Tripp mentioned Copilot, but there are tools that can help us get to comprehensive strategic analysis a lot faster, looking at our panel and our consumer insight data. Being able to look at the business holistically and lean on some of these tools to help us through that process and simplify it is another criterion I look for. Fantastic. That aligns really closely with what Olaf was saying around what the jobs are that you're trying to do and how this could be a tool. Tripp, anything you would add to that? Yeah.
I guess I'm a bit of a dreamer, perhaps because I'm not a researcher, but to Theresa's point, I have this idea of five years out. Right now, we're thinking about all of these as separate tools. I want that AI-integrated partner that's doing all of that, so that I'm just briefing. It's identifying the tools, identifying the data I've had; it understands who my consumer audiences are, and away we go. And I don't think we're radically far out. Right now it feels like I'm playing with a lot of tools. I would like to get to the spot where I've got a partner, that partner is the anchor, and then I'm able to run. I imagine your inbox is filling up with partner requests. Anna, what would you add to that? I agree, Tripp; I'm a dreamer too. And I guess I've almost been disappointed with some of the standard top-five-ish companies we've all used in market research over the last twenty years, in that they haven't cracked the integration of their different solutions yet. They're all promising it, but I have yet to see a very good tool that will take all the dollars I'm giving them on brand equity tracking and comms testing, or the concept and the product test, the shopper and the consumption work, and bring it together in a very succinct way. I think it is pushing those big vendors to connect the dots, because they're the ones sitting on a lot of that data. Where I've been hunting, to build on that, is the places my team is spending the most time and money; that's where you're gonna get the most efficiencies, and it tends to be those more standard tests. Besides that, a lot of the experimentation we're doing is with more boutique firms, and it's more the discovery phase of a project, or qualitative, or semiotics, or things like that.
I'm taking both the top-down approach, here's where we as a team need to go hunt, and the bottom-up, letting them bring me a bunch of pilots and do a bunch of things to see what works, and that seems to be working well. But I would say the biggest opportunity is with those big partners we're already spending a lot of money with, that already have a lot of our data, and that haven't quite cracked it. That's great. Now I'm gonna bridge that to a question about what's on the horizon, and there's so much there. Tripp says we're not that far from it; probably not. What excites you most about integrating AI and human intelligence in the field of market research? I'm seeing how the consumer shopping journey is being impacted by AI. We're seeing an increased number of consumers using ChatGPT, and there are many AI shopping agents; I think we've seen a lot of publishers talking about this as well. So I'm excited about the opportunity, from an insights and analytics standpoint, to learn and get ahead of that learning so we can bring it back to the business and determine the jobs to be done from a comms or activation perspective, to integrate and weave ourselves into that shopper journey. I think we have a ways to go, as it's still early on, but I'm excited to see how we can impact that from an insights perspective. Fantastic. Tripp, anything you wanna add to your dream of the future? Yeah. Anna, you're talking about the big companies; I think it's gonna be a little company that comes along and solves it, because they're eager and hungry, and pulls it all together. But it's just the speed and scope. We never have enough resources, especially at a company like ours. So to know that I'm really close to doubling what I did a year ago, on probably an even slightly smaller budget, is really heartening.
And so, with this ability to help us understand, my challenge now is continuing to disseminate it across my organization. Before, it was getting one task after another done; now it's, wow, we actually have a lot of information to move out and integrate. So I'm excited for that. Anna, anything you wanna add? You've thrown down the gauntlet for some of the big agencies to bring it together. What else is exciting for you on the horizon? I'm excited to use it to tackle some of these big business questions that just take so much time and so many stakeholders. Scenario planning and forecasting would be one that's on my mind now. As we define leading and lagging indicators for business performance, it's still a pretty complex process in a world with so much data that's changing all the time, and it's super uncertain. Especially in a moment like now, when I'm being asked, what's gonna be the impact of tariffs? My team's looking at it, finance is looking at it in different ways, supply chain's looking at it in a different way. We're still spending a lot of time on what's happening now, versus what do we need to go do, what could happen, what do we need to be prepared for. It was the same during COVID; it was kind of a scramble for all of us to almost manually look at all of our different indicators. And because all that data lives in so many places, I'm excited for further integration, not only within our own house of insights but across some of these other functions, so we can spend more time actually talking about the so-what and now-what. Oh, that's awesome. I saved you for last on this on purpose, Olaf. As chief innovation and product officer, I'm sure there are a lot of things you're excited about, but you've heard these challenges from these client companies. What do you see next? Yeah.
Well, my dream is maybe not so far from some of the other dreams. I think we have now seen a great many applications that can be unlocked with AI, but there's still a lot of fragmentation; it's little individual use cases and tools here and there. To take that further, of course we keep improving those use cases for the specific jobs to be done, but we also now need to create an environment where teams can work together with the AI in a more structured way. I think that's one of the big things missing today. Everybody can go and interact with the tool, but then what? Then you're again fragmented, also on collaboration. So, coming to a moment where the AI becomes kind of a virtual coworker that you can interact with and delegate your tasks and questions to in a more seamless, more holistic way, like you would send somebody a message on Slack or whatever your communication tools are, and it would just participate and interact with you at that level like another colleague, do some things, prepare some stuff for you, and share it back with you. We haven't all really figured out exactly how it needs to work, but there's a way in the future for this true cooperation, this collaboration between the humans in the team and the AI, to become much more seamless and much more fruitful and beneficial, and to overcome this still quite fragmented and isolated way of working we have at the moment. Yeah. We've just gone from synthetic respondents to synthetic partners. My brain is going down a sci-fi path, but I wanna invite Seth back to our conversation. I have to believe there are a lot of questions and responses in the group chat that we can't see from where we're sitting.
And, hopefully, Seth has gone through those and picked out a couple for this panel, because your experience is so rich, and in a lot of different areas, that we've barely scratched the surface in this time. Yeah, what a great panel. Thank you so much. You do have a ton of questions, so let's just dive in. If the speakers have hired lately, have they been requiring candidates to have some experience or knowledge of AI, and has it made a difference in breaking through and getting a job? I'll go quickly. Like I said, I use outside partners. But as a result, I am finding and talking to scrappier, younger people. Let's just say I'm not looking for people like myself. Right, let's leave it there and continue. And by the way, Tripp, you said that you might be a dreamer, and I'm here to let you know that you're not the only one. That's a lyric. Okay, yes it is. It's from "Imagine" by some guy named John Lennon. How can anyone distinguish between AI and human information you acquire... we're gonna skip this one. Question for the panel: curious to know how the panel feels about AI moderation, conversational AI. Have they run projects without a human moderator? Would they consider that, or do they believe the human moderator cannot be replaced? We are actually piloting that right now: purely AI moderation, entirely AI-moderated sessions. We're starting more low-risk, more in a qualitative sense, of course, and on topics where it's adding color, depth, and knowledge to something we already know about. But we're looking at it across multiple projects at the moment. So yes, we're absolutely experimenting with that and seeing where we can scale it. Excellent. And as we have affirmation, we're gonna roll on. Related to the above: how are you monitoring for errors or hallucinations?
I find myself going through cycles where, because of the efficiencies, I trust the tech too much and then start finding errors. So if anyone besides Anna has any insight there, let's do it that way, right? This way we have multiple insights. I think that's a good call, and that's where challenging the learnings, as you would with any other piece of learning, remains the same. So if you're pilot testing on a topic and you already have some insight on it, you have to challenge it to make sure you're covering for those potential hallucinations, as was said. And also, the AI, at least from my standpoint at the moment, is not the final word. It's an input into the process. So there should always be another point in the journey where you can vet and validate that as well. If you all don't mind just saying that AI is not the final word, so that everyone in attendance can absolutely take that as sacrosanct to what we're doing here — at least raise your hand if you agree that AI does not and should not have the final word, no matter what you're doing. I mean, it's right. It's your butt in the seat at the end of the day. And so if you're gonna trust that it does, good luck to you. Good luck. Exactly. Alright. And I can only add — hold on, please. Yeah. I can only add that from our perspective, what we see is that when customers adopt an AI solution, at the outset there's very critical review. Once that phase has passed, everything is believed. So you really need to challenge yourself to do sample checks, always be critical, use your critical thinking, and be the judge. There is exactly this tendency to become overconfident. And you're mentioning the human in the loop, you know, kind of — yeah. Yeah.
It it whether it’s, you know, at this point in the supply chain, this point doesn’t matter as long as the human has adjudicated. Great talk. I’m curious how much validation you’ve done. This is overly simplistic, but if you ask AI a question, it gives an answer. How often have you run AI generated insights in parallel with rigorous professionals research based insights, and how did those results align? And somewhat related, when don’t you use AI? So there’s three or four questions there. I can give you a quick story. Like, yes. We have absolutely done man versus machine. The results were what I nodded to that will give you some good high level answers, but not enough diagnostics. We are actually, though, two years later, rerunning a similar pilot on more of a global scale. So to Teresa’s point earlier of things are getting better, you know, we’re testing it again to see. So it’s a kind of a constant experimentation. Okay. And this last one is for you, Anna, and then we’re gonna move on. Hi, Anna. What is Productive’s strategy in product innovation with AI? We understand you might not be able to share a ton here. RTDs are leading the trend of the market. How do you identify the white space opportunity with data and I AI? What can you share? Yeah. We have like, what I was talking about earlier on product testing, we are using human plus AI, you know, models to be able to understand what RTD flavors products to launch next. That was absolutely something we’re doing. And how do we two or three years? I totally understood. As as far as testing, this said strategy, so I just wanted to make sure that we have yeah. Yeah. I do it. Yeah. Yeah. Product is probably the best example, but strategy still requires human input, I would say. Yeah. Alright. Fair enough. And so does a panel like this. Lisa, fantastic job moderating. Thank you so much. Olaf, Theresa, Tripp, Anna, thank you so much for your participation. 
What a great way to get us kicked off here on day two of TMRE@Home.
For this TMRE@Home panel, we discuss how to balance artificial and human intelligence in the consumer insights field, joined by experts at Haleon, Pernod Ricard, Organic Valley, and Organon.
Panel Discussion:
The insights industry is evolving at an extraordinary rate, and we are at the dawn of a new era. As Artificial Intelligence accelerates at unprecedented speed and continues to integrate with Human Intelligence, market researchers are tasked with reimagining how research is done. This panel will explore how combining AI+HI boosts efficiency and effectiveness – and the impact this can have on your business and consumer. The participants will also share their visions for the future state of AI+HI and what it could mean for MR as we know it.
