Thank you very much, and good morning. Yes, it is not only me; I'm very happy to be joined today by Louise for this session. Louise, do you want to introduce yourself? Yeah. Hi. I'm Louise Hitchen, and I head up the digital research solutions team at MMR Research. Excellent, thank you. What we want to talk to you about in the next twenty-odd minutes is, first of all, a brief presentation about who we are, what we're doing, and what our journey on generative AI has been as a tool and software provider. Then Louise will take us through what MMR has been doing, how they are approaching AI, what their experiences are, and what their further plans are on that front. We'll conclude with a future outlook of where we think the world is going, where we are going, and how things will evolve. So, starting with the generative AI journey that we at Market Logic have been on: first, the inevitable introduction chart. We are a software-as-a-service company. We provide a platform that allows our customers to capture all the knowledge and insights they have, be it in-house, secondary or other paid sources, or public data, bring it all together, and then allow either AI or experts in the insights function to work with those data and ultimately inform business stakeholders, to activate the insights, bring them to life, and use them in decisions. But we want to talk specifically about the generative AI journey we have been on, and we started it quite a while ago, actually well in advance of the ChatGPT moment, which was a nice coincidence in how the world evolved. We had already started thinking early, when the GPT-3 models came out, about how we could apply this technology to those insights use cases.
And not only because AI and generative AI are fascinating, but because we really believe this is now an enabler that allows us to bring those insights to life in a completely different way compared to what technology previously made possible. So we set out to explore how to do it. We worked with lead customers, and it took us one and a half years to get to a product, which we finally launched early last year, and of course we then had further iterations, further learnings, and enhancements along the way to provide more capabilities for our customers. And of course you could ask: Gen AI for insights, why wouldn't I just take all my reports, stick them into ChatGPT, ask my question, and be done with it? What sets us apart, I would say, is first of all that our product, which we call DeepSights, is specifically trained for insights use cases, and I'll say a little more in a moment about what that means. We also bring out-of-the-box connections to the insights ecosystem: if you have your secondary providers, of course you want that information on the platform, and, well, good luck getting that into ChatGPT. What we do is also secure, enterprise-grade, and trustworthy. And trust, of course, is a big topic in these applications generally. Therefore, as we applied AI to these use cases, our initial focus was trust. If you, as a stakeholder, go to a system, ask a question, and get information, you need to be sure you can trust what you get out: that the answer is correct, that it's grounded in the actual facts and evidence, and that it doesn't miss relevant evidence, because that might just as well be a major error or omission. So we invested a lot of effort and time into getting that to work, and I don't want to go into too much detail.
We invented, if I may call it that, or developed some proprietary mechanisms that go beyond what you would do with a normal ChatGPT-style interaction. We start at the core with the reports and documents, with the knowledge and insights that our customers have, and then we use AI to extract the key findings, learnings, and insights in those documents. Oh, unfortunately, this slide is somewhat broken; I should have checked it this morning. Anyway: starting with reports, we extract what the findings are. Then, as people ask their questions, we pull out what seems to be important from your knowledge estate, but we don't just rely on what we find there. We have developed what we call our deep evidence analysis, a layer where we fine-tune AI together with lead customers to really make sure the information we pick is applicable to the question. And that's really important here; there's a big difference from other knowledge management applications, because in insights use cases it's all about being able to rely on a piece of evidence to be not only accurate but also current. Whatever was the most brilliant insight eighteen months ago might be completely irrelevant today. These aspects all need to be taken into account before we provide answers to our customers. And then we have a system that allows very natural interactions: stakeholders can go in, ask their questions, and get answers generated from the whole knowledge estate, and can even pull out more complex ad hoc reports that go across all the sources, integrate all the information, and generate detailed insights, always keeping the references, the citations, and the transparency about where everything comes from. That's the AI part of what we do, and that's what we wanted to talk about today.
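The flow described here, extracted findings, an evidence check for both relevance and recency, then a cited answer, can be pictured with a minimal sketch. Everything below (the `Finding` type, the word-overlap scorer, the 540-day freshness window) is illustrative only, not Market Logic's actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    text: str
    source: str      # the report the finding was extracted from (the citation)
    published: date  # used for the recency check

def relevance(question: str, finding: Finding) -> float:
    """Toy scorer: word overlap stands in for a fine-tuned evidence model."""
    q = set(question.lower().split())
    f = set(finding.text.lower().split())
    return len(q & f) / max(len(q), 1)

def answer_evidence(question: str, findings: list[Finding],
                    today: date, max_age_days: int = 540) -> list[Finding]:
    """Keep only current, relevant findings, best first, citations attached."""
    fresh = [f for f in findings if (today - f.published).days <= max_age_days]
    scored = [(relevance(question, f), f) for f in fresh]
    return [f for score, f in sorted(scored, key=lambda s: s[0], reverse=True)
            if score > 0]
```

The point of the sketch is the shape of the pipeline: stale evidence is filtered before scoring, and every returned finding still carries its source, so the answer stays traceable.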
But of course not everything is AI. The human element is still absolutely crucial, so we also have tools in our software universe to help experts bring their expertise to life, to curate content, to publish and target content, and to keep their stakeholders up to date with their specific knowledge and insights. But having said all that, I want to hand it over to Louise to talk us through how MMR approaches this.

Thanks very much, Olaf. So, yeah, first a little introduction to MMR and myself. MMR was founded about thirty-five years ago with the idea of bringing together consumer and sensory research. Since then, as you can see, the MMR family has grown as a collective of brands to truly champion product experience, and we bring together the right skill set at the relevant stage of the NPD cycle. We work with some of the biggest FMCG and CPG brands in the world. MMR alone is a growing team of five hundred people across nine different offices worldwide, and we continue to grow. It was this growth that really, I think, initially led us to explore a partnership with Olaf and his team. We're also fortunate to have quite strong support from the SLT for internal innovation. We're in quite a unique position at MMR because we have an internal tech innovation team called Nova, and they really focus on identifying and experimenting with different technologies. We still believe that the human is at the heart of everything we do, but they really do try to drive innovation for our clients, exploring different ways to generate insight. And they definitely believe it's not just tech for tech's sake, so we have quite a rigorous evaluation process that I think Olaf and his team thank us for, but hopefully it makes for a better partnership. So they're platform agnostic.
We explore the market and test different things out, and really, anything we deploy internally needs to have tangible value. I head up the digital team within MMR, and we focus on implementation and ongoing usage of these different platforms. We ensure adoption and that we're getting maximum benefit out of the partnerships. Alongside client research needs, Nova focus on innovation opportunities for internal ways of working, and this is where we really started to explore a Gen AI knowledge solution. So what were we looking for? First of all, accessibility: we wanted all teams across the business to be able to expand their knowledge. Second, we wanted an efficient solution to free up the back and forth that was essentially being spent answering the same questions over and over again. Thirdly, we wanted something quick, as near instant as possible. And as Olaf touched on when describing how DeepSights was developed, we wanted to ensure relevancy: with the combination of our team growth and evolving methodologies, we needed to make sure the information returned was relevant and effectively one source of truth. Linked to that, accuracy, and surfacing information in an unbiased way, was super important to us. And although our initial interest in a Gen AI knowledge solution was internal, we also wanted to partner with a solution that could grow with us as we explore different use cases. So why did we choose to partner with Market Logic? As Olaf talked about, and as I think Vanessa from Google very beautifully illustrated yesterday, the importance of consumers' data privacy, the security of our information, and its confidentiality are paramount. As I have said, that is very important to Market Logic as well, so we're aligned on that. The second is integration.
A lot of the partners we explored wanted us to migrate everything across from our own systems, and that really wasn't what we wanted to do. We wanted a solution that fitted in with existing apps and tools and really connected with things people are already using. Thirdly, as I've mentioned, the fact that Market Logic and DeepSights are really focused on the insights industry was super important to us. It's trained on similar data, the models are really relevant, and they understand our needs as a research business, which makes for a great partnership. So how's it going, and what impact has it had within the business? We are still in the early stages: as Olaf mentioned, DeepSights launched officially last year, and we've been partnering with them since the end of last year. But we're seeing some really positive initial feedback. As I said, humans are still at the heart of everything we do at MMR, but we're seeing DeepSights act as a really useful team member. I think its ability to concisely summarize a lot of content, and particularly to cite sources for further human investigation, is really where we're seeing it have the most benefit. In terms of mechanics, there are two main channels. The business is mainly using it through the Teams integration; that's how our research teams are used to interacting. We wanted to mimic that conversational behavior, so in the same way you might have popped a question to a team member, you can have that direct conversation with DeepSights. The other way it's being used is to generate more in-depth reports. They're thematically structured, so you can get a bit more depth on a particular topic, and that's created in about thirty seconds; I did another trial yesterday.
It was, yeah, a brilliant report on a particular question I had. I wanted to show a couple of specific examples of the way people are using it internally. The first is, as I think Sandeep mentioned yesterday with virtual influencers: DeepSights doesn't sleep. Hopefully, your teams do. DeepSights is always on, so whether you're writing a proposal slide or burning the midnight oil, you can get an instant answer; you don't have to wait for your team. It also facilitates access to a wider pool of learning, particularly as MMR has grown quite rapidly recently. We've got lots of different teams across different brands, and it means your pool of inspiration is much wider: you're not just asking the same people and circulating the same information within those teams. It really broadens your horizons and has a lot of benefits for cross-team learning. And the third is, I think, what Jenny from Babbel mentioned yesterday about democratizing access to knowledge and insight: we've found it particularly useful for onboarding new employees. Even if you've got a role shift within a company as large as ours, it can be useful as a refresher on certain topics. I recently came back from maternity leave, and, as I think the other Jenny from Catalyx said, weaning a baby is definitely like trying to feed a combative octopus. So when I came back from mat leave, there were a few things; we know how many acronyms we have in this industry, so even just a refresher on some of those was really helpful. And DeepSights doesn't judge a question. There's no such thing as a silly question, literally, and no one ever needs to know that you didn't know the answer, which is great. I think this particular quote sums it up quite nicely and reflects the sentiment within MMR and how we're using it.
"DeepSights is an AI-powered search engine on steroids." Yeah, paints quite the picture. I wanted to share a little about what we've learned from deploying DeepSights and about the enablement so far, including some ideas for how we're planning to continue improving usage and implementation within MMR. First, having a solid foundation of content is really key. It has highlighted some knowledge gaps and inconsistencies in our own materials, which is something we can address to continue improving the results it returns. Secondly, starting small with traction. We've spoken a lot about how to implement AI within the research cycle, so start small and have a hook for teams so they can really see its usefulness and find it memorable; even if it's just one use case that sticks with an individual, that's great. Third, make it easy. As I mentioned, one of the selection criteria was having it integrate into our own workflows, which has been hugely beneficial: if it seamlessly fits into the ways people are already working, you're setting yourself up for success with adoption and usage. The fourth thing is being consistent with the messaging. In my team we often talk about saying it thrice before it sticks, so keep taking feedback on board, but be consistent about the message and the goal you're trying to achieve with any new implementation. And on that, as a conclusion, we want to take a look into MMR's and Market Logic's crystal balls for the future of qual and the future goals we both have. Within MMR, we do see the future of qual research as AI-enabled.
The human is still at the heart of everything we do, but we're continuously leveraging technology in incremental ways to really enhance the work we deliver for clients and our internal ways of working. And really, the way we're doing this is by enhancing our knowledge, our creativity, and our connection with consumers. The first pillar is synthesize, and this is one of the ways DeepSights fits into our goals. This is naturally where Gen AI has exceptional potential. We've seen that already with DeepSights fueling desk research, and Marissa mentioned social listening; Boolean query writing is definitely an art form, I think. So we're seeing the opportunity in synthesizing a lot of data, but we're also seeing the ability for AI to feed into the analysis of primary qual data. Our approach is, again, still very much human-centered, but we can remove language barriers with machine translation through DeepL, we can pull together content from a variety of different sources that otherwise would have been more difficult to manage, and, we've found, we can use AI to pull apart segment differences more easily. So we're finding ways it can enable the human at the center of the analysis to focus their efforts on the interpretation, the storytelling, and the execution side of things. Under the create pillar: we were chatting to Olaf earlier, and we thought this maybe should be changed to generate rather than create, but semantics. We're seeing AI used really powerfully here as a starting point for discussions, whether that's creating more pointed stimulus to put in front of consumers in research or fueling internal conversations for hypothesis generation.
You can put creative prompts into ChatGPT, and it surfaces some interesting things you might not have thought of, which can be starting points for thought pieces and for fueling conversations. From a visualization point of view, we're seeing really effective use of AI, combining tools like Midjourney and DALL·E 3 with our internal creative expertise. We've found that really effective even within groups: if you're co-creating, say, a pack, you can feed some prompts in and generate images in real time based on what consumers have said, which enables them to better articulate their thoughts when they give feedback on something you've put in front of them. It's quite rewarding for them to visualize something they've actually suggested and created. So it's really about creating that engaging and dynamic environment within a group situation. And from a connection point of view, we've been deploying AI in hugely powerful ways to connect with consumers. In particular, we've developed our own internal capabilities with conversational AI, and chatbots specifically, both qualitatively and quantitatively. For now, our focus, and I think a couple of people have mentioned this, Marissa from an LLM perspective and Olaf in terms of training the models, is on embedding our own proprietary IP in training those models. We're building a sensory chatbot at the moment, doing a lot of model training to really decode the entire sensory product journey. It's a lot of work, but we're seeing really promising results. And importantly, we're enabling that depth, but at scale.
So that's an overview of where we see MMR's focus on AI-enabled research, as I think we were calling it in our roundtable yesterday. I'll pass over to Olaf to share Market Logic's future journey.

Thank you very much, Louise, for those very insightful glimpses of how you use this. If I now flip the lens back to our side: how do we see AI, and specifically our product platform DeepSights, evolving? There are a couple of major areas we focus on. One, of course, is the insights ecosystem I talked about. We already have a lot, but all the value of this lives and dies with the data that's in there, so we are committed to broadening the base. For example, we already have a couple of integrations with consumer video platforms, where we can pull in video transcripts and so on, but this is something we are still building out and expanding to make sure everything is in the platform and accessible to the AI, and of course to the human as well. On top of that, and maybe more interestingly, a next step will be not only to let you ask questions and get back "here's everything we know; you can be sure it's trusted, complete, and easy to consume," but also to give the researcher a tool, a workbench, that is AI-enabled, to do the work of identifying new findings and insights and working through the content you have. And not only the content of one specific transcript: at the same time you can leverage the fact that all the other knowledge is available on the platform, so you can contrast and compare, and weave all these insights together into a more holistic point of view. Another very important aspect, which we call AI-ified best practices, is that today AI is to a large extent a black box.
You throw something in, something comes out, which may be great, but you don't know exactly how it was done, and usually you don't get it in exactly the format you want. Being experts, you all have your best practices and your methods; you know how you want something done. But currently it's very difficult, if not impossible, to map that to how the AI does the job. So one big focus area is to enable the customer to bring their best practices into the AI: to instruct the AI how certain things shall be done, what sources to use, how to look at information, what steps to go through, and ultimately what format and structure to come back with for the end result, so that it really becomes your way of working that the AI executes, and not just a black box. And finally, we think that bridging the gap to the creative process is very promising. We already have a couple of customers who use Gen AI to create, for example, concepts, and there we are also integrated to feed those concept generators with insights from the whole knowledge estate, or to help guide the creation process. A very intriguing aspect we're working on is to then help evaluate the creative by deconstructing it with the AI and mapping it against everything we already know from our customers: things we tried previously, things that worked or didn't work in the same or different contexts. Again, of course, not to replace the human, but rather to speed up, to get to the seventy percent solution much quicker, and then focus all the expert brain on getting the last thirty percent right. These are the main areas we see evolving in the shorter and mid term. Of course, nobody knows what the world will look like three years from now, but we'll see. And with that, we want to conclude the session.
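One way to picture "bringing your best practices into the AI" is a declarative playbook that captures the method (which sources, which steps, which output structure) and is rendered into explicit model instructions. The playbook name, fields, and wording below are purely illustrative assumptions, not the DeepSights mechanism:

```python
# A "playbook" captures a team's method: which sources to use, the steps
# to follow, and the output structure, so a run is no longer a black box.
playbook = {
    "name": "concept-screen-summary",
    "sources": ["internal reports", "tracker data"],  # illustrative labels
    "steps": [
        "collect evidence on the concept's category",
        "contrast current vs. prior waves",
        "flag gaps where no evidence exists",
    ],
    "output": ["Key takeaway", "Supporting evidence", "Open questions"],
}

def render_instructions(pb: dict) -> str:
    """Turn the playbook into explicit instructions for a model prompt."""
    lines = [f"Follow the '{pb['name']}' method.",
             "Use only these sources: " + ", ".join(pb["sources"]) + "."]
    lines += [f"Step {i}: {s}" for i, s in enumerate(pb["steps"], 1)]
    lines.append("Structure the answer with sections: "
                 + "; ".join(pb["output"]) + ".")
    return "\n".join(lines)
```

Because the method lives in data rather than in an opaque prompt, the same playbook can be inspected, versioned, and reused, which is the opposite of the black box the passage describes.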
Of course, also a little call to action before you file out for the coffee machines after the session: we're happy to demonstrate what we can do, and happy to share a free trial with you so you can convince yourself that trusted AI can be trusted; this is something you absolutely have to work through for yourself. The one encouragement I want to make: whenever you evaluate an AI solution, make sure you really drill down to the last detail to understand that what you get is really what you should get, and that it's truthful and trustworthy, because that is not so easy, and I have seen many customer evaluations fail to do that. I really encourage you to be very diligent whenever you evaluate AI. But that was what we wanted to share; with that, we conclude and, I think, take some questions.

Thank you so much. Very complex and interesting topic, I have to say. So I bet there are many questions. Okay.

Hi, PJ from Bose again. As an agency, I imagine you sometimes have clients who work in the same industry or are competitors with each other. You might work with one company one year and another company two years later. Do you have any steps in place for their knowledge not to contaminate each other, especially if you're legally obliged to keep it separate?

Yeah, it's a great question. This is where we started small, using it just internally at the moment on our own methodologies and things like that, as a kind of toolkit search function. Olaf can potentially speak to the way DeepSights is developing tools to ring-fence client-confidential material and, as you say, not learn across clients. That's where training the models matters, and Market Logic and DeepSights particularly understand the needs of an insights business, specifically the requirement to keep things separate and have those walls in between.
So, Patrick, you can speak. Yeah. Of course, segregation of data is a big topic, not only for customers as a whole, so to speak, but also in these cases. That's what we support, for example, with very fine-grained access control, so you can partition the data universes and make sure that only those pieces of the information cake can be accessed, according to whatever your policies and needs are.

That's a good question. It's something we discussed a lot internally before we chose a partner.

Hi, Isabelle here. Thanks for your very interesting presentation. I have a question about the solid foundation you mentioned is needed. Coming from a rather smaller company with a smaller repository: how small is too small for this to work?

I can definitely speak for MMR. Obviously we do have a large repository of methodologies and things like that, but we've found that even if you've just got a small number of documents on a topic, it's still very good at summarizing and giving those little nuggets and concise insights. But yeah.

Yeah. I mean, technically, of course, the lower limit is one document. But practically, for it to make sense, you would need a document corpus big enough that your team couldn't hold it all top of mind. There is really no requirement for big data of any sort here.

Okay. Again, Paula Marino from Ferrero. Before my question: you are saying that you are using it so far just internally, so you are piloting, not yet with clients? Yeah, exactly. Okay, then my question is not relevant anymore. This was a yes/no question; we shouldn't ask those questions.

Hi, Vicky from BAT. I guess my question is not just for Market Logic and MMR.
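The partitioning idea Olaf describes, fine-grained access control over "data universes," can be sketched very simply: every document carries a partition label, and a role only ever retrieves from the partitions it is granted. The partition names, roles, and document list below are invented for illustration and are not Market Logic's actual API:

```python
# Each document belongs to exactly one partition (e.g. one per client).
DOCS = [
    {"id": 1, "partition": "client-a", "title": "Client A brand tracker"},
    {"id": 2, "partition": "client-b", "title": "Client B pack test"},
    {"id": 3, "partition": "internal", "title": "MMR methodology guide"},
]

# Each role is granted a set of partitions it may read.
GRANTS = {
    "client-a-team": {"client-a", "internal"},
    "client-b-team": {"client-b", "internal"},
}

def visible_docs(role: str) -> list[dict]:
    """Return only the slice of the knowledge estate this role may see."""
    allowed = GRANTS.get(role, set())
    return [d for d in DOCS if d["partition"] in allowed]
```

The key property is that the filter runs before any retrieval or generation step, so material from one client's partition can never contaminate answers produced for another.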
For all the people who are doing AI and NLP: at some point, there's some proprietary intelligence you put in there to make the output digestible and relevant, right? I wonder if there will be an industry-wide acceptable truth, because everything seems to be great right now, you know, because it's all new. But in a year or so, what is the acceptable truth? Is there anything defined to be the industry standard? I think it's a question for everybody who's doing AI and NLP.

So are you thinking of an industry standard in terms of judging whether quality is really good? Yeah; fifty percent good would be good, right? You know, because it's all new.

Yeah, that's an excellent question, and we invest, well, spend a lot of time on it. On the one hand, because there's this wide gulf between what a user perceives and what the actual reality is. What we see with all these AI responses is that users mostly look at: does it sound plausible? Does it vibe with what I expect? And then they take it for granted. But that is not necessarily highly correlated with whether what you get is actually factually correct. So yes, we have done a lot of things internally, but we're also working, for example, on what we call a benchmark setup, which we could then use to formally report how well certain challenges are solved by the tool. That, of course, still leaves you to judge whether seventy-five percent is good or not, but at least there's an objective benchmark, not only PowerPoint slides claiming we do everything right, but something that can be traced and tracked. It is a complex topic, though, because there are many complex aspects to measuring this.

Yeah. I have a real question now. Steve from Ferrero.
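The benchmark setup Olaf mentions can be pictured as a set of challenges, each pairing a question with gold facts the answer must contain and facts it must not fabricate. The grading below uses naive substring matching as a stand-in for a proper judging model, and all item contents are invented examples:

```python
def score_answer(answer: str, must_contain: list[str],
                 must_not: list[str]) -> float:
    """Fraction of required facts present; any fabricated fact fails the item."""
    low = answer.lower()
    violations = sum(1 for fact in must_not if fact.lower() in low)
    if violations:
        return 0.0
    hits = sum(1 for fact in must_contain if fact.lower() in low)
    return hits / len(must_contain)

def run_benchmark(items: list[dict], answers: list[str]) -> float:
    """Average item score: an objective number to track across releases."""
    scores = [score_answer(a, it["must_contain"], it["must_not"])
              for it, a in zip(items, answers)]
    return sum(scores) / len(scores)
```

This separates "sounds plausible" from "is grounded": an answer that reads well but cites a fact from the wrong study scores zero, which is exactly the gap between user perception and factual correctness the passage describes.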
What do you think is the main benefit for the end client using AI? I mean cost, speed, depth; what is the main focus for us?

So for our end clients, looking across those pillars, efficiency is definitely one, whether that comes with a cost saving or not, or whether it just allows you to focus more on better storytelling output that makes more of an impact, or speed to delivery. We were chatting yesterday in the roundtable about the demands for quicker turnaround and ever shorter timelines. So where we're certainly seeing the benefits is enabling the humans in the loop to focus on delivering to clients versus some of those inefficiencies you might have had waiting for transcripts or collating lots of information, and that enables us to produce better deliverables and outputs that make more impact with clients.

Yeah, and if I may complement that with one remark. We see the efficiency exactly as you say, and we also have some case studies on how much you can really save by just asking a question instead of going out and trying to collect the data yourself, which I think we found to be on the order of fifteen minutes or so per question. But we also see a big benefit in effectiveness, in that this lowers the hurdles and barriers for people to use insights: instead of having to go to more arcane, complex systems to uncover documents they then have to read, or call colleagues and wait for a response, they can very easily just ask a question. And therefore we see higher adoption and higher usage of insights. Something that is, of course, harder to quantify, but we believe it's very important.

Yvonne again. We've all tried these tools, and we noticed that the type of prompt will generate a different kind of answer.
And you mentioned that those large language models have, yeah, specifics for insights, and it's a closed system. So I was wondering whether at MMR, for instance, you have developed programs to train your researchers on the right prompts or certain types of prompts, and how you approach prompting with the use of DeepSights.

Yeah, it's a really good question. My team, as I mentioned, is involved in the implementation of lots of different platforms, and that central center of excellence is where we can really develop that knowledge and then share it within the business. So for the example I mentioned about visualization of consumer feedback within a focus group: yes, you do need to have thought about how you prompt the AI to generate that image. We've got best practices we've learned from doing that many times, and we can then share that knowledge, whether it's example prompts or queries for social listening, within the team. Where we're also developing IP and training the models is, as I mentioned, the sensory chatbot, which is a really good example. That is a lengthy process of trialing and training, and the human is doing that. And in order to develop your proprietary tools, I think that's where we really see the value for clients: developing that kind of IP. Alright, thank you so much. Thank you.
The market research industry stands to be among the most transformed by new generative AI technologies. In our presentation at Qual360 EU, we showcased how AI for market research and insights is currently being deployed, and looked at the attitudes and perspectives that researchers and business stakeholders alike hold towards these advancements.
Key session takeaways:
1. Challenges and opportunities presented by using generative AI for insights
2. How AI will change the roles and responsibilities of market research professionals
3. Current use cases and deployments for AI in the market research sphere
