Fergal Reid is the Chief AI Officer at Intercom. He is the key leader responsible for driving the company's shift toward generative AI, and specifically for the development of Fin, their autonomous customer service agent. His role focuses on the engineering and strategic decisions needed to deploy reliable, high-scale AI products.

295 Audio.mp3: Audio automatically transcribed by Sonix

Eye_on_AI_295_-_Audio.mp3

FERGAL: With a lot of AI, while we've built this very deep product targeted at customer service, we have made it customizable, and then customers will start to stretch it, because it can just do a lot. It can deliver a lot of value. We don't really want our customers, you know, writing individual prompts and trying to get down into the machine learning, because, you know, we're able to go and test and optimize across thousands of businesses, and that just helps us get better quality than you get by targeting any one.

CRAIG: Build the future of multi-agent software with AGNTCY. That's A-G-N-T-C-Y. Now an open source Linux Foundation project, AGNTCY is building the Internet of Agents, a collaborative layer where AI agents can discover, connect, and work across any framework. All the pieces engineers need to deploy multi-agent systems now belong to everyone who builds on AGNTCY, including robust identity and access management that ensures every agent is authenticated and trusted before interacting. AGNTCY also provides open, standardized tools for agent discovery, seamless protocols for agent to agent communication, and modular components for scalable workflows. Collaborate with developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and more than 75 other supporting companies to build next generation AI infrastructure together. AGNTCY is dropping code, specs, and services, no strings attached. Visit AGNTCY.org to contribute. That's A-G-N-T-C-Y dot O-R-G.

FERGAL: So yeah. So my name is Fergal Reid, and I'm Chief AI Officer here at Intercom. I've been with Intercom about eight years. I have a technical background, with a PhD in machine learning applied to network analysis, and degrees in computer science and maths, and, um, I've been working in AI and machine learning for a long time now. I joined Intercom about eight years ago to start building, really, the machine learning function in Intercom. So yep, that's me, okay.

CRAIG: And we're talking about agents today, and specifically Intercom's agent. Uh, is it an agent platform, or is it an agent?

FERGAL: Yeah, there's sort of a blurry line between those two things. Uh, we would describe Fin as more of an agent. Um, however, it's a highly configurable agent. You can really adapt it to do a lot of things for your business. We're still a little skeptical about very pure agent platforms; they tend not to deliver the quality that's needed today to accomplish any one task end to end. So we've been a bit more opinionated than that. We've built an agent product, which is Fin, but we've done a lot of work to make it very highly configurable and customizable. But we don't really want our customers, you know, writing individual prompts and trying to get down into the machine learning, because, you know, we're able to go and test and optimize Fin across thousands of businesses, and that just helps us get better quality than you get by targeting any one. So yeah.

CRAIG: And Intercom, what was Intercom doing before it launched Fin?

FERGAL: Intercom is traditionally a SaaS company. It's been around, you know, 12, 13 years, something like that, and really launched the business messenger way back when. It's one of these messengers that appeared in the little corner of a website, and you could talk to people. And Intercom was one of the first, if not the first, to build a product like that, which then kind of went everywhere, and Intercom evolved from that into a customer support platform. So, you know, people at the other end of that messenger, customer support agents answering customer support questions at scale, giving really good customer service. Intercom's mission traditionally was, you know, make internet business personal, really give great product experiences on the internet, like the highly personal service you might have gotten previously when you went into a coffee shop. So that's kind of what we did traditionally. And we were an early adopter of AI. We had a previous generation product before Fin, Resolution Bot, that we put a lot of work into to answer customer questions, and we were really well positioned when the current wave of AI came to take advantage of that.

CRAIG: Uh, and you're squarely in the customer service space, is that right?

FERGAL: That's where we are today. I think, um, you know, our future aspirations are broader than that. But today, definitely, the product we have, Fin, is targeted at doing customer service, and does a really great job of it. Now, customers do stretch that. Fin can do a lot. It's very powerful; the product has a lot of functionality. So we do see customers stretching it, where they buy Fin primarily for customer service, but then they start doing things like, you know, upselling or customer success. So, you know, it's definitely an area in which our product is being stretched and pulled broader by customers. We have some customers who use Fin today entirely in a pre-sales way. So, you know, I think with a lot of AI, while we've built this very deep product targeted at customer service, we have made it customizable, and then customers will start to stretch it, because it can just do a lot. It can deliver a lot of value for them. So yeah, that's roughly where we are.

CRAIG: And the agentic capabilities. What can Fin do? And is the interface through chat or voice? I mean, is it text or voice, or what?

FERGAL: Yeah. So Fin is a pretty fully featured product at this point, uh, with over $50 million of annual recurring revenue, and, um, it's used by thousands of businesses. I think we've got more customers than any of the other sophisticated customer service agents. And the product, we've really built out the product; we've been doing this for over two years at this point. And so, you know, we have a very full offering. We built Fin just for the Messenger initially. Then it would run across things like WhatsApp and all the different channels. But then we built Fin for email. We had to build a different architecture there to really deliver very high quality performance. We were doing that 18 months ago. And over the last year, we've built a cutting edge version of Fin that uses voice, uses the latest sequence to sequence voice models, that's now running at scale in production. There are loads of customers now, loads of end users, who are talking to Fin over a voice interface, and it's magical. It feels amazing to, you know, hear a phone conversation where someone is talking to Fin, and it's just answering their questions. And so, yeah, the product is getting pretty broad, and it's getting a lot of capability.

CRAIG: And as I said, the agentic capabilities. I mean, Fin. Can you give us a use case where it's not simply question and answer, where it's actively doing something?

FERGAL: Yeah.

FERGAL: I mean, it does a lot of things. Um, and, you know, obviously, I wouldn't say just question answering. Answering questions is a huge part of customer service, and, you know, we've invested a lot in building the best question answering piece, and there's a big difference between agentic capabilities in that space. But yeah, Fin can do lots more than that. So we have customers who have it do things like, you know, issue a refund, handle returns. Um, and, you know, we've built a very rich and deep product to enable it to do that: you can hook it up to your external systems, you can hook it up to your APIs, you can give it the parameters around that. And we've built a very rich, almost SaaS style product to enable you to specify and thread together the complexity of the procedures that your business needs in order to do something like actually issue a refund. I'd say every time we talk to a customer, they're like, oh, I know my refund policy, it's really simple. Then, two weeks later, when you actually finally get them to write down the refund policy on paper, it's this big sprawling thing with all these edge cases. We've had to build the functionality to do that, and to enable people to use LLMs in those real production use cases. So yeah.

CRAIG: And on refunds, because that's a pretty complicated use case. Uh, Fin actually then, um, pushes the button to release money, or wire money, or what?

FERGAL: Yeah, it absolutely does. We absolutely have customers who have deployed this in consequential areas, areas where, if it gets it wrong, their business will lose money, their end user will be extremely disappointed. We absolutely have customers who are using Fin in those ways. And, you know, it's a lot of work to do that. It's a whole different ballgame when you start trying to do things like that. You have to convince the security team: what if a customer tries to jailbreak this? What if a customer tries to refund or return a different customer's product? You have a whole different level of complexity you have to deal with there. You have to have an API you can call; you have to set up the parameters the API works within. And we've done a lot of work to try and make that easy. And we have, I don't know, at least five years, maybe a lot longer, of investment in this space. You know, we were building low code API integrations for customer service bots back before the current wave of LLMs. And what we've done now is integrate the next generation AI, LLM technology, with all those battle tested, secure, and easy to set up interfaces from before. And of course we support MCP. When we saw MCP coming, we were like, wow, this is amazing, this is a potential huge unlock. And so I think we're in the first wave of people to really support MCP.
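The safety concern Fergal raises, a customer trying to refund or return a different customer's product, amounts to putting a deterministic guard between the agent and any money-moving API. The sketch below illustrates that idea only; the `Order` type, function names, and policy limits are hypothetical assumptions, not Intercom's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    customer_id: str
    amount: float
    days_since_purchase: int

# Illustrative policy limits; a real deployment would load these from
# the business's own configured refund procedure.
MAX_AUTO_REFUND = 500.0
RETURN_WINDOW_DAYS = 14

def can_refund(order: Order, requester_id: str) -> tuple[bool, str]:
    """Guard an agent-initiated refund: verify ownership and policy
    deterministically before any money-moving API is called."""
    if order.customer_id != requester_id:
        # Blocks the cross-customer case, even if the LLM was tricked.
        return False, "requester does not own this order"
    if order.days_since_purchase > RETURN_WINDOW_DAYS:
        return False, "outside return window"
    if order.amount > MAX_AUTO_REFUND:
        # Large amounts escalate to a human instead of auto-refunding.
        return False, "amount requires human approval"
    return True, "ok"
```

The point of the design is that a jailbroken prompt cannot bypass the guard, because the check runs in ordinary code, outside the model.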

CRAIG: Yeah. And that's Model Context Protocol, that allows, uh, models to talk to each other, or agents to talk to models. Is that right? Um, and on the customer side, do you focus on a particular vertical, or is this any customer service in any industry?

FERGAL: No, we don't focus on a particular vertical. And, you know, we have lots of verticals that the product is very successful in. It's very successful in financial services. It's very successful in software as a service. And, you know, areas where people want to give really great customer service, where they're willing to pay for really great customer service, where it's worth it, where they want to use a very high quality product, Fin does really well in. But we don't constrain ourselves to a particular vertical, and we think it's probably a mistake to do that. And, you know, the reason why is that, building a product that's horizontal like this, there's so much commonality across the different use cases. The AI layer, you know, you can invest deeply in the AI layer. We've built custom models; we've done a great deal of testing and optimization. And so we really have this product that's unified. And then we have thousands of customers running in production, and we're always improving it based on how well it does for those customers. And so our standardized product really has outperformed vertical specific competitors who have a much smaller volume. We really think that there are these really deep scaling and experimentation laws in AI. You really want a standardized product envelope, a standardized product core, that can be optimized and tested en masse across a very large number of customers, but then deeply customized for the individual business. And that's really what we've built. So we're not in a world where we're telling customers to go and prompt the agent. Instead, we've built a core that's standardized, that we have continually made better over time, but with a customization layer that can adapt it to a specific business.

CRAIG: And most of the customer service solutions that I've talked to work in tandem with a call center or a BPO. Does Fin do that as well?

FERGAL: Actually, I mean, you can deploy it that way, and we have customers that deploy it in every configuration. You know, it's a very mature product on a very mature platform; we've been in this space for a long time. Um, but I would say Fin is more likely to compete with BPOs. The typical pattern we see is, um, a customer of ours will deploy it, and they will often end a contract they had with a business process outsourcer that used to do maybe their tier one or their front line support. Even a year ago, when the product was much less mature, we were talking to customers who were like, yes, I've deployed Fin, and I have ended my relationship with my BPO. I still have my technical support team. I still have my own internal support team. But the BPO to which I was outsourcing the simple queries is just gone, and Fin entirely does that, you know, more reliably and more cost effectively. And that was even a year, year and a half ago. Fin's really, really pushed up the quality level since then, and a lot of people's internal tier one support is now doing something else.

CRAIG: And so what happens when a customer hits a wall that Fin can't resolve? Is it, uh, routed to your internal customer support?

FERGAL: Yeah, exactly. So, you know.

CRAIG: That is, to the customer's internal customer support.

FERGAL: To the customer's internal customer support. Absolutely. Yeah. Again, we've been at this a long time. Probably eight years ago, we were building Resolution Bot with the previous generation tech. And while the core engine is completely different, and the modern AI is next generation and works much better, we were solving all those problems around escalation then: how do you gracefully hand off to a human? And so we've got great solutions for that that are configurable, and we have almost a decade's worth of technology investment in there. But again, we completely rebuilt; we built Fin completely from scratch. We were out the door on GPT-4 launch day with a new GPT-4 powered product. And so, you know, it's an entirely new architecture, and I think we're on generation four of it internally now. But, uh, it's connected to all those old battle tested systems that we've had for a very long time. And so escalation, yeah, we've had great escalation, handover to humans, for a long time. Very recently, we shipped a custom AI model, trained ourselves, that's great at detecting exactly when to escalate, when not to escalate, and when to offer an escalation to the end user. So we've even built custom AI technology in there to get particularly good at that. That's a problem we care deeply about.
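The three-way escalation decision Fergal describes (escalate, offer an escalation, or let the agent continue) can be sketched as thresholds over a classifier's escalation score. In his telling, that score comes from their custom model; the thresholds, names, and values here are purely illustrative assumptions.

```python
def escalation_action(score: float,
                      escalate_at: float = 0.8,
                      offer_at: float = 0.5) -> str:
    """Map a model's escalation score in [0, 1] to one of three actions."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= escalate_at:
        return "escalate"          # hand off to a human immediately
    if score >= offer_at:
        return "offer_escalation"  # let the end user choose
    return "continue"              # the agent keeps handling it
```

Keeping the thresholds configurable per business is one way such a system could expose escalation policy without exposing the underlying model.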

CRAIG: And I'm curious, uh, is there some metric about what percentage of calls end up being escalated? And is it lower with Fin than with other solutions?

FERGAL: Yeah. So we have, um, a core metric that we care the most about, which we call resolution rate. It's essentially the percentage of times, when Fin is involved in a conversation, that it successfully resolves the conversation, versus when you have to escalate to a human. And that for us is about 65, 66% at the moment. That's our North Star metric; we care deeply about it. It was about 35% when we launched Fin. And we have this team, my team, with about 50 people in it, that cares deeply about constantly moving that metric up over time. We're constantly doing A/B tests. We're constantly taking any new change to the AI of Fin, running it at scale in production, hundreds of thousands of users on each version, and then checking to see: does it increase the resolution rate? And over time, through that scientific process, we've managed to get that resolution rate up and up and up. So it's now around 65%, and we're still working on getting it up more. We're incredibly proud of that. We think that's best in class. Anytime a customer runs a head to head between us and one of our competitors, the customer always tells us our resolution rate is higher, and they choose us. Occasionally we even test specific competitors, and we've always won those tests. And so, yeah, we're really proud of that. Um, and, like, everybody says this, and we just always tell our customers, please just do an A/B test, do a trial, and see how you get on. And I think, you know, we're gathering momentum at the moment in the market as, you know, legitimately being the one with the highest rate.
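The experiment loop Fergal describes, comparing resolution rate between two variants at hundreds of thousands of conversations each, is essentially a two-proportion comparison. A minimal sketch, with made-up numbers and no relation to Intercom's actual experimentation stack:

```python
import math

def resolution_rate(resolved: int, total: int) -> float:
    """Share of Fin-involved conversations resolved without a human."""
    return resolved / total

def two_proportion_z(res_a: int, n_a: int, res_b: int, n_b: int) -> float:
    """z statistic for the difference in resolution rate between two
    variants, using the pooled-proportion standard error."""
    p_a, p_b = res_a / n_a, res_b / n_b
    p_pool = (res_a + res_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: variant B resolves 66% vs 65% for A.
# At 100k conversations per arm, even a one-point gap is significant.
z = two_proportion_z(65_000, 100_000, 66_000, 100_000)
```

This is why the scale he mentions matters: a one percentage point improvement is invisible in a few hundred conversations, but decisive at six figures per arm.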

CRAIG: And when you say it's highly configurable, what do you mean by that? And when you onboard a new customer in an industry that maybe you're not as prevalent in, what kind of data do they have to provide to optimize the solution?

FERGAL: Yeah.

FERGAL: So I guess, you know, like all these questions, it's a nuanced question. The first thing I will say is, you know, at Intercom, traditionally, we believe a lot in self-serve. We really like to build products rather than services. You know, with a product, you can maintain it yourself, you can change it yourself, and as we make it better, it gets better for all our customers. A lot of other people are attacking this via a services approach. They're, you know, putting engineers in and custom engineering it, and then the product doesn't get better over time, which isn't great. So, you know, look, we really built this in a productized way. And we believe in productizing it for everybody, and then increasing the quality over time for everybody. And so, you know, initially, on day one of Fin, you could set it live. And we had customers that set it live with no intervention, no onboarding, no information from Intercom at all. They would just go. They would configure Fin in our easy to use product, and it would be live, and it would do a certain resolution rate for them. And, you know, over time, we've had loads of customers that have done that. However, we have learned, as we have gone up market and as the product has matured, that there's an element of business transformation to this that our customers need. So now we have built the muscle, really over the last six months, the last nine months, we've really built the muscle that we have to have teams of people. So we have an R&D services org.

FERGAL: We've also built a professional services org whose job it is to help customers get successful. So if you're an enterprise customer, or an upper mid market customer, and you need a lot of help, you maybe need to negotiate and talk to your security team to do these deeper procedures. Maybe you have a voice deployment, and you need to get that voice deployment live at very big scale. You need that help. You need that services team. So we've been forced to build that muscle. Probably 9 or 12 months ago, we didn't have a great muscle for that. Some of our competitors were starting to do it. Some of our competitors were starting to make headway against us, because they were going in and doing it by hand, in an unsustainable way, probably; they were getting their core teams to build a product for each customer. We thought that was a bad idea, but we needed to compete with it. So we built this services org. We've really upped our game there for those large enterprise customers. So yeah, today we have customers that turn it on, they get like 50, 60% resolution rate out of the box, and it's transformational for them. We also have a motion now for the really big enterprises where it's like, no, we can go deep. We've got a services team and do all that transformation with them, you know, the business process change that is hard and complicated and needs to be gone through.

CRAIG: Yeah. And when you go deep, as you say, are you fine tuning the LLM behind the agent? Uh, or is it, you know, putting rules in place around the agent?

FERGAL: Got it. So we do not fine-tune the LLMs on a per customer basis. Um, we're suspicious of people who say they do that, because you need a very large volume of data before that makes sense. Um, maybe for the biggest customers in the world, that might make sense. But even then, we're much more in favour of fine-tuning LLMs that will work for customer service generally, because then you have this huge amount of data from many, many different customers, and in our experience, that tends to outperform training for any one customer. So that's really the way we go. So when we talk about configuring deeply for an enterprise customer, we've built this product based configuration layer on top of Fin, above the level of the LLMs. So, you know, essentially it turns into prompts for the LLMs, but managed prompts. You know, we don't believe that customers really benefit from having the ability to directly prompt a large, complicated system like Fin, which has maybe 10 or 15 different prompts underneath the hood. Instead, we have learned the control that they need for their business, and we've given them easy to use windows to get that control: to change Fin's behavior, to change its tone of voice, to make it compatible with their brand, but without them messing around accidentally in the core LLM.

FERGAL: So it's quite configurable; it's quite customizable. You can change the order of those things. You can change the policy a lot, but not the mechanism. You know, a lot of AI products accidentally mix up customer control over the policy with customer control over the mechanism, and then you end up with all these failed deployments or mediocre performance. We don't do that. We have a standardized mechanistic layer, and then a configuration and policy layer, which we have done the hard work of productizing, that we expose. That way, customers get the improved performance of that core engine getting better over time, while maintaining the configuration layer they need. And sometimes there's a trade off there. But, you know, the hard work of building a product is to give customers the configuration options that they need, and I think we've done that. We've been doing that for about a year and a half of Fin's lifecycle. I think we've really done that: they can give it guidance without accidentally breaking it.
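The policy/mechanism split Fergal describes, where customer configuration is rendered into managed prompts rather than being free-form prompting, might look roughly like this sketch. The template, allowed options, and field names are hypothetical, not Fin's actual prompt structure.

```python
# Mechanism: a fixed, centrally tested prompt template.
# Customers never edit this string directly.
MANAGED_TEMPLATE = (
    "You are a customer service agent for {brand}.\n"
    "Tone of voice: {tone}.\n"
    "Guidance from the business:\n{guidance}\n"
    "If you cannot resolve the issue, {handoff}."
)

# Policy surface: the vendor decides which knobs are safe to expose.
ALLOWED_TONES = {"friendly", "formal", "concise"}

def build_prompt(config: dict) -> str:
    """Validate customer policy, then render it into the managed prompt.
    Invalid configuration fails fast instead of silently degrading the
    core engine."""
    tone = config.get("tone", "friendly")
    if tone not in ALLOWED_TONES:
        raise ValueError(f"unsupported tone: {tone}")
    guidance = "\n".join(f"- {g}" for g in config.get("guidance", []))
    return MANAGED_TEMPLATE.format(
        brand=config["brand"],
        tone=tone,
        guidance=guidance or "- (none)",
        handoff=config.get("handoff", "offer to connect the user to a human"),
    )
```

Because the template is owned centrally, improving it improves every customer at once, which is the scaling argument he makes for not letting customers write raw prompts.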

CRAIG: You're proud of it. Uh, you know, this is a crowded space, as I'm sure you're aware, more than me. Um, is the market so large that it doesn't matter? There's always another potential customer if you, you know, lose one. Or is it... I mean, what strikes me about this space, and I've talked to a lot of companies in this space, is that I still don't think, in my personal life, I've run into one of these generative AI agentic customer service solutions when I'm dealing with companies. I'm still going through the phone menus. I'm still, you know, getting the canned answers from chatbots. Uh, so I guess the question is, it appears to me that it's kind of an endless market at the moment. How do you deal with competition? And why is it taking such a long time to penetrate? Is it just that companies have bought a solution, and they're going to amortize it out before they switch?

FERGAL: Yeah.

FERGAL: I think that's an interesting question with a couple of different pieces to it. Um, so firstly, I think the market here is huge. The total addressable market here is absolutely massive, right? So many people are doing these kinds of customer service tasks, and a lot of that is going to go away. Not all human customer service will go away, but all the rote stuff will. And even a lot of the stuff that's not rote, a lot of the stuff where it's, like, asking a question that can be answered from any sort of knowledge base or documentation, we think is going to go away. So there's a massive market here. In terms of competition, we legitimately believe that we have the best agent, and we have a lot of data to back that up. And, you know, I think we're starting to win in the market. I don't know anyone else that has what we have; I'd need to pull the exact stats, but we definitely have over 5,000 customers, over $50 million in ARR. We're on a trajectory for $100 million in a couple of quarters, growing at about 4x year on year. So, you know, there's an adoption curve, but we're growing very, very well, and we're feeling pretty good. There's always an adoption curve, right? If a business wants to buy something, it's not a consumer product. There's always stakeholder management, the existing solution, all these concerns: you know, is it good enough? Can I adopt it?

FERGAL: And we've been going through that, you know, for almost two years at this point. And it takes time, but we're seeing that very rapid growth, and really, the market is turning. So we're feeling pretty good. And then, in terms of competition: yeah, we have competition that's growing really fast as well. And again, huge market, really expanding, so good luck to them. Um, but we are setting out to win. And, um, we're really quite sure we have better technology and a better product. We have a very big investment here. And so, um, yeah, let's all compete and see who can give the best product and answer the most end user customer support questions. That's how we bill: we bill a dollar per resolution. If we resolve the question, we get a dollar. If the end user is not happy and they talk to a human, we get no dollar. So that's putting our money where our mouth is, in terms of our belief in having the best product and making it work. Um, there's a lot of vaporware out there. There's a lot of big claims out there. Um, our retention numbers are really good. And, um, let's see how the market plays out. But yeah, we want to build the best, a really good product for our end users, and we're going to make a good go of it.

CRAIG: Yeah.

CRAIG: Um, do you think, uh, on the adoption curve, we're still just ramping up? I mean, I know you're not in sales, but how long does it take a company to make a decision to adopt Fin, for example?

FERGAL: So, I mean, this market is very structured, right? You have the top tech companies. We have some great customers. Anthropic is an amazing customer of ours, obviously a really AI forward company. Other companies like Amplitude, Synthesia, really, really good tech companies. They make decisions quickly. They're very sophisticated, very technical. They can run tests; they can really assess the quality of a product. On the other hand, we have customers who are, you know, more like a utility or a financial services company, and insurance customers. They will have more diligence to do. They care a lot about: is the regulator okay with this? Is this going to get me into trouble? Can you really be sure this won't hallucinate in production in a way that will cause me trouble? And so, you know, there's an adoption curve, and different people in different industries are on different curves. So there's no one answer. I can talk about the growth of the product overall, and I think the growth of the space overall, too. You know, the whole space is growing. Um, and so, uh, yeah, I think it's maturing all the time. I think it is fast; it's a very fast growing space for B2B. But B2B takes time. You know, these are not consumer applications.

CRAIG: And do you think that adoption will accelerate as companies realize they're going to get left behind if they don't upgrade their customer service, uh, because their competitors at their level are?

FERGAL: I mean, again, the most honest and clear and data driven way for me to answer that is to say: Fin has had double digit, uh, month on month growth for at least 18 months, right? And, you know, certainly, maybe one month has a little bit of seasonality effects, but on average, trailing three months, double digit month on month growth for at least 18 months. And that is an accelerating adoption curve. That is the kind of curve you see when a market is maturing and when the product has got good product market fit. So, um, yeah, I think that's the best evidence I have of an accelerating market adoption. And, you know, there are also these viral dynamics. People do encounter it out there in the world, and then they look at it and think, wow, this is great, I want this for my business. And so, you know, I think we're seeing that, but we're still early here. This is a huge, huge market, and it takes time to penetrate through it. Um, and, you know, take voice: the voice product we have built is a next generation voice product. It's built with sequence to sequence models. We have competitors, even very new competitors, who build voice with, like, text to speech and speech to text, and it's kind of janky. We've gone for the newest models, and it's really, really great. You know, we couldn't have built that product a year ago. So the paint is still wet on voice, but it's really great, and it's next generation. And so, um, yeah, I think you're going to see continuing adoption. Like, with AI for voice, there's no comparison between the best models now and where they were even 18 months ago, you know? So yeah, it's very exciting.

CRAIG: Um, do you think, because there are so many players in the market... I mean, obviously there'll be a period of consolidation at some point. How far do you think we are from that consolidation, where the winners will emerge, and they'll either buy up the smaller competitors, or the smaller competitors will fold?

FERGAL: It is so hard to know. I think we can only wildly speculate on that. It's a huge market with an absolute ton of value being delivered, and that would tell you that there'll be funding and opportunity to continue to grow without a period of consolidation for a long time. On the other hand, um, you know, our product, Fin, returns a healthy margin to us. Uh, I believe there are lots of businesses in the wider AI space where their margins are more questionable, and people are just optimizing for growth at all costs. And it is possible, therefore, that there will be a shakeout or a period of consolidation, if something happens in the wider macro environment, or in the environment of AI, to drive a cooling. Um, overall, I would probably welcome a bit of a cooling. I think we've got a great product; it's not just hype; it's got positive margin. So, sort of strategically, a little bit of a cooling might suit us, but I don't know if we're going to get that. I do think that, you know, the DeepSeek moment was interesting. I think the Chinese AI models are very, very powerful and compelling, and so maybe that will lead to a cooling. Um, I just don't know. It's a big space, and that's a real macro question more than anything else.

CRAIG: Well, that's interesting, because what kicked off this space really was generative AI and, ultimately, the transformer algorithm. You're a technical guy, so what do you think has to happen for the market to go through another surge? What breakthrough are you watching? What bottleneck are you working on?

FERGAL: I'd say a number of things there. Firstly, I believe we're in a capability overhang at the moment, where, from an industry point of view, even if the AI models completely froze at their current capability, we would have at least a decade of huge industry growth just deploying these things into many different valuable applications. I really believe that. But the core AI capabilities continuing to improve, and they are continuing to improve, is an accelerant to the whole space, top to bottom. We're really interested in things like reliability improving. We're constantly working on our Procedures product, the one that takes actions and calls APIs, to get the best reliability we can out of LLMs, and we're investing in improving that reliability at all these different layers. We've also had some real improvements recently around the small-models hypothesis, which is the idea that you can take smaller large language models and train them to be specifically good at a given task you're interested in: take a model that knows how to speak the language, knows how to retrieve a lot of information, maybe knows how to do some reasoning, and then fine-tune it in a sophisticated way to be really great at a specific task.
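Fergal doesn't describe Intercom's internals, but one common way to layer reliability onto an LLM that calls APIs is to validate every proposed tool call against a schema before executing it, and retry a bounded number of times on malformed output. A minimal sketch of that single layer follows; the tool name, schema shape, and stubbed model are all hypothetical, not Fin's actual implementation.

```python
# Minimal sketch of one "reliability layer" for an action-taking agent:
# validate the model's proposed tool call before executing it, and
# retry a bounded number of times on malformed output.
# `fake_llm` is a stand-in for a real model call; all names are hypothetical.

TOOL_SCHEMAS = {
    "refund_order": {"required": {"order_id", "amount"}},
}

def validate_call(call):
    """Reject unknown tools or calls missing required arguments."""
    schema = TOOL_SCHEMAS.get(call.get("tool"))
    if schema is None:
        return False
    return schema["required"] <= set(call.get("args", {}))

def run_with_retries(llm, prompt, max_attempts=3):
    """Ask the model for a tool call until it produces a valid one."""
    for _ in range(max_attempts):
        call = llm(prompt)
        if validate_call(call):
            return call  # safe to hand to the API-execution layer
    raise RuntimeError("no valid tool call after retries")

# Stub model: fails once (missing argument), then produces a valid call.
attempts = []
def fake_llm(prompt):
    attempts.append(prompt)
    if len(attempts) == 1:
        return {"tool": "refund_order", "args": {"order_id": "A1"}}
    return {"tool": "refund_order", "args": {"order_id": "A1", "amount": 20}}

print(run_with_retries(fake_llm, "refund order A1 for $20"))
```

Real systems stack several such layers (schema checks, output parsers, post-execution verification); this sketch shows only the validate-and-retry pattern.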

FERGAL: So we have a subpart of Fin where we summarize the end user's question. That's really important, because before you go searching and trying to retrieve content, you really want to canonicalize or summarize the question. We built a custom model that does that by fine-tuning one of the Qwen models, and its performance is really excellent. It's replaced a call to a much bigger LLM with better performance overall. I think you're going to see more of that. The whole tech stack, top to bottom, is radically changing; there's innovation at all levels, whether it's replacing an LLM with a smaller custom model or building a custom model from scratch. We've deployed a whole lot of custom models we've built from scratch in Fin as well. And then at the frontier LLM layer, they're getting better and better.

FERGAL: All the time. So yeah, innovation at all levels.
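The pipeline Fergal describes, canonicalize the user's question first, then retrieve, can be sketched as below. In the real system a fine-tuned small LLM does the summarizing; here a trivial rule-based function stands in for it, and the tiny keyword retriever and help-centre corpus are likewise illustrative stand-ins, not anything from Fin.

```python
# Sketch of the canonicalize-then-retrieve pipeline described above.
# A fine-tuned small LLM would do the summarizing in production;
# this rule-based stand-in just strips filler words.

STOPWORDS = {"hi", "hey", "please", "thanks", "i", "my", "the", "a", "to", "is"}

def canonicalize(user_message):
    """Stand-in for the fine-tuned summarizer: strip filler, keep keywords."""
    words = [w.strip("!?.,").lower() for w in user_message.split()]
    return " ".join(w for w in words if w and w not in STOPWORDS)

# Toy help-centre corpus: title -> article body.
DOCS = {
    "reset password": "To reset your password, open Settings > Security...",
    "cancel subscription": "You can cancel under Billing...",
}

def retrieve(query):
    """Toy keyword retrieval: pick the article whose title overlaps most."""
    best, best_overlap = None, 0
    q = set(query.split())
    for title, body in DOCS.items():
        overlap = len(q & set(title.split()))
        if overlap > best_overlap:
            best, best_overlap = body, overlap
    return best

question = "Hi! I think my password is broken, please help me reset my password"
print(retrieve(canonicalize(question)))
```

The point of the canonicalization step is visible even in this toy: stripping the conversational wrapper before retrieval makes the keyword match against "reset password" clean.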

CRAIG: You mentioned insurance and banking. Those are highly regulated industries where the data is valuable and proprietary. You're a SaaS product operating from the cloud. What do you do when someone needs an on-premise solution?

FERGAL: We don't currently provide an on-premise solution. That's a long-running trade-off in SaaS, and obviously we describe ourselves as an AI company now, and I think it's going to be a long-running trade-off in AI as well. It's a velocity trade-off: if you go on-premise, it's very difficult to keep your product development life cycle iterating fast enough. And there's so much value to be unlocked in the world for customers that are willing to trust the cloud that that's where we're focusing for the foreseeable future. The customers we have in these regulated industries crossed the bridge to the cloud a long time ago. Not every bank has, but many have. So we think there's a ton of market there, and we're not going to slow down and take the velocity hit of going on-prem.

CRAIG: Yeah, that's interesting. I just asked because you talked about smaller models, whether you were looking at that.

FERGAL: We could. We have enough of a stack now that runs entirely on models we could deploy to somebody's cloud, or on-premise, if we wanted to. But it's just strategically not the direction.

CRAIG: And on agent capabilities, you alluded to this at the beginning, about how customers are stretching how they're using the product. Are you looking beyond customer service, given that you have this powerful solution that presumably could run other business processes?

FERGAL: I think that is a very interesting area that we're not quite ready to talk about just yet, but it is absolutely an area of interest to us. How could it not be? We have this product, it's excellent at customer service, and we have customers stretching it: using it for pre-sales use cases, getting a ton of value out of it, using it for success use cases. So it's a very interesting and insightful question strategically, but I think we're not ready to talk about it.

CRAIG: Okay, well, this is fascinating. And if people want to give Fin a try, it's fin.ai, or what is the URL?

FERGAL: Yeah, absolutely, go to fin.ai. It's a very self-service product; we've put a lot of energy into that. If you're an enterprise customer, we'd definitely say get in touch, reach out to sales. If you're an SMB, we'd say give it a try yourself; you can probably unlock a ton of value. We do the work of making sure it's self-serviceable and productized.

CRAIG: And actually, I do have another question. Is this targeted and priced for, you know, the Fortune 1000, or is it affordable for a small or medium-sized company?

FERGAL: It is absolutely affordable for a small or medium-sized company. We really pioneered outcome-based pricing here: we charge a dollar per resolution, and we're quite proud of that. It means that if you have 1,000 support queries a month, and it's taking you several humans' worth of time to handle them, you can afford to pay that dollar. And if you have a huge support organization and you're outsourcing to a BPO, it should also work for you. So we're pretty happy that the price scales with the value it's going to deliver for the customer. We have history around that: Intercom got its pricing wrong in a previous generation, and we've really learned from it. We believe very strongly in fair, transparent, outcome-based pricing, and it seems to be working pretty well for us. Are we squeezing out all the value we possibly could? No. But that's not even the right way to think about this. This is a growing market, and growth is what we care about.
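Outcome-based pricing as Fergal describes it makes monthly cost a direct function of resolved volume. A quick illustration, using the round "$1 per resolution" figure from the conversation (the exact list price may differ), with a hypothetical resolution rate:

```python
# Outcome-based pricing as described: you pay per resolved conversation,
# so cost scales with the value delivered. $1.00/resolution is the round
# figure from the conversation; the resolution rate here is hypothetical.

PRICE_PER_RESOLUTION = 1.00  # USD

def monthly_cost(queries, resolution_rate):
    """Monthly cost given query volume and the share the AI resolves."""
    return queries * resolution_rate * PRICE_PER_RESOLUTION

# A small business: 1,000 support queries/month, half resolved by the AI.
print(monthly_cost(1_000, 0.5))    # 500.0

# A BPO-scale operation with the same resolution rate.
print(monthly_cost(250_000, 0.5))  # 125000.0
```

Unresolved conversations cost nothing under this model, which is what makes the same price point workable for both an SMB and a large outsourced support operation.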

CRAIG: Build the future of multi-agent software with AGNTCY. That's A-G-N-T-C-Y. Now an open source Linux Foundation project, AGNTCY is building the Internet of Agents: a collaborative layer where AI agents can discover, connect, and work across any framework. All the pieces engineers need to deploy multi-agent systems now belong to everyone who builds on AGNTCY, including robust identity and access management that ensures every agent is authenticated and trusted before interacting. AGNTCY also provides open, standardized tools for agent discovery, seamless protocols for agent-to-agent communication, and modular components for scalable workflows. Collaborate with developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and more than 75 other supporting companies to build next-generation AI infrastructure together. AGNTCY is dropping code, specs, and services, no strings attached. Visit agntcy.org to contribute.
