

{"id":10879,"date":"2025-08-14T14:39:22","date_gmt":"2025-08-14T06:39:22","guid":{"rendered":"https:\/\/www.dkmeco.com\/en\/?p=10879"},"modified":"2025-12-11T22:00:05","modified_gmt":"2025-12-11T14:00:05","slug":"how-did-intercom-use-100-million-and-gpt-4-to-build-the-hit-ai-customer-service-fin-in-4-months","status":"publish","type":"post","link":"https:\/\/www.dkmeco.com\/en\/how-did-intercom-use-100-million-and-gpt-4-to-build-the-hit-ai-customer-service-fin-in-4-months\/","title":{"rendered":"How did Intercom use $100 million and GPT-4 to build the hit AI customer service Fin in 4 months?"},"content":{"rendered":"<p>In 2022, when GPT-4 had just been released and most companies were still busy discussing the news, Intercom had already quietly moved into action. Within just a few hours, this customer service software company began hands-on testing; in only four months, it launched the AI Agent Fin\u2014now capable of handling millions of complex customer inquiries every month.<\/p>\n<p>This first-mover advantage was no accident. Faced with the rapid evolution of large language models (LLMs), Intercom\u2019s leadership made a decisive bet on AI. 
They quickly assembled a cross-functional team, shut down all non-AI projects, invested $100 million to rebuild the business architecture, and fully migrated to an AI platform.<\/p>\n<p>This decision triggered a company-wide transformation from top to bottom: reshaping product teams, establishing an \u201cAI-first\u201d customer service strategy, and building a technical platform capable of powering Fin\u2019s high-speed operations.<\/p>\n<p>What follows are the three key lessons they distilled from this AI transformation journey\u2014lessons that any team, at any stage, can apply immediately.<\/p>\n<p>\u201cAI must be embedded into product design from the start, not crammed in as an afterthought.\u201d \u2014Paul Adams, Chief Product Officer, Intercom<\/p>\n<p><strong>Lesson 1: Start early and experiment continuously to improve model fluency<\/strong><\/p>\n<p>Intercom began experimenting with generative models early and often, gaining valuable real-world experience\u2014identifying the limitations of models and finding opportunities for optimization. When GPT-4 launched in early 2023, they were fully prepared, releasing the AI customer service agent Fin in just four months and rapidly expanding its use.<\/p>\n<p>\u201cWith GPT-3.5, we achieved smooth conversational experiences, even some \u2018magic,\u2019 but its reliability wasn\u2019t high enough for customer service. Because we had laid the groundwork early, when GPT-4 arrived, we knew the time was right and moved quickly to launch Fin.\u201d \u2014Jordan Neill, SVP of Engineering, Intercom<\/p>\n<p>This grasp of model fluency enabled Intercom to design Fin Tasks\u2014a system that can automatically handle complex processes like refunds and technical support. 
While the team initially planned to use a retrieval-based architecture, evaluation showed that GPT-4.1 could complete tasks independently and efficiently, with higher reliability and lower latency.<\/p>\n<p>Today, GPT-4.1 remains the core engine of Intercom\u2019s AI systems, including the key logic of Fin Tasks. The team also found that applying \u201cchain-of-thought prompting\u201d to a non-reasoning model improved performance without building a full RAG pipeline.<\/p>\n<p>The conclusion is clear: <strong>the earlier and deeper you understand the model, the faster you can seize opportunities as the technology evolves.<\/strong><\/p>\n<p>Evaluations showed GPT-4.1 delivered the highest reliability in task execution while reducing costs by 20% compared to GPT-4o.<\/p>\n<p><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter\" src=\"http:\/\/dkm-website.oss-cn-shenzhen.aliyuncs.com\/upload\/0\/dataBlog\/blog\/Intercom\/20250813\/1.png\" width=\"659\" height=\"464\" \/><\/p>\n<p><strong>Lesson 2: Use rigorous evaluation to drive rapid iteration and complete upgrades in days<\/strong><\/p>\n<p>To speed up technical upgrades, you must measure precisely what works and understand why.<\/p>\n<p>Intercom\u2019s ability to quickly switch to new models, modalities, and architectures hinges on a structured and rigorous evaluation process. 
Whether it\u2019s Fin Voice (based on the Realtime API) or Fin Tasks (based on GPT-4.1), every deployment undergoes offline testing and live A\/B experiments, focusing on three key capabilities:<\/p>\n<ul class=\" list-paddingleft-2\">\n<li>Instruction adherence: Can it accurately understand and execute complex multi-step tasks (e.g., refund processes)?<\/li>\n<li>Tool call accuracy: Can it reliably invoke system functions?<\/li>\n<li>Brand tone consistency: Can it consistently communicate in Fin\u2019s brand voice?<\/li>\n<\/ul>\n<p>For example, the team uses real customer service records as benchmarks to test task execution and uses evaluation results to guide A\/B tests comparing different model versions (e.g., GPT-4 vs. GPT-4.1) on resolution rates and customer satisfaction.<\/p>\n<p>Thanks to this approach, Intercom completed the migration from GPT-4 to GPT-4.1 in just a few days. Once they confirmed GPT-4.1\u2019s significant improvements in instruction handling and function execution, they immediately deployed it to Fin Tasks, resulting in notable gains in performance and user satisfaction.<\/p>\n<p>\u201cWithin 48 hours of GPT-4.1\u2019s release, we had evaluation results and a deployment plan. It struck the perfect balance between intelligence and latency.\u201d \u2014Jordan Neill, SVP of Engineering, Intercom<\/p>\n<p><strong>Lesson 3: Build flexible architectures for long-term competitiveness<\/strong><\/p>\n<p>From its inception, Intercom has designed its product architecture with change in mind, ensuring the system can evolve in step with the AI models it depends on.<\/p>\n<p>The Fin system uses a modular design, supporting multi-modal interactions across chat, email, and voice\u2014each with its own trade-offs in latency and complexity. 
This architecture allows Intercom to route each customer request to the most suitable model and swap or upgrade models without overhauling the underlying system.<\/p>\n<p>This flexibility is intentional and constantly refined. The Fin architecture is now in its third major iteration, with the next version already in development. The team adjusts dynamically with model capabilities: adding complexity when needed to unlock new capabilities, and simplifying when possible to reduce maintenance costs.<\/p>\n<p>The benefits of this flexibility were especially clear in Fin Tasks development. Initially, the team planned to build a custom retrieval-based architecture to support multi-step tasks (like refunds, account changes, and troubleshooting). But testing showed GPT-4.1\u2019s instruction adherence exceeded expectations, maintaining equal reliability at lower latency and cost.<\/p>\n<p>\u201cHonestly, I don\u2019t think GPT-4.1 has been talked about enough. Its performance in latency and cost really surprised us, giving us the opportunity to simplify the architecture and remove a lot of unnecessary complexity.\u201d \u2014Pratik Bothra, Principal Machine Learning Engineer, Intercom<\/p>\n<p><img decoding=\"async\" class=\"aligncenter\" src=\"http:\/\/dkm-website.oss-cn-shenzhen.aliyuncs.com\/upload\/0\/dataBlog\/blog\/Intercom\/20250813\/2.png\" width=\"720\" height=\"371\" \/><\/p>\n<p><strong>Unifying data and workflows to create connected customer experiences<\/strong><\/p>\n<p>This is only the beginning. 
Intercom is using its advanced AI models and flexible modular architecture to extend AI\u2019s reach from customer support to the entire enterprise\u2014accelerating problem resolution and enhancing customer experiences across the board.<\/p>\n<ul class=\" list-paddingleft-2\">\n<li>Support teams: The Fin AI Agent can handle the majority of customer inquiries from chat, email, and voice channels.<\/li>\n<li>Operations teams: Fin Tasks automates complex ticket workflows, such as processing refunds, account changes, and subscription updates.<\/li>\n<li>Product teams: Through Intercom\u2019s MCP server, AI tools like ChatGPT can access customer conversations, tickets, and user data to help teams identify issues faster, plan product roadmaps, optimize communication strategies, and efficiently prepare quarterly business reviews.<\/li>\n<\/ul>\n<p>With its rigorous evaluation standards, performance-based design, and flexible architecture, Intercom has built a highly scalable AI platform. This not only redefines customer support but also provides valuable lessons for other companies looking to leverage AI for business growth.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 2022, when GPT-4 had just been released and most companies were still busy discussing the news, Intercom had 
already<\/p>\n","protected":false},"author":92,"featured_media":10880,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":"","_wp_rev_ctl_limit":""},"categories":[184],"tags":[220],"class_list":["post-10879","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-intercom","tag-ai-customer-service"],"acf":[],"aioseo_notices":[],"rttpg_featured_image_url":{"full":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"landscape":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"portraits":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"thumbnail":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814-150x150.png",150,150,true],"medium":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814-300x201.png",300,201,true],"large":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"1536x1536":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"2048x2048":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"woodmart_shop_catalog_x2":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],
"woocommerce_thumbnail":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814-300x292.png",300,292,true],"woocommerce_single":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false],"woocommerce_gallery_thumbnail":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814-150x101.png",150,101,true],"rt_custom":["https:\/\/www.dkmeco.com\/en\/wp-content\/uploads\/2025\/08\/1755142886100_\u535a\u5ba2\u5c01\u9762_EN_20250814.png",435,292,false]},"rttpg_author":{"display_name":"dkm-admin","author_link":"https:\/\/www.dkmeco.com\/en\/author\/dkm-admin\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/www.dkmeco.com\/en\/category\/intercom\/\" rel=\"category tag\">Intercom<\/a>","rttpg_excerpt":"In 2022, when GPT-4 had just been released and most companies were still busy discussing the news, Intercom had 
already","_links":{"self":[{"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/posts\/10879","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/users\/92"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/comments?post=10879"}],"version-history":[{"count":4,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/posts\/10879\/revisions"}],"predecessor-version":[{"id":11851,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/posts\/10879\/revisions\/11851"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/media\/10880"}],"wp:attachment":[{"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/media?parent=10879"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/categories?post=10879"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dkmeco.com\/en\/wp-json\/wp\/v2\/tags?post=10879"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}