Bringing drag-and-drop simplicity to AI development

The client needed to develop custom AI assistants, but the process was complex, costly, and required specialized AI teams. Adapting models to unique knowledge bases involved demanding retrieval-augmented generation (RAG) workflows, API calls, and data management, which slowed deployment and limited scalability. Without a solution, the client risked losing efficiency and its competitive advantage in customer support and communication.

Client Name: META
Client Country: United States
- Client type: Enterprise
- Industry: Financial Services, Markets & Insurance
- Application areas: Content, Media & Communications, Customer Support & Experience, Marketing, Sales & Customer Engagement
- AI technologies: AI Agents & Task Orchestration, Conversational AI (chatbots, voicebots), Graph AI, Large Language Models (LLMs), Speech Recognition & Synthesis
- Business impacts: Customer Experience & Market Growth, Operational Efficiency & Cost Savings
- Data types: Documents / Semi-structured Data, Other, Structured Tabular Data, Textual Data
- Delivery models: Custom Development, Service / Subscription
- Deployments: Hybrid
- Key capabilities: Conversational & Language Interaction, Intelligent Search & Knowledge Retrieval
- Project stages: Scaling / Expanded Implementation
- Solution forms: API / Micro-service Interface, Conversational Interface
Solution Description
Solution
The client was provided with the AddAI platform, which leverages Llama models to turn AI assistant development into a drag-and-drop process requiring no programming. The solution automates training, fine-tuning, pipeline orchestration, and RAG workflows, and includes a wizard that guides users from prompt creation to deployment. With this high degree of automation, even non-technical teams can quickly build and manage their own assistants, significantly accelerating implementation and reducing costs.
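The RAG workflow the platform automates can be pictured as retrieve-then-prompt: find the knowledge-base passages most relevant to a question, then assemble a grounded prompt for the model. The sketch below is a toy illustration of that idea, not AddAI's actual implementation; the scoring function, document names, and prompt format are all assumptions.

```python
# Toy sketch of a RAG workflow: retrieve relevant documents, then build
# a grounded prompt. All names and the scoring heuristic are illustrative;
# the AddAI platform wires this up via its no-code wizard, not hand-written code.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def score(query: str, doc: Document) -> int:
    """Toy relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.text.lower().split()))


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Return the k most relevant documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]


def build_prompt(query: str, context: list[Document]) -> str:
    """Assemble the grounded prompt that would be sent to a Llama model."""
    ctx = "\n".join(f"[{d.doc_id}] {d.text}" for d in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"


corpus = [
    Document("faq-1", "Refunds are processed within 5 business days"),
    Document("faq-2", "Our support line is open on weekdays"),
    Document("faq-3", "Premium accounts include priority support"),
]

query = "How long do refunds take"
print(build_prompt(query, retrieve(query, corpus)))
```

In production, the keyword overlap would be replaced by embedding-based vector search, and the prompt would be sent to the selected Llama model; the platform's value is automating exactly these plumbing steps.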
Main Users of the Solution
The main users of the solution are non-technical business and customer teams, who independently manage AI assistants for client support. Outputs are primarily used by customer service staff, contact centers, marketing, and communications, where assistants provide quick and accurate responses to end customers. Product and business managers also use the platform for easy content tuning and conversation personalization without the need for IT or data specialists.
Technologies used
- LLM models: Llama 3.3 70B, Llama 3.2 11B, Llama 3.2 3B, Llama 3.1 405B
- Platform: IBM watsonx (development, fine-tuning, pipeline orchestration)
- Technological approaches: Retrieval-Augmented Generation (RAG) workflows, multi-model orchestration, drag-and-drop no-code framework
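Using several Llama sizes side by side suggests a routing step: send cheap, simple queries to a small model and reserve the large ones for harder requests. The snippet below is a minimal sketch of that multi-model orchestration idea; the tier names, word-count thresholds, and routing heuristic are assumptions, not AddAI's actual logic.

```python
# Illustrative multi-model routing across the Llama family.
# Thresholds and the length-based heuristic are assumptions for the sketch.

MODELS = {
    "light": "llama-3.2-3b",      # short, simple queries
    "standard": "llama-3.3-70b",  # typical support questions
    "heavy": "llama-3.1-405b",    # long or multi-part requests
}


def route(query: str) -> str:
    """Pick a model tier from query length (a stand-in complexity signal)."""
    n_words = len(query.split())
    if n_words <= 8:
        return MODELS["light"]
    if n_words <= 40:
        return MODELS["standard"]
    return MODELS["heavy"]


print(route("What are your opening hours?"))  # short query -> small model
```

A real orchestrator would also weigh retrieval results, conversation history, and latency budgets, but the principle is the same: match query complexity to model cost.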
Additional services
- AI strategy and roadmap
- Identification and prioritization of suitable use-cases
- Data governance and data quality
- AI model selection and customization
Use of Personal / Regulated Data
Implementation
Project Owner on the Client's Side
Head of Innovation / Digital Transformation
Participation on the Client's Side
- Business / Product Owner
- Domain / process experts
- Project and change management
- Quality, safety, compliance
Form of Supplier Involvement
Joint implementation with the client
Operation and Maintenance
Operational Model
Hybrid delivery with vendor and internal team.
Needed Competencies on the Client's Side
Customer service, sales and marketing teams, product and business managers, and system specialists.
Other Resources or Infrastructure
Ideally, cloud infrastructure for running the LLM models and orchestration.
Impact and Results
Qualitative Benefits
Customer communication quality improved thanks to more accurate and faster responses, with unanswered queries reduced by more than 50%. Support processes became more efficient, saving thousands of staff hours monthly, and user experience improved with consistent and personalized service.
Quantitative Results
- 500,000+ customer interactions per month
- 85%+ accuracy of AI assistant responses
- 50%+ reduction in unanswered queries
- 6,000 staff hours saved every month
Lessons Learned and Recommendations
Key Success Factors
Key factors included the deployment of Llama models, the intuitive AddAI no-code platform, solid data foundations for RAG workflows, and close collaboration between the vendor and user teams.
Biggest Challenges
The biggest challenge was the complexity of developing and customizing AI assistants for the client’s unique knowledge bases. We overcame this by delivering a no-code platform with automated RAG workflows and a wizard that simplified the entire process even for non-technical teams.
Recommendation for Others
We recommend choosing open and flexible technologies, preparing data sources thoroughly, and keeping operations user-friendly for non-technical staff; involving those teams in the project from the start is advisable.
Promotion
Demo / Public Resources

- Company: AddAI.Life
- Email: info@addai.cz
- Website: https://www.addai.life
- Address: nám. Krále Jiřího z Poděbrad 2, 252 30 Řevnice
- Additional addresses: Rybná 716/24, 110 00 Praha