Automation of call center call quality monitoring

Manual quality assurance and compliance monitoring of calls is not scalable: the QA team was able to review only 2% of calls. This limited coverage made it difficult to identify issues and provide actionable feedback. The use of Taglish (a mix of Tagalog and English) added further complexity, reducing transcription accuracy and complicating QA evaluations.

- Client name: Home Credit
- Client country: Philippines
- Client type: Enterprise
- Industry: Financial Services, Markets & Insurance
- Application areas: Customer Support & Experience, Data & Analytics / Business Intelligence, Operations & Process Automation
- AI technologies: Generative AI, Large Language Models (LLMs), Machine Learning, Natural Language Processing (NLP), Speech Recognition & Synthesis
- Business impacts: Operational Efficiency & Cost Savings, Risk Reduction & Compliance
- Data types: Audio Data, Documents / Semi-structured Data, Textual Data
- Delivery models: Custom Development
- Deployments: Cloud
- Key capabilities: Decision Support & Augmented Analytics, Recommendation & Personalization
- Project stages: Scaling / Expanded Implementation
- Solution forms: Analysis, Recommendation, or Report, Automated Backend Process
Solution Description
Problem description
Solution
We have developed an AI-powered solution that automatically evaluates all meaningful calls, ensuring full compliance with multiple quality assurance criteria. Call recordings are pre-processed and transcribed using state-of-the-art open-source transcription models, noise-cancellation technology, and voice activity detection to eliminate transcription errors and hallucinations. Generative AI is then applied to correct any remaining inaccuracies. All QA parameters are evaluated by an LLM, which provides not only a performance score but also reasoning and suggestions for improving agent performance. Finally, a user-friendly dashboard provides actionable insights into agent performance, including adherence to call scripts, communication skills, and regulatory compliance.
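The evaluation stage described above (filter out non-meaningful calls, then score each QA criterion with an LLM that returns a score, reasoning, and a suggestion) can be sketched roughly as follows. This is a minimal illustration, not the production implementation: the criteria names, the `is_meaningful` threshold, and the `llm_score` callable are all illustrative assumptions standing in for the real checklist and LLM service.

```python
from dataclasses import dataclass

# Illustrative QA criteria; the client's real checklist is not public.
QA_CRITERIA = ["script_adherence", "communication_skills", "regulatory_compliance"]

@dataclass
class QAResult:
    criterion: str
    score: int        # assumed 0-100 scale
    reasoning: str
    suggestion: str

def is_meaningful(transcript: str, min_words: int = 20) -> bool:
    """Filter out empty or trivial calls before spending LLM calls on them."""
    return len(transcript.split()) >= min_words

def evaluate_call(transcript: str, llm_score) -> list[QAResult]:
    """Score every QA criterion for one call.

    `llm_score(criterion, transcript)` is a stand-in for a real LLM call
    returning (score, reasoning, suggestion).
    """
    if not is_meaningful(transcript):
        return []
    return [QAResult(c, *llm_score(c, transcript)) for c in QA_CRITERIA]

# Stub scorer so the sketch runs without any external service.
def fake_llm_score(criterion: str, transcript: str):
    score = 90 if "thank you" in transcript.lower() else 60
    return score, f"Heuristic check of {criterion}.", "Close the call politely."

transcript = (
    "Good afternoon, this is Home Credit calling about your loan payment. "
    "We can offer a revised schedule. Thank you for your time."
)
results = evaluate_call(transcript, fake_llm_score)
for r in results:
    print(r.criterion, r.score)
```

In production, the structured `QAResult` rows would feed the dashboard, so managers see the per-criterion score alongside the model's reasoning rather than a bare number.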
Main Users of the Solution
Call center managers, supervisors, and operators
Project timeframe (months)
18
Technologies used
Databricks, Microsoft Azure, Python
Additional services
- AI strategy and roadmap
- Identification and prioritization of suitable use cases
- Data collection and preprocessing
- Annotation / synthetic data / dataset extension
- Data governance and quality
- Selection and customization of AI model
- Change management and user training
- Compliance / regulatory support
- Provision of MLOps infrastructure
- Ongoing maintenance and model retraining
Use of Personal / Regulated Data
Implementation
Project Owner on the Client's Side
Executive leadership (C-level)
Participation on the Client's Side
- Business / Product Owner
- Domain / Process Experts
- Data & ML specialists
- Software & data engineering / IT Ops
- Project and change management
- Quality, security, compliance
- End users
Form of Supplier Involvement
Joint implementation with the client.
Operation and Maintenance
Operational Model
Internal team with L3 support from vendor
Needed Competencies on the Client's Side
ML engineer, DevOps engineer
Other Resources or Infrastructure
The solution is flexible and can be deployed in the cloud or on-premise with GPUs.
Impact and Results
Qualitative Benefits
- Full-scale evaluation: 100% of meaningful calls are monitored.
- Enhanced feedback: agents receive comprehensive, actionable feedback that supports their ongoing development.
- Comprehensive analysis: the Power BI dashboard provides detailed performance insights.
Quantitative Results
The share of monitored calls increased from 2% to 100%, and call monitoring is now fully automated.
Client Feedback
“With our automated QA solution, our team can now evaluate a significantly higher volume of meaningful calls, allowing us to provide more detailed and actionable feedback to agents. This improvement has enabled us to derive valuable insights from a larger portion of our client conversations and ensure that key quality parameters are consistently met.”
Wojciech Antoni Krotoszynski, Head of Collections
Lessons Learned and Recommendations
Key Success Factors
Close cooperation with the client's internal teams and an agile, iterative approach.
Biggest Challenges
- Taglish language: tuning a specialized model
- Data quality: removing noise and improving the quality of audio recordings
- Process not suitable for automation: changing the process and improving documentation quality
Recommendation for Others
High-quality speech-to-text is crucial and worth the investment, as it enables numerous use cases afterward.
Promotion

- Company: DataSentics
- Contact: Lukáš Soukup
- Email: lukas.soukup@datasentics.com
- Website: https://www.datasentics.com
- Address: Washingtonova 1599/17, 110 00 Praha