AI for QA Engineers
AI changes how software is tested. Master it before it makes your toolkit obsolete.
AI systems are non-deterministic: they hallucinate, change with model updates, and break traditional test assertions. This track teaches QA engineers to test AI systems, use AI to generate test suites, and build evaluation frameworks that actually work.
Curriculum
How AI Systems Fail
Hallucination types, failure modes, non-determinism, and why existing assertions break on AI outputs.
LLM Evaluation Fundamentals
BLEU, ROUGE, BERTScore, G-Eval, and custom rubrics: when each applies and how to build an evaluation pipeline.
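To give a taste of what an evaluation pipeline looks like, here is a minimal sketch combining a ROUGE-1-style recall score with a custom rubric. Everything here (function names, the rubric checks, the 0.5 threshold) is illustrative, not part of any library's API:

```python
# Minimal sketch of an evaluation pipeline: a ROUGE-1-style recall score
# plus a cheap deterministic rubric, applied to model outputs against
# references. Names and thresholds are illustrative.

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that appear in the candidate."""
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    hits = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return hits / len(ref_tokens)

def rubric_check(output: str) -> dict:
    """Custom rubric: deterministic properties every output should have."""
    return {
        "non_empty": bool(output.strip()),
        "no_ai_filler": "as an ai" not in output.lower(),
        "under_length_cap": len(output) <= 500,
    }

def evaluate(cases: list[dict], recall_threshold: float = 0.5) -> list[dict]:
    """Score each (reference, output) pair; flag failures for review."""
    results = []
    for case in cases:
        score = rouge1_recall(case["reference"], case["output"])
        rubric = rubric_check(case["output"])
        results.append({
            "id": case["id"],
            "rouge1_recall": score,
            "rubric": rubric,
            "passed": score >= recall_threshold and all(rubric.values()),
        })
    return results
```

In a real pipeline the lexical score would sit alongside semantic metrics like BERTScore or a G-Eval rubric judged by an LLM; the shape of the loop stays the same.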
Testing RAG Systems
Retrieval quality, answer faithfulness, context relevance. RAGAS, DeepEval, and custom evaluation frameworks.
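Answer faithfulness asks: is every claim in the answer actually supported by the retrieved context? RAGAS and DeepEval judge this with an LLM; the sketch below substitutes a crude lexical-overlap proxy so it stays runnable without a model. The 0.6 overlap floor and the sentence splitting are illustrative assumptions:

```python
# Crude sketch of an answer-faithfulness check for a RAG pipeline.
# A sentence counts as "supported" if most of its content words appear
# in the retrieved context; real frameworks use an LLM judge instead.

def sentence_supported(sentence: str, context: str,
                       min_overlap: float = 0.6) -> bool:
    """Check whether most content words of a sentence occur in the context."""
    words = [w for w in sentence.lower().split() if len(w) > 3]
    if not words:
        return True  # nothing substantive to verify
    ctx = context.lower()
    hits = sum(1 for w in words if w in ctx)
    return hits / len(words) >= min_overlap

def faithfulness(answer: str, context: str) -> float:
    """Fraction of answer sentences supported by the context."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 0.0
    supported = sum(sentence_supported(s, context) for s in sentences)
    return supported / len(sentences)
```

A faithfulness score below 1.0 flags sentences the retriever never surfaced evidence for, which is exactly where hallucinations hide.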
AI-Assisted Test Generation
Generate test cases, edge cases, and test data at scale with AI. Validate and maintain AI-generated tests.
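One way to validate AI-generated tests is to run each candidate against a known-good reference implementation and discard any whose expected value the model hallucinated. The sketch below uses a hand-written `reference_slugify` and hard-coded candidate cases as stand-ins for real model output:

```python
# Sketch: filter AI-generated test cases by replaying them against a
# trusted reference implementation. Cases the reference fails are
# hallucinated expectations, not real tests.

def reference_slugify(title: str) -> str:
    """Known-good implementation the generated tests are checked against."""
    return "-".join(title.lower().split())

# Candidate (input, expected) pairs, as a model might emit them.
generated_cases = [
    ("Hello World", "hello-world"),      # correct
    ("  spaced  out  ", "spaced-out"),   # correct: split() collapses runs
    ("Hello World", "hello_world"),      # hallucinated expectation
]

def validate_cases(cases, func):
    """Partition generated cases into keep/discard by running them."""
    keep, discard = [], []
    for inp, expected in cases:
        (keep if func(inp) == expected else discard).append((inp, expected))
    return keep, discard
```

The discarded pile is worth reviewing by hand: sometimes the "hallucination" is actually a genuine bug in the reference.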
Regression Testing for AI Features
Detect when model updates break your product. Snapshot testing, golden sets, CI/CD quality gates.
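Because model output is non-deterministic, a golden-set gate that demands exact string equality will flake. A common workaround is a similarity floor against each golden answer. The sketch below uses stdlib `difflib` as the similarity measure; the golden set, prompt ids, and 0.8 floor are illustrative:

```python
# Sketch of a golden-set regression gate for CI/CD. Instead of exact
# match (brittle under non-determinism), each new output must stay
# above a similarity floor relative to its golden answer.

from difflib import SequenceMatcher

GOLDEN_SET = {
    "refund-policy": "Refunds are available within 30 days of purchase.",
    "support-hours": "Support is available Monday to Friday, 9am to 5pm.",
}

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def regression_gate(new_outputs: dict, floor: float = 0.8) -> list[str]:
    """Return prompt ids whose new output drifted below the floor.
    An empty list means the model update passes the gate."""
    return [
        pid for pid, golden in GOLDEN_SET.items()
        if similarity(new_outputs.get(pid, ""), golden) < floor
    ]
```

In practice teams often swap the character-level ratio for an embedding similarity or an LLM-judged equivalence check; the gate structure is unchanged.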
AI Quality Monitoring in Production
Real-time quality monitoring: logging, sampling, feedback loops, anomaly detection, and alerting.
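Sampling and anomaly detection can be combined in one small component: score a fraction of live responses and alert when quality drops well below the rolling baseline. The class below is a sketch under assumed defaults (10% sampling, a 50-score window, a -3 z-score floor), not a production monitor:

```python
# Sketch of a production quality monitor: sample a fraction of scored
# responses and alert when a score falls far below the rolling baseline.
# Sample rate, window size, and z-score floor are illustrative.

import random
from collections import deque
from statistics import mean, stdev

class QualityMonitor:
    def __init__(self, sample_rate=0.1, window=50, z_floor=-3.0):
        self.sample_rate = sample_rate
        self.scores = deque(maxlen=window)  # rolling window of scores
        self.z_floor = z_floor

    def observe(self, score: float, rng=random) -> bool:
        """Record a sampled quality score; return True if an alert fires."""
        if rng.random() > self.sample_rate:
            return False  # request not sampled
        self.scores.append(score)
        if len(self.scores) < 10:
            return False  # not enough history for a baseline
        baseline = list(self.scores)[:-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return score < mu  # flat baseline: any drop is anomalous
        return (score - mu) / sigma < self.z_floor
```

The scores fed to `observe` would come from whatever evaluation metric the team trusts; the monitor only cares that sudden drops trigger an alert for human review.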