Tech Stack
PyTorch, Selenium, TensorFlow
About the role
- Ensure quality and reliability of AI-driven applications by testing, verifying, and validating software
- Design and implement detailed test plans and test cases for AI-based applications to ensure functionality, performance, and security
- Conduct rigorous tests on AI applications including generative models, AI Assistant/AI agent platforms, and machine learning algorithms to assess effectiveness, accuracy, and usability
- Identify, document, and report defects, bugs, and AI behavior anomalies; test for response consistency and context understanding (a consistency-check sketch follows this list)
- Develop and implement automated test scripts for AI models to ensure scalability and efficiency
- Work with data engineers, data scientists, and development teams to prepare test data for training, validation, and testing of AI models
- Conduct functional testing, regression testing, load testing, and performance testing; ensure no regression after model retraining or new releases (see the regression-gate sketch after this list)
- Assist in training and fine-tuning AI models based on user feedback and test results; help validate compliance with ethical AI standards
- Analyze and validate performance of machine learning models and AI outputs; identify biases and edge cases
- Contribute to improving QA processes and testing frameworks used for AI development
- Maintain detailed documentation of test procedures, test cases, and testing results
- Perform exploratory testing to uncover potential issues with AI application features and functionality
- Collaborate closely with data scientists, software engineers, and other stakeholders to improve performance and usability of AI applications
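To illustrate the kind of response-consistency check mentioned above, here is a minimal sketch written with pytest. The ask_assistant helper, the prompts, and the expected terms are placeholder assumptions standing in for the real assistant API and test data.

```python
import pytest

def ask_assistant(prompt: str) -> str:
    # Stand-in for the real assistant-under-test; an actual suite would call
    # the deployed model or product API here.
    canned = {
        "What is your refund policy?": "Refunds are issued within 30 days under our policy.",
        "How do I reset my password?": "We email you a reset link within minutes.",
    }
    return canned[prompt]

CASES = [
    ("What is your refund policy?", ["refund", "30 days"]),
    ("How do I reset my password?", ["email", "link"]),
]

@pytest.mark.parametrize("prompt,expected_terms", CASES)
def test_answers_stay_consistent(prompt, expected_terms):
    # Ask the same question several times; every answer should mention the
    # expected terms, a crude signal of response consistency.
    answers = [ask_assistant(prompt).lower() for _ in range(3)]
    for answer in answers:
        for term in expected_terms:
            assert term.lower() in answer, f"'{term}' missing from: {answer!r}"
```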
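A regression gate after model retraining can follow the same pattern: score the retrained model on a held-out set and fail the release if it drops below the recorded baseline. The toy model, dataset, and baseline value below are illustrative assumptions, not project data.

```python
ALLOWED_DROP = 0.01  # tolerated absolute accuracy drop between releases (assumed threshold)

def accuracy(model, dataset):
    # dataset is a list of (input, expected_label) pairs.
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def check_no_regression(model, dataset, baseline_accuracy):
    new_accuracy = accuracy(model, dataset)
    assert new_accuracy >= baseline_accuracy - ALLOWED_DROP, (
        f"Accuracy regressed: {new_accuracy:.3f} vs baseline {baseline_accuracy:.3f}"
    )
    return new_accuracy

if __name__ == "__main__":
    # Toy stand-ins: a threshold "model" and a tiny labelled set.
    toy_model = lambda x: int(x > 0.5)
    toy_data = [(0.9, 1), (0.1, 0), (0.7, 1), (0.4, 0)]
    print("accuracy:", check_no_regression(toy_model, toy_data, baseline_accuracy=0.95))
```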
Requirements
- Bachelor’s degree and 2+ years of relevant experience or a master’s degree and 1+ year of relevant experience
- Strong knowledge of AI technologies and machine learning frameworks, including NLP, RAG, and OCR
- Outstanding English writing and verbal communication skills
- Ability to develop product understanding by working closely with product developers in an Agile development environment
- Ability to design and organize document structure
- Curiosity about hands-on installation, configuration, and use of the software to further develop product understanding
- Understanding of accepted technical writing style conventions
- Experience in writing technical documents
- Ability to work collaboratively as part of a team
- Self-motivated and able to work independently
- Ability to simultaneously handle multiple projects and monitor their progress with minimal direction
- Excellent time management skills and ability to meet deadlines
- Strong research skills
- Cross-functional teammate; detail-oriented, organized, flexible, and proactive
- Comfortable working in an Agile development environment
- Preferred: Bachelor's degree in Computer Science, Engineering, or a related field
- Preferred: 3+ years of experience in quality assurance testing for AI-based applications, machine learning models, generative AI models, or software development
- Preferred: Knowledge of ethical AI principles and bias detection in AI models
- Preferred: Proficiency with automated testing tools (e.g., Selenium) and AI platforms (e.g., TensorFlow, PyTorch, OpenAI, GPT, Copilot); a Selenium-based sketch follows this list
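As an illustration of the automated UI testing named above, the sketch below drives a hypothetical AI chat front end with Selenium. The URL and element IDs (prompt-input, send-button, response-text) are assumptions, not the actual application's locators.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_assistant_returns_nonempty_answer():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.internal/assistant")  # placeholder URL
        driver.find_element(By.ID, "prompt-input").send_keys("What is your refund policy?")
        driver.find_element(By.ID, "send-button").click()
        # Wait for the response element to appear, then make a basic
        # assertion that the assistant produced a non-empty answer.
        response = WebDriverWait(driver, 30).until(
            EC.visibility_of_element_located((By.ID, "response-text"))
        )
        assert response.text.strip(), "Assistant returned an empty response"
    finally:
        driver.quit()
```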