
AI Compliance Engineer
SnowHeap
Contract
Location Type: Remote
Location: Remote • 🇱🇧 Lebanon
Job Level
Mid-Level, Senior
Tech Stack
AWS, Azure, Cloud, Google Cloud Platform, Python, TypeScript
About the role
- Define and run SnowHeap’s AI governance program: policies, control library, risk register, exception handling, and sign-offs (from ideation to production).
- Map laws and frameworks (EU AI Act, GDPR/PDPL/DIFC DPL, NIST AI RMF, ISO/IEC 42001 & 27001, SOC 2) to concrete technical controls in our products and client projects.
- Build an evaluation harness for LLMs/agents: golden sets, scenario tests, adversarial probes, offline evals, and online A/Bs; track hallucination, safety, bias, privacy leakage, robustness, cost, and latency.
- Implement guardrails (PII detection, jailbreak/prompt-injection defenses, output filters, content safety) and wire them into pipelines (LangChain/LangGraph, CrewAI/Agno).
- Stand up audit-ready telemetry: data lineage, prompt/response logging with redaction, model cards, decision traces, and approval workflows (LangSmith/observability tools); see the redaction sketch after this list.
- Partner with Security/Privacy on DPIAs/TRA, retention, DLP, key management, access controls, and vendor risk (OpenAI/Anthropic terms, Azure/GCP/AWS).
- Lead red-teaming exercises; coordinate incident response playbooks for model failures and safety regressions.
- Review prompts, fine-tunes, and datasets for policy compliance; curate evaluation datasets and “go/no-go” acceptance criteria.
- Coach engineers, sales, and clients; write crisp docs and checklists; run internal trainings and readiness reviews.
- Contribute to proposals and client audits; turn compliance into a product advantage.
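To make the guardrail and redacted-telemetry items above a little more concrete, here is a minimal, hypothetical sketch of redacting PII from prompts and responses before they reach an audit log. The regex patterns, `redact`, `TraceRecord`, and `log_interaction` are illustrative assumptions, not SnowHeap tooling; a production pipeline would use a dedicated PII detector and the observability stack named above.

```python
import re
from dataclasses import dataclass

# Hypothetical PII patterns, for illustration only; a real guardrail would rely on
# a dedicated detector rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}


def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text is stored."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}-redacted>", text)
    return text


@dataclass
class TraceRecord:
    """One audit-log entry; prompt and response are kept only in redacted form."""
    prompt: str
    response: str


def log_interaction(prompt: str, response: str) -> TraceRecord:
    """Redact both sides of an LLM exchange before it enters the audit trail."""
    return TraceRecord(prompt=redact(prompt), response=redact(response))


if __name__ == "__main__":
    record = log_interaction(
        "Summarise the complaint from jane.doe@example.com, phone +961 1 234 567.",
        "The customer reports a billing issue.",
    )
    # -> Summarise the complaint from <email-redacted>, phone <phone-redacted>.
    print(record.prompt)
```

In practice a check like this would hang off the callback or middleware hooks of the frameworks named above rather than standing alone.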
Requirements
- 4+ years in Security/Privacy/Compliance, ML governance, or safety engineering, with 2+ years on LLM products.
- Strong grasp of LLM stacks: OpenAI & Azure OpenAI, Claude, Agno, CrewAI, LangChain/LangGraph/LangSmith.
- Hands-on model evaluation: building test sets, rubric-based scoring, offline/online evals, statistical analysis; familiarity with tools or libraries for evals/observability.
- Working knowledge of privacy & AI risk (GDPR/PDPL/DIFC DPL, EU AI Act concepts, NIST AI RMF), and how to turn them into safeguards, SOPs, and controls.
- Context engineering expertise: ability to design, test, and audit prompt chains, context windows, and memory architectures for compliance, safety, and explainability.
- Solid scripting in Python/Pydantic (TypeScript nice to have); able to review PRs and add compliance checks to CI/CD (see the sketch after this list).
- Cloud/MLOps fluency: one of AWS/GCP/Azure; containers, secrets, monitoring, access controls.
- Excellent writing and stakeholder skills; can say “no” with rationale and ship a safer “yes”.
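As an illustration of what "compliance checks in CI/CD" can look like, the sketch below validates governance metadata on a prompt/agent config with Pydantic and exits non-zero when required fields are missing or malformed. The `PromptConfig` fields and the JSON file layout are assumptions for illustration, not a prescribed schema.

```python
"""Hypothetical CI gate: fail the build if a prompt config lacks governance metadata."""
import json
import sys

from pydantic import BaseModel, ValidationError


class PromptConfig(BaseModel):
    """Minimal governance metadata a prompt/agent config might be required to carry."""
    owner: str                   # accountable engineer or team
    risk_tier: str               # e.g. "low" | "medium" | "high"
    pii_redaction_enabled: bool  # guardrail toggle recorded for the audit trail
    eval_suite: str              # evaluation suite that must pass before release
    approved_by: str             # sign-off captured for approval workflows


def check_config(path: str) -> int:
    """Return 0 if the config validates, 1 otherwise (CI treats non-zero as failure)."""
    with open(path, encoding="utf-8") as fh:
        raw = json.load(fh)
    try:
        PromptConfig(**raw)
    except ValidationError as exc:
        print(f"Compliance check failed for {path}:\n{exc}")
        return 1
    print(f"{path}: governance metadata OK")
    return 0


if __name__ == "__main__":
    sys.exit(check_config(sys.argv[1]))
```

Run as a CI step (for example, `python check_config.py prompts/support_agent.json`); a non-zero exit blocks the merge, which is one lightweight way to turn policy into an enforced control.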
Benefits
- High-ownership role shaping SnowHeap’s AI governance and PiperX roadmap.
- Remote-first across MENA/EU time zones; flexible hours.
- Competitive compensation with performance bonus.
- Fast career growth: build the function and lead it.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
AI governance, model evaluation, statistical analysis, scripting in Python, context engineering, compliance checks, risk management, data lineage, prompt engineering, adversarial testing
Soft skills
stakeholder communication, coaching, writing documentation, internal training, incident response coordination, policy compliance review, decision making, team collaboration, problem solving, leadership