InvoiceCloud, Inc.

AI Security Engineer

InvoiceCloud, Inc.

full-time

Posted on:

Location Type: Remote

Location: United States

Visit company website

Explore more

AI Apply
Apply

About the role

  • Lead AI Security Architecture & Secure Design initiatives by designing and implementing lifecycle security controls across data ingestion, training, evaluation, deployment, and monitoring environments to measurably reduce AI-specific risk while maintaining product velocity.
  • Conduct structured Threat Modeling & Risk Assessment exercises for generative AI, RAG, and agent-based systems, evaluating risks such as prompt injection, data poisoning, model extraction, model inversion, abuse/misuse, and data leakage, and mapping findings to OWASP Top 10 for LLM Applications, MITRE ATLAS, and NIST AI RMF to drive remediation through engineering teams.
  • Define and operationalize Monitoring, Detection & Incident Response capabilities for AI systems by implementing prompt and output telemetry, tool-call logging, anomaly detection, and AI-specific incident response playbooks integrated into SIEM/SOC workflows.
  • Deliver measurable outcomes aligned to 30-, 150-, and 210-day milestones, including secure reference architectures, hardened AI environments, integrated security controls, and executive-ready reporting on AI risk reduction and posture maturity.
  • Establish and formalize AI Governance, Privacy & Third-Party Risk requirements by defining security expectations for AI use cases, third-party models, vendor integrations, and sensitive data usage, embedding controls into SDLC, procurement, and engineering standards.
  • Drive Cross-Functional Collaboration & Enablement by partnering with Engineering, Data Science, DevSecOps, Product, Legal/Privacy, and SOC teams to align on risk appetite, escalation paths, and secure design guardrails while raising AI security maturity across the organization.
  • Inventories current and planned AI/ML initiatives, documents system architectures and sensitive-data touchpoints, and implements a structured AI security intake and risk-rating process that ensures accountability and transparency.
  • Develops and communicates forward-looking 6- and 12-month AI security maturation plans that align technical priorities with business goals and clearly articulate risk trends, metrics, and investment needs to Security leadership and the CISO.
  • Integrate Secure MLOps / MLSecOps controls into AI delivery pipelines, including secure model registries, artifact signing and provenance validation, dependency scanning, secrets management, CI/CD guardrails, and hardened training and inference environments across AWS and Azure.
  • Build and scale AI Security Testing & Red Teaming workflows by creating repeatable adversarial evaluation plans for jailbreaks, model evasion, prompt injection, and data exfiltration scenarios, ensuring security controls remain effective over time.
  • Develop automated regression test harnesses to continuously validate AI security protections as models, prompts, and dependencies evolve, reducing manual effort and improving coverage.
  • Establish a sustainable AI security operating rhythm that includes intake reviews, threat modeling checkpoints, remediation tracking, and structured monitoring ownership to bring consistency and order to AI risk management.
  • Advance AI Security Testing & Red Teaming capabilities through adversarial experimentation and multi-dimensional analysis, proactively identifying emerging AI threat patterns before production impact.
  • Leverage AI and automation to strengthen testing coverage, automate regression validation, enhance anomaly detection logic, and improve the scalability of AI security monitoring and response.
  • Continuously evaluate emerging AI security research, tooling advancements, and regulatory developments, translating insights into adaptive defensive controls that support InvoiceCloud’s AI-first strategy while enabling responsible innovation.

Requirements

  • Bachelor’s degree in Computer Science, Cybersecurity, Engineering, Data Science, or related field (or equivalent practical experience).
  • 5+ years of experience in security engineering, application/product security, cloud security, or DevSecOps.
  • 2+ years of experience building or securing AI/ML systems (including LLM-based applications) in production environments.
  • Strong understanding of AI/ML threats and defenses, including prompt injection, data poisoning, model extraction, model inversion, adversarial inputs, data leakage, and abuse/misuse scenarios.
  • Experience integrating security into CI/CD and MLOps pipelines.
  • Proficiency with cloud platforms (AWS and Azure), container security, IAM, network segmentation, key management, and secrets management.
  • Familiarity with industry guidance such as OWASP GenAI/Top 10 for LLM Applications, MITRE ATLAS, and/or NIST AI RMF preferred.
  • Relevant certifications such as CISSP, CSSLP, CCSP, Azure Security certifications, or GIAC certifications preferred.
Benefits
  • Health insurance
  • 401(k) retirement plan
  • Paid time off
  • Flexible work arrangements
  • Professional development opportunities
Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
AI Security ArchitectureSecure DesignThreat ModelingRisk AssessmentIncident ResponseMLOpsAI Security TestingRegression TestingAnomaly DetectionData Protection
Soft Skills
Cross-Functional CollaborationCommunicationLeadershipOrganizational SkillsStrategic PlanningRisk ManagementProblem SolvingAccountabilityTransparencyAdaptability
Certifications
CISSPCSSLPCCSPAzure Security CertificationsGIAC Certifications