Tech Stack
CloudCyber SecurityTypeScript
About the role
- Technical lead and internal consultant for secure, effective, and innovative use of AI across KONE.
- Help teams adopt AI confidently and safely by providing guidance, guardrails, and reusable solutions.
- Collaborate with cybersecurity engineers, IT, product and data teams to embed secure-by-design practices into AI solutions and workflows.
- Establish practical AI security guardrails & patterns for LLMs, classical ML, and AI-assisted workflows.
- Design and build AI agents for cybersecurity use cases and guide other teams building agents for business/product scenarios.
- Provide hands on consultancy and enablement: design sessions, solution reviews, prototyping safely, run clinics/office hours.
- Advance LLMOps/MLOps and oversee MCP usage: evaluation/safety tests, versioning and rollback, prompt/tool hygiene, secrets handling, and governance.
- Run pragmatic AI risk assessments for models and use cases—data minimization, isolation and context scoping, abuse/misuse prevention.
- Partner with capability owners to integrate AI securely with patterns, sample configs, and baselines.
- Strengthen the AI supply chain with procurement and third-party risk management—technical due diligence for model providers, vector stores, orchestration frameworks, plugins/tools, and SaaS.
- Create reusable assets and educate users—playbooks, starter kits, example prompts, decision trees, and reference implementations.
- Note: role does not focus on Security Operations or detection engineering, and does not own regulatory governance or management system topics.
Requirements
- Solid, hands on expertise in AI/ML security across LLM and classical ML—protecting data, avoiding leakage, resisting abuse, establishing safe boundaries for agents/tools, and promoting secure user experiences.
- Experience guiding teams to build with AI safely, including AI agents and copilots; familiarity with enterprise agent frameworks and MCP (in practice, not just theory), and how everyday users interact with these systems.
- Security architecture and application security fundamentals—identity and access (human and workload), API and data protection, cloud platform controls, containerization/runtime boundaries—and the ability to apply them pragmatically to AI scenarios.
- Operational MLOps/LLMOps know how: safety/evaluation harnesses, rollout/rollback, versioning, telemetry for AI workloads, dataset hygiene, and drift/regression monitoring—ideally in partnership with platform teams.
- Clear, audience appropriate communication: explain complex topics simply, write user friendly guidance, and coach technical and non-technical stakeholders alike.
- Influence without authority: facilitation, design reviews, and decision making with architects, engineers, product owners, and business sponsors.
- Comfortable with rapid prototyping and vendor/tool evaluation to demonstrate safe approaches and accelerate adoption.
- Awareness of AI governance and regulation, and ability to collaborate with governance specialists without owning that domain.
- 8+ years of experience in cybersecurity and/or technical IT.
- Master’s degree in information security, computer science, data/ML - or equivalent practical experience.
- Fluent English; other languages are a plus.