Design and implement the backbone of Product Security’s data ecosystem, with responsibilities including:
Data Architecture & Blueprint – Define and implement the strategic vision for a secure, scalable, and reliable data architecture, including models, standards, and flows for ingesting, processing, and storing risk, vulnerability, and compliance data.
Ontology & Standardization – Build a centralized data ontology, creating consistency and a shared language across all tools, systems, and teams.
Pipeline Development – Design, optimize, and maintain complex ETL/ELT pipelines (Python, SQL, etc.) to consolidate data from diverse sources into warehouses, lakes, or analytics platforms.
Automation & AI – Automate data flows end-to-end, incorporating intelligent automation and AI-driven solutions to improve efficiency and strengthen security operations.
Governance & Optimization – Establish and enforce data governance practices, ensuring data quality, accuracy, compliance, and cost-efficient performance at scale.
Insights & Reporting – Enable teams with BI tools (e.g., Tableau) to create dashboards, reports, and analytics that surface actionable insights and trends.
Collaboration & Communication – Serve as the bridge between Product Security teams, translating requirements into scalable solutions and keeping stakeholders aligned.
Technology Evaluation – Assess and recommend emerging data and AI/ML technologies that can strengthen security data capabilities and enhance automation.
Requirements
Bachelor’s or Master’s degree in Computer Science, IT, Business Administration, or a related field.
5+ years of experience as a Data Engineer or in a related role, with proven expertise in data pipeline development and architecture.
Strong programming skills (Python, Java, Scala) and deep proficiency in SQL.
Hands-on experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Elasticsearch).
Expertise in building and optimizing ETL/ELT processes and data warehouses.
Experience with big data technologies (Hadoop, Spark, Kafka) and cloud platforms (AWS, Azure, GCP) is highly desirable.
Solid understanding of data modeling and schema design principles.
Experience with Tableau or CRM Analytics (CRMA) is preferred.
Familiarity with version control systems (e.g., Git).
Ability to analyze large, complex datasets and extract meaningful patterns and insights.
Strong problem-solving, analytical, and communication skills, with the ability to simplify and explain technical concepts to varied audiences.
Comfortable working in a fast-paced, globally distributed environment with minimal supervision.
Proactive mindset, leveraging AI-assisted tools (e.g., code generation, auto-completion, intelligent suggestions) to accelerate development, improve quality, and enhance automation throughout the development lifecycle.
Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account - healthcare and dependent care
Health Savings Account - high deductible medical plan
401(k) retirement plan with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!