Designing and implementing data solutions best suited to our customers' needs, spanning model inference, retraining, monitoring, and beyond, across an evolving technical stack.
Providing thought leadership by recommending the technologies and solution design for a given use case, from the application layer to infrastructure. This calls for the team leadership and coding skills (e.g., Python, Java, and Scala) to build and operate in production, and to help ensure performance, security, scalability, and robust data integration.
Design and create environments for data scientists to build models and manipulate data
Work within customer systems to extract data and load it into an analytical environment
Learn and understand customer technology environments and systems
Define the deployment approach and infrastructure for models, and ensure that businesses can use the models we develop
Demonstrate the business value of data by working with data scientists to manipulate and transform data into actionable insights
Create operational testing strategies, and validate and test models through QA, implementation, and deployment
Ensure the quality of the delivered product
Requirements
At least 6 years of experience as a Machine Learning Engineer, Software Engineer, or Data Engineer
4-year Bachelor's degree in Computer Science or a related field
Experience deploying machine learning models in a production setting
Expertise in Python, Scala, Java, or another modern programming language
The ability to build and operate robust data pipelines using a variety of data sources, programming languages, and toolsets
Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
Hands-on experience with one or more big data ecosystem products or languages, such as Spark, Snowflake, or Databricks
Familiarity with multiple data sources (e.g., JMS, Kafka, RDBMS, DWH, MySQL, Oracle, SAP)
Systems-level knowledge of network/cloud architecture, operating systems (e.g., Linux), and cloud data platforms (e.g., AWS, Databricks, Cloudera)
Production experience in core data technologies (e.g., Spark, HDFS, Snowflake, Databricks, Redshift, and Amazon EMR)
Experience developing APIs and web server applications (e.g., Flask, Django, Spring)
Complete software development lifecycle experience, including design, documentation, implementation, testing, and deployment
Excellent communication and presentation skills; previous experience working with internal or external customers
Benefits
Remote-First Work Environment
Casual, award-winning small-business work environment
Collaborative culture that prizes autonomy, creativity, and transparency
Competitive compensation, excellent benefits, generous PTO plus 10 holidays (and other cool perks)
Accelerated learning and professional development through advanced training and certifications
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, Java, Scala, SQL, Spark, Snowflake, Databricks, HDFS, Redshift, API development