Design and maintain data transformations using Python and SQL to convert raw aircraft systems data into analysis-ready datasets, ensuring accuracy and consistency across the data pipeline
Define performant data models to power dashboards and reporting, maintaining their coherence as they evolve over time
Build and maintain interactive dashboards and data exploration tools to support flight test teams, aircraft systems engineers, and other stakeholders
Collaborate with aircraft systems engineering teams to understand data schema changes, system interface updates, and evolving data requirements to ensure transformations remain current and accurate
Effectively communicate with both technical and non-technical stakeholders, translating complex data concepts into understandable insights and recommendations
Monitor data quality and integrity throughout the pipeline from aircraft systems to end-user consumption, implementing validation checks and error detection mechanisms
Maintain and optimize data processing workflows running on AWS infrastructure, troubleshooting issues and ensuring reliable data delivery
Maintain documentation of code, algorithms, and data definitions, ensuring clarity for other team members and stakeholders
Work with data and subject matter experts to strive for greater self-serve functionality in our data systems
As necessary, facilitate ad-hoc analysis requests using tools like Jupyter notebooks and direct SQL queries
Actively contribute to a collaborative team environment, fostering open communication, mutual respect, and a unified vision to achieve shared goals and drive business value
Participate in collaborative code and work reviews, providing constructive feedback to teammates and incorporating feedback to continuously improve data solutions and maintain team standards
Requirements
Bachelor's degree in Engineering, Computer Science, Data Science, or related technical field
3+ years of experience in data analysis, business intelligence, or analytics engineering roles
Strong proficiency in Python for data manipulation, analysis, and pipeline development
Advanced SQL skills and experience with relational databases, as well as familiarity with columnar (e.g., Redshift) and non-relational (e.g., DynamoDB) databases
Experience with cloud platforms (preferably AWS) and containerized applications
Proficiency with data visualization tools and dashboard development
Strong analytical and problem-solving skills with attention to detail and data accuracy
Excellent communication skills with ability to work effectively with both technical and business stakeholders
Experience working with time-series data and large datasets
Understanding of data pipeline concepts and ETL/ELT processes
Aptitude for rapidly learning and integrating with APIs
Understanding of DevOps practices including CI/CD, version control (Git), and infrastructure as code
Strong project management and organizational skills
Exceptional troubleshooting skills with the ability to spot issues before they become problems
Experience supporting and working with cross-functional teams in a dynamic environment