Salary
💰 $196,000 - $217,000 per year
Tech Stack
AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Hadoop, MySQL, Postgres, Python, Scala, Spark, SQL, Tableau
About the role
- Design, develop, and maintain scalable data pipelines that manage and transform large volumes of on-chain and off-chain data, making data insights widely available across the company.
- Optimize data storage and retrieval processes for performance and scalability.
- Implement data validation and monitoring processes to ensure data accuracy and consistency.
- Conduct regular audits and data quality assessments to identify and resolve data issues.
- Develop and maintain data models, dashboards, and reports to support business analytics and decision-making.
- Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and provide robust solutions.
- Conduct exploratory data analysis to discover trends, patterns, and insights that drive business and product strategy.
- Work closely with cross-functional teams, including product, engineering, and business teams, to understand data requirements and deliver solutions that meet business needs.
- Communicate complex data insights and technical details to non-technical stakeholders effectively.
- Provide technical guidance and mentorship to junior data engineers and analysts.
- Stay current with the latest trends and technologies in data engineering and analytics.
- Continuously evaluate and implement new tools and technologies to improve data processing, analysis, and visualization capabilities.
- Drive the adoption of best practices in data engineering and analytics across the organization.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related field.
- 5+ years of experience in data engineering, data analytics, or a related field with a Bachelor's degree, or 3+ years of experience with a Master's degree.
- Strong proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Experience with big data technologies (e.g., Hadoop, Spark) and data warehousing solutions (e.g., BigQuery, Snowflake).
- Proficiency in programming languages such as Python or Scala.
- Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
- Strong understanding of ETL processes, data modeling, and data warehousing concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills, with the ability to work effectively in a collaborative environment.
- Preferred: Experience with cloud data platforms (e.g., AWS, GCP, Azure).
- Preferred: Knowledge of machine learning and data science techniques.