Tech Stack
ETL, Java, Python, Scala, Spark, SQL
About the role
- Design and develop data warehouse systems, including data model design, ETL process development, and data quality assurance
- Collaborate with business teams and data analysts to understand business requirements and provide data warehouse solutions
- Continuously optimize data processing performance to improve platform stability and processing efficiency
- Participate in research on new technology trends and drive the next-generation upgrade of the data platform
- Work with global teams across San Francisco, Shenzhen, Beijing, and Tokyo to deliver products
Requirements
- Bachelor's or Master's degree in a technical field (Computer Science, Engineering, or a related field)
- 6 years of experience in big data development, including real-time and offline data processing, modeling, ETL development, and data analysis
- Proficient in SQL programming
- Familiar with at least one object-oriented programming language such as Python, Java, or Scala
- Strong analytical and problem-solving skills
- Preferred: 8+ years of experience in big data development, including real-time and offline processing, modeling, ETL development, and data analysis
- Preferred: Experience with data modeling in business domains and familiarity with AI application products
- Preferred: Proficient in multiple OLAP storage systems (Hive, Spark, Snowflake, Databricks, Doris, ClickHouse)
- Preferred: Proven track record of shipping globalized products and driving alignment across multicultural, cross-functional teams
- Preferred: Fluent in English and Mandarin Chinese
- Application form asks: ability to come into either the San Francisco or Seattle office 2-3 times per week
- Application form asks: Do you need sponsorship to work in the US?