
Senior Data Engineer
Astro Sirens LLC
Employment Type: Contract
Location Type: Remote
Location: India
About the role
- Design, build, and maintain scalable, reliable, and high-performance data pipelines
- Develop and manage ETL/ELT workflows to ingest data from multiple sources such as APIs, databases, streaming systems, and files (a brief pipeline sketch follows this list)
- Architect and optimize data warehouses and data lakes for analytics and reporting use cases
- Ensure data quality, consistency, availability, and reliability across systems
- Collaborate with data scientists, analysts, software engineers, and business stakeholders to understand data requirements
- Implement data modeling best practices for analytical and operational workloads
- Optimize data pipelines for performance, cost efficiency, and scalability in cloud environments
- Build and maintain real-time and batch data processing systems
- Implement monitoring, logging, and alerting for data pipelines and infrastructure
- Enforce data governance, security, and compliance standards
- Document data architectures, pipelines, and best practices
- Mentor junior data engineers and contribute to improving data engineering standards and tooling across teams
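As a rough illustration of the ETL/ELT responsibility above, here is a minimal sketch of a daily pipeline using Airflow's TaskFlow API. The endpoint URL, field names, and load target are hypothetical placeholders, not anything specified by this posting.

```python
# Minimal sketch of a daily extract-transform-load DAG (Airflow TaskFlow API).
# The API endpoint, record shape, and warehouse target are hypothetical.
from datetime import datetime

import requests
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # Pull recent orders from a (hypothetical) REST endpoint.
        resp = requests.get("https://api.example.com/orders", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def transform(orders: list[dict]) -> list[dict]:
        # Keep only completed orders and normalize amounts to cents.
        return [
            {"id": o["id"], "amount_cents": int(o["amount"] * 100)}
            for o in orders
            if o.get("status") == "completed"
        ]

    @task
    def load(rows: list[dict]) -> None:
        # In a real pipeline this would upsert into the warehouse
        # (e.g. Snowflake or BigQuery) via the appropriate provider hook.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


orders_etl()
```

Monitoring and alerting, mentioned separately above, would typically be layered onto the same DAG through task-level retry settings and failure callbacks rather than bolted on afterward.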
Requirements
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field
- 5+ years of experience in data engineering, backend data development, or related roles
- Strong proficiency in Python and/or Java/Scala for data processing
- Advanced experience with SQL and relational databases (PostgreSQL, MySQL, MS SQL, etc.)
- Hands-on experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse)
- Experience building ETL/ELT pipelines using tools or frameworks such as Airflow, dbt, Prefect, or similar
- Strong understanding of data modeling concepts such as star/snowflake schemas, normalization, and denormalization (a brief star-schema sketch follows this list)
- Experience working with cloud platforms such as AWS, GCP, or Azure
- Familiarity with distributed data processing frameworks (e.g., Spark)
- Experience implementing CI/CD for data pipelines and infrastructure
- Strong communication skills and ability to work effectively with U.S.-based stakeholders
- Experience with big data technologies such as Spark, Kafka, Hadoop, or Flink (preferred)
- Knowledge of Docker, Kubernetes, and containerized data workloads (preferred)
- Experience with streaming and real-time data pipelines (preferred)
- Exposure to MLOps or supporting machine learning workflows (preferred)
- Experience working with large-scale, high-volume data systems (preferred)
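As a rough illustration of the data modeling and Spark requirements above, here is a minimal PySpark sketch that splits a raw orders table into a customer dimension and an orders fact joined on a surrogate key; all paths, table names, and columns are hypothetical.

```python
# Minimal star-schema sketch in PySpark: one conformed dimension plus one
# fact table. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

# Dimension: one row per customer, with a generated surrogate key.
dim_customer = (
    raw.select("customer_id", "customer_name", "country")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: one row per order, referencing the dimension by surrogate key.
fact_orders = (
    raw.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
    .select(
        "order_id",
        "customer_sk",
        F.col("amount").cast("decimal(12,2)").alias("amount"),
        F.to_date("order_ts").alias("order_date"),
    )
)

dim_customer.write.mode("overwrite").parquet("s3://example-bucket/marts/dim_customer/")
fact_orders.write.mode("overwrite").parquet("s3://example-bucket/marts/fact_orders/")
```

In a warehouse such as Snowflake or BigQuery the same split would often be expressed as dbt models instead, but the structure (a conformed dimension plus a fact table keyed by surrogate keys) is the same.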
Benefits
- Paid Time Off (PTO)
- Work From Home
- Professional development opportunities
- Training & Development Programs
- Collaborative and inclusive company culture
- Competitive salary and performance-based bonuses
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, ETL, ELT, data modeling, Python, Java, Scala, SQL, data warehousing, big data
Soft skills
communication, collaboration, mentoring, problem-solving, organizational skills
Certifications
Bachelor's degree, Master's degree