Tech Stack
Tools & technologies: AWS, Azure, BigQuery, Cloud, ETL, Hadoop, Oracle, Postgres, PySpark, Python, Scala, SDLC, Shell Scripting, Spark, SQL, Unix
About the role
Key responsibilities & impact
- Supporting and managing highly complex software development initiatives, including data engineering, data analysis, user support, and enterprise-scale data solutions
- Managing and driving integrated teams, delivering high-quality technology solutions across big data platforms
- Designing, developing, and maintaining highly scalable ETL applications in Hadoop/Spark environments
- Delivering big data projects using Spark, Python, Scala, SQL, and Hive within distributed computing ecosystems
- Leading and mentoring onshore and offshore development teams, ensuring high-quality delivery and consistent engineering standards
- Actively participating in—and at times leading—Agile/Scrum ceremonies, including daily stand-ups, sprint planning, retrospectives, and code reviews
- Tuning and optimizing ETL pipeline performance to meet strict SLAs across highly complex, concurrent processing environments
- Engaging with business partners to understand needs, translate requirements, and identify appropriate technology solutions
- Coordinating development project execution from initiation through release, including dependency planning and sprint scheduling
- Managing team velocity, financials, and other critical delivery metrics
- Reviewing product backlog, designs, code, and test results to mitigate delivery risks and ensure product quality
- Leading adoption of innovative technologies to support evolving software product and data platform needs
- Assessing, installing, and enhancing software required for the organization’s broader big data platform
- Building robust ETL pipelines supporting day-to-day client delivery operations
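The responsibilities above center on building and tuning ETL pipelines. As a minimal, illustrative sketch of the extract-transform-load pattern (plain Python standard library, not Spark; the field names and amounts are hypothetical):

```python
import csv
import io

def extract(raw_csv):
    """Extract: parse raw CSV text into rows (stand-in for reading from HDFS/S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: drop invalid records and derive a normalized field."""
    out = []
    for row in rows:
        if not row["amount"]:
            continue  # skip records with a missing amount
        out.append({
            "customer_id": row["customer_id"],
            "amount_cents": int(round(float(row["amount"]) * 100)),
        })
    return out

def load(rows):
    """Load: here we just aggregate; a real pipeline would write to a warehouse."""
    return sum(r["amount_cents"] for r in rows)

raw = "customer_id,amount\nc1,10.50\nc2,\nc3,4.25\n"
total = load(transform(extract(raw)))
print(total)  # 1475
```

In a Spark deployment the same three stages would map onto DataFrame reads, transformations, and writes, with partitioning and caching driving the SLA tuning the role describes.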
Requirements
What you’ll need
- 10+ years of software development experience
- 5+ years of hands-on experience developing ETL Spark applications in PySpark or Scala
- 5+ years managing globally distributed teams of 4+ developers, including mentorship, performance oversight, and delivery management
- Strong understanding of Spark architecture, data frames, distributed processing, and performance tuning
- Experience designing, developing, and supporting data-driven analytical applications in big data environments
- Experience with relational databases such as Oracle or PostgreSQL
- Experience reviewing complex code, troubleshooting software/data issues, and ensuring on-schedule delivery
- Hands-on experience with Spark (Databricks), Python, Scala, SQL, and UNIX shell scripting
- Experience with cloud platforms (Azure, AWS)
- Expertise in database design, data modeling, and workflow/diagramming tools (flowcharts, data flows)
- Familiarity with enterprise data platforms such as Synapse, Snowflake, Google BigQuery, and tools like Ab Initio
- Experience using SDLC methodologies including Agile, Scrum, Kanban, and Waterfall
- Excellent leadership skills with proven ability to manage onshore/offshore engineering teams
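The requirements above call for relational database experience (Oracle or PostgreSQL) and SQL fluency. A minimal, illustrative sketch of a typical analytical query, using the standard-library sqlite3 module as a stand-in engine (table and column names are hypothetical):

```python
import sqlite3

# In-memory database standing in for Oracle/PostgreSQL (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 10.5), ("c1", 2.0), ("c2", 4.25)],
)

# Per-customer totals, largest first: the GROUP BY / ORDER BY shape that
# recurs in analytical SQL regardless of the underlying engine.
rows = conn.execute(
    "SELECT customer_id, SUM(amount) AS total "
    "FROM orders GROUP BY customer_id ORDER BY total DESC"
).fetchall()
print(rows)  # [('c1', 12.5), ('c2', 4.25)]
```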
Benefits
Comp & perks
- Paid time off
- Medical, dental, and vision insurance
- 401(k)
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
ETL, Spark, Python, Scala, SQL, Hadoop, data engineering, data analysis, performance tuning, database design
Soft Skills
leadership, mentorship, team management, communication, problem-solving, collaboration, project management, agile methodologies, delivery management, stakeholder engagement
