Salary
💰 $141,750 - $165,000 per year
Tech Stack
Amazon Redshift, AWS, Cloud, Distributed Systems, ETL, Kafka, Python, PyTorch, Scikit-Learn, SQL, TensorFlow
About the role
- The Data/AI Engineer will be responsible for solution engineering of enterprise-scale data management best practices.
- This includes patterns such as modern data integration frameworks and building scalable distributed systems using emerging cloud-based data design patterns.
- Develop data integration tasks in the data and analytics space.
- This position will report to the Director of Data Management within the Data & AI organization.
- Demonstrated ability to implement data warehouse solutions using modern data platforms such as Snowflake, Databricks, or Redshift.
- Build data integration solutions between transaction systems and analytics platforms.
- Expand data integration solutions to ingest data from internal and external sources and to transform it to meet business consumption needs.
- Develop tasks for a multitude of data patterns, e.g., real-time data integration, advanced analytics, machine learning, BI and reporting.
- Fundamental understanding of building data products through data enrichment and ML.
- Act as a team player and share knowledge with existing team members.
Requirements
- Bachelor’s degree in computer science or a related field.
- Minimum 5 years of experience in building data-driven solutions.
- At least 3 years of experience working with AWS services.
- Authorized to work in the U.S. without requiring employer sponsorship, now or in the future.
- U.S. FinTech does not offer H-1B sponsorship for this position.
- Expertise in real-time data solutions; knowledge of stream processing, message-oriented platforms, and ETL/ELT tools is a plus.
- Strong scripting experience using Python and SQL.
- Working knowledge of foundational AWS compute, storage, networking and IAM.
- Understanding of Gen AI models, prompt engineering, RAG, fine-tuning, and pre-training is a plus.
- Solid scripting experience in AWS using Lambda functions.
- Knowledge of CloudFormation templates preferred.
- Hands-on experience with popular cloud-based data warehouse platforms such as Redshift and Snowflake.
- Experience in building data pipelines with related understanding of data ingestion, transformation of structured, semi-structured and unstructured data across cloud services.
- Knowledge and understanding of data standards and principles to drive best practices around data management activities and solutions.
- Experience with one or more data integration tools, such as Attunity (Qlik), AWS Glue ETL, Talend, Kafka, etc.
- Strong understanding of data security: authorization, authentication, encryption, and network security.
- Hands-on experience using and extending machine learning frameworks and libraries, e.g., scikit-learn, PyTorch, TensorFlow, XGBoost, etc.
- Experience developing machine learning models with the AWS SageMaker family of services or similar tools preferred.
- Strong written and verbal communication skills to facilitate meetings and workshops for gathering data, functional, and technology requirements; documenting processes, data flows, and gap analyses; and supporting related data management/governance efforts.
- Acts with integrity and proactively seeks ways to ensure compliance with regulations, policies, and procedures.
- Demonstrated ability to be self-directed, with excellent organizational, analytical, and interpersonal skills, consistently meeting or exceeding deadlines.
- Strong understanding of the importance and benefits of good data quality, and the ability to champion data-quality results across functions.