
AWS Solution Architect
qode.world
full-time
Location Type: Hybrid
Location: San Francisco • California • United States
About the role
- Designing solution architecture and working on data ingestion, preparation, and transformation; debugging production failures and identifying solutions.
- Developing efficient frameworks for development and testing (using AWS DynamoDB, EKS, Kafka, Kinesis, Spark Streaming, Python, etc.) to enable a seamless data ingestion process on the AWS cloud platform.
- Enabling a data governance and data discovery platform
- Building data processing frameworks using Spark, Databricks, and Python
- Exposure to data security frameworks on the cloud
- Exposure to data pipeline automation using DevOps tools
- Exposure to job monitoring frameworks, along with validations and automation
- Exposure to handling structured, unstructured, and streaming datasets
Requirements
- Solid hands-on and solution-architecting experience in big data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue (PySpark), and EMR (PySpark)
- Hands-on experience with programming languages such as Python and Scala with Spark
- Good command of and working experience with Hadoop/MapReduce, HDFS, Hive, HBase, and NoSQL databases
- Hands-on working experience with any of the major data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience in data ingestion with Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience in data processing at scale with event-driven systems and message queues (Kinesis, Kafka, Flink, Spark Streaming)
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure with Apache NiFi, Apache Airflow, and Kylo
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring)
- Feature engineering/data processing for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured, real-time/batch, and event-driven/synchronous/asynchronous data using MQ, Kafka, and stream processing
- Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data
- Must be very strong in writing SQL queries
- Ability to strengthen the data engineering team with big data solutions
- Strong technical, analytical, and problem-solving skills
- Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
- Pleasant personality and strong communication and interpersonal skills
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Solution architecture, Data ingestion, Data preparation, Data transformation, Big Data technologies, Python, Scala, Hadoop, SQL, Data processing
Soft skills
Analytical skills, Problem-solving skills, Organizational skills, Communication skills, Interpersonal skills, Teamwork, Autonomy