Agile Defense

Data Engineer

full-time

Origin: 🇺🇸 United States

Job Level

Mid-Level, Senior

Tech Stack

AWS, Azure, Cloud, Cyber Security, Docker, EC2, ETL, Jenkins, Postgres, PySpark, SQL

About the role

  • Title: Data Engineer
  • Clearance: Public Trust
  • Location: Remote
  • Req # 1118
  • Overview: Agile Defense is looking for a Data Engineer to join our growing team!
  • The Data Engineer will maintain existing products with enhancements, bug fixes, and updates while also creating new analytics products at the request of the Product Owner (PO).
  • The role operates in an Agile environment that continuously deploys code and requires a solid understanding of the full software lifecycle, including DevSecOps, Software Development, System Integration, Infrastructure Management, and Cyber Security related to public-facing government products.
  • As a direct employee of IntelliBridge, you would receive a benefits package that includes health/dental/vision insurance coverage, a 401K with company match, PTO and paid holidays, and annual tuition/training assistance.
  • Responsibilities/Duties:
  • Strong oral and written communication skills.
  • Able to communicate effectively with Product Owners (POs) and stakeholders to understand requirements/scope and translate them into user stories.
  • Able to effectively communicate and coordinate between stakeholders, POs, and team members, including designers, business analysts, developers, and Scrum Masters across other development teams.
  • Work to become a subject matter expert for the products that are supported.
  • Able to assist with testing, communication, and debugging with external integration partner system teams.
  • After team onboarding, work independently to continue learning the system and products by reviewing resources and documentation, testing, and engaging in detail with design, development, the Product Owner, and stakeholders.
  • Design, build, and maintain efficient ETL/ELT data pipelines.
  • Develop and optimize SQL queries for large-scale data processing.
  • Implement data quality checks and monitoring systems.
  • Create and maintain database schemas and data models.
  • Collaborate with cross-functional teams to integrate data from multiple sources.
  • Optimize database performance through indexing, query analysis, and partitioning.
  • Automate data workflows using CI/CD pipelines.

Requirements

  • 3-5 years of experience working in an Agile and Scrum software development environment performing the duties of a Data Engineer.
  • Databricks notebooks, job schedules, and SQL query tools
  • Managing multi-hop data processing pipelines (185 separate nodes in the data pipeline)
  • Scripting in SQL, PySpark, and bash shell
  • Postgres database management
  • FlaskAPI
  • AWS Cloudflare, S3, EC2, EKS, ALB
  • Docker containerization
  • Apigee
  • Jenkins, GitHub, and separate code environments