Data & AI Platform Engineer
Brown and Caldwell. Design, create and maintain data pipelines to collect, clean, transform, and load data from various sources.
Posted 4/21/2026 · Full-time · Remote • 🇺🇸 United States · Mid-Level / Senior · 💰 $117,000–$191,000 per year
Tech Stack
Tools & technologies: Cloud, Cyber Security, Docker, ETL, Python, SQL
About the role
Key responsibilities & impact
- Design, create and maintain data pipelines to collect, clean, transform, and load data from various sources.
- Collaborate with interdisciplinary teams of environmental engineers, data scientists, and software developers to understand data requirements.
- Participate in the design of and execute the creation and management of data warehouses, data lakes, and databases.
- Develop, deploy, execute, and monitor ETL (Extract, Transform, Load) processes.
- Develop and maintain data models and engage in SQL database management and querying.
- Design and execute testing plans for data pipeline and data warehousing implementation efforts.
- Implement processes for improving data quality and managing data governance.
- Collaborate with IT infrastructure and cybersecurity teams to implement and operate data pipelines within approved data infrastructure.
- Design and execute processing tasks using Python and maintain up-to-date understanding of big data processing frameworks.
- Perform regular data audits and updates to ensure a high level of data accuracy and integrity.
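The pipeline work described above can be sketched as a minimal extract–transform–load flow. This is an illustrative example only, not Brown and Caldwell's actual stack; the file, table, and column names (`site_id`, `reading`) are hypothetical, and it uses only the Python standard library.

```python
# Minimal ETL sketch: extract rows from a CSV source, clean and
# normalize them, and load them into a warehouse-style SQL table.
# All source/table/column names here are hypothetical.
import csv
import sqlite3


def extract(path):
    """Read raw rows from a CSV source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows):
    """Clean and validate: drop rows missing a site id, coerce readings to float."""
    cleaned = []
    for row in rows:
        if not row.get("site_id"):
            continue  # basic data-quality rule: a record must identify its site
        cleaned.append((row["site_id"], float(row["reading"])))
    return cleaned


def load(records, conn):
    """Load cleaned records into the target table, creating it if needed."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (site_id TEXT, reading REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", records)
    conn.commit()
```

In a real deployment each stage would be scheduled and monitored separately (as the responsibilities above describe), but the extract/transform/load split itself is the core pattern.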
Requirements
What you’ll need
- Typically a minimum of 5 years of data engineering or related experience.
- Typically certified in BC's SMS Framework and progressing through the SMS competencies.
- Understanding of building and optimizing data pipelines, architectures, and data sets.
- Strong working knowledge of SQL and skill in implementing and managing relational databases.
- Proficient in creating and managing ETL processes and in techniques for data cleaning and validation.
- Proficient in Python and other scripting languages applicable for data engineering.
- Proficient with best practices for writing clean, maintainable, and scalable code while applying software engineering best practices including use of version control systems (e.g., Git).
- Demonstrated abilities with data warehousing solutions, data lake solutions, and cloud platforms.
- Hands-on experience supporting production LLM‑ or RAG‑based systems in a platform, data, or MLOps capacity is preferred.
- Familiarity with LLMOps practices and operational tooling is preferred.
- Exposure to analytics platforms and integration‑heavy systems is preferred.
- Experience deploying and operating AI‑enabled or analytics-heavy services in Docker‑based containerized runtimes on managed cloud platforms is preferred.
- Familiarity with geospatial data and analysis is preferred.
- Interest or experience in environmental, water resources, or scientific computing domains is preferred.
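The "data audits" and data-quality requirements above amount to routine checks against the warehouse. As an illustration only (the `samples` table and its columns are hypothetical, not part of the posting), a basic null-count audit might look like:

```python
# Minimal data-quality audit sketch: count NULLs per column in a table.
# Table and column names are hypothetical.
import sqlite3


def audit_nulls(conn, table, columns):
    """Return a {column: null_count} report as a basic accuracy check."""
    report = {}
    for col in columns:
        count = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        report[col] = count
    return report
```

A real audit would add range, uniqueness, and referential-integrity checks, but null counts are the usual starting point.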
Benefits
Comp & perks
- medical
- dental
- vision
- short and long-term disability
- life insurance
- an employee assistance program
- paid time off and parental leave
- paid holidays
- 401(k) retirement savings plan with employer match
- performance-based bonus eligibility
- employee referral bonuses
- tuition reimbursement
- pet insurance
- long-term care insurance
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
data pipelines, ETL, SQL, data modeling, Python, data warehousing, data lakes, data governance, data cleaning, data validation
Soft Skills
collaboration, communication, problem-solving, attention to detail, interdisciplinary teamwork
Certifications
BC's SMS Framework