Tech Stack
Airflow, Amazon Redshift, AWS, Cloud, ETL, Python, Scala, scikit-learn, SQL, TensorFlow
About the role
- Design and maintain intelligent dashboards, automated reports, and self-service analytics platforms using Looker (LookML), Power BI, and emerging AI-native BI tools.
- Build and optimize real-time data models, ETL/ELT pipelines, and streaming analytics architectures to support scalable reporting and analytics.
- Implement automated data quality monitoring, anomaly detection, and intelligent alerting systems to ensure reliable insights.
- Integrate large language models (LLMs) and generative AI tools to enable natural language querying and automated insight generation.
- Develop predictive models, recommendation engines, and forecasting systems embedded within BI workflows.
- Leverage AI coding assistants (e.g., GitHub Copilot, Cursor) for rapid development, automated testing, and intelligent code optimization.
- Apply core programming principles to build scalable, maintainable analytics solutions, including modular code design and version-controlled development workflows.
- Collaborate with cross-functional teams to identify opportunities for AI-enhanced decision-making and autonomous business processes.
- Lead the transition from static reporting to dynamic, AI-driven analytics while mentoring junior engineers in modern BI and AI methodologies.
- Adapt quickly to changing business needs, priorities, and technologies.
Requirements
- 3+ years of experience in BI development, including integration of AI/ML tools into analytics workflows.
- Advanced SQL proficiency and hands-on experience with modern data platforms (e.g., Databricks, Amazon Redshift, Amazon S3, cloud-based data lakes).
- Proficiency with BI tools such as Looker and Power BI, including AI-enhanced features such as Power BI smart visuals and ML integration.
- Hands-on experience with the modern data stack and orchestration tools (e.g., dbt, Airflow, Prefect), including reverse ETL, data mesh architectures, and real-time personalization.
- Experience building and optimizing scalable analytics solutions for high-volume data processing in integrated cloud environments.
- Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow) and deployment tools (e.g., MLflow).
- Hands-on experience with generative AI tools and prompt engineering, applied to code generation, data analysis, and automated insight discovery within AI-assisted development workflows.
- Hands-on experience with cloud-native data and AI services (e.g., AWS Bedrock, Databricks AI/Genie).
- Ability to communicate insights effectively to non-technical stakeholders.
- Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, or a related quantitative field (preferred).