Citi

Big Data Support Engineer

full-time

Origin:  • 🇺🇸 United States • Florida, Missouri

Salary

💰 $113,840 - $170,760 per year

Job Level

Mid-Level, Senior

Tech Stack

Elasticsearch, Grafana, Hadoop, HDFS, Java, Kafka, Linux, Logstash, Oracle, Python, Shell Scripting, Spark, SQL, Unix

About the role

  • System Health & Support: Partner with various technology teams to ensure our big data applications are integrated correctly, identify issues, and implement solutions. You'll provide expert support for production systems.
  • Problem Resolution: Analyze complex technical problems, troubleshoot incidents, and lead efforts to resolve them quickly to minimize impact on the business.
  • System Enhancement & Design: Help define system requirements for new features and improvements, ensuring they align with our business goals and industry standards.
  • Risk & Compliance: Identify potential risks, vulnerabilities, and security issues within our data platforms, then propose and implement solutions to mitigate them, always keeping audit and compliance standards in mind.
  • Monitoring & Optimization: Use advanced monitoring tools to track the performance and health of our big data clusters. You'll identify bottlenecks, optimize resource usage, and ensure system stability.
  • Change Management: Oversee technical changes to our systems, ensuring they are thoroughly reviewed, approved, and validated to prevent disruptions.
  • Resiliency & Recovery: Work on plans to ensure our systems can quickly recover from disruptions, and participate in disaster recovery tests.
  • Collaboration & Leadership: Work closely with product owners, business analysts, and other teams. You may also advise or mentor junior team members, applying strong communication and influencing skills.

Requirements

  • Experience: 6-8 years of strong experience in IT production support or engineering development, preferably within the financial industry.
  • Automation: Strong orientation toward simplifying and automating production services to reduce toil, improve production stability, and increase productivity.
  • Technical Foundations: Solid experience with Linux operating systems (4-5 years). Strong SQL skills and experience with various databases (e.g., Oracle, Hive). Familiarity with job schedulers like Autosys or Control-M is a plus.
  • Programming & Scripting: Proficiency in scripting languages such as PowerShell, UNIX shell scripting, Python, or Java.
  • Big Data Expertise (Must-Have): Hands-on experience with key Hadoop ecosystem components (e.g., HDFS, Hive, Spark, Kafka). Experience supporting applications on Snowflake is a definite advantage. Ability to analyze and improve big data cluster performance and manage resources efficiently.
  • Observability Tools: Strong understanding and practical experience with monitoring and logging tools such as the ELK Stack (Elasticsearch, Logstash, Kibana) and Grafana.
  • Soft Skills: Excellent communication, interpersonal, and relationship-building skills. Strong analytical, problem-solving, and strategic thinking abilities. High attention to detail, professionalism, and integrity. Ability to work effectively in a fast-paced environment and manage multiple priorities.
  • Education: A Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field is preferred.
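The automation and observability requirements above boil down to scripting health checks that replace repetitive manual triage. As a flavor of that work, here is a minimal, hypothetical sketch (the node names, metric fields, and thresholds are illustrative, not Citi's actual tooling) of a Python check that flags cluster nodes whose disk or heap usage crosses an alert threshold:

```python
def flag_unhealthy(nodes, disk_pct_limit=85.0, heap_pct_limit=90.0):
    """Return the sorted names of nodes whose disk or heap usage
    meets or exceeds the given percentage limits.

    `nodes` maps a node name to a dict with "disk_pct" and "heap_pct"
    keys -- in practice these figures might come from a cluster
    report or a metrics API (hypothetical data shape).
    """
    alerts = []
    for name, metrics in nodes.items():
        if (metrics["disk_pct"] >= disk_pct_limit
                or metrics["heap_pct"] >= heap_pct_limit):
            alerts.append(name)
    return sorted(alerts)


if __name__ == "__main__":
    # Illustrative sample metrics for three data nodes.
    sample = {
        "dn-01": {"disk_pct": 72.4, "heap_pct": 55.0},
        "dn-02": {"disk_pct": 91.3, "heap_pct": 60.1},
        "dn-03": {"disk_pct": 40.0, "heap_pct": 95.5},
    }
    print(flag_unhealthy(sample))  # ['dn-02', 'dn-03']
```

In a real production-support setting a script like this would typically feed an alerting pipeline (e.g., Grafana or the ELK Stack mentioned above) rather than print to stdout.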