Tech Stack
Airflow, Cloud, Kubernetes, Linux, Python
About the role
- Work on technologies that improve development and prototyping in virtualization/cloud computing, including container technology/Kubernetes platforms and AI/ML pipelines
- Contribute to a project in ONE of the following areas: data analytics/machine learning, including Exploratory Data Analysis (EDA); C/C++ and Python implementation of models for data pipelines (training, serving, testing); or LSTM models, embedding models, and Large Language Models (LLMs), including encoders/decoders
- Collaborate with a diverse, global R&D team to support 5G cloud-native solutions
- Participate in prototyping, model training, serving, and testing within data pipelines
Requirements
- Pursuing a degree in Data Science or a related field, with some experience in data classification models, time series, and large language models
- Proficiency in Python and working knowledge of C/C++
- Experience with the end-to-end Exploratory Data Analysis (EDA) process, including projects that apply it
- Experience with Jupyter Notebooks and knowledge of common AI prototyping tools/libraries
- IP networking knowledge (IPv4/IPv6) (nice to have)
- Familiarity with Linux, Git, and AI tools such as MLflow, Airflow, and scikit-learn (nice to have)