
Senior Back-End Engineer, Node.js
Cross River
full-time
Location: 🇮🇱 Israel
Job Level
Senior
Tech Stack
Apollo, AWS, Cloud, Docker, DynamoDB, GraphQL, JavaScript, Microservices, Node.js, Postgres, Ray, Redis, SQL, Terraform, TypeScript
About the role
- Design, build, and operate highly reliable Node.js/TypeScript services on AWS to enable generative-AI capabilities across products and internal workflows
- Design and implement REST/GraphQL APIs to serve chat, summarization, and content-generation features
- Build and maintain AWS-native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions
- Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self-hosted models) and vector databases (Aurora pgvector, Pinecone, Chroma) to power RAG pipelines (see the sketch after this list)
- Create secure, observable, and cost-efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline
- Implement monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints
- Collaborate with ML engineers, product managers, and front-end teams in agile sprints; participate in design reviews and knowledge-sharing sessions
- Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage
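A minimal sketch of the RAG flow described above, in TypeScript with AWS SDK v3: it embeds a query with a Titan embedding model on Amazon Bedrock, retrieves the nearest chunks from a documents table with a pgvector column on Aurora Postgres, and prompts a Claude model with the retrieved context. The model IDs, table name, and column names are illustrative assumptions.

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";
import { Pool } from "pg";

const bedrock = new BedrockRuntimeClient({ region: "us-east-1" });
const pool = new Pool({ connectionString: process.env.DATABASE_URL }); // Aurora Postgres + pgvector

// Embed the user query with a Titan embedding model (model id is an assumption).
async function embed(text: string): Promise<number[]> {
  const res = await bedrock.send(new InvokeModelCommand({
    modelId: "amazon.titan-embed-text-v2:0",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({ inputText: text }),
  }));
  return JSON.parse(new TextDecoder().decode(res.body)).embedding;
}

// Retrieve the closest chunks from a hypothetical `documents` table ordered by cosine distance.
async function retrieve(queryEmbedding: number[], k = 5): Promise<string[]> {
  const { rows } = await pool.query(
    "SELECT chunk_text FROM documents ORDER BY embedding <=> $1::vector LIMIT $2",
    [JSON.stringify(queryEmbedding), k],
  );
  return rows.map((r) => r.chunk_text);
}

// Answer a question with retrieved context using a Claude model on Bedrock
// (request/response shapes follow the Anthropic-on-Bedrock InvokeModel format).
export async function answer(question: string): Promise<string> {
  const context = (await retrieve(await embed(question))).join("\n---\n");
  const res = await bedrock.send(new InvokeModelCommand({
    modelId: "anthropic.claude-3-haiku-20240307-v1:0",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 512,
      messages: [{ role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` }],
    }),
  }));
  return JSON.parse(new TextDecoder().decode(res.body)).content[0].text;
}
```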
Requirements
- Available to work some US hours
- Proficient in Hebrew and English, both written and verbal (must)
- 4+ years professional experience building production services with Node.js/TypeScript
- 3+ years hands-on with AWS, including Lambda, API Gateway, DynamoDB, and at least one container service (ECS, EKS, or Fargate)
- Experience integrating third-party or cloud-native LLM services (e.g., Amazon Bedrock, OpenAI API) into production systems
- Experience building Retrieval-Augmented Generation (RAG) systems or knowledge-base chatbots
- Hands-on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora
- AWS certification (Developer, Solutions Architect, or Machine Learning Specialty)
- Experience with observability tooling (Datadog, New Relic) and cost-optimization strategies for AI workloads
- Background in microservices, domain-driven design, or event-sourcing patterns
- Strong understanding of RESTful design, GraphQL fundamentals, and event-driven architectures (SNS/SQS, EventBridge)
- Proficiency with infrastructure-as-code (AWS CDK, Terraform, or CloudFormation) and CI/CD pipelines (GitHub Actions, AWS CodePipeline); see the CDK sketch below
- Familiarity with secure coding, authentication/authorization patterns (Cognito, OAuth), and data privacy best practices for AI workloads
- Familiarity with monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry)
- Technical proficiency with TypeScript, JavaScript, SQL, Express.js, Fastify, Apollo Server, LangChain-JS, AWS SDK v3
- Experience with datastores: DynamoDB, Aurora (Postgres + pgvector), Redis, S3
- Experience with containers and infra: Lambda, API Gateway, ECS/Fargate, Step Functions, Docker
- Familiarity with AI stack: Amazon Bedrock, OpenAI API, HuggingFace Inference Endpoints, Pinecone, Chroma
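On the infrastructure-as-code side, a minimal AWS CDK (TypeScript) sketch of the kind of stack named above: a Node.js Lambda behind API Gateway with X-Ray tracing enabled and a DynamoDB table for conversation state. Construct IDs, the asset path, and the table schema are illustrative assumptions.

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";

export class ChatApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Conversation-state table (name and key schema are illustrative).
    const table = new dynamodb.Table(this, "Conversations", {
      partitionKey: { name: "conversationId", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Node.js Lambda handler with X-Ray tracing enabled.
    const handler = new lambda.Function(this, "ChatHandler", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/chat-handler"),
      tracing: lambda.Tracing.ACTIVE,
      environment: { TABLE_NAME: table.tableName },
    });
    table.grantReadWriteData(handler);

    // REST API fronting the handler, also traced with X-Ray.
    new apigateway.LambdaRestApi(this, "ChatApi", {
      handler,
      deployOptions: { tracingEnabled: true },
    });
  }
}
```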