Cross River

Senior Back-End Engineer, Node.js

Full-time

Location: 🇮🇱 Israel


Job Level

Senior

Tech Stack

Apollo, AWS, Cloud, Docker, DynamoDB, GraphQL, JavaScript, Microservices, Node.js, Postgres, Ray, Redis, SQL, Terraform, TypeScript

About the role

  • Design and implement REST/GraphQL APIs in Node.js/TypeScript to serve generative‑AI features such as chat, summarization, and content generation
  • Build and maintain AWS‑native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions
  • Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self‑hosted models) and vector databases (Aurora pgvector, Pinecone, Chroma) to power RAG pipelines
  • Create secure, observable, and cost‑efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline
  • Implement monitoring, tracing, and logging (CloudWatch, X‑Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints
  • Collaborate with ML engineers, product managers, and front‑end teams in agile sprints; participate in design reviews and knowledge‑sharing sessions
  • Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage
  • Operate highly reliable Node.js services on AWS that enable generative‑AI capabilities across products and internal workflows
  • Create scalable APIs, data pipelines, and serverless architectures integrating large‑language‑model services
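The RAG duties above center on one core step: ranking stored document chunks against a query embedding before passing the best matches to an LLM. The sketch below is illustrative only (the `Chunk` shape and function names are assumptions, not Cross River's code); in production the embeddings would come from a model such as one served via Amazon Bedrock, and the store would be pgvector, Pinecone, or Chroma rather than an in-memory array.

```typescript
// Hypothetical sketch of the retrieval step in a RAG pipeline:
// rank stored chunks by cosine similarity to the query embedding.

interface Chunk {
  id: string;
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k chunks most similar to the query embedding.
function retrieve(query: number[], store: Chunk[], k: number): Chunk[] {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

A vector database would replace the `sort` with an approximate-nearest-neighbor index query, but the ranking semantics are the same.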

Requirements

  • Available to work some US hours
  • Proficient in Hebrew and English, both written and verbal, sufficient for achieving consensus and success in a remote and largely asynchronous work environment - Must
  • 4+ years professional experience building production services with Node.js/TypeScript
  • 3+ years hands‑on with AWS, including Lambda, API Gateway, DynamoDB, and at least one container service (ECS, EKS, or Fargate)
  • Experience integrating third‑party or cloud‑native LLM services (e.g., Amazon Bedrock, OpenAI API) into production systems
  • Experience building Retrieval‑Augmented Generation (RAG) systems or knowledge‑base chatbots
  • Hands‑on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora
  • AWS certification (Developer, Solutions Architect, or Machine Learning Specialty)
  • Experience with observability tooling (Datadog, New Relic) and cost‑optimization strategies for AI workloads
  • Background in microservices, domain‑driven design, or event‑sourcing patterns
  • Strong understanding of RESTful design, GraphQL fundamentals, and event‑driven architectures (SNS/SQS, EventBridge)
  • Proficiency with infrastructure‑as‑code (AWS CDK, Terraform, or CloudFormation) and CI/CD pipelines (GitHub Actions, AWS CodePipeline)
  • Familiarity with secure coding, authentication/authorization patterns (Cognito, OAuth), and data privacy best practices for AI workloads
  • Familiarity with the technical environment: TypeScript, JavaScript, SQL, Express.js, Fastify, Apollo Server, LangChain‑JS, AWS SDK v3, DynamoDB, Aurora (Postgres + pgvector), Redis, S3, AWS Lambda, API Gateway, ECS/Fargate, Step Functions, CDK, Terraform, Docker, GitHub Actions, Amazon Bedrock, OpenAI API, HuggingFace Inference Endpoints, Pinecone, Chroma
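The prompt-engineering practices listed in the requirements usually start with a small helper that keeps system instructions, retrieved context, and the user's question clearly separated, so each part can be versioned and evaluated on its own. This is a minimal sketch of that pattern (the `PromptParts` shape and section headings are illustrative assumptions, not a specific library's API):

```typescript
// Illustrative prompt builder: assembles a prompt from separately
// maintained system instructions, retrieved context chunks, and the
// user's question.

interface PromptParts {
  system: string;
  context: string[];
  question: string;
}

function buildPrompt({ system, context, question }: PromptParts): string {
  // Number each retrieved chunk so the model can cite its sources.
  const contextBlock = context.length
    ? `Context:\n${context.map((c, i) => `[${i + 1}] ${c}`).join("\n")}`
    : "Context: (none)";
  return [system, contextBlock, `Question: ${question}`].join("\n\n");
}
```

In a LangChain‑JS pipeline the same separation is expressed with prompt templates; keeping it in a plain function like this makes the prompt easy to snapshot-test during model evaluation.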