
Database Engineer
Chess.com
Full-time
Location Type: Remote
Location: Anywhere in the World
About the role
- Design and architect database systems optimized for Chess.com's specific workloads including real-time gaming, puzzle systems, and social graph traversal
- Build internal tooling and automation to improve database deployment, migration, and operational efficiency, simplifying work streams for the broader engineering organization
- Develop data pipelines and ETL processes for analytics, machine learning features, and cross-system data synchronization
- Engineer multi-regional database architectures capable of handling massive volumes of chess games, user data, and social network interactions with minimal latency
- Solve complex data modeling challenges including chess game storage optimization, puzzle attempt tracking, and large-scale social graph representation
- Drive database platform evolution evaluating and implementing new technologies, storage engines, and architectural patterns with a bias toward continuous improvement
- Build observability and performance tooling providing deep visibility into database behavior, query patterns, and capacity trends
- Collaborate with product engineering teams to design schemas, access patterns, and data-layer integrations that give teams the data access they need
- Implement infrastructure-as-code practices for database provisioning, configuration, and lifecycle management, with an emphasis on getting changes right the first time
- Participate in on-call rotation to ensure 24/7 database availability and contribute to incident post-mortems
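The observability tooling described above might, for example, aggregate slow-query-log entries by normalized query fingerprint to surface capacity and query-pattern trends. A minimal illustrative sketch in Python (the log format, helper names, and thresholds here are assumptions, not Chess.com specifics):

```python
import re
from collections import defaultdict

def fingerprint(query: str) -> str:
    """Normalize a SQL statement so structurally identical queries group together."""
    q = query.strip().lower()
    q = re.sub(r"'[^']*'", "?", q)   # replace string literals with ?
    q = re.sub(r"\b\d+\b", "?", q)   # replace numeric literals with ?
    q = re.sub(r"\s+", " ", q)       # collapse whitespace
    return q

def aggregate(entries):
    """entries: iterable of (query, seconds). Returns {fingerprint: (count, total_seconds)}."""
    stats = defaultdict(lambda: [0, 0.0])
    for query, seconds in entries:
        fp = fingerprint(query)
        stats[fp][0] += 1
        stats[fp][1] += seconds
    return {fp: tuple(v) for fp, v in stats.items()}

# Hypothetical slow-log samples for illustration only.
sample = [
    ("SELECT * FROM games WHERE id = 42", 1.3),
    ("SELECT * FROM games WHERE id = 99", 0.9),
    ("SELECT name FROM users WHERE handle = 'magnus'", 2.1),
]
print(aggregate(sample))
```

Tools like pt-query-digest and PMM apply the same fingerprinting idea at production scale; this sketch only shows the core grouping step.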
Requirements
- 5+ years of professional database engineering experience with large-scale, high-availability database systems in production environments
- Expert-level proficiency with MySQL (Percona) including internals, storage engine behavior, replication topologies, and performance optimization
- Strong software engineering skills with proficiency in Python and/or Go for tooling, automation, and data pipeline development
- Experience designing and building data pipelines using streaming or batch processing frameworks
- Strong experience with Redis for caching architectures, pub/sub systems, and high-performance data structures
- Advanced Linux systems knowledge with understanding of kernel behavior, I/O patterns, and hardware optimization for database workloads
- Experience with distributed systems concepts including CAP theorem trade-offs, consensus protocols, and partition tolerance
- Proficiency with infrastructure-as-code tools (Terraform, Ansible, Pulumi) for database infrastructure automation
- Experience with monitoring and observability platforms (Datadog, Prometheus, PMM) for building database observability solutions
- Strong understanding of query optimization including execution plans, index design, and workload analysis
- Gaming industry experience with understanding of real-time gaming database requirements and low-latency data access patterns (preferred)
- Deep MySQL internals knowledge including InnoDB internals, buffer pool tuning, redo/undo logs, and MVCC behavior (preferred)
- Experience building database proxies or middleware (ProxySQL, Vitess, custom solutions) for connection management and query routing (preferred)
- MySQL replication expertise including GTID-based replication, multi-source replication, and replication lag optimization (preferred)
- Cloud database architecture experience with AWS (RDS, Aurora) and/or GCP (Cloud SQL, AlloyDB) for hybrid database strategies (preferred)
- Experience with ScyllaDB and/or Cassandra for high-throughput, low-latency distributed workloads (preferred)
- Knowledge of MySQL sharding and partitioning strategies for large-scale data distribution and query performance (preferred)
- Experience building zero-downtime migration tooling (pt-online-schema-change, gh-ost) for schema evolution at scale (preferred)
- Container and Kubernetes experience for database operator development and cloud-native MySQL deployments (preferred)
- MySQL backup and recovery expertise including Percona XtraBackup, point-in-time recovery, and disaster recovery procedures (preferred)
- Track record of successful project delivery and measurable infrastructure improvements (preferred)
- Open source contributions to MySQL ecosystem tools, drivers, or infrastructure projects (preferred)
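Several of the preferred qualifications above (sharding strategies, query routing, proxy/middleware work) come down to deciding which shard owns a given key. As a purely illustrative sketch, and not a description of Chess.com's actual architecture, a consistent-hash shard router might look like this in Python:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring routing keys (e.g. user IDs) to shards.

    Virtual nodes smooth out the key distribution; 64 per shard is an
    arbitrary illustrative choice, as are the shard names below.
    """

    def __init__(self, shards, vnodes=64):
        self._ring = []  # sorted list of (hash, shard)
        for shard in shards:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{shard}#{i}"), shard))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        # First 8 bytes of MD5 as an integer; any stable hash works here.
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def shard_for(self, key: str) -> str:
        """Route a key to the first ring position at or after its hash, wrapping."""
        idx = bisect.bisect_right(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

ring = HashRing(["mysql-shard-0", "mysql-shard-1", "mysql-shard-2"])
print(ring.shard_for("user:123456"))
```

The property that matters for zero-downtime resharding is that adding a shard remaps only the keys adjacent to its new ring positions, rather than rehashing everything; production systems such as Vitess layer routing like this behind a proxy tier.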
Benefits
- 100% remote (work from anywhere!)
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
database engineering, MySQL, Python, Go, data pipelines, Redis, Linux systems, infrastructure-as-code, query optimization, cloud database architecture
Soft Skills
collaboration, problem-solving, continuous improvement, incident management, project delivery