Cornelis Networks

ASIC Verification Engineer – Automation

full-time

Location Type: Remote

Location: Remote • California • United States

Job Level

Mid-Level, Senior

Tech Stack

Docker, Grafana, Jenkins, Linux, NumPy, pandas, Prometheus, Python, scikit-learn

About the role

  • Own and evolve the tools, flows, automation, and AI capabilities underpinning the entire DV lifecycle—from UVM testbench bring-up and coverage analytics to large-scale regressions, CI/CD, intelligent triage, and release pipelines
  • Collaborate across full-stack software, hardware, RTL design, emulation, and post-silicon teams to deliver robust, reproducible, and data-driven DV at scale
  • Architect, implement, and maintain DV automation and regression infrastructure
  • Build scalable, reliable pipelines for multi-simulator (e.g., VCS, Xcelium, Questa) compilation, elaboration, and execution
  • Own coverage collection/merge (UCIS), results triage, flake detection, and auto-bisection workflows
  • Implement resource- and license-aware scheduling; optimize throughput and cost
  • Own, maintain, and report simulation performance metrics across regression, debug, and coverage workflows
  • Apply AI/ML to accelerate DV flows, debug, and triage
  • Build high-quality datasets from logs, coverage (UCIS), waves, bug trackers, and metadata; define labeling and data hygiene standards
  • Develop models and heuristics for failure clustering, deduplication, auto-classification, and bug-assignment suggestions (an illustrative clustering sketch follows this list)
  • Implement anomaly detection for regressions (e.g., pass-rate drops, performance regressions, license/queue anomalies)
  • Prioritize and select tests/seeds using coverage- and history-informed ranking; predictively gate changes pre-merge
  • Integrate LLMs for log/wave summarization, root-cause hints, and knowledge-base retrieval; surface insights via PR comments, dashboards, or chat interfaces
  • Enforce reproducibility and governance for datasets, features, and prompts
  • Develop CI/CD and release pipelines for DV
  • Create dynamic pre-merge checks, nightly/weekly gating, and sign-off flows using GitHub Actions and/or Jenkins
  • Track artifacts (binaries, waves, logs), tool/seed manifests, and ensure reproducibility for audits and tapeout
  • Define policies for wave capture, retention, and on-demand replay
  • Build high-quality tooling and libraries
  • Author robust Python/Tcl/Bash utilities, CLI tools, and templates for common DV tasks
  • Standardize environment setup (containers/Modules), tool configs, and runbooks
  • Integrate lint/CDC/formal flows and quality gates into automated pipelines
  • Operate at scale on compute infrastructure
  • Integrate with job schedulers (e.g., SLURM/LSF/PBS/SGE) and coordinate with IT/SRE on storage, networking, and license servers
  • Containerize EDA environments (Docker/Podman/Singularity) for consistency and portability
  • Partner with DV and design engineers
  • Support ground-up UVM environment development at block/unit/SoC levels with an emphasis on reusability and instrumentation
  • Enable functional/code coverage closure through standardized testbench hooks and metrics
  • Improve debuggability via log structuring, automated triage, and viewer integrations (e.g., Verdi/DVE)
  • Drive observability and continuous improvement
  • Publish dashboards for pass rate, coverage, performance, and queue/license health (e.g., Grafana/ELK/Prometheus); an illustrative metrics-exporter sketch also follows this list
  • Document flows, teach best practices, and mentor peers on Git/GitHub, automation, and AI-assisted workflows
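
To make the failure-clustering bullet above concrete, here is a minimal, illustrative sketch in Python using the scikit-learn stack named in this posting. It groups failing regression logs by error signature with TF-IDF vectors and DBSCAN; the log directory, the signature-extraction heuristic, and the clustering parameters are assumptions for the example, not existing Cornelis tooling.

```python
# Illustrative sketch only: cluster failing regression logs by error signature.
# LOG_DIR and the signature heuristic are hypothetical, not Cornelis tooling.
import re
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

LOG_DIR = Path("regress_logs")  # hypothetical directory of failing-test logs


def extract_signature(log_text: str) -> str:
    """Keep error/fatal lines and mask run-specific noise (addresses, sim time, seeds)."""
    lines = [l for l in log_text.splitlines() if re.search(r"UVM_(ERROR|FATAL)|\bError-", l)]
    sig = " ".join(lines)
    sig = re.sub(r"0x[0-9a-fA-F]+", "<ADDR>", sig)   # mask hex addresses
    sig = re.sub(r"@\s*\d+", "@<TIME>", sig)         # mask simulation timestamps
    sig = re.sub(r"\bseed=\d+", "seed=<SEED>", sig)  # mask random seeds
    return sig or "<NO_ERROR_LINES>"


logs = {p.name: extract_signature(p.read_text(errors="ignore"))
        for p in sorted(LOG_DIR.glob("*.log"))}

# Vectorize the signatures and cluster; cosine distance groups near-duplicate failures.
vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(logs.values())
labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(vectors)

for name, label in zip(logs, labels):
    print(f"cluster {label:>3}  {name}")  # label -1 means an unclustered, unique failure
```

Masking addresses, timestamps, and seeds before vectorizing is what lets hundreds of seed failures from one underlying bug collapse into a single bucket for triage.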
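
For the observability bullet above, the following is a similarly hedged sketch of feeding pass-rate and license metrics to Prometheus (and thus Grafana) with the standard prometheus_client library. The metric names, port, and get_regression_summary() helper are hypothetical placeholders, not an existing interface.

```python
# Illustrative sketch only: expose regression and license health as Prometheus gauges
# that a Grafana dashboard can chart. get_regression_summary() is a hypothetical hook.
import time

from prometheus_client import Gauge, start_http_server

PASS_RATE = Gauge("dv_regression_pass_rate",
                  "Fraction of tests passing in the latest regression", ["block"])
LICENSES_IN_USE = Gauge("dv_sim_licenses_in_use",
                        "Simulator licenses currently checked out", ["feature"])


def get_regression_summary():
    """Hypothetical hook: a real version would query the regression database and license server."""
    return {"pcie": 0.987, "fabric": 0.942}, {"VCS": 118, "Verdi": 37}


if __name__ == "__main__":
    start_http_server(9108)  # Prometheus scrapes this port
    while True:
        pass_rates, licenses = get_regression_summary()
        for block, rate in pass_rates.items():
            PASS_RATE.labels(block=block).set(rate)
        for feature, count in licenses.items():
            LICENSES_IN_USE.labels(feature=feature).set(count)
        time.sleep(60)
```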

Requirements

  • BS in EE/CE/CS (or related)
  • Proficiency in SystemVerilog and UVM; ability to read RTL and debug to the line
  • Experience with at least one major simulator (preferably Synopsys VCS) and coverage tools (preferably Synopsys URG/Verdi)
  • Strong scripting (Python and shell); working knowledge of Tcl and Make/CMake or similar build systems
  • Hands-on Git and GitHub expertise (Actions, protected branches, PR reviews, CODEOWNERS, required checks)
  • Experience building and maintaining regression systems and dashboards
  • Linux proficiency, including containers (Docker/Podman/Singularity) and environment management (e.g., Lmod/Environment Modules)
  • Familiarity with job schedulers (SLURM/LSF/PBS/SGE) and license-aware scheduling
  • Practical experience applying ML/AI or intelligent heuristics to software/EDA operations or DV (e.g., failure clustering, anomaly detection, test prioritization, log summarization); an illustrative anomaly-detection sketch follows this list
  • Proficiency with Python data/ML stack (pandas, NumPy, scikit-learn); ability to build reliable data pipelines from DV artifacts
  • Comfort using LLMs via APIs for summarization, retrieval, or assistant workflows, with attention to privacy and IP protection
  • Mid-level: 5+ years in ASIC DV and/or DV/EDA automation; able to autonomously implement features, maintain pipelines, and handle day-to-day operations
  • Senior level: 8-10+ years in ASIC DV with significant ownership of DV infrastructure and AI-assisted flows; able to architect systems, set standards, and lead cross-functional initiatives
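
As one illustration of the anomaly-detection experience listed above, here is a minimal pandas sketch that flags nightly pass-rate drops against a rolling baseline. The input file, column names, and z-score threshold are assumptions for the example, not an existing Cornelis format.

```python
# Illustrative sketch only: flag nightly pass-rate drops against a rolling baseline.
# The CSV layout (date, pass_rate columns) and the -3.0 threshold are assumptions.
import pandas as pd

# One row per nightly regression, pass_rate in [0, 1].
df = pd.read_csv("nightly_results.csv", parse_dates=["date"]).sort_values("date")

# Baseline excludes the current night (shift) so a drop cannot mask itself.
history = df["pass_rate"].shift(1).rolling(window=14, min_periods=7)
df["zscore"] = (df["pass_rate"] - history.mean()) / history.std()

# One-sided test: only drops below the recent baseline count as anomalies.
df["anomaly"] = df["zscore"] < -3.0

print(df.loc[df["anomaly"], ["date", "pass_rate", "zscore"]].to_string(index=False))
```
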
Benefits
  • Health insurance
  • Dental coverage
  • Vision coverage
  • Disability insurance
  • Life insurance
  • Dependent care flexible spending account
  • Accidental injury insurance
  • Pet insurance
  • Generous paid holidays
  • 401(k) with company match
  • Open Time Off (OTO) for regular full-time exempt employees
  • Sick time
  • Bonding leave
  • Pregnancy disability leave

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
SystemVerilog, UVM, Python, Tcl, Bash, Git, CI/CD, ML, AI, EDA
Soft skills
collaboration, mentoring, communication, problem-solving, leadership, organizational skills, debugging, observability, continuous improvement, teaching
Certifications
BS in EE, BS in CE, BS in CS