Platform Data Engineer (Kubernetes / Cloud) (24006) Seattle, Washington

Salary: $60 - $70 USD per hour

Platform Data Engineer (Kubernetes / Cloud)

W2 Contract

Salary Range: $124,800 - $145,600 per year

Location: Seattle, WA - Hybrid Role

Job Summary:

As part of our Cloud Infrastructure and Data Platform Team, you will design, develop, and operate large-scale Kubernetes- and AWS-based platforms that enable developers across our company to innovate faster. The scale of our systems brings challenges that require creative problem-solving. By focusing on automation, reliability, and security, you'll build the technology foundation that supports our services worldwide.

Duties and Responsibilities:

  • Design, deploy, and operate multi-tenant Kubernetes platforms on public and private cloud environments.
  • Implement Infrastructure as Code to provision, manage, and scale environments consistently across regions.
  • Drive adoption of GitOps workflows to deliver automated, reliable configuration and deployment management.
  • Build automation and tooling in Python or Java to streamline developer workflows, CI/CD, and operational efficiency (a brief illustrative Python sketch follows this list).
  • Collaborate with cross-functional engineering teams to integrate infrastructure with data pipelines, services, and applications.
  • Champion reliability and observability through monitoring, alerting, and performance tuning using modern tools.
  • Ensure platform security, compliance, and cost efficiency while supporting mission-critical services.
  • Participate in on-call rotations to support production environments at scale.
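
As an illustration of the "automation and tooling" responsibility above, here is a minimal, non-authoritative sketch in Python. It assumes the official kubernetes Python client (pip install kubernetes) and a local kubeconfig with read access to a cluster; the summarize_pods helper and its report format are hypothetical examples, not the team's actual tooling.

# Minimal sketch: count pods per namespace with the official Kubernetes
# Python client. Assumes a reachable cluster and a local kubeconfig; the
# helper name and report format are illustrative only.
from collections import Counter

from kubernetes import client, config


def summarize_pods() -> Counter:
    """Return a count of pods per namespace across the whole cluster."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    counts = Counter()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        counts[pod.metadata.namespace] += 1
    return counts


if __name__ == "__main__":
    for namespace, count in sorted(summarize_pods().items()):
        print(f"{namespace}: {count} pods")

In practice, tooling like this would typically run in-cluster with least-privilege RBAC and feed its results into the platform's observability stack rather than printing to stdout.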

Requirements and Qualifications:

  • Expertise in Kubernetes and container orchestration in production environments.
  • Deep understanding of cloud infrastructure (AWS preferred), including networking, compute, storage, and identity management.
  • Strong background in Infrastructure as Code (Terraform, Crossplane, or equivalent).
  • Familiarity with GitOps principles and tooling for deployment automation.
  • Experience with observability and monitoring systems (Prometheus, Grafana, Datadog, or similar); see the brief instrumentation sketch after this list.
  • Proficiency in Python or Java for platform automation and integration.
  • Knowledge of autoscaling (Karpenter, cluster autoscalers) and ingress patterns for highly available workloads.
  • Understanding of modern CI/CD practices and version control workflows.
  • 7+ years of experience in DevOps, SRE, or Platform Engineering roles.
  • 3+ years operating Kubernetes clusters in production.
  • Proven expertise in AWS and cloud-native architectures.
  • Strong coding ability in Python or Java to automate and integrate infrastructure.
  • Demonstrated success operating highly available, secure, and scalable systems.
  • Excellent troubleshooting skills and the ability to collaborate effectively across teams.
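
As a concrete, purely illustrative example of the observability and Python items above, the sketch below exposes custom metrics from a Python service using the prometheus_client library. The metric names, port 8000, and the handle_deploy function are assumptions made for this example only.

# Minimal sketch: expose custom metrics for Prometheus to scrape, using the
# prometheus_client library. Metric names, the port, and the simulated work
# are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

DEPLOYS = Counter("platform_deploys_total", "Deployments processed", ["status"])
DEPLOY_SECONDS = Histogram("platform_deploy_duration_seconds", "Deployment duration in seconds")


def handle_deploy() -> None:
    """Simulate processing one deployment, recording its duration and outcome."""
    with DEPLOY_SECONDS.time():
        time.sleep(random.uniform(0.1, 0.5))  # placeholder for real work
    DEPLOYS.labels(status="success").inc()


if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        handle_deploy()
        time.sleep(5)

Dashboards and alerts (for example, on rate(platform_deploys_total[5m])) would then be built on top of these series in Grafana or a similar tool.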

Preferred Qualifications:

  • Certifications in Kubernetes or AWS.
  • Experience with data platform ecosystems (Spark, Trino, Flink, etc.).
  • Exposure to service meshes, policy frameworks, or secrets management.
  • Contributions to open source projects in cloud-native, Kubernetes, or infrastructure domains.
  • Experience developing Airflow DAGs or extending Airflow with custom operators (a brief sketch follows this list).
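
For orientation on the Airflow item above, here is a minimal sketch of a DAG that uses a custom operator. It assumes Airflow 2.4 or later; the GreetOperator class, dag_id, schedule, and task names are illustrative only and not tied to any real pipeline.

# Minimal sketch: an Airflow DAG with a toy custom operator, assuming
# Airflow 2.4+. All names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.models.baseoperator import BaseOperator


class GreetOperator(BaseOperator):
    """Toy custom operator that logs a greeting for a given target."""

    def __init__(self, target: str, **kwargs):
        super().__init__(**kwargs)
        self.target = target

    def execute(self, context):
        self.log.info("Hello, %s", self.target)
        return self.target


with DAG(
    dag_id="example_custom_operator",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    GreetOperator(task_id="greet", target="platform-team")

Real custom operators typically wrap an external system (a data platform API, a deployment service) behind execute(), which keeps DAG files declarative and testable.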


Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as W2 candidates.

Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at www.baysidesolutions.com.
