Senior DevOps Engineer

Remote or Hayward, California
Mar 26, 2021
Biotech Bay
Required Education
High School or equivalent
Position Type
Full time

Job Summary:


We are seeking a passionate engineer who is ready to make an impact. We’re looking for a DevOps engineer to transform the infrastructure that undergirds our next-generation computational drug discovery and development platform. The ideal candidate can quickly come up to speed on our existing cloud technology and work with stakeholders — primarily engineers and data scientists — to dramatically improve our R&D infrastructure. Success in this role will enable engineers and scientists of all stripes to build tools, services, products, dashboards, and models reliably and safely, further accelerating our ability to quickly and cost-effectively discover, develop, and deliver life-changing drugs.


The Arcus Biosciences IT organization is in the business of trust and reliability. We create, maintain and operate scalable technology and data solutions that deliver an exceptional experience for our rapidly growing global operations. We embrace Agile principles and values, favor DevOps practices, and view infrastructure as code, all while we create an infrastructure that scales and supports our growth and ambitious vision. This requires a smart, highly collaborative team who can identify, investigate, and implement new technologies to continue securely scaling our operations.


This position is an Individual Contributor, reporting to the Associate Director, IT.


We are located in the San Francisco Bay Area, the heart of the world’s premier biotechnology research hub. Arcus offers a competitive compensation and benefits package, including participation in the aggressive growth of the company in the form of stock option grants.


We are in the relentless pursuit of curing cancer. Come join our team!


Duties and Essential Job Functions:


  • Design, implement, scale, and secure services using innovative solutions
  • Execute projects with an automation mindset, using infrastructure-as-code and at-scale deployment
  • Automate where possible, implement and improve processes where necessary
  • Collaborate closely with software and data science teams, and with other business stakeholders
  • Proactively monitor and improve infrastructure stack(s) with proper prioritization and collaboration
  • Understand how to code automated infrastructure in a "cloud native" architecture
  • Take on a broad set of no-frills infrastructure tasks, from Linux server administration, to cloud resource provisioning, to application monitoring
  • Partner with the SecOps team to develop a strategy to monitor active or emerging threats and vulnerabilities
  • Drive continuous improvement through process feedback
  • Perform routine audits of the infrastructure and build a resolution plan around any deficiencies/areas for improvement
  • Participate in on-call rotation, respond to alerts, troubleshoot and resolve problems
  • Demonstrate a strong customer service orientation and a dedication to consistent quality


Qualifications:


  • Expert or proven in-depth knowledge of AWS, specifically S3, EC2, IAM, Lambda, CloudFormation, etc. (Azure a plus)
  • Rich experience with Unix/Linux system administration, preferably Red Hat Enterprise Linux or CentOS
  • Experience with configuration management and orchestration tools such as Ansible, Chef, Salt, or Puppet
  • Experience with platform, application, and network performance monitoring and alerting tools such as New Relic, Splunk, Dynatrace, or equivalent
  • Experience with CI/CD tools such as Jenkins, and with container technologies such as Docker, Docker Swarm, and Kubernetes
  • Solid understanding of a scripting language such as Bash or Python (PowerShell a plus)
  • Solid understanding of networking concepts such as AWS VPC, VPN, and DNS, and the ability to troubleshoot networking issues
  • Strong comprehension of continuous integration and continuous deployment methodologies, software version control systems such as Git or Mercurial, and infrastructure-as-code tools such as CloudFormation or Terraform
  • Preferred: Experience in bioinformatics and pipeline efforts
  • Preferred: Familiarity with distributed Linux systems administration
  • Preferred: Familiarity with SecOps processes, e.g., event/incident management
  • Preferred: Experience administering Jira, Confluence, Bitbucket, and complementary technologies
  • Minimum 5 years of experience with DevOps
  • Bachelor’s Degree in Computer Science or a related field, OR a High School Diploma/GED or higher from an accredited institution plus a minimum of ten (10) years of experience in automation and deployment in lieu of the bachelor’s degree requirement


NOTE:  This job description is not intended to be all-inclusive. Employee may perform other related duties as requested to meet the ongoing needs of the organization.