Senior Engineering Lead, DevOps

Gather AI

Job Description

About us

Gather AI is a supply chain robotics company founded by the PhDs from Carnegie Mellon’s Robotics Institute who created the world’s first provably-safe autonomous helicopter. We have developed an Inventory-as-a-Service platform where fully autonomous drones collect warehouse inventory data at the press of a button.

This is an essential problem to solve: the warehouses we serve have typically misplaced over 10% of their inventory, worth more than $10 million (seriously!). Their manual techniques for taking inventory are breaking down under the e-commerce boom brought on by COVID, made worse by the labor shortage and 70% annual staff turnover. Our drones take inventory 15x faster than humans with over 95% accuracy. We deliver this data through our web dashboard, which acts as a DVR for their warehouse and is where they run their inventory operation. We are the leader in this new market with proven technology: our drones are live in a dozen warehouses and have scanned over 150k pallet locations.

We are a pure-software robotics company and our key innovation is the world’s only autonomy and machine learning engine that can solve this problem with commodity hardware in GPS-denied environments. That means we avoid all of the hardware development pitfalls of traditional robotics companies and we can scale 10x faster. The robotics industry is starting to enter its “Google era,” and we are leading the charge.

About You

You are a detail-oriented, self-directed person who enjoys creating infrastructure-as-code. You are excited about the prospect of working across a broad array of DevOps concerns, including automating the deployment and scaling of ML pipelines for our AI and web dashboards, helping lead our teams’ containerization and automated deployment efforts, and building out advanced metrics monitoring infrastructure. Maybe you’ve worked on big projects at a big company, or on many small consulting projects that needed standard infrastructure, or even at a startup where you were turning ideas into working software platforms. You are ready for a fresh challenge: to be the person who defines what DevOps and deployment look like at a fast-growing, AI- and robotics-centric company. You love test-driving new technologies, and you like the challenge of incorporating them into your organization in a secure, sustainable way.

What You’ll Need

  • BS in Computer Science/Engineering or equivalent technical experience.
  • 10+ years of internet technology work experience, as a programmer or infrastructure-as-code developer.
  • Experience deploying containerized services in production.
  • Comfortable with cloud technologies, e.g., cloud VMs, databases, blob storage, serverless functions.
  • Interest and experience designing secure, maintainable cloud deployment pipelines.
  • Strong familiarity with the GitHub ecosystem and modern CI/CD practices.
  • Knowledge of and comfort with cloud compute technologies, including network, data integrity (backup), and security considerations.
  • Customer obsession! We are a customer-obsessed company. If you are not already customer-obsessed, expect to become so!

Nice to Have

  • 2+ years of experience working with production infrastructure-as-code technologies (e.g., AWS CDK, Terraform, Pulumi); a brief sketch follows this list.
  • Deep knowledge and experience in at least one of the major cloud compute platforms (AWS, Azure, and/or Google Cloud); note that we currently run on multiple providers (AWS and Azure).
  • Experience in distributed ML inference with platforms such as AWS SageMaker, GCP Vertex AI, Seldon, or Kubeflow.
  • Interest and experience in building complete code-to-production pipelines.
  • Specific experience building/maintaining metrics and logging systems.
  • Familiarity with flexible, cloud-based CI/CD tooling, such as GitHub Actions.
  • Familiarity with clustering tools such as Kubernetes.
  • Expertise in ML is not required, but familiarity with ML architectures and the ML lifecycle, especially deep learning for computer vision, is a plus.
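
To give a concrete flavor of the infrastructure-as-code work referenced above, here is a minimal sketch using the AWS CDK v2 in Python. Everything in it is illustrative: the stack, bucket, VPC, and cluster are hypothetical examples, not a description of our actual infrastructure.

# A minimal, illustrative AWS CDK v2 stack in Python. The stack, bucket,
# and cluster shown here are hypothetical examples, not our real setup.
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_s3 as s3
from constructs import Construct


class ExampleInferenceStack(cdk.Stack):
    """Hypothetical stack: artifact storage plus a cluster for containers."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private, versioned bucket for model artifacts (illustrative only).
        s3.Bucket(
            self,
            "ModelArtifacts",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

        # A small VPC and an ECS cluster to run containerized services.
        vpc = ec2.Vpc(self, "ServiceVpc", max_azs=2)
        ecs.Cluster(self, "ServiceCluster", vpc=vpc)


app = cdk.App()
ExampleInferenceStack(app, "ExampleInferenceStack")
app.synth()

Terraform or Pulumi would express the same resources in their own syntax; the point is that infrastructure changes are written, reviewed, and versioned like any other code.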

What You’ll Do

As a DevOps Engineer working with our ML and Web teams, you will:

  • Identify and implement containerization, networking, and security best practices for our web and ML back-end applications.
  • Help us scale our ML pipeline by improving how it is packaged and how the inference workload is distributed across multiple nodes.
  • Ensure the reliability and observability of our pipelines by introducing monitoring, metrics, and logging tools.
  • Increase our development velocity by leveraging containerization, infrastructure-as-code, and modern CI/CD practices.
  • Create tools, automation scripts, and processes to manage our ML models and our datasets (see the sketch below).
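
As a purely illustrative example of the kind of lightweight tooling the last bullet describes, here is a minimal Python sketch that publishes a versioned model artifact to object storage with boto3. The bucket name, key layout, and metadata fields are hypothetical assumptions, not our actual tooling.

# Illustrative sketch: publish a versioned ML model artifact to S3.
# The bucket name, key layout, and metadata here are hypothetical.
import argparse
import hashlib
from pathlib import Path

import boto3


def sha256_of(path: Path) -> str:
    """Compute a checksum so a model artifact can be verified after download."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def publish_model(artifact: Path, bucket: str, model_name: str, version: str) -> str:
    """Upload the artifact under a versioned key and attach basic metadata."""
    key = f"models/{model_name}/{version}/{artifact.name}"
    s3 = boto3.client("s3")
    s3.upload_file(
        str(artifact),
        bucket,
        key,
        ExtraArgs={"Metadata": {"sha256": sha256_of(artifact), "version": version}},
    )
    return f"s3://{bucket}/{key}"


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Publish a model artifact to S3.")
    parser.add_argument("artifact", type=Path)
    parser.add_argument("--bucket", default="example-model-artifacts")
    parser.add_argument("--model-name", required=True)
    parser.add_argument("--version", required=True)
    args = parser.parse_args()
    print(publish_model(args.artifact, args.bucket, args.model_name, args.version))

In practice, a small tool like this could be wired into CI so that every trained model lands in a predictable, checksummed location.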

Compensation and Benefits

  • Competitive salary
  • Comprehensive health insurance
  • Very flexible schedule
  • Customized PTO

If this sounds like a good fit, we’d love to meet you. Robotics is the future, and we’re leading the charge with our software-only business model. Come help us change the world!

Company Info.

Gather AI

  • Industry
    Information Technology, Artificial Intelligence
  • No. of Employees
    28
  • Location
    Pittsburgh, PA, USA
