Skills: Algorithms, Apache Flink, Apache Hadoop, AWS, Data structuring, Google Cloud Platform (GCP), JAX framework, Kubernetes (K8s), Machine learning techniques, PyTorch, Spark Core, TensorFlow
About the role:
Anyscale is looking to hire strong engineers to build next-generation, high-performance machine learning serving systems (both our open source libraries and our SaaS offering).
Much of the tooling used to serve ML models today is inherited from the previous generation of infrastructure, but emerging ML applications come with a new set of requirements: high compute demands, specialized hardware, and the composition of many different models along with business logic in a single request.
Our goal is to provide a simple but powerful set of tools that makes bringing complex ML applications to production a reality.
About the Platform team:
The Platform team’s mission is to build world-class systems for serving ML models in production. Part of this work is building and maintaining the open source Ray Serve library, as well as contributing directly to the Anyscale platform used by our customers to run mission-critical applications.
Much of our work is user-facing: you’ll have the opportunity to collaborate with open source users and customers from small startups with lean ML engineering teams to industry-leading companies using Ray, such as Uber, Shopify, and ByteDance.
As part of this role, you will:
We'd love to hear from you if you have:
Bonus points!
Compensation
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We’re commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more have Ray in their tech stacks to accelerate the progress of AI applications out into the real world.