Design, ETL frameworks, GPUs, Kubernetes (K8s), machine learning techniques, PyTorch, Transformers
You care about making safe, steerable, trustworthy systems and are excited to commercialize them. You want to work at the confluence of safety research, capabilities research, and product. As a Research Engineer, you'll touch all parts of our code and infrastructure, whether that means designing and running experiments, working with major partners to improve our AI systems for their use cases, combining AI safety and capabilities advances into a single new system, or partnering with the API team to ensure the safety and security of new deployments. You're excited to write code when you understand the research context and, more broadly, why it's important.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Responsibilities:
You might be a good fit if you:
Strong candidates may also:
Annual Salary (USD)
Hybrid policy & US visa sponsorship: Currently, we expect all staff to be in our office at least 25% of the time. We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.
Role-specific policy: For this role, we prefer candidates who are able to be in our office more than 25% of the time, though we encourage you to apply even if you don’t think you will be able to do that.
Anthropic, a public-benefit corporation and AI startup based in the United States, was established by former members of OpenAI. The company's primary focus is on creating general AI systems and language models, while maintaining a philosophy of responsible AI use.
San Francisco, CA, USA · 8–10 years experience
San Francisco, CA, USA · 4–6 years experience
New York, NY, USA · 4–6 years experience