We are looking for Research Engineers to build “gold standard” evaluations for catastrophic risks, in order to determine which AI Safety Level (ASL) to assign to models. Research leads on this team collaborate with engineers in one of our focus areas: CBRN, Cyber, and Autonomy (this list may expand over time). ASL determinations have major implications for how we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP).
The policy defines a series of capability thresholds, called AI Safety Levels (ASLs), that represent increasing levels of risk. Crossing an ASL threshold would trigger a commitment to more stringent safety, security, and operational measures intended to handle that increased risk.
Please note: We are currently only hiring for the Autonomous Replication and Adaptation (Autonomy) threats workstream. We will also be prioritizing candidates who can start as soon as possible and can be based in either our San Francisco or London office.
Responsibilities:
You may be a good fit if you:
For all workstreams, experience designing and building evaluations is valuable but not essential. For the national security threat workstreams, we particularly value experience working on confidential or sensitive projects, as well as demonstrated integrity, responsibility, and trustworthiness. Domain-specific knowledge is also valued, though not required. For the Autonomous Replication and Adaptation (ARA) workstream, experience with language model agents is valuable, though not essential.
Sample Projects:
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected salary range for this position is:
Annual Salary: $315,000–$510,000 USD
Logistics
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
US visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications, which makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Compensation and Benefits
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.
Equity - For eligible roles, equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
US Benefits - The following benefits are for our US-based employees:
UK Benefits - The following benefits are for our UK-based employees:
Anthropic is a public-benefit corporation and AI startup based in the United States, founded by former members of OpenAI. The company focuses on building general AI systems and language models, guided by a philosophy of responsible AI development and use.