Apache Hadoop, C Programming, C++, Data science techniques, Java Programming, Machine learning techniques, MapReduce, Python Programming, PyTorch, R Programming, Scala Programming, SQL, TensorFlow
The Applied Safety team is building safety specifications, processes, and measurement tools for general-purpose AI. We’re looking for safety-focused research engineers to build measurement tools with us: our aim is a world-class toolkit for measuring the safety-relevant characteristics of our datasets, models, and algorithms. This is high-impact work that will help teams across OpenAI meet their safety goals.
This is not about safety for narrow AI systems like autonomous vehicles: it is about safety for general-purpose AI systems, which have large, uncharted surface areas of potential risk. Because the field is still young, your work may be foundational for future standards and professional duties.
In this role, you will:
This role might be a good fit for you if you:
Nice to haves:
OpenAI is an American artificial intelligence (AI) research laboratory that conducts AI research with the declared intention of promoting and developing friendly AI. OpenAI’s systems run on the fifth most powerful supercomputer in the world, and Microsoft has announced that it is building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365, and other products.