Skills: Java, Python, C++, C, SQL, cloud computing, Scala, machine learning techniques, data science techniques, MATLAB, PyTorch, TensorFlow, R
Are you a passionate and innovative software engineer interested in being the gatekeeper for all new capabilities integrated into state-of-the-art AI products? Do you want to work on a product that will make a positive impact? Do you want to work alongside mission-driven and values-focused teammates? Shield AI is just the place for you!
As a Robotic Test Analysis Engineer, you will work across the engineering organization to design and facilitate the integration and testing of a broad spectrum of algorithms on a range of platforms.
Join us to push the state of the art in autonomous robotics for protecting lives.
What you’ll do:
Test software for autonomous systems: define nominal operational performance and metrics, analyze algorithms to find operational limits, and design mission and operational scenarios that test platform performance
Work with functional teams to ensure interfaces are implemented and meet specifications
Contribute to the documentation of the systems
Create integration tests within our testing frameworks
Integrate autonomy software on hardware and simulated platforms. Use simulation and hardware-in-the-loop setups to determine system performance
Identify data anomalies and design analyzers to automate the detection of these anomalies for regression identification
Review data from robot flights, identify performance weaknesses, and work with developers to design algorithm changes to improve performance
Projects that you might work on:
Anomaly discovery and automated reporting: Create automated tools to find anomalies in data, from test logs to signals. Identify instances of known patterns. Summarize robot performance and data quality using statistical measures and report with visual plots. Write toolboxes in Python; write data queries in SQL; work with other teams to extend existing web-based reporting infrastructure
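A minimal sketch of the kind of analyzer this project describes, assuming a simple rolling z-score rule on a telemetry signal; the function name, window size, and threshold are illustrative, not Shield AI's actual tooling:

```python
# Hypothetical anomaly analyzer: flag samples in a telemetry signal whose
# deviation from a trailing window exceeds a z-score threshold.
from statistics import mean, stdev

def find_anomalies(signal, window=20, threshold=4.0):
    """Return indices where a sample deviates sharply from its trailing window."""
    anomalies = []
    for i in range(window, len(signal)):
        history = signal[i - window:i]
        mu = mean(history)
        sigma = stdev(history)
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady signal with one injected spike:
signal = [1.0 + 0.01 * (i % 5) for i in range(100)]
signal[60] = 5.0
print(find_anomalies(signal))  # the spike at index 60 is flagged
```

In practice an analyzer like this would run automatically over every test log, with the flagged indices summarized in plots and fed into the web-based reporting infrastructure the posting mentions.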
Automate software-history bisection with hardware in the loop: Design an analyzer with binary output that determines performance-criterion pass/fail. Run it as part of a bisection search over a history of software commits, automatically executing software in simulation and/or on hardware components to determine when a specific behavior was introduced
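The bisection step above can be sketched as a binary search over the commit history, in the style of `git bisect`. This is a hypothetical illustration: `run_analyzer` stands in for checking out a commit and executing it in simulation or on hardware-in-the-loop, returning True when the performance criterion passes.

```python
# Hypothetical bisection over a commit history: assumes an initial span of
# passing commits followed by failing ones, and finds the first failure.
def first_failing_commit(commits, run_analyzer):
    """commits is ordered oldest to newest; run_analyzer returns True on pass."""
    lo, hi = 0, len(commits) - 1
    if run_analyzer(commits[hi]):
        return None  # newest commit still passes: no regression in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if run_analyzer(commits[mid]):
            lo = mid + 1  # regression was introduced after mid
        else:
            hi = mid      # mid already fails; look earlier
    return commits[lo]

# Toy history where the regression lands at commit "c6":
commits = [f"c{i}" for i in range(10)]
passes = lambda c: int(c[1:]) < 6
print(first_failing_commit(commits, passes))  # "c6"
```

Because each probe may mean a full simulation or hardware run, the binary search matters: it finds the offending commit in O(log n) runs rather than replaying the whole history.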
Investigate approaches for evaluating large volumes of simulation data, checking code performance and regressions, and assessing simulation-to-real-world performance
Create an integration test plan for the addition of new sensors to a platform
Facilitate the use of our software stack on a new prototype platform
Leverage your knowledge of our software and platforms to work on an R&D effort proving out future capabilities
Required qualifications:
Shield AI is an artificial intelligence company founded in 2015 with the mission to protect service members and civilians with intelligent systems. The company’s Hivemind autonomy stack is the first and only autonomous AI Pilot, deployed in combat since 2018. Hivemind enables intelligent teams of aircraft to perform missions ranging from room clearance to penetrating air defense systems to dogfighting F-16s.