As our MLOps engineer, you will own the future architecture and roadmap of our model inference and training infrastructure for both internal and external users. You will design, build, and maintain the tools and systems that monitor our models' performance in real time and surface insights and recommendations to improve their accuracy, reliability, and scalability.

We hire great people regardless of where they live. Work wherever you'd like; reliable internet access is our only requirement. We communicate asynchronously, work autonomously, and take ownership of our work.