DataOps Cloud Engineer

Obruta

Date Published:

October 7, 2025

Location:

Ottawa, ON, Canada

Job Type:

Hybrid

About the Role

Obruta is growing our team as we build the autonomy technologies required for fully automated spacecraft pilots and lay the foundation for a sustainable in-space economy. We are seeking a keen, passionate DataOps Cloud Engineer to join our team. You will be at the forefront of building the infrastructure for autonomous spacecraft, in a unique hybrid role that combines cloud engineering with data analysis. You will design, build, and maintain the high-performance computing pipelines that drive our development and provide critical insights from our simulation data.

Your work will be essential for two core functions:

  1. Enabling our engineering team to run hundreds of thousands of Monte Carlo simulations to validate our autonomous rendezvous and docking software.
  2. Unlocking the value of our simulation data to provide deep insights into system performance and failure modes.

Obruta is a remote-first company, but periodic travel to Ottawa, Ontario is required. The team is currently located across eastern Canada, so slight preference is given to candidates in that region. Please apply with your resume and GitHub profile using the "Apply" button.

Compensation

  • $80,000 - $110,000 CAD annual salary.
  • Stock options available.
  • Health benefits package.
  • 3 weeks of annual time off to start.

Time & Location

  • Full-time.
  • Fully-remote/hybrid arrangements available.
  • Periodic travel to Ottawa required.
  • Must be able to work in Canada.

Responsibilities

  • Design and build a scalable cloud architecture for high-performance computing to run large-scale Monte Carlo simulations. This includes managing cloud resources on platforms like AWS, Azure, or Google Cloud.
  • Develop and maintain an automated CI/CD pipeline for our multi-language software stack (Python, MATLAB/Simulink, Unreal Engine).
  • Containerize our simulation components (e.g., Python code with TensorRT, Unreal Engine instances, C++ code) using Docker or a similar technology to ensure a consistent and portable environment.
  • Implement robust monitoring and logging solutions to track simulation progress, performance metrics, and potential issues across a distributed cluster.
  • Optimize cloud spending by managing resource allocation, leveraging spot instances, and implementing efficient job scheduling.
  • Design, build, and maintain a data pipeline to collect, process, and store the massive datasets generated by our simulations.
  • Analyze large-scale simulation data to identify trends, performance metrics, and underlying causes of success or failure in our autonomous systems.
  • Collaborate closely with our engineering team to provide them with the tools and infrastructure they need to test and deploy their code efficiently.
  • Continuously improve the systems you develop by staying up to date with modern tools and approaches.
  • Maintain honest and clear communication about work status, timelines, and conflicts.

Required Qualifications

  • Degree in Computer Science, Software Engineering, or a related technical field.
  • 2+ years of professional experience in a DevOps, Data Engineering, or Cloud Engineering role.
  • A proven track record of implementing cloud solutions and working with at least one major cloud provider (AWS, Azure, or Google Cloud).
  • Proficiency with containerization tools like Docker.
  • Strong expertise in scripting languages such as Python.
  • Experience building and managing data pipelines, including data collection, storage, and analysis.
  • Solid understanding of CI/CD principles and experience with tools like GitHub Actions, GitLab CI/CD, or Jenkins.
  • An eager attitude to build incredible software that will make spacecraft fully autonomous, and a shared vision to drive massive growth in the off-world economy.
  • Excellent communication skills—they’re essential for us all to flourish.

Nice to Have

  • Experience with high-performance computing (HPC) or distributed computing frameworks (e.g., Ray, Dask).
  • Experience with data analysis tools and libraries (e.g., Pandas, NumPy, SQL).
  • Familiarity with the MATLAB/Simulink ecosystem and its deployment mechanisms.
  • Knowledge of robotics or aerospace simulation environments.
  • Experience with Unreal Engine in a headless environment.
  • Experience with verification and validation of complex systems.

We know the confidence gap and imposter syndrome get in the way of meeting incredible candidates, and we don't want them to get in the way of meeting you. If you feel like you don't meet all the requirements for this role, we encourage you to apply anyway.

About Obruta

Obruta Space Solutions was founded in 2019 with a vision of humanity reaching an interplanetary future. To achieve this, Obruta is building advanced autonomy technology and laying the foundation for a sustainable multiplanetary economy.

Today's satellites cannot be repaired, refueled, or reused. This single-use paradigm restricts the missions we can perform and limits the space economy's potential. Obruta is changing this with our flagship product, the RPOD Kit: a turnkey system for spacecraft rendezvous, proximity operations, and docking that brings satellite servicing and reusable logistics to the commercial market, making maintaining a spacecraft as easy as maintaining your car.

If you share our vision, then we invite you to apply to join our growing team and help lay the foundation for humanity’s interplanetary future.