
AI Platform Engineer

Posted April 28, 2026
Full-time, permanent · Mid-level

Job Overview

Build the AI environments that organizations can actually use

Many AI roles stop at a model, an experiment, or a demo. At ITQ, the work actually begins after that. We build the environments in which AI can run securely, scalably, and manageably. Not isolated proofs of concept, but mature AI platforms for organizations that take their data, security, and continuity seriously. Think of environments for LLMs, model training, inference, and governance, designed for production.

Organization

At ITQ, we’re building an AI capability that’s growing rapidly and becoming increasingly sophisticated. We combine technologies like OpenShift AI, SUSE AI, and VMware Private AI Services into a single Private AI approach, with a focus on sovereignty, governance, and operational control.

You’re joining at a time when AI is still very much in flux, so you’ll not only be contributing but also helping to shape how we move forward—in terms of content, technology, and as a team.

The Role

As an AI Platform Engineer, you’ll work on the technical foundation for serious AI applications. You won’t be working on a single model, use case, or internal product. Instead, you’ll build diverse AI environments across multiple sectors, each with unique requirements for security, compliance, scalability, and governance. You’ll ensure that AI doesn’t remain stuck at the idea stage but is implemented in environments ready for real-world use.

You design, build, and manage infrastructure for AI and machine learning workloads. You work on Kubernetes-based environments for training, deployment, runtime, and operations. In doing so, you use tools such as Kubeflow, MLflow, and ClearML, and work with platforms such as OpenShift AI, SUSE AI, and VMware Private AI Services.
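To give a flavor of what "infrastructure for AI workloads" means in practice, here is a minimal sketch of a Kubernetes Deployment that requests a GPU for an inference service. All names and the container image are illustrative placeholders, not taken from an actual ITQ project; GPU scheduling of this kind typically relies on the NVIDIA GPU Operator mentioned below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: model-server
          image: registry.example.com/llm-server:latest  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: 1  # schedules the pod onto a GPU node
```

In a production platform, a manifest like this would be delivered through GitOps tooling (e.g. FluxCD) and validated by policy engines such as OPA Gatekeeper rather than applied by hand.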

You’ll work for clients in sectors including government, healthcare, telecom, transportation, and financial services. Sometimes you’ll build a new AI environment from the ground up. Sometimes you’ll improve an existing landscape. Sometimes you’ll ensure that AI tools can be used safely and in a controlled manner within the boundaries of a complex organization. The core remains the same: you make AI workable in production.

Examples of projects we’ve actually built:

  • For a leading satellite manufacturer in Belgium, we designed and implemented an enterprise Kubernetes platform on vSphere, built on the CNCF open-source stack as a governed foundation for GPU-accelerated AI and ML workloads. Using tools such as Harbor, FluxCD, OPA Gatekeeper, NVIDIA GPU Operator, and NVIDIA AI Enterprise, we made training and inference pipelines production-ready.

  • For a national public transportation organization, we built a sovereign AI platform on internal Kubernetes, featuring an Agent and MCP Gateway, JWT and CEL-based RBAC, an MCP Registry, and a custom governance UI. This allowed engineers to work with approved AI tools without taking data outside the organization.
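The governance idea in the second project—only letting a caller use AI tools their token entitles them to—can be sketched in a few lines of Python. This is a hypothetical simplification: the role names, tool names, and allow-list are invented, and a real gateway would verify the JWT signature (and might evaluate CEL expressions) rather than just reading claims.

```python
import base64
import json

# Hypothetical allow-list mapping roles to approved AI tools,
# standing in for the gateway's governance registry.
APPROVED_TOOLS = {
    "data-engineer": {"code-assistant", "doc-summarizer"},
    "analyst": {"doc-summarizer"},
}

def decode_claims(token: str) -> dict:
    """Decode the payload segment of a JWT.

    Signature verification is omitted for brevity; a real gateway
    must verify the signature before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    padding = "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64 + padding))

def is_tool_allowed(token: str, tool: str) -> bool:
    """Return True if the token's role claim permits the requested tool."""
    role = decode_claims(token).get("role", "")
    return tool in APPROVED_TOOLS.get(role, set())
```

A request hitting the gateway would pass through a check like `is_tool_allowed(token, "code-assistant")` before being forwarded, so data never leaves the organization through an unapproved tool.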
