
AI Platform Engineer – Databricks Platform (Senior, hands-on)

Posted November 26, 2025

Job Overview

NEORIS, now part of EPAM, is a digital accelerator that helps companies step into the future, with more than 20 years of experience as Digital Partners of some of the world's most important companies. We are more than 4,000 professionals in 11 countries, with a multicultural, startup-style culture where we foster innovation, continuous learning, and the creation of high-impact solutions for our clients.

We are looking for an AI Platform Engineer – Databricks Platform (senior, hands-on profile).

Residency in Spain is essential!

Ideally based in Barcelona, but candidates elsewhere in Spain will be considered.

Working pattern: must work UK business hours, starting at 09:00 UK time (13:30 IST).

Project summary: Staff augmentation project for a customer focused on scaling data and AI infrastructure, implementing and operating ML/GenAI workloads on Databricks, optimizing cost and performance, and ensuring data security and compliance.

Job summary: We are seeking a hands-on AI Platform Engineer to design, build, and operate Databricks-based data and AI platforms on AWS. You will enhance AI capabilities by leveraging Databricks (Workspace, Unity Catalog, Lakehouse, MLflow), modern cloud services, and DevOps/MLOps practices to deliver reliable, secure, and scalable platforms.

What the candidate will do

• Design and implement scalable Databricks platform solutions to support analytics, ML, and GenAI workflows across environments (dev/test/prod).
• Administer and optimize Databricks workspaces: cluster policies, pools, job clusters vs. all-purpose clusters, autoscaling, spot/fleet usage, and GPU/accelerated compute where applicable.
• Implement Unity Catalog governance: define metastores, catalogs, schemas, data sharing, row/column masking, lineage, and access controls; integrate with enterprise identity and audit.
• Build IaC for reproducible platform provisioning and configuration using Terraform; manage config-as-code for cluster policies, jobs, repos, service principals, and secret scopes.
• Implement CI/CD for notebooks, libraries, DLT pipelines, and ML assets; automate testing, quality gates, and promotion across workspaces using GitHub Actions and Databricks APIs.
• Standardize experiment structure, implement model registry workflows, and deploy/operate model serving endpoints with monitoring and rollback.
• Develop and optimize Delta Lake pipelines (batch and streaming) using Auto Loader, Structured Streaming, and DLT; enforce data quality and SLAs with expectations and alerts.
• Optimize cost and performance: rightsize clusters and pools, enforce cluster policies and quotas, manage DBU consumption, leverage spot/fleet, and implement chargeback/showback reporting.
• Integrate observability: metrics/logs/traces for jobs, clusters, and model serving; configure alerting, on-call runbooks, and incident response to reduce MTTR.
• Ensure platform security and compliance: VPC design, PrivateLink, encryption at rest/in transit, secrets management, vulnerability remediation, and audit readiness; align with internal security standards and, where applicable, GxP controls.
• Collaborate with cross-functional teams to integrate the Databricks platform with data sources, event streams, downstream applications, and AI services on AWS.
• Conduct technical research, evaluate new Databricks features (e.g., Lakehouse Federation, Vector Search, Mosaic AI), and propose platform improvements aligned to roadmap.
• Regularly communicate progress, risks, and recommendations to client managers and development teams.
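The chargeback/showback reporting mentioned in the responsibilities above can be sketched as a simple aggregation over tagged usage records. This is a minimal illustration only: the DBU rates, compute-type names, and team tags below are hypothetical placeholders, not Databricks list prices; a real implementation would read billable usage from Databricks system tables.

```python
from collections import defaultdict

# Hypothetical DBU rates per compute type (USD per DBU); real rates
# depend on the Databricks SKU, tier, and cloud contract.
DBU_RATES = {"jobs": 0.15, "all_purpose": 0.55, "sql": 0.22}

def showback_report(usage_records):
    """Aggregate estimated DBU spend per team from raw usage records.

    Each record is a dict with 'team' (taken from cluster tags),
    'compute_type', and 'dbus' consumed.
    """
    totals = defaultdict(float)
    for rec in usage_records:
        rate = DBU_RATES[rec["compute_type"]]
        totals[rec["team"]] += rec["dbus"] * rate
    return dict(totals)

usage = [
    {"team": "analytics", "compute_type": "jobs", "dbus": 120.0},
    {"team": "analytics", "compute_type": "sql", "dbus": 50.0},
    {"team": "ml", "compute_type": "all_purpose", "dbus": 40.0},
]
print(showback_report(usage))
```

In practice the per-team key would come from enforced cluster tags (via cluster policies), which is what makes showback reports trustworthy.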

Required qualifications

• Hands-on Databricks administration on AWS, including Unity Catalog governance and enterprise integrations.
• Strong AWS foundation: networking (VPC, subnets, SGs), IAM roles and policies, KMS, S3, CloudWatch; EKS familiarity is a plus but not required for this Databricks-focused role.
• Proficiency with Terraform (including databricks provider), GitHub, and GitHub Actions.
• Strong Python and SQL; experience packaging libraries and working with notebooks and repos.
• Experience with MLflow for tracking and model registry; experience with model serving endpoints preferred.
• Familiarity with Delta Lake, Auto Loader, Structured Streaming, and DLT.
• Experience implementing DevOps automation and runbooks; comfort with REST APIs and Databricks CLI.
• Git and GitHub proficiency; code review and branching strategies.
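As a deliberately minimal illustration of the REST API comfort asked for above, the snippet below builds (but does not send) an authenticated request against the Databricks Jobs API 2.1 `jobs/list` endpoint. The workspace URL and token are placeholders; real credentials would come from a secret scope or environment variables, never hard-coded.

```python
import urllib.request

# Placeholder workspace URL and token -- in practice these come from a
# secret scope or environment variables, never hard-coded.
WORKSPACE_URL = "https://example-workspace.cloud.databricks.com"
TOKEN = "dapi-example-token"

def build_jobs_list_request(limit=25):
    """Build (without sending) an authenticated Jobs API 2.1 request."""
    url = f"{WORKSPACE_URL}/api/2.1/jobs/list?limit={limit}"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {TOKEN}")
    return req

req = build_jobs_list_request()
print(req.full_url)
```

The same bearer-token pattern underlies both the Databricks CLI and the Terraform provider, so familiarity with one transfers directly to the others.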

What the candidate must bring

• Proficient in cloud operations on AWS, with strong understanding of scaling infrastructure and optimizing cost/performance.
• Proven hands-on experience with Databricks on AWS: workspace administration, cluster and pool management, job orchestration (Jobs/Workflows), repos, secrets, and integrations.
• Strong experience with Databricks Unity Catalog: metastore setup, catalogs/schemas, data lineage, access control (ACLs, grants), attribute-based access control, and data governance.
• Expertise in Infrastructure as Code for Databricks and AWS using Terraform (databricks and aws providers) and/or AWS CloudFormation; experience with Databricks asset bundles or CLI is a plus.
• Experience implementing CI/CD and GitOps for notebooks, jobs, and ML assets using GitHub and GitHub Actions (or GitLab/Jenkins), including automated testing and promotion across workspaces.
• Ability to structure reusable libraries, package and version code, and enforce quality via unit/integration tests and linting. Proficiency with SQL for Lakehouse development.
• Experience with experiment tracking, model registry, model versioning, approval gates, and deployment to batch/real-time endpoints (Model Serving).
• Hands-on experience with AWS IAM/STS, PrivateLink/VPC, KMS encryption, Secrets, SSO/SCIM provisioning, and monitoring/observability (CloudWatch/Datadog/Grafana).
• Experience with DevOps practices to enable automation strategies and reduce manual operations.
• Experience or awareness of MLOps practices; building pipelines to accelerate and automate machine learning will be viewed favorably.
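The approval gates mentioned in the list above boil down to a threshold check that CI runs before a model version is promoted. The sketch below is a hypothetical, framework-free version of that logic; real gates would read metrics from the MLflow tracking server rather than a plain dict, and the thresholds here are invented for illustration.

```python
# Hypothetical promotion thresholds; a real gate would pull these from
# config and fetch candidate metrics from the MLflow tracking server.
THRESHOLDS = {"accuracy": 0.90, "f1": 0.85}

def passes_promotion_gate(metrics, thresholds=THRESHOLDS):
    """Return True only if every gated metric meets its threshold.

    Missing metrics count as 0.0, so an incomplete evaluation run
    blocks promotion rather than slipping through.
    """
    return all(metrics.get(name, 0.0) >= floor
               for name, floor in thresholds.items())

print(passes_promotion_gate({"accuracy": 0.93, "f1": 0.88}))  # candidate passes
print(passes_promotion_gate({"accuracy": 0.93, "f1": 0.80}))  # blocked on f1
```

Treating absent metrics as failures is the key design choice: promotion should require positive evidence, not merely the absence of bad numbers.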

 

We offer


• Permanent contract with a competitive salary
• Flexible working arrangements and the possibility of remote work
• Personalized career plan and continuous training (certifications, English, etc.)
• Participation in stable projects with a strong technical component
• Flexible hours and a focus on work-life balance
• Social benefits tailored to your needs

We invite you to get to know us at http://www.neoris.com, or on Facebook, LinkedIn, Twitter, or Instagram: @NEORIS.


Monica Ortego

#LI-MO1

