Data Lead - Data Engineering

Posted November 10, 2025
Full Time

Job Overview

Shape the Future of Work with Eptura

At Eptura, we're not just another tech company—we're a global leader transforming the way people, workplaces, and assets connect. Our innovative worktech solutions empower 25 million users across 115 countries to thrive in a digitally connected world. Trusted by 45% of Fortune 500 companies, we're redefining workplace innovation and driving success for organizations around the globe.

Job Description

We are seeking a Data Lead – Data Engineering to spearhead the design, development, and optimization of complex data pipelines and ETL processes. This role requires deep expertise in data modeling, cloud platforms, and automation to ensure high-quality, scalable solutions. You will collaborate closely with stakeholders, engineers, and business teams to drive data-driven decision-making across our organization.

This is a highly visible role offering strong opportunities for career growth and development.

Responsibilities

  • Work with stakeholders to understand data requirements and architect end-to-end ETL solutions.
  • Design and maintain data models, including schema design and optimization.
  • Develop and automate data pipelines to ensure quality, consistency, and efficiency.
  • Lead the architecture and delivery of key modules within data platforms.
  • Build and refine complex data models in Power BI, simplifying data structures with dimensions and hierarchies.
  • Write clean, scalable code using Python, Scala, and PySpark (must-have skills).
  • Test, deploy, and continuously optimize applications and systems.
  • Lead, mentor, and develop a high-performing data engineering team, fostering a culture of collaboration, innovation, and continuous improvement while ensuring alignment with business objectives.
  • Participate in engineering hackathons to drive innovation.

About You

  • 7+ years of experience in Data Engineering, with at least 2 years in a leadership role.
  • Strong expertise in Python, PySpark, and SQL for data processing and transformation.
  • Hands-on experience with Azure cloud computing, including Azure Data Factory and Databricks.
  • Proficiency in Analytics/Visualization tools: Power BI, Looker, Tableau, IBM Cognos.
  • Strong understanding of data modeling, including dimensions and hierarchy structures.
  • Experience working with Agile methodologies and DevOps practices (GitLab, GitHub).
  • Excellent communication and problem-solving skills in cross-functional environments.
  • Ability to deliver scalable analytics solutions that reduce cost, complexity, and security risk.

Nice to Have

  • Experience working with NoSQL databases (Cosmos DB, MongoDB).
  • Familiarity with AutoCAD and building systems for advanced data visualization.
  • Knowledge of identity and security protocols, such as SAML, SCIM, and FedRAMP compliance.
