Senior Data Engineer (Azure)

Posted February 12, 2026
Full-time · Mid-Senior Level

Job Overview

About You

You are a Data Engineer passionate about building scalable, production-grade data ecosystems on Azure. You thrive on transforming complex and fragmented data into reliable analytical assets that drive meaningful business decisions. You work with a high level of autonomy, bring strong architectural foundations, and consistently apply high-quality engineering practices across data modeling, pipelines, orchestration, and performance optimization.

You enjoy simplifying complexity, whether migrating legacy SQL logic into distributed PySpark jobs, designing canonical data layers, or ensuring data integrity, governance, and observability across systems. Collaboration, clarity, and business impact are core to how you work.

You Bring to Applaudo the Following Competencies:

  • Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or a related field, or equivalent practical experience.
  • 5+ years of experience designing, building, and maintaining production-grade data pipelines at scale.
  • Expert-level SQL skills, including window functions, query optimization, partitioning, and execution plan tuning.
  • Strong expertise in data modeling concepts such as star and snowflake schemas, facts and dimensions, SCDs, and curated/canonical data layers.
  • Advanced hands-on experience using Python for data engineering, including large-scale PySpark transformations.
  • Strong experience working with Azure data services, including Azure Data Factory, Azure Databricks, ADLS Gen2 / Azure Storage, Azure SQL, and Azure Logic Apps (for orchestration).
  • Experience building incremental ETL/ELT pipelines with dependencies, CDC strategies, retries, and failure handling.
  • Hands-on experience optimizing big data workloads, including partitioning strategies and Delta Lake performance (OPTIMIZE, Z-ORDER, VACUUM).
  • Experience integrating REST APIs and handling schema drift and pagination.
  • Proficiency with Git workflows and CI/CD pipelines for data codebases.
  • Strong communication skills and ability to work autonomously within Agile environments.
  • Experience with Snowflake, PostgreSQL, or cloud cost optimization strategies (nice to have).
  • Advanced English proficiency, with the ability to communicate clearly and collaborate directly with US-based clients.

You Will Be Accountable for the Following Responsibilities:

  • Design and implement conceptual, logical, and physical data models aligned with analytics and business requirements.
  • Build, optimize, and monitor end-to-end ETL/ELT pipelines using Azure Data Factory, Databricks, and Logic Apps.
  • Migrate legacy SQL-based logic into scalable, resilient PySpark-based processing jobs.
  • Apply and automate performance best practices, including partitioning, indexing, schema evolution, and storage optimization.
  • Ensure data quality, lineage, governance, and security across environments.
  • Establish observability standards, including logging, data quality checks, alerting, and SLA monitoring.
  • Collaborate closely with analysts, architects, and business stakeholders to define requirements and data contracts.
  • Design and maintain cost-efficient Azure data architectures while continuously improving pipeline reliability and delivery velocity.
