
Senior Data Engineer

Posted December 12, 2025
Full-time (fixed-term) · Experienced

Job Overview

About Curotec:
We are a leading global software services company specializing in enterprise-level projects for clients worldwide.
Our team is a unique blend of diverse skill sets, cultures, and backgrounds—a true melting pot of talent. One of the most rewarding aspects of working at Curotec is the opportunity to learn something new every day, not just about technology but also about our amazing team members.
Visit our website to discover more about who we are and what we do.

We are seeking a Senior Data Engineer to support the ingestion, processing, and synchronization of data across our analytics platform. This role focuses on using Python Notebooks to ingest data via APIs into Microsoft Fabric's Data Lake and Data Warehouse, with some data being synced to a Synapse Analytics database for broader reporting needs.

The ideal candidate will have hands-on experience with API-based data ingestion and modern data architectures, including implementing a Medallion layer architecture (Bronze, Silver, Gold) for data organization and quality management. Exposure to marketing APIs such as Google Ads, Google Business Profile, and Google Analytics 4 is a plus.
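As a rough illustration of the Bronze/Silver/Gold progression mentioned above (a minimal sketch in plain Python over in-memory records; in practice these steps would run as PySpark transformations over delta parquet tables in Fabric, and the field names here are hypothetical):

```python
from collections import defaultdict

# Bronze: raw API payloads landed as-is (hypothetical ad-spend records).
bronze = [
    {"campaign": "brand", "clicks": "120", "cost": "45.50"},
    {"campaign": "brand", "clicks": "120", "cost": "45.50"},  # duplicate delivery
    {"campaign": "search", "clicks": "80", "cost": None},     # invalid record
]

def to_silver(rows):
    """Silver: deduplicate, drop invalid rows, and cast string fields to types."""
    seen, out = set(), []
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen or r["cost"] is None:
            continue
        seen.add(key)
        out.append({"campaign": r["campaign"],
                    "clicks": int(r["clicks"]),
                    "cost": float(r["cost"])})
    return out

def to_gold(rows):
    """Gold: aggregate to a reporting-ready shape (total cost per campaign)."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["campaign"]] += r["cost"]
    return dict(totals)
```

The point of the layering is that each stage is reproducible from the one below it: Bronze preserves the raw feed, Silver applies quality rules, and Gold serves reporting.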

This is a remote position. We welcome applicants globally, but we have a preference for LATAM candidates to ensure smoother collaboration with our existing team.

Key Responsibilities

  • Build and maintain Python Notebooks to ingest data from third-party APIs

  • Design and implement Medallion layer architecture (Bronze, Silver, Gold) for structured data organization and progressive data refinement

  • Store and manage data within Microsoft Fabric's Data Lake and Warehouse using delta parquet file formats

  • Set up data pipelines and sync key datasets to Azure Synapse Analytics

  • Develop PySpark-based data transformation processes across Bronze, Silver, and Gold layers

  • Collaborate with developers, analysts, and stakeholders to ensure data availability and accuracy

  • Monitor, test, and optimize data flows for reliability and performance

  • Document processes and contribute to best practices for data ingestion and transformation
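The API-ingestion responsibility above might look roughly like this (a hedged sketch: the page-token protocol and the `fetch_page` and `write_file` callables are hypothetical stand-ins for a real API client and a Fabric lakehouse writer):

```python
import json

def ingest_to_bronze(fetch_page, write_file):
    """Pull every page from a paginated API and land each page unmodified
    in the Bronze (raw) layer.

    fetch_page(token) -> (records, next_token_or_None)   # assumed protocol
    write_file(path, text)                               # persists one raw file
    Returns the total number of records ingested.
    """
    token, page_no, total = None, 0, 0
    while True:
        records, token = fetch_page(token)
        # Land the raw payload untouched so Silver/Gold can be rebuilt later.
        write_file(f"bronze/page_{page_no:04d}.json", json.dumps(records))
        total += len(records)
        page_no += 1
        if token is None:
            return total
```

Injecting the fetch and write steps as callables keeps the pagination loop testable without network or lakehouse access.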

Tech Stack You'll Use

Ingestion & Processing:

  • Python (Notebooks)

  • PySpark

Storage & Warehousing:

  • Microsoft Fabric Data Lake & Data Warehouse

  • Delta Parquet files

Sync & Reporting:

  • Azure Synapse Analytics

Cloud & Tooling:

  • Azure Data Factory, Azure DevOps

Ready to Apply?

Take the next step in your career journey
