Data Engineer (Python & AWS)

Posted October 07, 2025
Full-time
Mid-Senior Level

Job Overview

About you:

You are a Data Engineer who enjoys building reliable, scalable, high-performance data platforms that drive meaningful impact. You thrive on designing end-to-end solutions, applying software engineering best practices, and leveraging cloud technologies to deliver clean, efficient, and automated data pipelines. You are collaborative, detail-oriented, and constantly looking to improve and innovate within the modern data ecosystem.

You Bring to Applaudo the Following Competencies:

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
  • 5+ years of hands-on experience as a Data Engineer.
  • Advanced proficiency in SQL and strong understanding of database fundamentals.
  • Proficiency in Python for data processing, automation, and transformation.
  • Experience working with AWS, Azure, or GCP, including core data services.
  • Strong knowledge of data modeling (dimensional, normalized, and performance-optimized).
  • Experience with workflow orchestration tools (Airflow, Dagster, or Prefect).
  • Familiarity with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
  • Understanding of software engineering best practices, including CI/CD, version control, and testing.
  • Experience with Docker or Kubernetes for containerized data solutions (nice to have).
  • Knowledge of Apache Spark or similar distributed data processing frameworks (nice to have).
  • Familiarity with streaming technologies such as Kafka, Kinesis, or Pub/Sub (nice to have).
  • Proficiency in English is required, as you will work directly with US-based clients.

You Will Be Accountable for the Following Responsibilities:

  • Design, build, and maintain scalable ETL/ELT pipelines integrating multiple data sources (APIs, databases, streams, and files).
  • Develop and optimize cloud-based data warehouses and data lakes (Snowflake, BigQuery, Redshift, etc.).
  • Implement observability and monitoring for data pipelines, including logging and alerting.
  • Apply software engineering principles to ensure reliable, reusable, and well-documented code.
  • Collaborate with cross-functional teams to align data infrastructure with analytics and business goals.
  • Ensure data reliability, scalability, and performance across environments.
  • Automate workflows and promote Infrastructure as Code for consistency and repeatability.
  • Stay current with emerging data engineering tools and technologies and propose continuous improvements.
