
Software Data Engineer

Posted November 27, 2025
Full-time · Mid-Senior level

Job Overview

Devsinc is seeking a talented Data Engineer with a minimum of two years of professional experience to join our growing data team. In this role, you will design and build scalable data pipelines, work with modern cloud platforms, and lay the foundation for analytics that drive critical business decisions. From day one, you’ll learn from senior engineers and see your contributions make a tangible impact.

Responsibilities:

  • Design, develop, and maintain automated ETL/ELT data pipelines for structured and unstructured datasets (a minimal sketch of such a pipeline follows this list).
  • Build and optimize scalable, secure, and cost-efficient cloud data solutions using AWS, Azure, or GCP.
  • Model, cleanse, and transform data to support analytics, dashboards, and reporting use cases.
  • Implement automated testing, monitoring, and alerting to ensure high data quality and reliability.
  • Develop high-performance Python-based services and utilities for data ingestion and processing.
  • Work with APIs, event-driven systems, and streaming platforms for real-time data workflows.
  • Collaborate with cross-functional teams (Data Science, Backend, DevOps, Product) to gather requirements and deliver tailored data solutions.
  • Follow strong software engineering best practices — clean code, modularity, version control, CI/CD.
  • Document architecture, data flows, schemas, and development standards.
  • Stay updated on evolving data engineering tools, frameworks, and cloud-native technologies.
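To make the first responsibility above concrete, here is a minimal, illustrative sketch of a batch ETL job in Python. The API endpoint, connection string, column names, and table name are placeholders invented for this example, not systems referenced in the posting; a production pipeline would add retries, incremental loading, and data-quality checks.

```python
# Illustrative ETL sketch: extract JSON from a (hypothetical) REST API,
# apply a light transform with pandas, and load the batch into a warehouse table.
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/orders"      # placeholder source endpoint
WAREHOUSE_URI = "postgresql://user:pass@host/db"   # placeholder connection string

def extract() -> list[dict]:
    """Pull a batch of raw records from the source API."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return response.json()

def transform(records: list[dict]) -> pd.DataFrame:
    """Cleanse and reshape raw records for analytics use."""
    df = pd.DataFrame(records)
    df = df.dropna(subset=["order_id"])              # drop incomplete rows
    df["order_date"] = pd.to_datetime(df["order_date"])
    return df

def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch to a warehouse staging table."""
    engine = create_engine(WAREHOUSE_URI)
    df.to_sql("stg_orders", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```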

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field.
  • Minimum 2 years of professional experience.
  • Strong knowledge of Python and SQL for data processing and transformation.
  • Hands-on experience with at least one major cloud platform — AWS, GCP, or Azure.
  • Exposure to cloud-native data services such as AWS Glue, Redshift, Azure Data Factory, BigQuery, Synapse, etc.
  • Familiarity with modern data warehouses — Snowflake, Redshift, BigQuery, or Synapse.
  • Strong understanding of ETL/ELT frameworks such as dbt, Apache Spark, or Databricks.
  • Experience working with orchestration tools like Apache Airflow, Azure Data Factory Pipelines, or AWS Glue Workflows (see the Airflow sketch after this list).
  • Proficiency with version control systems (GitHub, GitLab, Bitbucket) and CI/CD pipelines.
  • Solid understanding of Data Lake architectures (S3, ADLS, GCS) and schema design principles.
  • Basic understanding of data governance, security, compliance, and access management concepts.
  • Familiarity with streaming platforms such as Kafka, Amazon Kinesis, or Google Pub/Sub.
  • Collaborative – open to knowledge-sharing and teamwork.
  • Team Player – willing to support peers and contribute to collective success.
  • Growth Minded – eager to learn, improve, and adapt to emerging technologies.
  • Adaptable – flexible in dynamic, fast-paced environments.
  • Customer-Centric – focused on delivering solutions that create real business value.
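As a rough illustration of the orchestration experience listed above, the sketch below shows a daily pipeline expressed with Apache Airflow's TaskFlow API (Airflow 2.4+). The task bodies are stand-in placeholders; in practice each task would call into tested ingestion and transformation code rather than inline logic.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform -> load pipeline.
# Task bodies below are placeholders for illustration only.
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    schedule="@daily",                 # run once per day
    start_date=datetime(2025, 1, 1),
    catchup=False,                     # skip backfilling missed runs
)
def daily_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull a batch of raw records from the source system.
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Placeholder: cleanse records before loading.
        return [r for r in records if r.get("order_id") is not None]

    @task
    def load(records: list[dict]) -> None:
        # Placeholder: write the cleaned batch to the warehouse.
        print(f"loading {len(records)} records")

    # TaskFlow wires the dependency graph from these return-value hand-offs.
    load(transform(extract()))

daily_orders_pipeline()
```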
