Databricks Data Engineer - Contract-to-Hire
Contract Associate
Job Overview
We are seeking a Databricks Data Engineer to join our growing data engineering team in Pune, India. This role plays a key part in a large-scale modernization initiative to migrate a complex, enterprise-grade Microsoft SQL Server data warehouse ecosystem to the Databricks Lakehouse Platform. The ideal candidate has strong hands-on experience with Databricks data engineering capabilities; exposure to its AI/ML features is a plus, but the core focus is building scalable, reliable data pipelines and analytics workloads.
Key Responsibilities
- Design, build, and optimize scalable data pipelines using Databricks (Apache Spark, Delta Lake, Unity Catalog).
- Participate in the migration of a ~20TB compressed on-prem Microsoft SQL Server data warehouse to Databricks.
- Convert and modernize hundreds of SQL Server tables, thousands of SSIS jobs, and downstream SSRS/SSAS workloads.
- Re-engineer SSIS ETL processes into Databricks notebooks, workflows, and orchestration frameworks.
- Support migration or redesign of cube-based analytics (SSAS) into Databricks SQL, Delta tables, and modern semantic models.
- Implement data quality, validation, reconciliation, and audit controls during migration.
- Optimize performance and cost through efficient Spark usage, partitioning, and query tuning.
- Collaborate with analytics, BI, and AI/ML teams to enable downstream reporting and advanced analytics.
- Apply data governance, security, and access-control standards using Unity Catalog.
- Contribute to reusable frameworks, documentation, and platform best practices.