Senior Data Engineer, Virtual Insurance
[Job Overview]
We are looking for an experienced Senior Data Engineer to join our engineering team and play a key role in building and scaling our enterprise data platform. You will design, develop, and maintain high-quality data warehouses and data-driven applications that power analytics, reconciliation, and business decision-making across the organization.
This role requires strong expertise in modern data architectures, pipeline engineering, and data quality management. The ideal candidate combines hands-on technical capability with a deep commitment to reliability, scalability, and governance in a regulated environment.
[Responsibilities]
· Data operations: own day-to-day operations of data platforms and pipelines, including capacity planning, stability, upgrades, deployments, and recovery drills, to sustain high availability and low latency.
· Data collection: design and manage multi-source ingestion (exchanges, internal and external systems), protocol parsing, and robust retry mechanisms.
· Develop rule-based and statistical data quality checks (completeness, uniqueness, time alignment, anomaly detection, error handling).
· Implement automated remediation, reconciliation workflows, and historical backfilling.
· Establish monitoring and alerting frameworks to ensure trusted, production-grade datasets.
· End-to-end pipelines: plan and maintain scalable ETL/ELT pipelines, including scheduling, caching, partitioning, modeling, schema evolution, and lineage, to support both batch and real-time streaming.
· Enforce data access controls, encryption, auditing, and classification to comply with internal policies and external regulatory requirements (including PII management).
· Apply Infrastructure-as-Code, data versioning, data tests, and CI/CD to improve predictability and reduce manual risk.
· Contribute to embedded GenAI and LLM-powered data applications for enterprise analytics, reconciliation, and internal productivity use cases.
· Partner with analytics and product teams to operationalize AI-driven data solutions.
[Requirements]
· Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
· 5+ years of experience in data engineering, data platform architecture, or AI/ML engineering.
· Strong experience with modern cloud data platforms (e.g., Snowflake, Databricks, BigQuery, Redshift).
· Hands-on experience building BI data foundations and supporting GenAI / LLM architectures.
· Proficiency in SQL, workflow orchestration tools (e.g., Airflow), streaming platforms (e.g., Kafka), and pipeline design best practices.
· Solid understanding of data warehouse development lifecycles and dimensional modeling concepts.
· Familiarity with GitLab and CI/CD pipelines.
· Strong debugging, performance tuning, and problem-solving skills.
· Working knowledge of data governance, lineage, privacy, and security frameworks.