Software Engineer III, Data Pipelines
$126,100 - $159,200 USD per year

Job Overview
Headquartered in San Diego, Mulligan Funding is a leading provider of working capital (up to $5M) to the small and medium-sized businesses that fuel our country. Since 2008, we have prided ourselves on our collaborative, innovative, and customer-focused approach. Enjoying a period of unprecedented growth, driven by the combination of cutting-edge technology, human touch, and unwavering integrity, we are looking to add highly motivated and results-oriented professionals to our people-first culture, pushing the limits of what's possible while creating value for all of our partners.
As a Software Engineer - Data Pipelines at Mulligan Funding, you are responsible for the system design and end-to-end execution of scalable data pipelines within a high-growth fintech environment. In this role, you will build and maintain the mission-critical infrastructure required to integrate high-volume data from PostgreSQL, Azure Cosmos DB, and a modern Data Lakehouse. You will act as a key technical partner to Data Scientists and Analysts, developing robust Python-based microservices and dbt models to ensure the reliability, accuracy, and accessibility of data used for predictive modeling and strategic business initiatives.
What You'll Do
Data Platform Engineering
- Design, build, and optimize ETL/ELT pipelines using dbt
- Develop and maintain scalable data infrastructure across PostgreSQL, Azure Cosmos DB, and Azure services
- Manage and evolve Data Lakehouse architecture, including Apache Iceberg table formats
- Improve the performance, reliability, and scalability of data systems
API & Backend Development
- Build and maintain Python-based APIs and microservices (Flask or similar frameworks)
- Design backend services supporting real-time and batch data access
- Enable seamless integration between data platforms and user-facing applications
Data Quality, Governance & Automation
- Work with distributed query engines (Trino) to analyze large, complex datasets
- Monitor, troubleshoot, and resolve data quality issues
- Manage schema evolution and database performance
- Implement data governance practices, including lineage and cataloging
- Automate data workflows and improve observability through logging and monitoring
AI & Platform Enablement
- Support Data Science teams by building frameworks for model development and deployment
- Contribute to MLOps workflows using Azure Machine Learning
- Containerize and orchestrate workloads using Docker and Kubernetes
- Provide technical mentorship and guidance to junior engineers
What You Bring
Required Qualifications
- 5+ years of experience in data engineering or related roles
- Strong Python skills with experience building production-grade APIs and microservices
- Deep expertise in SQL and PostgreSQL (schema design, performance tuning)
- Hands-on experience with dbt for data transformation and pipeline development
- Experience working with large-scale data systems and data lake environments
- Familiarity with Azure services, including Event Hub/Grid and Cosmos DB
- Experience with distributed query engines (e.g., Trino)
Nice to Have
- Exposure to MLOps workflows and tools, ideally within Azure ML
- Experience integrating with Salesforce or similar systems
- Interest or experience in emerging AI patterns (e.g., agent-based systems, autonomous workflows)
- Familiarity with AI-assisted development tools (Copilot, Cursor, etc.)
- Experience with Docker and Kubernetes for orchestration
- Background in data governance, cataloging, or lineage tools
Why Mulligan Funding?
- High-impact role working on mission-critical data systems
- Opportunity to shape the future of data and AI within fintech
- Collaborative, fast-moving environment with strong technical ownership
We Offer
- Comprehensive medical, vision, and dental benefits that give you peace of mind.
- Flexible Spending Accounts (FSA) that let you use pre-tax dollars to cover healthcare expenses.
- A fantastic 401(k) with matching contributions that helps you plan for retirement and build wealth over time.
- Generous sick, vacation, and holiday benefits that give you the time and flexibility you need to enjoy life.
- A gym membership contribution that supports your well-being and helps you stay energized and focused.
- An internal referral program that rewards you for bringing talented people to the team.
- Company events that foster a positive and inclusive culture, and create opportunities to bond and grow with your colleagues.