Founding AI Engineer (gn) @ Pre-Seed AI-Native Enterprise Platform, Helsinki
Full-time, permanent · Mid-level

Job Overview
This is an Atlantic portfolio venture.
About the Venture
We're a stealth venture building an AI-native platform for enterprise business modelling and scenario planning. At its core is a graph-based modelling engine combined with an intuitive grid UI and an AI co-pilot that accelerates model creation and improves decision quality.
Our goal is to replace Excel as the default for complex business models, while offering a more flexible, easier-to-use alternative to today's heavyweight enterprise tools. It's a new kind of tool for business controllers and FP&A leaders.
We're a founding team of three with deep expertise in enterprise software, data modelling, and product execution. Backed by pre-seed financing, we're working closely with design partners in the upper mid-market and enterprise space, and preparing to launch soon.
We're establishing our HQ in Helsinki and would be excited to hear from candidates who are already based here or open to relocating. That said, we are open to hybrid arrangements within a reasonably aligned timezone.
About the Role
We're looking for a hands-on AI/LLM engineer to own the design and implementation of the platform's AI capabilities end to end. You'll be responsible for how models reason over our grid and graph, for safety and reliability in enterprise environments, and for building agentic workflows that deliver practical value in complex modelling and variance analysis.
This is a zero-to-one role where you'll pair research-grade curiosity with product pragmatism. You will collaborate closely with the founding team, shipping iteratively while laying the foundations for correctness, observability, and scale.
What You’ll Do
Design and implement agentic workflows for model building, scenario generation, reconciliation, and variance analysis
Build reliable, auditable LLM pipelines: retrieval, tool-use, function calling, routing, and self-correction loops
Integrate LLMs with our graph-based modelling engine and domain-specific DSLs and tools
Establish evaluation frameworks: unit-style evals, golden sets, regression tests, and human-in-the-loop review
Productionize prompt and policy governance for enterprise: safety, privacy, red-teaming, guardrails
Own LLMOps: tracing, observability, cost and latency optimization, caching, batch inference
Ship high-impact product features in tight loops with design partners and founders
Contribute to core architecture decisions balancing speed with reliability and compliance