Senior Data Engineer, Datacraft

Slovakia · Competitive · Remote

About this role

Bloomreach is building the world’s premier agentic platform for personalization. We’re revolutionizing how businesses connect with their customers, building and deploying AI agents to personalize the entire customer journey.

We're taking autonomous search mainstream, making product discovery more intuitive and conversational for customers, and more profitable for businesses.

We’re making conversational shopping a reality, connecting every shopper with tailored guidance and product expertise — available on demand, at every touchpoint in their journey.

We're designing the future of autonomous marketing, taking the work out of workflows, and reclaiming the creative, strategic, and customer-first work marketers were always meant to do.

And we're building all of that on the intelligence of a single AI engine — Loomi AI — so that personalization isn't only autonomous… it's also consistent. From retail to financial services, hospitality to gaming, businesses use Bloomreach to drive higher growth and lasting loyalty. We power personalization for more than 1,400 global brands, including American Eagle, Sonepar, and Pandora.

Become a Senior Data Engineer for Bloomreach!

Join our newly formed Datacraft team — the team building the next-generation data platform that powers our internal DWH, analytics dashboards, and the Loomi Analytics Agent for Bloomreach Engagement. Your engineering work will directly impact how hundreds of enterprise customers access, understand, and activate their Bloomreach data, both internally and through data sharing in Snowflake, BigQuery, Databricks, and beyond.

For candidates at the P3 / Senior Software Engineer level, starting monthly compensation begins at 3 800 € gross, with the final offer tailored for each candidate based on their skills and experience. Stock options and a comprehensive benefits package are also included. Working in one of our Central European offices (Bratislava, Brno or Prague) or from home on a full-time basis, you'll become a core part of the Engineering team.

What challenge awaits you?

You will join the Datacraft team as one of its founding engineers. Datacraft is a new team in the Engagement pillar, established to tackle three interconnected domains:

Data Warehouses (~60% of team domain) — making Bloomreach data first-class in customer DWHs (Snowflake, BigQuery, Databricks). The strategic goal for 2026–27 is to use DWHs to exponentially accelerate data adoption — both for customers who want their data outside Bloomreach and for Bloomreach itself to build analytics faster.

Loomi Analytics Agent (~20%) — evolving Loomi Analytics from a constrained report builder into an agentic analytics assistant that can explore data across systems, explain insights, and eventually act on them. You will help build the data backbone the agent operates on.

Dashboards & Analytics Stack (~20%) — moving engagement reporting from the proprietary stack onto DWH-backed, modern analytics stacks (semantic layers, headless BI tools), dramatically speeding up how fast we can ship and iterate on dashboards.

Datacraft is an AI-first team. We believe code is a commodity and expect every engineer to fluently use coding agents (e.g., Cursor, Claude Code, Copilot, Gemini CLI) as a core part of their daily workflow. The ability to leverage AI tooling to accelerate development, prototyping, and problem-solving is not optional — it's foundational.

As a P3 (Senior) engineer at Bloomreach you are an independent professional — expert in at least one component, able to decompose objectives into tasks, and lead small projects end-to-end with minimal day-to-day guidance.

Your job will be to:

a. Build the DWH data platform — Design data pipelines (Kafka → GCS → Iceberg → DWH) following a medallion architecture. Own data models and ETL mutations for ID resolution and consent semantics. Maintain orchestration (Airflow / Cloud Composer) and monitoring. (A minimal pipeline sketch follows this list.)

b. Shape Loomi Analytics Agent's data layer — Implement evaluation harnesses and analytics skills, fine-tune system prompts and traces, build MCPs and data interfaces for the agentic platform, ensure data access patterns are reliable and debuggable.

c. Co-build dashboards & analytics stack — Design canonical metrics and models on DWH, support semantic layers and BI tools (Cube, Looker Studio), ensure tight Loomi integration.
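
As a hedged illustration of the pipeline work in (a), the sketch below wires a daily bronze-to-silver load in Airflow (Python), assuming Kafka events have already been landed as Parquet files in GCS. All bucket, project, and dataset names are placeholders, and the Iceberg layer is collapsed into a plain BigQuery staging table to keep the example short; this is a sketch of the pattern, not the team's actual DAG.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.sensors.gcs import GCSObjectsWithPrefixExistenceSensor
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="engagement_events_medallion",    # hypothetical DAG name
    schedule="@daily",                       # Airflow 2.4+; use schedule_interval on older versions
    start_date=datetime(2026, 1, 1),
    catchup=False,
) as dag:
    # Wait for the Kafka sink to finish landing the day's Parquet files in GCS.
    landed = GCSObjectsWithPrefixExistenceSensor(
        task_id="wait_for_landed_files",
        bucket="example-engagement-landing",  # placeholder bucket
        prefix="events/{{ ds_nodash }}/",
    )

    # Bronze: append the raw files into a staging table in the DWH.
    load_bronze = GCSToBigQueryOperator(
        task_id="load_bronze",
        bucket="example-engagement-landing",
        source_objects=["events/{{ ds_nodash }}/*.parquet"],
        destination_project_dataset_table="example-project.bronze.events",
        source_format="PARQUET",
        write_disposition="WRITE_APPEND",
    )

    # Silver: deduplicate on event_id; ID-resolution and consent rules would be
    # applied in this (or a follow-up) SQL step.
    build_silver = BigQueryInsertJobOperator(
        task_id="build_silver",
        configuration={
            "query": {
                "query": """
                    MERGE `example-project.silver.events` AS t
                    USING (
                      SELECT * EXCEPT(row_num) FROM (
                        SELECT *, ROW_NUMBER() OVER (
                          PARTITION BY event_id ORDER BY ingested_at DESC) AS row_num
                        FROM `example-project.bronze.events`
                      )
                      WHERE row_num = 1
                    ) AS s
                    ON t.event_id = s.event_id
                    WHEN NOT MATCHED THEN INSERT ROW
                """,
                "useLegacySql": False,
            }
        },
    )

    landed >> load_bronze >> build_silver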

Our tech stack

Python, Go, SQL · Apache Kafka · BigQuery, Iceberg, GCS, Mongo, Redis · Spark, DataProc, Airflow · Snowflake, Databricks · GCP, Kubernetes, Terraform · LLM APIs, MCP, agent orchestration · Cube, Looker Studio · Grafana, Prometheus, PagerDuty, Sentry, OpenTelemetry · GitLab CI/CD · Cursor, Claude Code

Your success story

30 Days: Understand the Datacraft domain, complete onboarding, set up your dev environment, and get oriented on DWH architecture and Loomi roadmap.

90 Days: Ship your first pipeline, data model, or orchestration improvement to production. Join architecture discussions and the on-call rotation.

180 Days: Own a component end-to-end. Make informed trade-off decisions independently. Contribute to measurable team goals — first DWH exports, dashboards, or Loomi Agent improvements live.

You have the following experience and qualities:

Professional experience

Solid data engineering background with strong SQL and data modeling skills (star/snowflake schemas, slowly changing dimensions, partitioning/clustering, etc.); see the SCD sketch after this list.

Hands-on experience building production-grade data pipelines on GCP, ideally involving BigQuery, Apache Iceberg, Apache Spark on DataProc, and Airflow (Cloud Composer).

Experience with orchestration and workflow tools — specifically Airflow / Cloud Composer — and comfort working with DAG-based systems for scheduled and event-driven jobs.

Familiarity with open table formats (Iceberg preferred, Delta Lake / Hudi acceptable) and how they interact with query engines and DWH platforms.

Strong programming skills in Python (preferred); Scala/Java/Go also relevant.

Good understanding of data quality, lineage, and observability (monitoring SLAs/SLOs, detecting missing/late loads, backfilling strategies); a minimal freshness-check sketch also follows this list.

Ability to work across product and engineering teams, turning fuzzy problem statements into incremental, shippable slices.
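
The first item above mentions slowly changing dimensions; the sketch below shows one hedged way to apply a Type 2 SCD merge in BigQuery from Python. The dataset, table, and column names (example_dwh.dim_customer, stg_customer, plan) are hypothetical, not part of the Bloomreach schema.

from google.cloud import bigquery

# Type 2 SCD: close the current version of a changed row and insert a new one.
# `plan` stands in for whichever tracked attributes the dimension carries.
SCD2_MERGE = """
MERGE `example_dwh.dim_customer` AS d
USING (
  -- Every staged row, matched on its natural key.
  SELECT customer_id AS merge_key, customer_id, plan
  FROM `example_dwh.stg_customer`
  UNION ALL
  -- Changed rows a second time with a NULL key, so they fall through to the
  -- INSERT branch below and open a new version.
  SELECT CAST(NULL AS STRING) AS merge_key, s.customer_id, s.plan
  FROM `example_dwh.stg_customer` AS s
  JOIN `example_dwh.dim_customer` AS cur
    ON s.customer_id = cur.customer_id AND cur.is_current AND s.plan != cur.plan
) AS src
ON d.customer_id = src.merge_key AND d.is_current
WHEN MATCHED AND src.plan != d.plan THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, plan, valid_from, valid_to, is_current)
  VALUES (src.customer_id, src.plan, CURRENT_TIMESTAMP(), NULL, TRUE)
"""

def apply_scd2(client: bigquery.Client) -> None:
    # Runs the merge synchronously; failures surface as exceptions.
    client.query(SCD2_MERGE).result()

if __name__ == "__main__":
    apply_scd2(bigquery.Client())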
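
Similarly, for the data-quality item above, this sketch shows a minimal late-load check: it compares a table's latest ingestion timestamp against an assumed two-hour freshness SLO. The table name, the SLO, and the alerting hook are placeholders.

from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

FRESHNESS_SLO = timedelta(hours=2)        # assumed SLO, not the team's actual target
TABLE = "example-project.silver.events"   # placeholder table name

def check_freshness(client: bigquery.Client) -> bool:
    """Return True if the latest ingested row is within the freshness SLO."""
    row = next(iter(client.query(
        f"SELECT MAX(ingested_at) AS latest FROM `{TABLE}`"
    ).result()))
    latest = row["latest"]
    if latest is None:
        print(f"MISSING LOAD: {TABLE} has no rows at all")
        return False
    lag = datetime.now(timezone.utc) - latest
    if lag > FRESHNESS_SLO:
        # In production this would page via PagerDuty or raise a Sentry event;
        # printing keeps the sketch self-contained.
        print(f"LATE LOAD: {TABLE} is {lag} behind (SLO: {FRESHNESS_SLO})")
        return False
    return True

if __name__ == "__main__":
    check_freshness(bigquery.Client())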

Strongly preferred

Experience with agentic platforms or AI-powered analytics (agent orchestration, LLM data access layers, evaluation harnesses).

Background in marketing analytics, CDPs, or BI/semantic layers (Looker, dbt, Cube).

Experience with Snowflake or Databricks alongside BigQuery.

Personal qualities

Ownership & accountability · Product thinking · Clear communication across engineering and product · Bias for reliability · Continuous improvement mindset · Comfortable remote-first in Central Europe.

Job Details

Posted: 25 March 2026
Closes: 24 April 2026
Work Mode: Remote
