Staff AI Data Engineer (x/f/m)

Paris, France

About this role

We are looking for a Staff Data Engineer - AI to join the AI team at Doctolib.

As a Staff Data Engineer - AI, your mission will be to build robust data pipelines, from data capture to monitoring, that power our AI systems and support our ambition to transform healthcare delivery. You will be embedded in the AI teams delivering AI-powered features to healthcare professionals and patients.

Working in the tech team at Doctolib involves building innovative products and features to improve the daily lives of care teams and patients. We work in feature teams in an agile environment, while collaborating with product, design, and business teams.

Your responsibilities include but are not limited to:

Design and implement data capture and ingestion systems ensuring data quality, privacy compliance (anonymization, consent, retention), and GDPR adherence

Build, optimize, and maintain end-to-end data pipelines using Python, Dagster, BigQuery, SQL/Jinja, and DBT

Enable AI model development by providing datasets for training, evaluation, and annotation workflows

Develop custom monitoring solutions including online metrics pipelines and dashboards (Amplitude, Metabase, Tableau) to track AI system performance

Collaborate with the Data Platform teams to optimize infrastructure, ensure scalability, and manage costs effectively

About our tech environment

Our solutions are built on a single, fully cloud-native platform that supports web and mobile app interfaces and multiple languages, and adapts to country and healthcare-specialty requirements. To address these challenges, we are modularizing our platform, which runs in a distributed architecture, into reusable components.

Our stack is composed of Rails, TypeScript, Java, Python, Kotlin, Swift, and React Native.

We leverage AI ethically across our products to empower patients and health professionals. Discover our AI vision and learn about our first AI hackathon!

Our data stack includes: Kafka/Debezium for data ingestion, Dagster/DBT for orchestration, GCS/BigQuery for data warehousing, and Metabase/Tableau for BI and reporting.

Who you are

Before you read on — if you don't have the exact profile described below, but you feel this job description matches your skill set, we still encourage you to apply.

You have 7+ years of experience as a Senior or Staff Data Engineer, or in a similar role involving AI data.

You are proficient in Python, SQL, and DBT for building data pipelines

You have hands-on experience building data pipelines for AI/ML systems in production

You have a good understanding of ML model lifecycle (training, evaluation, deployment, monitoring)

You have initial experience with the Google Cloud Platform (GCP) stack

You have strong collaboration skills and can work effectively with data science and engineering teams

Now it would be fantastic if you:

Have experience with Vertex AI, MLflow, or similar ML platforms

Have experience with AI model monitoring and observability tools

Have worked with annotation platforms and labeling workflows

Have experience with Cursor / Claude

Are familiar with GDPR regulations


What we offer

  • Free comprehensive health insurance for you and your children
  • Parent Care Program: additional leave on top of the legal parental leave
  • Free mental health and coaching services through our partner Moka.care
  • For caregivers and workers with disabilities, a package including an adapted remote policy, extra days off for medical reasons, and psychological support
  • Work from EU countries and the UK for up to 10 days per year, thanks to our flexibility days policy
  • Works Council subsidy to refund part of a sports club membership or a creative class
  • Up to 14 days of RTT
  • Lunch vouchers with a Swile card

Interview process

  • Recruiter interview
  • Feature-building interview
  • System design interview
  • Behavioral interview
  • At least one reference check

Contract details

  • Permanent position
  • Full-time
  • Paris, France
  • Hybrid mode: 2 days remote per week
  • Start date: as soon as possible


Job Details

Posted: 19 March 2026
Closes: 18 April 2026
