Specialist Solutions Architect - DE/DWH

London, United Kingdom
Competitive

About this role

Req: FEQ127R163

Responsibilities

  • Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
  • Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimisation
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
  • Assist Solution Architects with more advanced aspects of the technical sale, including custom proof of concept content, estimating workload sizing, and custom architectures
  • Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
  • Contribute to the Databricks Community

Requirements

  • Extensive experience in a customer-facing technical role, with pre-sales or post-sales experience working with external clients across a variety of industry markets
  • Nice to have: Databricks Certification
  • Travelling approx. 20-30% of the time
  • Experience as a Data Engineer: query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions
  • Extensive experience building big data pipelines
  • Experience maintaining and extending production data systems to evolve with complex needs
  • Deep specialty expertise in at least one of the following areas:
      • Experience scaling big data workloads (such as ETL) so that they are performant and cost-effective
      • Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
      • Experience with large-scale data ingestion pipelines and data migrations, including CDC and streaming ingestion pipelines
      • Expertise with cloud data lake technologies, such as Delta Lake and Delta Live Tables
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience
  • Production programming experience in SQL and Python, Scala, or Java
  • Professional experience with Big Data technologies (e.g., Spark, Hadoop, Kafka) and architectures
  • Experience designing and implementing a broad range of analytical and transactional data technologies, such as Hadoop, Apache Spark™, NoSQL, OLTP, OLAP, and ETL/ELT
  • Hands-on experience with MPP data warehouse appliances (Oracle Exadata, Teradata, IBM Netezza) or cloud data warehouses (Amazon Redshift, Azure Synapse, Snowflake)
  • Hands-on experience with RDBMS systems (Postgres, MySQL, SQL Server, Oracle, MariaDB)
  • Experience with the SQL language or any SQL dialect (PL/SQL, Transact-SQL, or others)
  • Experience with BI tools such as Power BI, Tableau, Qlik, or others
  • Knowledge of development tools and best practices for data engineers, including CI/CD, unit and integration testing, and automation and orchestration
  • Expertise in data warehousing, including query tuning, performance tuning, troubleshooting, and debugging MPP data warehouses or other big data solutions; experience maintaining, extending, or migrating a production data warehouse system to evolve with complex customer needs
  • Production programming experience in PySpark

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

EU Requirements

Job Details

Posted: 6 March 2026
Closes: 5 April 2026
