Data Engineer
About the Role
Position: Data Engineer
Employment Type: Full-time, Contract
Start: ASAP
Location: London - Hybrid
About the Candidate
The ideal candidate will have 5+ years of experience as a Data Engineer and proven expertise in Databricks, including Delta Lake, Workflows, and Unity Catalog. The role requires a strong command of Apache Spark, SQL, and Python, as well as hands-on experience with cloud platforms such as AWS, Azure, or GCP. Familiarity with MLflow, data warehousing solutions, and CI/CD pipelines with infrastructure-as-code tools such as Terraform would be beneficial. Strong communication skills and the right to work in the UK are mandatory.
Responsibilities
- Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark.
- Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets.
- Optimize data workflows and storage using Delta Lake and Lakehouse architecture.
- Manage and monitor data pipelines in cloud environments (AWS, Azure, GCP).
- Work with structured and unstructured data across multiple sources.
- Implement best practices in data governance, data security, and data quality.
- Automate workflows and data validation tasks using Python, SQL, and Databricks notebooks (an illustrative sketch of this kind of work follows below).
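For context only, the snippet below is a minimal sketch of the kind of pipeline work this role involves: reading raw files with PySpark, applying simple data-quality checks, and writing a Delta table. The paths, column names, and table layout (orders, order_id, amount, order_date) are illustrative assumptions, not details of any actual project.

```python
# Illustrative sketch only: a small PySpark job that ingests raw CSV data,
# applies basic validation, and writes the result as a Delta table.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest_example").getOrCreate()

# Read a hypothetical raw dataset (path is an assumption for illustration).
raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders/")
)

# Basic data-quality checks: drop rows missing the key, cast amounts,
# filter out negative values, and stamp the ingestion time.
clean = (
    raw.dropna(subset=["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") >= 0)
    .withColumn("ingested_at", F.current_timestamp())
)

# Write to Delta Lake, partitioned by date, so downstream consumers
# (analysts, data scientists) get a reliable, queryable table.
(
    clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders/")
)
```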
Company Information
Focus on SAP is based in London, UK. The company provides a collaborative and innovative work environment where communication and teamwork are key.
Culture and Additional Information
We are looking for individuals who are passionate about data and want to make a significant impact. Consulting experience is a plus, and we value candidates who are proactive and eager to take on challenges. Interested candidates should apply with their latest CV or reach out directly via email.