Job Search

  • Brooksource is looking for a Senior Data Engineer to lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, and dbt. The candidate will collaborate with stakeholders to understand data requirements and deliver high-quality data solutions while optimizing and maintaining existing pipelines for quality and performance. This role also involves mentoring junior engineers and monitoring and resolving data pipeline issues to minimize disruption.

  • Infosys is looking for a Data Engineer - Databricks/Apache Spark to design, develop, and maintain scalable data pipelines on Databricks using PySpark, collaborate with data analysts and scientists to understand data requirements, and optimize existing data pipelines. Candidates should have 3+ years of experience in data engineering, strong SQL skills, and proficiency with Databricks and Apache Spark.

  • Deichmann SE is looking for a Team Lead Data Engineering (m/w/d) to lead and develop a team of Data Engineers. You will act as both technical mentor and manager for your team, fostering specialization and deepening expertise so the team can take a leading role in Data Engineering. Responsible for all services related to our data products, you will ensure their reliability and scalability through our Data Mesh. The role involves leading product teams in implementing our data and analytics strategy by designing, implementing, and overseeing new data products throughout their lifecycle, using agile methods to create an environment of continuous learning, integrity, and mutual responsibility. You will also establish a professional development environment alongside the platform team to optimize the performance, scalability, and cost transparency of our data products.

  • EPAM Systems is looking for a Senior Data Engineer (AWS/Databricks/PySpark) to join our team. You will design, develop, monitor, and operate data pipelines, integrating high-quality datasets for analytical use cases and enabling data teams to adhere to client standards. Candidates should have over 3 years of practical experience in Databricks or similar ETL tools, strong proficiency in Apache Spark, PySpark, and Python, as well as hands-on experience with AWS Glue and Amazon S3.

  • EPAM Systems is looking for a Senior Data Software Engineer responsible for designing, developing, monitoring, and operating data pipelines. The ideal candidate will have over 3 years of practical experience in Databricks or similar ETL tools, a strong understanding of Apache Spark, and proficiency in Python. This role involves integrating high-quality datasets for analytical use cases and enabling other data teams to follow customer standards. You will thrive in a dynamic work environment, helping Fortune 1000 clients while adhering to best engineering practices.

  • EPAM Systems is looking for a Data Engineering Lead to join our team. The ideal candidate has solid mastery of Scala and experience with cloud technologies to migrate workflows from Oracle to Databricks. The candidate will play a critical role in developing and monitoring data flows, ensuring effective communication with stakeholders, and adhering to best engineering practices while working on significant projects with Fortune 1000 clients.

  • EPAM Systems is looking for a Lead Data Software Engineer (Spark/Scala/Databricks) who is an open-minded professional fluent in the Scala programming language. In this role, you will collaborate with team members to migrate data workflows from Oracle to Databricks, which requires excellent communication and organizational skills as well as proficiency in English (B2+/C1 level).

  • EPAM Systems is looking for a Senior Data Engineer experienced in Databricks or similar ETL tools. The successful candidate will work with agile data teams to migrate data pipelines from Oracle to Databricks, design and operate data pipelines, integrate high-quality datasets for analytical use, enable data teams to follow client standards, and maintain existing data pipelines. The ideal candidate should have over 3 years of experience with Databricks, a strong understanding of Apache Spark, and proficiency in AWS Glue. We are looking for a professional with a passion for data transformation and innovation who can communicate fluently in English in a dynamic work environment. Join our team in Madrid, Málaga, or remotely across Spain.

  • Samba TV is looking for a Data Engineer responsible for developing scalable, high-performance data pipelines and infrastructure that power Samba TV's analytics and insights. The ideal candidate will play a critical role in designing and implementing architectural improvements, ensuring best practices, and optimizing data workflows while collaborating closely with Data Science, Analytics, and Product teams.