Job Search

  • Published on

    EPAM Systems is looking for a Lead Data Software Engineer (Spark/Scala/Databricks) who is an open-minded professional fluent in the Scala programming language. The successful candidate will be responsible for developing, monitoring, and operating critical curated data pipelines, consulting with stakeholders to improve KPI systems, and leveraging a cloud-based tech stack to enhance data solutions.

  • Published on

    EPAM Systems is looking for a Senior Data Software Engineer to design, develop, monitor, and operate data pipelines, with responsibilities including integrating datasets for analytical use cases and enabling data teams to follow client standards. The ideal candidate should have 3+ years of hands-on experience with Databricks or similar ETL tools, strong skills in Apache Spark, PySpark, and Python, and proven expertise in AWS Glue and Amazon S3. The position allows remote work across Spain, adheres to high engineering standards, and offers a progressive environment with significant career growth opportunities.

  • Published on

    EPAM Systems is looking for a Senior Data Software Engineer with expertise in Databricks or similar ETL tools to join their dynamic team. Responsibilities include designing, developing, and maintaining data pipelines, as well as integrating datasets for analytical use cases. The ideal candidate will have over 3 years of experience with Apache Spark, PySpark, AWS Glue, and CI/CD practices. Join EPAM to contribute to innovative projects while enjoying flexible work options in Málaga or remotely across Spain.

  • Published on

    EPAM Systems is looking for a Senior Data Engineer to support our insurance client in Zurich. The role follows a hybrid model, combining remote and on-site work. The candidate will be responsible for designing, implementing, and maintaining data-driven applications, developing end-to-end solutions, and engineering reliable data pipelines.

  • Published on

    EPAM Systems is looking for a Senior Python Engineer with Spark to join our team. You will play a crucial role in our financial data and analytics team, responsible for data integration and transformation. The role involves developing models and managing extensive structured financial data. You will design and build robust software that efficiently processes this data using our medallion architecture. We use the latest technology, including Databricks on Microsoft Azure, and you will have the flexibility of remote work in Kyrgyzstan, with the option to work from our office in Bishkek.

  • Published on

    EPAM is looking for a Senior Big Data Engineer to join our friendly environment and become a core contributor to our team of experts. This role requires a strong background in software engineering and Big Data, utilizing a variety of technologies to develop and implement innovative analytical solutions. Candidates should have experience with cloud-native technologies, big data frameworks, and strong SQL skills. Additionally, excellent communication skills in English are essential for collaboration with interdisciplinary teams in an agile environment.

  • Published on

    Samba TV is looking for a Tech Lead - Data Engineering who will be responsible for leading the development of scalable, high-performance data pipelines that power Samba TV's analytics and insights. In this role, you will design and implement architectural improvements, mentor a team of engineers, and collaborate with Data Science, Analytics, and Product teams to deliver robust data solutions that drive business impact. The ideal candidate will have 5+ years of experience in Data Engineering or Software Engineering, expertise in technologies like Apache Airflow and Databricks, and strong problem-solving skills.

  • Published on

    Deichmann SE is looking for a Senior Data Engineer (m/w/d) to join our Data Intelligence & Analytics Unit. The successful candidate will develop and automate data flows in our Data Lakehouse, utilize modern technologies like Spark and Azure, and collaborate closely with Data Scientists and IT teams. The role requires strong expertise in Data Engineering and ETL development, along with a solid understanding of CI/CD practices. Excellent knowledge of Spark, Python, and SQL, together with a hands-on mentality, is essential.