Job Search

  • Published on

    Brooksource is looking for a Senior Data Engineer to lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, and dbt. The candidate will collaborate with stakeholders to understand data requirements and deliver high-quality data solutions while ensuring the optimization and maintenance of existing pipelines for quality and performance. This role involves mentoring junior engineers and monitoring data pipeline issues for minimal disruption.

  • Published on

    Kerry Search Partners is looking for a Data Scientist to work remotely, preferably from Vancouver. As a Data Scientist, you will be at the forefront of AI automation within decentralized finance (DeFi) and Web3. Your responsibilities will include building and deploying models that convert complex on-chain data into actionable insights, developing risk scoring and trading signals, and managing analytics workflows using innovative AI tools. You will collaborate with product managers and engineers to define metrics and prioritize the product roadmap, owning projects from inception to production and communicating your findings effectively to both technical and non-technical audiences. Ideal candidates will have a strong passion for Web3, experience with AI/ML, and the ability to build quant finance models.

  • Published on

    TekRek is looking for a Senior Data Engineer to build a cloud-native data platform that supports large-scale analytics, real-time processing, and machine learning. The successful candidate will design and build robust data platforms on GCP, AWS, or Azure, develop scalable pipelines using Spark, Kafka, dbt, and Airflow, and implement ETL workflows with strong data governance.

  • Published on

    Senior Data Engineer

    TekRek, Greater Toronto and Hamilton Area, CA

    TekRek is looking for a Senior Data Engineer to scale the data platform infrastructure from the ground up. You will design and build robust data platforms on GCP, AWS, or Azure, develop scalable batch and streaming pipelines, and architect data lake and warehouse solutions. Candidates should have strong experience in data engineering and cloud-native platforms, be proficient in Python, Java, or Scala, and have hands-on experience with tools like BigQuery, Snowflake, and Kafka.

  • Published on

    TekRek is looking for a Senior Data Engineer to scale the data platform infrastructure from the ground up. This high-impact role involves designing and building robust data platforms on major cloud services, developing scalable pipelines, and collaborating with a team of engineers and analysts to foster data access and insights.

  • Published on

    TekRek is looking for a Senior Data Engineer to scale the data platform infrastructure from the ground up. In this high-impact role, you will design and build robust data platforms on GCP, AWS, or Azure, develop scalable batch and streaming pipelines using technologies like Spark and Kafka, and architect data lake and warehouse solutions such as BigQuery and Snowflake. You will be responsible for implementing ETL/ELT workflows while ensuring strong data governance and reliability, as well as collaborating with software engineers, analysts, and data scientists to enable data access and insights.

  • Published on

    Deichmann SE is looking for a Team Lead Data Engineering (m/w/d) to lead and develop a team of Data Engineers. You will act as a technical mentor and manager for your team, fostering specialization and deepening knowledge so the team can take a leading role in Data Engineering. Responsible for all services related to our data products, you will ensure their reliability and scalability through our Data Mesh. The role involves leading product teams in implementing our data and analytics strategy by designing, implementing, and overseeing new data products throughout their lifecycle. You will use agile methods to create an environment of continuous learning, integrity, and mutual responsibility, and, together with the platform team, establish a professional development environment to optimize the performance, scalability, and cost transparency of our data products.

  • Published on

    TensorStax is looking for a Research Engineer Intern to help the team build simulation environments that mirror real-world data engineering workflows. You'll work with our research and systems teams to construct and maintain these environments, which are essential for reinforcement learning training and evaluation. We seek candidates with hands-on experience in data engineering tools like Spark, Airflow, and dbt, as well as proficiency in Python.

  • Published on

    TensorStax is looking for a Research Engineer Intern to help build simulation environments that mirror real-world data engineering workflows. This role involves collaborating with our teams to build and maintain simulated environments, parameterize workloads, and set up realistic failure modes for agents to learn from. The ideal candidate should have hands-on experience with data engineering tools and be proficient in Python; familiarity with containerization tools like Docker is a plus.