Machine Learning Engineer - Trust and Safety, NLP (Australia)

TikTok, Sydney, AU
About the Role

TikTok is building a world where people can safely discover, create, and connect. The Trust & Safety (T&S) team is dedicated to ensuring that our global online community is safe and empowered to create and enjoy content. We invest heavily in both human and machine-based moderation to remove harmful content promptly, before it reaches our audience.

We are seeking individuals with solid experience in designing and deploying advanced models in the NLP and CV domains. You will work alongside a team of exceptional research scientists and machine learning engineers who are proactive, creative, and capable of developing advanced machine learning solutions deployed on TikTok's global platform.

Responsibilities

  • Build industry-leading content safety systems for TikTok.
  • Develop highly scalable classifiers, tools, models, and algorithms utilizing cutting-edge machine learning, computer vision, and data mining technologies.
  • Understand product objectives and enhance trust and safety strategies while improving model performance.
  • Collaborate with cross-functional teams to protect TikTok on a global scale.
  • Work closely with data analysts to identify and analyze data patterns.

Qualifications

Minimum Qualifications

  • 3+ years of experience in areas like machine learning, deep learning, computer vision, NLP, or large-scale machine learning platforms.
  • Skilled in foundational technologies for efficient training and pre-training as a service, particularly in downstream NLP/CV/Video applications.
  • Proficient in deep learning frameworks such as PyTorch and TensorFlow, and in programming languages such as Python or Java.
  • Strong programming skills, particularly in Python, with an in-depth understanding of data structures and algorithms.
  • Excellent communication and teamwork skills with a passion for learning new techniques and tackling challenging problems.

Preferred Qualifications

  • Publications in prestigious AI conferences or journals, such as KDD, IJCAI, WWW, or NeurIPS.
  • Previous experience in the trust and safety domain is a plus.