About Coverstar
Coverstar is building the first safe, creative, AI-native social platform for Gen Alpha. We’ve built a COPPA-compliant community where kids can create, collaborate, and grow safely.
We’re backed by top investors, moving fast, and hiring a mission-aligned AI engineer to help us power the next generation of personalized content and community.
What You’ll Do
As our Trust & Safety AI Engineer, you’ll design, build, and ship AI systems that keep Coverstar safe and delightful across the video feed, livestreams, and creation tools. The impact is immediate: the models and rule engines you build protect kids, reduce harmful content, and stop bad actors before they reach our community.
As our Trust & Safety AI Engineer, you will:
- Build safety ML at scale: Train and deploy models and classifiers for policy-violating content (text, image, audio, video), integrity abuse (spam, raids, fake engagement), and bad-actor detection.
- Build and extend our Moderator Console: Develop and maintain a web-based console application to empower internal teams with efficient moderation and safety workflows.
- Enhance trust & safety tooling: Improve moderation dashboards, user-reporting pipelines, and automated enforcement systems (warnings, bans, restrictions) to keep our community safe.
- Apply data science for safety monitoring: Use Python-based data science and machine learning tools to design, train, and deploy risk-detection models that scale across the platform.
- Continuously improve safety systems: Monitor performance, iterate on detection strategies, and ensure robust coverage against new patterns of abuse or harmful content.
- Ship real-time protection: Design low-latency moderation inference for livestreams, the feed, and messages.
You Might Be a Fit If You:
- Have 2+ years of hands-on ML/AI experience in computer vision and transformer-based text models (e.g., BERT).
- Are fluent in Python (and ideally SQL), with experience building, training, and deploying models into production.
- Have worked with multimodal ML (text, image, audio, or video) and/or LLMs — including serving them with frameworks like vLLM and SGLang — and have experience fine-tuning them.
- Have experience building moderation UIs or internal tools using modern web frameworks and libraries (e.g., shadcn/ui, Tailwind, Svelte, or similar).
- Have experience creating datasets for training and fine-tuning, and tackling data drift for models in production.