Binance is a leading global blockchain ecosystem behind the world’s largest cryptocurrency exchange by trading volume and registered users. We are trusted by over 280 million people in 100+ countries for our industry-leading security, user fund transparency, trading engine speed, deep liquidity, and an unmatched portfolio of digital-asset products. Our offerings range from trading and finance to education, research, payments, institutional services, Web3 features, and more. We leverage the power of digital assets and blockchain to build an inclusive financial ecosystem, advancing the freedom of money and improving financial access for people around the world.
About the Role
We are seeking an LLM Algorithm Engineer (Safety First) to join our AI/ML team, with a focus on building robust AI guardrails and safety frameworks for large language models (LLMs) and intelligent agents. This role is pivotal in ensuring trust, compliance, and reliability in Binance’s AI-powered products such as Customer Support Chatbots, Compliance Systems, Search, and Token Reports.
Why Binance
• Shape the future with the world’s leading blockchain ecosystem
• Collaborate with world-class talent in a user-centric global organization with a flat structure
• Tackle unique, fast-paced projects with autonomy in an innovative environment
• Thrive in a results-driven workplace with opportunities for career growth and continuous learning
• Competitive salary and company benefits
• Work-from-home arrangement (may vary depending on the nature of the business team’s work)
Binance is committed to being an equal opportunity employer. We believe that having a diverse workforce is fundamental to our success.
Responsibilities:
• Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows
• Define and enforce safety, security, and compliance policies across applications
• Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs
• Implement privacy and PII protection: redaction, obfuscation, minimisation, and data residency controls
• Build red-teaming pipelines, automated safety tests, and risk monitoring tools
• Continuously improve guardrails to address new attack vectors, policies, and regulations
• Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks
• Collaborate with Product, Compliance, Security, Data, and Support to ship safe features
Requirements:
• Master’s/PhD in Machine Learning, AI, Computer Science, or a related field
• Research track record (ICLR, NeurIPS, ACL, ICML) a plus
• Hands-on experience building LLM/agent guardrails (policy design, refusal rules, filtering, permissions)
• Practical experience with hallucination mitigation and safety evaluation
• Proven ability to ship AI safety frameworks to production
• Strong coding in Python (Java a plus); expertise in PyTorch/TensorFlow/JAX
• Understanding of privacy, PII handling, data governance, and risk frameworks
• Interest in crypto, Web3, and financial systems
• Self-driven with strong ownership and delivery skills
• Excellent communication and collaboration abilities