Company Description
At SGS, we are the world's leading testing, inspection, and certification company, setting the global benchmark for sustainability, quality, and integrity. With a network of 2,650 offices and laboratories and a dedicated team of 97,000 employees, we work together to create a better, safer, and more interconnected world.
Job Description
The AI Trust Innovation Technologist strengthens SGS’s Digital Research & Ventures capabilities by actively building, testing, and analysing AI systems to develop credible independent validation and monitoring services. The role combines deep AI engineering expertise with venture-oriented innovation, evaluating emerging technologies and startups while translating hands-on experimentation into scalable Digital Trust validation solutions.
Key responsibilities:
- Analyse emerging AI technologies and real-world AI system architectures (e.g., LLM-based systems, ML pipelines, multimodal systems) to identify where independent validation, testing, or monitoring by SGS is technically feasible and valuable.
- Technically assess AI risks — including robustness failures, bias/fairness issues, explainability limits, data integrity risks, cybersecurity vulnerabilities, and misuse scenarios (e.g., deepfakes, hallucinations) — and translate them into potential validation or monitoring service opportunities.
- Develop, prototype, and evaluate AI validation approaches (e.g., adversarial testing, dataset validation, interpretability methods, provenance/watermarking) to assess technical feasibility and scalability for Digital Trust services.
- Interpret AI regulations and standards (e.g., EU AI Act, ISO/IEC AI standards, NIST AI RMF) and translate their technical implications into viable validation, monitoring, or independent evaluation approaches.
- Engage with universities, AI research labs, startups, and technology leaders to track cutting-edge AI system developments and explore collaboration, experimentation, and validation opportunities.
- Assess AI startups, tools, and platforms for technical maturity, architectural soundness, evaluation robustness, and strategic fit with SGS’s AI Trust ambitions.
- Provide technical insight and hands-on validation input for AI-related build–buy–partner–invest evaluations, including assessment of model architectures, evaluation methodologies, and system scalability.
- Contribute expert insight to Digital Trust marketing, thought leadership, and internal education on AI trust issues.
- Work cross-functionally with business lines, M&A, R&D, innovation teams, and IT to technically assess AI systems, prototype validation approaches, and support early-stage AI trust initiatives.
- Build and experiment with AI systems directly to deeply understand system behaviour, validation challenges, and potential service design implications.
Qualifications
Essential Experience:
- Advanced degree (Master’s or PhD) in AI, Machine Learning, Data Science, Computer Science, or a related field.
- 2–5 years of hands-on experience building, deploying, and validating AI/ML systems in production environments, startup environments, or advanced R&D settings.
- Strong understanding of AI validation and trust challenges (robustness, bias, explainability, AI security, misuse), grounded in practical AI system development experience.
- Practical experience applying AI evaluation and validation methods (e.g., interpretability techniques, dataset validation, adversarial testing, red teaming, provenance or watermarking mechanisms).
- Experience working in research-driven, startup, or advanced AI R&D environments, with the ability to translate research concepts into working prototypes.
- Ability to translate deep technical understanding of AI system behaviour into scalable independent validation, monitoring, or assurance service opportunities.
Desirable Skills:
- Ability to technically analyse and diagnose AI system behaviour and identify validation, robustness, or monitoring gaps.
- Understanding of major AI regulations and standards and their technical implications for AI system validation and monitoring approaches.
- Experience assessing AI startups, tools, or research from a technical architecture, evaluation, and system maturity perspective.
- Innovative, systems-oriented thinker able to translate technical AI validation challenges into scalable Digital Trust service concepts.
- Collaborative, hands-on, and comfortable working across research, engineering, business, and venture-building environments.
- Strong experimental mindset with the ability to rapidly prototype and test AI validation concepts.
- Familiarity with AI lifecycle management and MLOps concepts (e.g., monitoring, drift detection, retraining pipelines) is an advantage.
Additional Information
Why SGS?
- Opportunity to work with a global leader in inspection, verification, testing, and certification.
- Collaborative and inclusive work environment.
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
Join Us: At SGS, we are dedicated to fostering an open, friendly, and supportive culture that thrives on teamwork. We value diversity and provide endless opportunities to learn, grow your expertise, and fulfil your potential.
The minimum gross annual salary for this position is EUR 62,790 (based on 14 monthly payments) according to the applicable collective agreement. A higher salary may be possible depending on your qualifications and experience.