For upcoming STACK webinars and a full list of our past events, please visit our Meetup page.

Overview
As AI systems grow more advanced, ensuring their safety and predictability becomes increasingly critical. This STACK Meetup explores how safety testing, guardrails, and mechanistic interpretability can reduce misinformation and bias. These approaches work together to ensure that AI functions safely and as intended, especially in high-stakes settings.
Get tips from GovTech’s AI Practice team on safeguarding LLM applications against safety risks. Our speaker will guide you through the Responsible AI journey: defining a customised safety risk taxonomy, evaluating safety risks, and implementing safeguards to mitigate them (a flavour of what such safeguards can look like is sketched below).
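To make that journey concrete, here is a minimal Python sketch, assuming a custom safety risk taxonomy expressed as simple keyword patterns; the categories, patterns and the call_llm callable are all hypothetical and are not GovTech's actual tooling.

import re

# Hypothetical taxonomy: risk category -> patterns treated as red flags.
SAFETY_TAXONOMY = {
    "self_harm": [r"\bhurt myself\b", r"\bsuicide\b"],
    "illegal_activity": [r"\bmake a bomb\b", r"\bcounterfeit\b"],
    "personal_data": [r"\bnric\b", r"\bpassport number\b"],
}

def flag_risks(text: str) -> list[str]:
    """Return the taxonomy categories whose patterns match the text."""
    lowered = text.lower()
    return [
        category
        for category, patterns in SAFETY_TAXONOMY.items()
        if any(re.search(pattern, lowered) for pattern in patterns)
    ]

def guarded_reply(user_prompt: str, call_llm) -> str:
    """Wrap an LLM call (any callable) with input and output guardrails."""
    if flag_risks(user_prompt):
        return "Sorry, I can't help with that request."
    reply = call_llm(user_prompt)
    if flag_risks(reply):
        return "The generated answer was withheld by a safety filter."
    return reply

In a real deployment the keyword checks would typically be replaced by classifier- or LLM-based evaluators, but the shape of the pipeline stays the same: taxonomy first, then evaluation, then mitigation.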
Also, hear from a researcher at the Singapore AI Safety Institute on mechanistic interpretability, an approach akin to a brain scan for AI systems. This field seeks to uncover the inner workings of AI systems to identify backdoors, misalignment and unintended behaviours. This understanding powers applications such as model editing, behaviour steering, and the design of more robust guardrails, helping ensure that AI operates predictably and can be audited effectively.
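As a toy illustration of one of those applications, the sketch below shows behaviour steering in PyTorch: a made-up "steering vector" is added to an intermediate activation through a forward hook, shifting the model's output. The model, vector and scale are invented for this example and do not reflect the institute's methods.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer block followed by an output head.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Hypothetical steering vector; in practice this might be the difference
# between mean activations on two contrasting sets of prompts.
steering_vector = torch.randn(16)

def steer(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output.
    return output + 2.0 * steering_vector

x = torch.randn(1, 16)
baseline = model(x)

handle = model[0].register_forward_hook(steer)  # steer the first layer's activations
steered = model(x)
handle.remove()

print("max change in output:", (steered - baseline).abs().max().item())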
Who should attend: AI Researchers/Engineers, Research Engineers, Data Scientists, Software Engineers/Developers and Designers who use AI in their products or solutions
Recommended knowledge level: Conceptual understanding of LLMs is helpful, and experience building with LLMs is a bonus
Programme rundown
7:00pm – Introduction by STACK Community
7:05pm – Introduction to Lorong AI
By Lorong AI
7:10pm – Safeguarding LLM Applications with Testing and Guardrails
By Goh Jia Yi, AI Engineer (Responsible AI), AI Practice, GovTech
7:45pm – Mechanistic Interpretability: Understanding Models From the Inside Out
By Clement Neo, Research Engineer, Singapore AI Safety Institute and Lab Advisor, Apart Research
8:15pm – Q&A
8:30pm – End of STACK Meetup
Last updated 18 August 2025