STACKx Data & AI 2023 | Singapore Government Developer Portal

Pre-event Workshops


Monday (17/07/2023)

01:30 PM - 03:00 PM

Master how you can speak to AI with us! In this session, you will:

- Get a sneak preview of the latest AI-empowered features of GovTech's newest platform, LaunchPad, and how it can empower you to use AI
- Understand the fundamental principles and concepts of prompt engineering, including the anatomy of an engineered prompt, and apply the CO-STAR methodology to supercharge your prompts
- Learn how to apply the prompt engineering mindset to create effective prompts for various use cases, be it summarisation, classification, rewriting, generation, and more
- Develop best practices and learn tips and tricks that help your prompts stand out from your peers
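The CO-STAR structure mentioned above can be sketched as a simple prompt template. The field names below follow the commonly cited expansion of the acronym (Context, Objective, Style, Tone, Audience, Response format); the helper function and example values are illustrative, not part of LaunchPad or any GovTech library.

```python
# Minimal sketch of assembling a prompt with the CO-STAR structure.
# The build_costar_prompt helper is hypothetical, for illustration only.

def build_costar_prompt(context, objective, style, tone, audience, response):
    """Join the six CO-STAR sections into one engineered prompt."""
    sections = [
        ("# CONTEXT", context),
        ("# OBJECTIVE", objective),
        ("# STYLE", style),
        ("# TONE", tone),
        ("# AUDIENCE", audience),
        ("# RESPONSE", response),
    ]
    return "\n\n".join(f"{header}\n{body}" for header, body in sections)

prompt = build_costar_prompt(
    context="You are given a three-page policy circular on remote work.",
    objective="Summarise the circular in five bullet points.",
    style="Plain, factual civil-service writing.",
    tone="Neutral and professional.",
    audience="Public officers with no prior context.",
    response="A bullet list, one point per line.",
)
print(prompt)
```

Keeping each concern in its own labelled section makes the prompt easy to review and reuse across use cases such as summarisation or rewriting.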

Public Officers only

Speaker(s)

Ms Chan Li Shing, Product Manager, Data Science & Artificial Intelligence Division (DSAID), GovTech
Mr Vincent Ng, AI Engineer, Data Science & Artificial Intelligence Division (DSAID), GovTech

03:15 PM - 04:45 PM

Discover the different ways to acquire data. Unlock the full potential of Computer Vision and understand how to extract valuable insights from your data. Join our workshop to learn the latest techniques in data acquisition, deep learning models, and no-code development software. Harness the power of Computer Vision to transform the way we work, play and live!

Public Officers only

Speaker(s)

Ms Lee Ning Sung, Assistant Director, Data Science & Artificial Intelligence Division (DSAID), GovTech
Mr Suresh Kumar, Product Manager, Data Science & Artificial Intelligence Division (DSAID), GovTech

01:30 PM - 03:00 PM

Vision-and-language research has seen much success recently, enabling improved performance in many downstream multimodal AI applications. This talk will introduce the efforts from Salesforce Research in advancing state-of-the-art vision-and-language AI from two perspectives: library development and fundamental research. For the library, we introduce LAVIS (4.3k stars), a one-stop solution for vision-language research and applications. LAVIS is a central hub that supports 40+ vision-language models with a unified interface for training and inference. For research, we introduce our line of research work including ALBEF, BLIP, BLIP-2, and the latest InstructBLIP. In particular, this talk will focus on BLIP-2, a generic vision-language pre-training method that enables frozen LLMs to understand images.

Open to all

Speaker(s)

Dr Li Junnan, Senior Research Manager, Salesforce
Dr Li Lu, Data Scientist (II), Data Science & Artificial Intelligence Division (DSAID), GovTech

03:15 PM - 04:45 PM

Large Language Models (LLMs) are powerful tools with game-changing capabilities. We will introduce LLMs, their use cases, strengths, and limitations. LLMs come in various sizes, and some of them can be trained and hosted locally. We will discuss model selection, emphasising that size isn't everything, and explore smaller LLMs like LLaMA, Alpaca, Vicuna, and Dolly. We will focus on demonstrating how to effectively fine-tune an LLM by leveraging parameter-efficient techniques such as LoRA through libraries like PEFT, with a hands-on session.
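A back-of-envelope calculation shows why LoRA-style parameter-efficient fine-tuning is attractive for the smaller, locally hosted models discussed above: instead of updating a full d_out x d_in weight matrix, LoRA trains two low-rank factors B (d_out x r) and A (r x d_in). The hidden size and rank below are illustrative assumptions, not tied to any specific model from the workshop.

```python
# Illustrative comparison of trainable parameter counts for one linear
# layer: full fine-tuning vs. a LoRA adapter of rank r.

def full_params(d_in: int, d_out: int) -> int:
    """Weights updated when fine-tuning the full d_out x d_in matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Weights in the low-rank factors B (d_out x r) and A (r x d_in)."""
    return r * (d_in + d_out)

d = 4096   # hypothetical hidden size of one attention projection
r = 8      # a typical small LoRA rank

full = full_params(d, d)      # 16,777,216 weights
lora = lora_params(d, d, r)   # 65,536 trainable weights
print(f"LoRA trains {lora / full:.4%} of the layer's parameters")
# → LoRA trains 0.3906% of the layer's parameters
```

Because only the small adapter matrices receive gradients, fine-tuning fits on far more modest hardware, which is what makes hands-on sessions with locally hosted LLMs practical.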

Open to all

Speaker(s)

Dr Watson Chua, Lead Data Scientist, Data Science & Artificial Intelligence Division (DSAID), GovTech
Dr Li Lu, Data Scientist (II), Data Science & Artificial Intelligence Division (DSAID), GovTech
* Please note that the programme may be subject to change without prior notice


Last updated 09 June 2023

