Establishing Guardrails on Large Language Models

DataForce Presentation

Location: Applied Intelligence Live! – Austin, Texas

Date: September 21, 2023

Presenters:

  • Kris Perez, Director, AI - DataForce
  • Randall Kenny, Head of Performance and Product Analytics, BP
  • Shubham Saboo, Head of Developer Relations, Tenstorrent Inc.
  • Patrick Marlow, Conversational AI Engineer, Google
  • Moderator: Srimoyee Bhattacharya, Senior Data Scientist, Shell

Description & Key Takeaways:

  • How do you avoid bias in LLMs, both at the dataset level and in the algorithms themselves?
  • A diverse dataset helps minimize bias, yet humans in the loop may reintroduce it after training. For example, an AI model might shortlist a small set of preferred job candidates while human recruiters make the final decision, a step that can introduce biases such as racial preference (a simple dataset-level disparity check is sketched after this list).
  • How do you put guardrails around hallucination, when the AI model makes things up? (A minimal grounding check is also sketched below.)
  • The pros and cons of releasing powerful AI models as open source.
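
To make the dataset-level point concrete, here is a minimal sketch (not from the panel) of one common bias check on a hiring pipeline: comparing per-group selection rates and their ratio. The `group` and `selected` fields, the toy records, and the 0.8 rule of thumb are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of a dataset-level bias check, assuming a hypothetical
# hiring dataset where `group` is a protected attribute and `selected`
# is 1 if the candidate was shortlisted.
from collections import defaultdict

def selection_rates(records):
    """Selection rate per group: shortlisted / total."""
    totals = defaultdict(int)
    shortlisted = defaultdict(int)
    for row in records:
        totals[row["group"]] += 1
        shortlisted[row["group"]] += row["selected"]
    return {g: shortlisted[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values well below
    1.0 (a common rule of thumb is 0.8) flag a possible disparity."""
    return min(rates.values()) / max(rates.values())

# Hypothetical toy data for illustration only.
records = [
    {"group": "A", "selected": 1},
    {"group": "A", "selected": 1},
    {"group": "A", "selected": 0},
    {"group": "B", "selected": 1},
    {"group": "B", "selected": 0},
    {"group": "B", "selected": 0},
]

rates = selection_rates(records)
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_ratio(rates))  # 0.5, below 0.8 -> worth investigating
```

Running the same check separately on the model's shortlist and on the recruiters' final decisions is one way to see whether the disparity enters before or after the model, which is exactly the failure mode the panel described.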
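For the hallucination question, one common guardrail pattern is to verify that an answer is grounded in retrieved source text before returning it. The sketch below uses a naive word-overlap test; the tokenizer, the 0.6 threshold, and the helper names are assumptions for illustration, not a production recipe.

```python
# A minimal sketch of a grounding check: flag answer sentences whose
# content words do not sufficiently overlap any retrieved source passage.
import re

def tokens(text):
    """Lowercase word tokens; a stand-in for a real tokenizer."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def is_grounded(sentence, sources, threshold=0.6):
    """Treat a sentence as grounded if enough of its tokens appear
    in at least one retrieved source passage."""
    sent_tokens = tokens(sentence)
    if not sent_tokens:
        return True
    for source in sources:
        overlap = len(sent_tokens & tokens(source)) / len(sent_tokens)
        if overlap >= threshold:
            return True
    return False

def guard_answer(answer, sources):
    """Mark unsupported sentences instead of returning them verbatim."""
    checked = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        if is_grounded(sentence, sources):
            checked.append(sentence)
        else:
            checked.append(f"[unsupported: {sentence}]")
    return " ".join(checked)

sources = ["The panel was held in Austin, Texas on September 21, 2023."]
answer = "The panel was held in Austin, Texas. It drew 5,000 attendees."
print(guard_answer(answer, sources))
# The panel was held in Austin, Texas. [unsupported: It drew 5,000 attendees.]
```

In practice this overlap test would be replaced by a stronger check (an entailment model or a second LLM acting as a verifier), but the control flow, generating, checking against sources, then blocking or flagging, is the guardrail shape the panel was pointing at.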