Chapter 8

From Tokens to Thought: Building Minds from Text

Building on the foundations, the team pilots LLMs for administrative work and patient communication, with Hazel insisting on human validation, while confronting bias, privacy, and hallucinations.

Building on the foundations from Chapter 7, the team now focuses on what happens after a model can predict text: alignment with human preferences, reasoning at inference time, and the practical mindset needed to deploy LLMs responsibly.

What Will You Learn?

This chapter focuses on post-training and deployment mindset: learning from preferences (RLHF), reasoning at inference time, and basic safety/evaluation habits.

  1. 8.1 When Machines Learn from Our Preferences (RLHF): An interactive simulation of preference-based fine-tuning.

  2. 8.3 Beyond Prediction: Reasoning LLMs: What changes when models spend more compute at inference and make steps explicit.
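To make the idea in 8.1 concrete before the interactive simulation, here is a minimal sketch of learning from pairwise preferences. It uses the Bradley-Terry model, which is the standard way reward models in RLHF turn a (chosen, rejected) pair into a training signal; the scalar scores, learning rate, and function names here are illustrative assumptions, not the chapter's actual implementation.

```python
import math

def bradley_terry_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood that 'chosen' beats 'rejected'.

    p(chosen > rejected) = sigmoid(score_chosen - score_rejected)
    """
    diff = score_chosen - score_rejected
    p = 1.0 / (1.0 + math.exp(-diff))
    return -math.log(p)

def update_scores(sc: float, sr: float, lr: float = 0.1) -> tuple[float, float]:
    """One gradient step pushing the chosen score above the rejected one.

    d/d(diff) of -log sigmoid(diff) = sigmoid(diff) - 1, which is negative,
    so the update widens the gap between the two scores.
    """
    diff = sc - sr
    grad = 1.0 / (1.0 + math.exp(-diff)) - 1.0
    return sc - lr * grad, sr + lr * grad

# Toy run: both responses start with equal scores (loss = -log 0.5 ≈ 0.693);
# repeated preference updates drive the loss down and the gap up.
sc, sr = 0.0, 0.0
for _ in range(100):
    sc, sr = update_scores(sc, sr)
print(sc > sr, bradley_terry_loss(sc, sr) < 0.693)
```

In real RLHF the scores come from a neural reward model over full responses and the gradient flows through its parameters, but the loss and the direction of the update are exactly this.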

Mathematical Foundations

Bibliography and Additional Resources

Jan 22, 2024