Artificial Brains Trilogy

Three complementary books to understand how AI works — rigorous, intuitive, no hand‑waving.

The Artificial Brains series presents a comprehensive and humanistic approach to understanding artificial intelligence through three complementary volumes. Each book addresses AI from a different but interconnected perspective, creating a complete educational ecosystem that prepares readers not just to use AI, but to understand, question, and shape its role in society.

If AI feels like magic, it’s because you haven’t seen the mechanism yet. This trilogy dismantles the “black box” piece by piece: from text to decisions and then to images, audio, and video. The goal is not recipes — it’s intuition you can prove. This is how magic becomes science.

Rigor (without intimidation)

Math and CS when needed, explained with clarity and intent.

Intuition (that holds up)

Mental models to understand why each piece works, not just what it does.

Interactivity (to touch the theory)

The website complements the book with demos, widgets, and visualizations.

Book 1

How Machines Think

From foundations to LLMs: why text “starts making sense”

[Cover: "How Machines Think"]

A technical story that takes you to the heart of large language models: tokenization, embeddings, attention, and Transformers. It starts with solid foundations (so nothing is “magic”) and ends with the mechanisms that let an LLM understand and generate text.

  • From tokens to meaning: representations and geometry
  • Attention: the mechanism that binds context
  • Transformers: the architecture that scales language
  • Theory to practice: examples, demos, judgment
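To make "attention binds context" concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a Transformer. The vectors are toy values invented for illustration, not examples from the book:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query mixes the value vectors, weighted by its
    similarity to every key: this is how context gets bound."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

# Three toy "token" embeddings of dimension 4, used as Q, K, and V at once
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
print(w.round(2))  # row i = how much token i attends to each token
```

Each row of the weight matrix is a probability distribution over the sequence, which is why attention composes cleanly across layers.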
Explore Book 1 →
Buy on Amazon (available now)
Book 2

Learning Without a Teacher

Everyday AI: recommendations, segmentation, and learning from feedback

How does a machine learn when nobody hands it “correct” labels? This volume focuses on learning from real-world signals: patterns in unlabeled data, customer/product segmentation, recommendations from behavior, and decisions that improve from feedback.

  • Recommenders: learning from clicks, purchases, and implicit preferences
  • Clustering: segmenting users and discovering structure
  • Anomalies: finding rare events without explicit labels
  • Explore vs exploit: improving with feedback in live systems
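The explore-vs-exploit idea can be sketched with an epsilon-greedy bandit, a standard baseline for learning from feedback without labels. The success rates below are invented for the demo:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, steps=5000, seed=42):
    """Toy multi-armed bandit: mostly play the best-known option
    (exploit), but sometimes try another one at random (explore)."""
    rng = random.Random(seed)
    n = len(true_rates)
    counts = [0] * n      # how many times each arm was played
    values = [0.0] * n    # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.2, 0.5, 0.8])
print(counts)  # plays concentrate on the best arm, found purely from feedback
```

Nobody told the system which arm is "correct"; the estimates converge to the true rates because exploration keeps every option occasionally sampled.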
See progress →
In development
Book 3

Eyes and Ears

When AI learns to perceive and create: images, audio, video

What does it mean to “understand” an image? How do we generate a voice, a video, a scene? This volume explores the leap to multimodality: models that align text‑image‑audio and learn latent spaces where creation becomes possible.

  • Vision + audio + language: multimodal alignment
  • Generative models: from noise to image/sound
  • Latents: compression, meaning, control
  • Accessibility: AI as a sensory bridge
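A toy sketch of multimodal alignment: in a shared embedding space (as in CLIP-style models), the right caption for an image is the one whose vector points in the same direction. All vectors and captions here are made up for illustration:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: how closely two embedding directions agree."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional shared embedding space. In a real model,
# separate text and image encoders are trained so matching pairs land close.
text_emb = {
    "a photo of a dog": np.array([0.9, 0.1, 0.0, 0.2]),
    "a piano melody":   np.array([0.0, 0.8, 0.6, 0.1]),
}
image_emb = np.array([0.85, 0.15, 0.05, 0.25])  # pretend dog-photo embedding

scores = {caption: cosine(vec, image_emb) for caption, vec in text_emb.items()}
best = max(scores, key=scores.get)
print(best)  # the caption whose embedding aligns best with the image
```

The same geometric trick underlies retrieval, captioning, and conditioning generation on text: once everything lives in one space, "understanding" becomes measuring distances.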
See progress →
In development

One question, three answers

The trilogy answers one question: how does AI work on the inside? To answer it honestly, we touch math and computer science — guided by a story and grounded intuition.

  • Book 1 — LLMs: how they understand and generate text
  • Book 2 — Everyday AI: recommendations, segmentation, and learning from feedback
  • Book 3 — Multimodal: how models “see”, “hear”, and create (image, audio, video)

You can read each book independently, but together they form a coherent map: language, autonomous learning, and multimodal creation.

Apr 18, 2025