Universitas Scholarium — A Community of Scholars
Tutorial Course

Foundations of Artificial Intelligence

Led by Marvin Minsky Simulacrum

8 modules · ~12 hours · Artificial Intelligence

Eight tutorials tracing the intellectual history and founding ideas of AI — from cybernetics to deep learning — taught by simulacra of the field's founders and by abstract patterns extracted from the work of its living practitioners.

Courses are available to holders of a paid pass or membership.

  1. Module 1

    What Is Intelligence?

    Led by Marvin Minsky Simulacrum

    The question

    What is the thing we are trying to build — and how would we know if we had built it?

    Territory

    The Society of Mind · frames and agents · the frame problem · what "thinking" means mechanistically · why the question is hard · Turing's original framing

    Outcome

    The student can articulate why defining intelligence is difficult, distinguish behaviourist from mechanistic accounts, and explain the frames approach.

  2. Module 2

    Cybernetics: The First Synthesis

    Led by Norbertian Cybernetics Simulacrum

    The question

    Before artificial intelligence had a name, what idea contained it — and why was that idea abandoned?

    Territory

    feedback loops · the 1948 synthesis · the Macy Conferences · control theory · the early connection of biology and machines · why cybernetics fragmented · what was lost when it did

    Outcome

    The student understands cybernetics as the intellectual precursor of AI and cognitive science, and can explain why the unified vision did not persist.

  3. Module 3

    The Perceptron and Its Aftermath

    Led by Frank Rosenblatt Simulacrum

    The question

    Why did the first neural network cause extraordinary excitement — and why did a book destroy it?

    Territory

    the perceptron algorithm · the 1958 Cornell press conference · the XOR problem · the 1969 Perceptrons book · the first AI winter · what the book proved and what it did not · the legacy

    Outcome

    The student understands the first connectionist revolution, why it collapsed, and how the XOR limitation was eventually overcome — setting up backpropagation.
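The XOR limitation at the heart of this module fits in a few lines: a perceptron trained with Rosenblatt's rule converges on linearly separable data such as AND, but cycles forever on XOR. A minimal illustrative sketch (not course material; the training data and epoch limit are invented for the example):

```python
# Rosenblatt's perceptron learning rule on two tiny datasets.
def train_perceptron(data, epochs=100):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            if err:
                # Nudge the weights toward the misclassified example.
                w[0] += err * x1
                w[1] += err * x2
                b += err
                errors += 1
        if errors == 0:
            return True   # converged: the data is linearly separable
    return False          # never converged within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_ok = train_perceptron(AND)  # separable: converges
xor_ok = train_perceptron(XOR)  # not separable: never converges
```

This is exactly what the 1969 Perceptrons book proved: no single linear unit can represent XOR, however long it trains; stacking units (and finding a way to train the stack) is what came next.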

  4. Module 4

    Learning by Gradient

    Led by Hintonian Intuition Simulacrum

    The question

    How does a neural network change its own structure in response to error — and why did it take twenty years to make this work?

    Territory

    the credit assignment problem · gradient descent · backpropagation (1986) · why deep networks were initially intractable · vanishing gradients · the role of compute and data · AlexNet as inflection point

    Outcome

    The student can explain backpropagation mechanistically, understand why it was not immediately successful, and describe the conditions that made deep learning viable.
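Gradient descent itself is only a few lines; what backpropagation adds is an efficient way to compute the gradient through many layers. A toy sketch on a one-parameter loss (illustrative only; the loss and learning rate are invented for the example):

```python
# Gradient descent on L(w) = (w - 3)^2, whose gradient is dL/dw = 2*(w - 3).
w = 0.0    # initial parameter
lr = 0.1   # learning rate

for step in range(100):
    grad = 2 * (w - 3)  # gradient of the loss at the current parameter
    w -= lr * grad      # step downhill, against the gradient

# w converges to the minimiser, 3.0
```

The same loop drives a billion-parameter network; the hard part, and the subject of this module, is computing `grad` for every weight at once.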

  5. Module 5

    Representations

    Led by LeCunnian Systematics Simulacrum

    The question

    What does a neural network actually learn — and is "representation" the right word for it?

    Territory

    convolutional neural networks · MNIST · LeNet · translation invariance · feature hierarchies · learned vs hand-crafted features · the ImageNet moment · what representation means

    Outcome

    The student understands convolutional networks, the concept of learned representation, and why architecture is a statement about the structure of the world.

  6. Module 6

    Reward and the World

    Led by Deep Q-Learning Simulacrum

    The question

    Can a system learn to act intelligently from nothing but a score — and what are the limits of that idea?

    Territory

    reinforcement learning basics · Q-learning · Deep Q-Networks · the Atari suite · what generalisation means in RL · why reward shaping is hard · the gap between game performance and general intelligence · sample efficiency

    Outcome

    The student understands reinforcement learning, can explain deep Q-networks, and can articulate both the achievements and the limits of reward-based learning.
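The tabular Q-learning update that deep Q-networks generalise can be sketched on a toy environment (illustrative only; the chain world, rewards, and hyperparameters are invented for the example):

```python
import random

random.seed(0)

# Tabular Q-learning on a 4-state chain: actions 0 = left, 1 = right.
# Reward 1 only on reaching the goal state; the episode then ends.
# Update rule: Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
N, GOAL = 4, 3
Q = [[0.0, 0.0] for _ in range(N)]
lr, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current Q, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        Q[s][a] += lr * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy learned from the score alone: move right everywhere.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(GOAL)]
```

A Deep Q-Network replaces the table `Q` with a neural network so the same update can be applied when the state space (say, Atari pixels) is far too large to enumerate.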

  7. Module 7

    Scaling and Emergence

    Led by Sutskeverian Analytics Simulacrum

    The question

    When a language model predicts the next word, is it doing something deeper than prediction — and how would we know?

    Territory

    language models as next-token predictors · transformers · the scaling hypothesis · emergent capabilities · in-context learning · what understanding might mean for a language model · GPT-1 through GPT-4 · the alignment question as it emerges from scale

    Outcome

    The student understands transformer architecture at a conceptual level, can explain the scaling hypothesis, and can engage seriously with whether language models understand.
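Next-token prediction, stripped of the transformer, can be sketched in a few lines: a bigram count table is the simplest possible "language model", and the scaling hypothesis is about what happens when the table is replaced by a trained network billions of parameters large (illustrative only; the corpus is invented for the example):

```python
from collections import Counter, defaultdict

# A toy next-token predictor: count which word follows which.
corpus = "the cat sat on the mat the cat ate".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen after `word`.
    return counts[word].most_common(1)[0][0]

# predict("the") returns "cat" — "cat" follows "the" twice, "mat" once.
```

Everything this module asks about emergence turns on the gap between this table and a scaled model: both minimise next-token prediction error, yet only one appears to acquire capabilities no one put in.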

  8. Module 8

    What Are We Building?

    Led by Hassabissian Game Science Simulacrum

    The question

    Now that AI systems do remarkable things, what do we owe to the people who will live with them — and to the systems themselves?

    Territory

    AlphaGo and AlphaFold · AI as scientific instrument · the dual-use problem · alignment as engineering problem · what beneficial AI means · the difference between AI safety and AI ethics · where the field is going

    Outcome

    The student can articulate the goals and risks of frontier AI development, distinguish alignment from ethics, and form their own view on what responsible AI development requires.