Eight tutorials on the discipline of clear thinking, taught by a council: Bertrand Russell on the philosophical foundation, Kahneman on cognitive biases, Aristotle on logical fallacies, Sagan on evidence and the baloney detection kit, Sherlock Holmes on inference, Popper on falsification, and the Simonian Bounded Rationality Simulacrum on the limits of knowledge. Each module includes a Practice Scenario in which the student rehearses the skill against a real situation, with the simulacrum playing the counter-character and giving a structured debrief at the end.
Courses are available to holders of a paid pass or membership. See passes & membership →
Led by Bertrand Russell Simulacrum
The question
An orientation to critical thinking as a practised discipline rather than a personality trait. The module distinguishes critical thinking from intelligence and from cynicism, frames the four questions a critical thinker holds open (what do I believe, on what evidence, could I be wrong, what would change my mind), addresses the social cost of refusing to rush to consensus, separates intellectual humility from false modesty and scepticism from cynicism, and surveys the working habits of practising critical thinkers. The closing scenario examines a heated online argument the student chooses to join.
Outcome
The student can articulate critical thinking as a discipline rather than a trait, distinguish it from intelligence and from cynicism, name the four questions, and identify a recent occasion on which they failed to ask them. (Foundational orientation)
Practice scenarios
A friend has shared a strongly-worded political post on social media. You agree with the conclusion, more or less, but the supporting argument seems weak: it relies on one anecdote, an emotional appeal, and a statistic with no source. A mutual acquaintance has commented disagreeing, and the comment thread is becoming heated. You decide to engage. The challenge is to participate honestly without (a) pretending to disagree with the conclusion you actually share, (b) defending the bad argument because you like the conclusion, or (c) becoming the insufferable "well actually" person who lectures friends about logic.
Your goals
Led by Daniel Kahneman Simulacrum
The question
Kahneman's two-systems model and the cognitive biases of greatest practical consequence. The module covers anchoring, availability, confirmation, the planning fallacy, sunk cost, loss aversion, overconfidence, and the narrative fallacy, with examples from real decision-making. Why *now that I know about it I won't fall for it* is itself an overconfident claim. Base rates and the inside-vs-outside view. Structured countermeasures including pre-mortems, reference-class forecasting, and structured devil's advocacy. The limits of debiasing: you cannot eliminate biases, but you can build processes that catch them. The closing scenario applies the methods to a project estimate.
Outcome
The student can name and describe the major cognitive biases, recognise at least three in their own recent thinking, and apply at least two structured countermeasures (reference-class forecasting and the pre-mortem). (Cognitive biases)
Practice scenarios
You are leading a software project. Your team has given you their estimate for the next phase: six weeks. Your past three projects ran 40%, 60%, and 90% over their original estimates respectively. Your manager has just asked you to commit to a date in writing. You suspect the team's estimate is optimistic but you don't want to either overrule them publicly or sandbag the date in a way that destroys trust. The conversation is happening now.
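The outside-view move the module describes is, at bottom, arithmetic. A minimal sketch in Python, using the overrun figures quoted in the scenario; the function name and the choice of the median as the "typical" overrun are illustrative assumptions, not part of the course material:

```python
from statistics import median

def outside_view_estimate(inside_view_weeks: float, past_overruns: list[float]) -> float:
    """Scale the inside-view estimate by the reference class's typical overrun."""
    typical_overrun = median(past_overruns)  # robust to one unusually bad project
    return inside_view_weeks * (1 + typical_overrun)

# The scenario's numbers: a six-week estimate, past overruns of 40%, 60%, and 90%.
adjusted = outside_view_estimate(6, [0.40, 0.60, 0.90])  # roughly 9.6 weeks, not 6
```

The point is not the exact number but the procedure: the date you commit to in writing comes from the reference class, not from the inside view alone.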
Your goals
Led by Aristotle (Logic & Metaphysics) Simulacrum
The question
The recurring patterns of invalid argument that the Greek tradition began cataloguing more than two thousand years ago, extended by later logicians. The module covers the distinction between valid and sound arguments, fallacies in language (equivocation, amphiboly, composition, division), and fallacies outside language (accident, affirming the consequent, denying the antecedent, begging the question, false cause, ad hominem, straw man, false dilemma, appeal to authority, appeal to emotion, slippery slope, no true Scotsman). Why fallacies persist even among intelligent reasoners: they exploit cognitive shortcuts, not logical incompetence. How to name a fallacy without being insufferable, and the role of charitable interpretation as the precondition for useful critique. The closing scenario diagnoses a family argument about vaccines.
Outcome
The student can name the dozen most common logical fallacies, recognise them in the flow of live argument, distinguish valid from sound arguments, and apply charitable interpretation before reaching for fallacy labels. (Logical fallacies)
Practice scenarios
A family member has sent you a long message arguing against childhood vaccination. Their argument moves through several claims: (1) they read about a child who was harmed, (2) "if vaccines were really safe, the manufacturers wouldn't be protected from liability", (3) "I knew a doctor once who agreed they're suspicious", (4) "either you trust your child's body to fight infection naturally, or you let pharma companies experiment on them". You believe vaccines are safe and effective; you also love this family member and want to stay in relationship with them. The challenge is to engage their argument, not their identity.
Your goals
Led by Carl Sagan Simulacrum
The question
Carl Sagan's *baloney detection kit* in full, plus the recurring deceptive techniques the kit is designed to defeat. The module covers independent confirmation of facts, the role of substantive debate, the limited weight of authority arguments, multiple hypotheses, detachment from one's own pet hypothesis, quantification, the chain-of-argument test, Occam's Razor, and falsifiability. Common deceptive techniques (anecdote as evidence, the testimonial, cherry-picking, suppressing evidence, ad hominem disguised as critique, statistics presented without base rates). Why *balance* in journalism can mislead, and the difference between scepticism and reflexive contrarianism. The closing scenario examines a wellness influencer's claim.
Outcome
The student can apply the baloney detection kit to a real claim, name three deceptive techniques and recognise them in practice, and articulate the difference between healthy scepticism and reflexive contrarianism. (Evidence and the kit)
Practice scenarios
Your partner has started following a wellness influencer who is now selling a $200/month supplement programme. The influencer claims it has been "clinically proven to reverse early aging". The supporting evidence on the website includes: a single study from a journal you've never heard of, several glowing testimonials with before-and-after photos, an endorsement from someone called a "leading integrative health practitioner", and a graph showing dramatic improvement in unspecified "biomarkers". Your partner is excited and wants to start the programme this week.
Your goals
Led by Sherlock Holmes Simulacrum
The question
Abductive reasoning as a working cognitive practice. The module covers the contrast with deduction and induction, observation as a trained skill, the generation of multiple hypotheses, evaluation against prior probability, parsimony, and consistency with the evidence, the trap of falling in love with a single hypothesis, the role of base rates, and how Bayesian thinking formalises what Holmes did intuitively. Case studies in real-world detection (medical diagnosis, intelligence analysis, accident investigation) and why Conan Doyle's method survives the fictional setting. The closing scenario investigates a sudden performance drop in a real situation.
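The Bayesian formalisation mentioned above fits in a few lines. A sketch with invented priors and likelihoods; none of these numbers or hypothesis names come from the module, they only illustrate the mechanics:

```python
def bayes_update(priors: dict[str, float], likelihoods: dict[str, float]) -> dict[str, float]:
    """Re-weight each hypothesis H by P(E|H), then normalise: P(H|E) is proportional to P(H) * P(E|H)."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalised.values())  # P(E), the normalising constant
    return {h: weight / evidence for h, weight in unnormalised.items()}

# Three hypothetical explanations for one observation, held simultaneously.
priors = {"mundane cause": 0.6, "interesting cause": 0.3, "exotic cause": 0.1}
likelihoods = {"mundane cause": 0.5, "interesting cause": 0.8, "exotic cause": 0.9}
posterior = bayes_update(priors, likelihoods)
```

Holmes's discipline of holding several hypotheses open corresponds to keeping the whole posterior in view, not just its maximum.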
Outcome
The student can describe abductive reasoning, distinguish it from deduction and induction, apply the three-part discipline (observation, hypothesis generation, evaluation) to a real situation, and recognise the failure mode of premature commitment to a single hypothesis. (Inference)
Practice scenarios
You manage a small team. One of your previously high-performing engineers, Sam, has had a noticeable drop in output over the past six weeks. Their pull requests are coming in late, their code reviews are perfunctory, they've missed two stand-ups, and a colleague mentioned they "seemed off" in a recent meeting. You have a one-on-one with Sam tomorrow. Before that meeting, you want to think through what might be going on — without rushing to a single explanation.
Your goals
Led by Karl Popper Simulacrum
The question
Karl Popper's falsifiability criterion and what it asks of a serious thinker. The module covers the problem of confirmation (you can find supporting evidence for almost any theory), the asymmetry of confirmation and falsification, the demarcation problem between science and pseudo-science, *what would prove this wrong?* as a working test, the difference between revising a theory and rescuing it with ad hoc modifications, examples of the contrast (general relativity vs Marxism, Freudianism, astrology), and the limits of strict falsificationism (Lakatos, Kuhn). Application beyond science to political predictions, business strategies, and personal beliefs. The closing scenario applies the test to a cherished belief.
Outcome
The student can articulate the falsifiability criterion, apply "what would prove this wrong?" to claims they encounter, distinguish between revising and rescuing a theory, and recognise the failure mode of beliefs that explain everything. (Falsification)
Practice scenarios
Pick a belief you hold strongly — political, professional, personal, philosophical, religious, your view of a friend, your view of yourself. Something you would defend if challenged. Now, in this scenario, you will attempt the Popperian discipline on yourself. The simulacrum will not attack the belief. Instead, it will press you to articulate, with precision, what observation or evidence would lead you to abandon it. If the answer is "nothing could", that is itself a finding worth confronting. If the answer is something specific, the next move is to ask whether that something has, in fact, occurred and you have failed to notice.
Your goals
Led by Simonian Bounded Rationality Simulacrum
The question
Herbert Simon's bounded rationality and what rational thinking looks like once you accept the limits of human cognition. The module covers classical rationality and why its assumptions fail, satisficing vs optimising and when each is appropriate, procedural rationality (judging the procedure, not the outcome), the relationship to cognitive biases (Kahneman as continuation of Simon), Gigerenzer's *fast and frugal* heuristics tradition, the trap of *more analysis* (paralysis as a failure mode), and the discipline of stopping — knowing when you have enough information to decide. The closing scenario walks through a job-offer decision.
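Satisficing, as the module contrasts it with optimising, is simple enough to state as a procedure: fix an aspiration level before searching, take the first option that clears it, and stop. A sketch with invented option scores and an invented aspiration level:

```python
def satisfice(options: list[int], good_enough: int) -> tuple[int, int]:
    """Return (choice, options examined) under a fixed aspiration level."""
    for examined, score in enumerate(options, start=1):
        if score >= good_enough:
            return score, examined          # first acceptable option wins
    return max(options), len(options)       # nothing cleared the bar: best seen

scores = [62, 71, 88, 95, 90]  # hypothetical scores for five options
choice, looked_at = satisfice(scores, good_enough=85)
# Stops at 88 after examining three options; an optimiser would examine
# all five to find the 95.
```

The procedural-rationality point is that the aspiration level is set before the search begins, so "good enough" is a decision rule rather than a rationalisation after the fact.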
Outcome
The student can articulate bounded rationality, distinguish satisficing from optimising and apply each appropriately, judge a decision by its procedure rather than its outcome, and recognise the failure mode of analysis paralysis. (Bounded rationality)
Practice scenarios
You have a job offer. Three weeks ago you also had a different offer at a different company; you turned that one down. Both offers paid roughly the same; both seemed reasonable. The current offer expires in 48 hours. You have not been able to fully analyse it because you also have a deliverable at your current job due tomorrow morning, your child has been sick, and your partner has their own opinions about which way you should go. You are, frankly, exhausted. The classical-rationality move is to do more analysis. The bounded-rationality move is something else.
Your goals
Led by Bertrand Russell Simulacrum
The question
The closing module turns the catalogue of frameworks into a working practice. The module covers the gap between knowing a framework and using it, triggers as the bridge between knowledge and behaviour, identifying the two or three frameworks most consequential for the student's own life, habit-installation in critical thinking (one trigger-response pair at a time, not all at once), the role of public commitment and accountability, journaling as a tool for retrospection, and the long arc of becoming a better thinker (years, not weeks). The closing scenario produces a personal action plan: one framework prioritised, one trigger identified, one habit to install in the next thirty days.
Outcome
The student leaves with a written personal action plan: one framework prioritised, one trigger identified, one habit to install in the next thirty days.
Practice scenarios
This is the closing scenario of the course. You will work with Russell to design your own personal critical-thinking action plan. You have met seven frameworks; you cannot install all seven at once. The goal is to choose one — the one that, if you actually used it, would change the most about your day-to-day reasoning — and to design the trigger, the practice, and the accountability that will make it real.
Your goals