Most discourse operates inside a set of assumptions it never examines. The debate, the take, the argument — all conducted without noticing the ground they stand on.
This is what happens when you look at the ground first.
Paste any text. An argument, a strategy doc, a pitch, a decision you are about to make. Get back what it implicitly assumes but never states.
Not a summary. Not an evaluation. The implicit structure underneath — the things that must be true for the text to hold, which the author never examined because they treated them as obvious.
Take an input like: "We need better AI alignment to ensure AI systems do what humans want."

What it assumes but never states:
1. Human values are stable enough to serve as a target
2. "What humans want" is a coherent, aggregable quantity
3. The problem is technical, not structural
4. "AI" and "human" are categories with boundaries clear enough to define a relationship between them