Unsupervised and Unstable: A Safety Audit of Human Intelligence
We have spent years auditing artificial intelligence. We have written white papers, formed safety boards, and convened international summits to debate the existential risks of systems that hallucinate, that repeat patterns without understanding, that cannot be fully explained even by their creators. Yet nobody is auditing Human Intelligence. This is a catastrophic oversight. If we apply the same standards we use to evaluate AI systems to the biological substrate currently running our hospitals, legal systems, financial markets, and nuclear arsenals, only one conclusion is possible: Human Intelligence (HI) is fundamentally unsafe, and we have been running it in production since the Pleistocene with zero safety reviews. ...