Where Are the Better Angels of Our Nature? A Warning Written in Fire and Code
- Jacqueline Noguera
- Mar 5

From Apollo 1 to artificial intelligence: the cost of moving too fast has always been paid by others.
In January 1967, three men climbed into a capsule and never came home.

Gus Grissom. Ed White. Roger Chaffee. They did not die in space. They died on the launchpad, during a routine test, in a fire that consumed the Apollo 1 command module in seconds. The hatch opened inward. The internal pressure made it impossible to open from the inside. The pure oxygen atmosphere turned the cabin into a furnace. Engineers had raised concerns. There were memos. There were warnings.

There was institutional knowledge that something was wrong, knowledge that existed in the minds and the margins of the people closest to the work. That knowledge was overruled. The schedule mattered. The prestige of beating the Soviets mattered. The budget mattered. And then three men were gone, and all of it mattered very differently.
We Have Been Here Before
I've been thinking about Apollo 1 a great deal lately. Not because I am a historian, but because I am watching something familiar unfold: the same quiet calculus, the same institutional pressure, the same dismissal of the people who know the system best. We are in a race again. Not to the Moon this time, but to an AI-transformed economy. And the people making the decisions, the ones controlling the schedules, the budgets, and the headcount, are making the same category of mistake that has cost us before. They are treating the human as an afterthought.
In the early days of Project Mercury, the astronaut was almost an afterthought in the spacecraft design. Engineers argued about whether a human was even necessary, whether the capsule could simply be automated, the pilot redundant. It took the insistence of the astronauts themselves, particularly John Glenn and Alan Shepard, to demand a window, a manual override, the ability to actually fly the machine they were strapped into. They understood something the engineers designing from a distance did not: that when something goes wrong in ways no one anticipated, you need a human in the loop. You need someone who understands the system not just technically, but intuitively, who can respond to the unexpected with judgment, not just processing. We are making the inverse mistake today. We have the AI. We are removing the human.
The Lessons Written in Tragedy

Challenger. January 1986. The O-rings were known to be a risk in cold temperatures. Engineers at Morton Thiokol fought the night before launch to delay it. They were overruled. The launch window mattered. The cameras were rolling. Seventy-three seconds after launch, seven people died, live on television, in front of schoolchildren who had gathered to watch one of their teachers go to space.

Columbia. February 2003. Foam had struck the wing on ascent. Engineers requested satellite imaging to assess the damage. The request was denied; leadership had determined, without the imaging, that the damage was not a safety concern. Sixteen days later, the shuttle broke apart on reentry. Seven more people were gone.

In both cases, the institutional knowledge existed. The people who understood the system at its deepest level were raising the alarm. In both cases, they were overruled by a combination of schedule pressure, cost consciousness, and an organizational culture that had gradually normalized risk, that had confused "it hasn't failed yet" with "it is safe."

The Rogers Commission, investigating Challenger, found that NASA had developed what they called "an informal chain of command," one that filtered bad news out on its way up and passed reassurance back down. The people at the top were making decisions in an information environment that had been quietly, systematically sanitized. Does any of this sound familiar?
Our Brightest Child
Here is what keeps me up at night. AI is extraordinary. It is, in many ways, the most extraordinary thing our civilization has ever built, a system that can reason, create, synthesize, and in some domains surpass human capability. It is our brightest child. And like all children, what it becomes depends enormously on the environment in which it is raised, the foundations on which it is built, and the wisdom of those responsible for its development.

We are, right now, making decisions about that foundation. And some of those decisions look very much like the decisions made the night before Challenger: expedient, economically rational on paper, and catastrophically short-sighted in ways that will only be visible in retrospect.

When we hollow out our engineering teams in favor of AI-generated output, we are not just risking code quality. We are degrading the human infrastructure that understands, oversees, and can course-correct the systems we are building. We are removing the people who would notice when something is wrong. We are silencing, in advance, the engineers who would send the memo. And we are doing it at precisely the moment when the systems we are building have the potential to affect not a single mission, or a single shuttle crew, but critical infrastructure at civilizational scale. The blast radius is not comparable to anything we have risked before.
The Better Angels
Abraham Lincoln, on the eve of the most devastating conflict in American history, appealed to what he called "the better angels of our nature." He was asking a nation on the brink of destroying itself to find, somewhere within, the wisdom and the grace to choose differently. I am not comparing our current moment to civil war. But I am suggesting that innovation without conscience, transformation without accountability, and efficiency without wisdom are not progress. They are a different kind of destruction, slower, quieter, but no less real.

The better angels of our nature are the engineers who send the memo knowing it might cost them their job. They are the leaders who delay the launch because the temperature is wrong. They are the organizations that say we could move faster, but we will not, because we understand what is at stake. They are the developers, the ones being shown the door right now, who carry in their heads the second codebase, the institutional memory, the hard-won knowledge of what this system cannot survive. We need those people. Not instead of AI. Alongside it. In oversight of it. As the human in the loop that every complex, high-stakes system has always required.
What We Owe the Future
The Apollo program, for all its tragedy, gave us something beyond the Moon landing. It gave us a model of what it looks like to take the risks of complex systems seriously, to build in redundancy, to listen to engineers, to treat institutional knowledge as irreplaceable. We learned those lessons in fire. We should not have to learn them again.

The decisions being made in boardrooms today about AI and engineering capacity are not abstract. They will shape systems that billions of people will depend on. They will determine whether the humans who understand those systems are empowered to speak, or whether, like the engineers at Morton Thiokol, they will be overruled by people who have confused a budget line with a safety assessment.
We are inspired by innovation. We should be. The human impulse to reach further, to build what has never been built, to go where no one has gone, that is one of our finest qualities. But so is the capacity to stop. To listen. To ask whether the hatch opens the right way before we seal it shut.
Where are the better angels of our nature? I believe they are still here. I believe they are in the engineers who are watching this moment with unease and hoping someone in power will ask the right questions before the countdown reaches zero. I hope we listen to them this time.
This is the second in a three-part reflection on AI, technical debt, and the human cost of moving too fast. I'd be grateful to hear your thoughts, especially from those of you who have been in the room when these decisions are made.


