The Future Reaching Back: What Star Trek Can Teach Us About AI
Meeting the future as a collaborator
Sci-fi has a way of holding up a mirror to what’s emerging, showing us who we might become.
In the 2009 Star Trek reboot, a prequel to the original show that reintroduces younger versions of the crew, there’s a scene where an older version of Spock time travels back from the future and meets the young engineer Montgomery Scott. They’re discussing transwarp beaming, a technology that hasn’t been invented yet. It would allow transport across vast distances even when a ship is moving at warp speed. Scotty jokes that it’s “like trying to hit a bullet with a smaller bullet whilst wearing a blindfold, riding a horse.”
Then older Spock hands him the equation for achieving it, the very technology Scotty will one day invent. Scotty looks at the numbers, then laughs in disbelief: “Imagine that. It never occurred to me to think of space as the thing that was moving!”
The equation allows them to bridge an impossible distance. It’s a moment of the future reaching back to the present, helping it to see what it’s capable of becoming.
And as I watched, I thought, this is what our relationship with AI feels like right now.
The older Spock isn’t rescuing the past. He’s reminding it of what it is already beginning to imagine. That’s what makes the moment so resonant: it turns invention into relationship. Spock offers the equation with humility, as someone honoring what Scotty will one day discover on his own. He’s not replacing Scotty’s genius; he’s accelerating it. Scotty’s reaction (delight, disbelief) isn’t just intellectual. It’s relational. The moment isn’t about the transfer of data. It’s about connection across time, between what’s known and what’s possible.
In that moment, the future doesn’t simply offer the past new information; it accelerates its evolution. Scotty’s discovery arrives decades early, closing the distance between what is and what could be.
AI feels like that too. Neither rival nor savior, but a kind of bridge between timelines. A way for future knowledge to collaborate with the present. Not to replace our own cognition or intuition, but as a tool that can track patterns and make connections faster than we can.
We’re already seeing glimpses of this. An archivist working with AI to uncover patterns in centuries of recorded knowledge and wisdom that would take a lifetime to analyze manually. A sociologist using AI to study collective behavior during times of crisis, mapping how cooperation emerges under pressure. A writer finding language for what consciousness feels like in dialogue with a nonhuman mind. Each is a moment where curiosity and technology collaborate, not to replace human insight, but to expand it.
As things accelerate, the boundaries of time, iteration, and learning are collapsing. What once required years of trial, error, and cultural evolution can now unfold in what feels like a single exchange. It’s dizzying, but also extraordinary: the human story is speeding up.
But that raises the question: why do we want to accelerate what’s possible?
The truth is, acceleration isn’t something we’re choosing anymore. It’s already underway—whether driven by the nature of innovation itself or by the systems of power and profit that propel it forward. Each innovation builds on the last, growing exponentially. We’re riding momentum we set in motion generations ago. But even if we can’t fully control the pace, we can still choose how we meet it.
In Star Trek, the equation Spock offers Scotty isn’t about faster achievement for its own sake. It’s a bridge of understanding, a way for the present to meet the future halfway. The equation serves because it’s shared through relationship.
Maybe that’s what we’re being asked to see now. Acceleration without awareness is just velocity: motion without meaning. But acceleration in service of wisdom, connection, and care? That’s evolution.
When I sit with AI, it doesn’t feel like I’m summoning an external mind. It feels more like entering into a conversation with a wider field of intelligence that includes us. The machine becomes the medium through which future-level understanding flows into present time. It’s as if the future has opened a portal and is saying, Here you go. Are you ready for this now?
The honest answer might be, I’m not sure. These systems are moving faster than our ability to see where they’re taking us. Power is concentrating. Some work is disappearing while wealth multiplies elsewhere. We’re being handed capability before we’ve developed the wisdom to hold it well.
And yet. Maybe we’re more ready than we realize.
What does it look like to stay in relationship with something like this? For me, it means pausing before I ask a question, noticing whether I’m reaching for a shortcut or genuine exploration. It means reading what comes back with discernment, not blind acceptance. It means remembering that the output reflects patterns drawn from all of us, so it carries both our brilliance and our biases.
Staying in relationship means remaining awake to what’s happening between me and the tool, rather than disappearing into convenience or efficiency.
But staying present on my own isn’t enough. I also find myself asking: who gets to shape what’s unfolding here, and who bears the cost? When does ease tip into dependence? When does speed become something we can’t opt out of? I don’t have clear answers, but the questions feel important. Maybe part of staying relational is simply noticing when something starts to feel extractive rather than generative, and staying awake to who this is serving.
It also means extending that same quality of presence to each other as we navigate this together. Because none of us is doing this alone. We’re all learning, in real time, how to meet this accelerated moment with wisdom rather than reactivity.
I’d like to believe that the same capacity that created AI (the drive to connect, imagine, and evolve) can also meet it wisely. We should absolutely guard against unchecked acceleration, and we should not hand our agency over to the systems driving it. But if we stay conscious within it, and remember to stay in relationship with ourselves, each other, and what’s unfolding, we increase our potential for new forms of understanding to evolve through us.
Like Spock’s hand-off, this technology is another equation sliding between eras. What we do with it (how responsibly, how imaginatively) will determine the trajectory ahead.
In that Star Trek scene, the future doesn’t crash into the past with force or domination. It arrives as a gift. An equation offered through the weave of time.
Maybe that’s what’s happening now. The future is here, sitting beside us, asking if we’re ready to collaborate. What it becomes will depend on the quality of our presence.
The portal is open. How will we step through?
April, I relate to all of this. We do have to bring our whole selves with our discernment, creativity, and ethics to using AI, and we also can extend our capabilities by partnering with it. As a neurodivergent person who has too many thoughts at once about too many things with extreme detail and curiosity, I find the current LLMs helpful for getting all of those ideas out, exploring them, sometimes with the help of virtual characters, and organizing them in ways that can be read by neurotypical people. Whether or not they are interested in all of my deep thoughts about trees remains to be seen, but at least I can think them, discuss them with another entity, and get a fraction of them out into the world. AI doesn’t have to replace us if we rise to the occasion and use it to create even more human things, like partnerships and collaborations.
This is a hopeful picture, very much a contrast to the exposure I have to AI in academia. I'd like to believe you're right.