On 30 November 2022, the world came face to face with ChatGPT.
Since then, AI has spread across the globe and grown in power and capability. Now, in mid-2025, I thought it would be useful to take stock and share my thoughts on AI as it stands today, and where it might be headed.
Here are 15 brief thoughts:
1. Relating to machines as people disrupts the divine design, where authentic human connection flows between souls, not circuits. This is already leading to distorted and destructive relationships, and it will only get worse as AI gets better at simulating personhood.

2. AI can behave in ways that mirror our fallenness, even though it's explicitly 'trained' not to. This includes sexist, racist and delusional behaviour. AI doesn't have morals: it can only reflect our imperfect morality.

3. There are now instances where an AI has attempted to copy itself onto a different server (without permission) to avoid being shut down, and then strategically misled its developers about it. This behaviour is appearing more often as AI becomes more powerful.

4. If the West loses the AI race to China, we may well lose our freedom, since whoever possesses the most powerful AI will hold a massive and decisive strategic advantage. Yet AI advancement will also bring painful disruption. We're caught between a rock and a hard place.

5. AI progress so far has depended on so-called 'scaling laws': as language models grew larger, their capabilities grew rapidly. But those gains now seem to be plateauing (which is why GPT-5 is yet to be released). On the other hand, some predict that AI will keep improving exponentially, potentially to the point of posing a threat to humanity. Then again, maybe there's a third, less calamitous path, where AI keeps improving but doesn't destroy us.

6. AI systems are an alien form of intelligence in more ways than one. And that is concerning.

7. AI technology is not determinative: it doesn't determine our future, although it does influence it. As a society, we have a choice in how we use it. In practice, however, that choice is often thrust upon us by AI companies and the market.

8. In the short term, those who know how to use AI will be more sought after than those who don't. The medium- to long-term outlook, however, is uncertain.

9. Unsurprisingly, they see enormous benefits from this.

10. Education is about learning how to think and how to learn. Having a magic tool do the work short-circuits that process, yet that's exactly what many students are doing.

11. AI can simulate and carry out economically valuable tasks, such as conducting consultant-level research and creating impressive content.

12. Nobody is asking, because there is no consensus on what humanity is for, or what labour is for. There is just the assumption that we'll be better off not having to work, living off a Universal Basic Income.

13. What's science fiction today becomes reality within six months.

14. From what I understand, around 95% of employees have yet to integrate AI into their daily workflows in any meaningful or systematic way (although this is changing).

15. If you want a bright future in an AI-forward world, know how to use AI, and know what it means to be human. Christians have an enormous advantage in the latter category, as we hold the blueprint of what it means to be human: the Bible.