Where do we stand with AI in testing?
- Colm Bushell

All the way back in late 2022, an AI chatbot came along that blew everyone’s minds: ChatGPT, built on GPT-3.5. From ridiculous hypotheticals to complex theological questions, AI was really put to the test. Fast forward to 2024: OpenAI dropped their latest model, GPT-4o, and the scepticism only grew.
Many people believed that AI was going to make life easier; others were cautious about job roles being displaced. No matter what side of the fence you were on, everyone collectively wanted to know just how powerful AI really is.
Is it worth the hype?
AI Reality vs The Hype:
Since ChatGPT showed up, people have said AI could:
Write code
Run tests
Automate boring tasks
And sure—it can do some of that. But others worry that AI could:
Take over jobs
Lower the quality of software
Or even cause big problems we don’t see coming
For software testers in particular, this raises a big question: “Is AI going to replace me?”
Here's the reality:
Is AI an incredibly powerful tool? Yes. Is it perfect? Absolutely not.
AI is a tool for testers to use; it isn’t going to replace anyone.
The best testers bring something to the table that AI just can’t replicate:
Human judgement
Critical thinking
Curiosity
No matter how fancy the tools at your disposal, testing really just boils down to asking: What could go wrong? and Can we spot problems before they happen? AI doesn’t do that well on its own.
The Problem with AI:
AI models, from GPT-3.5 right through to the newer GPT-4o, share some serious problems:
They literally make stuff up (these are referred to as “hallucinations”).
They can hold biases depending on the data that they're trained on.
They can give wrong answers with total confidence.
Sound familiar? Think Michael Scott from The Office (US), the king of confidently spouting incorrect information!
He definitely makes stuff up (“I DECLARE BANKRUPTCY!”).
He’s biased—his entire worldview is based on a very limited understanding of reality.
And he delivers some of the most wildly inaccurate statements with the utmost confidence.
We’re not knocking AI; it’s just that a lot of people ignored these red flags because they were so hyped up about the possibilities. But these problems kept popping up, and they still do.
So What Should We Do?
Just 3 things:
Keep asking tough questions about how AI is being used.
Stay curious. Stay sceptical.
Don’t just believe the hype.
We have to be careful not to trust AI to do everything—especially when the stakes are high, like in software quality, safety, and ethics.
Why Testers Matter More Than Ever:
Testers aren’t just button-clickers. They’re the ones who ask:
“What if this goes wrong?”
“Are we missing something?”
AI might be able to help with repetitive tasks, but it can’t replace human curiosity, gut instinct, or creativity. That’s where testers shine—and that’s why their role is still crucial in this new AI-driven world.
The Big Takeaway:
No one can be 100% sure where AI is taking us. Will it make life easier? Or will it bring new risks that we've not yet uncovered?
One thing’s for sure: the people who think critically, explore risks, and keep testing carefully will be the ones who succeed—no matter what the AI future holds. Head over to https://www.etestware.com/blog if you'd like to check out some of our other testing blogs.