This week we have a special guest writer to add to our Nion Insights.
Ana Mitanoska Miladinoska is an ISTQB-certified Software Quality Engineer with 8 years of experience in manual and automated testing for web and mobile applications. With that balanced skillset, Ana makes sure high-quality software is delivered.
She shared her thoughts on QA vs. AI, and here’s Part 1 of her insights.
Intro
If we asked some of the greatest minds what they think about AI, we might hear something like, “Artificial intelligence is no match for natural stupidity.” That’s what Albert Einstein might say. Or perhaps: “AI marks the moment when technology begins to blur the line between progress and threat to humanity.”
But is that really true? Are we under threat — or should we instead be amazed at how artificial intelligence, once a dream, is now becoming an essential part of daily life? Can we even imagine a world without it?
The idea of artificial intelligence isn’t new. It dates back thousands of years, to when the ancient Greeks described it as “acting of one’s own will.” Over time, this early concept evolved, and real breakthroughs began in the 20th century through science and engineering.
So, what’s really going on in the QA space? There’s a lot of concern: Will AI take over my job? Is QA still a stable career? Should we be worried? But let’s be honest — quality assurance has gone through a remarkable transformation over time, evolving from the meticulous standards behind ancient Greek temples and Egyptian pyramids, to its roots in manufacturing, and finally, its adaptation to the software world.
Have you ever wondered whether a kind of “cold war” is brewing between rigorous, methodical QA testing (an unsung hero of the tech world) and the rise of AI, which took nearly a century of research to come into the spotlight, first gradually, then suddenly?
The Tension: QA vs. AI
What really prompted me to write this post was the growing tension I’ve been noticing within the QA community. One particular comment I came across recently stuck with me and made me think:
“Yes, AI is going to replace QA jobs—and a lot of other roles too. QA has often been viewed as a cost center rather than a value driver, which is why many companies have already outsourced it to cheaper markets. Now, that same cost-cutting mindset is pushing companies to adopt AI for QA. We’re already seeing tools that automate testing using AI.
At the end of the day, businesses want to do more for less, and AI helps them do exactly that. If a company can buy an AI tool to handle development and QA instead of hiring a whole team—with all the extra costs of salaries, HR, and operations—it’s easy to see why they’d go that route.
The idea of a stable, long-term tech career (QA included) is changing fast. If you’re not thinking about alternative income streams or adapting your skills, you could be left behind as AI keeps advancing.”
Is this truly the direction we’re headed? Or is there something more that QA professionals can offer—something AI, at least for now, can’t replicate? After some digging around and reflecting, I think there is. As humans, we’ve evolved to thrive in complex, dynamic environments—and that adaptability is our superpower.
Take AI-powered automation tools, for example. At first, QA engineers used traditional automation tools to speed up test execution. But now, with AI entering the picture, automation has reached a whole new level. AI can already write, run, and even maintain some tests—but it’s far from perfect. These tools are still evolving, and no one really knows what the landscape will look like once AI reaches maturity.
That’s exactly why we need to stay proactive. Instead of fearing it, we should be figuring out how to work with AI. Understand it. Shape it. Use it to elevate what we do best.
Given how unpredictable AI’s rise really is, here are some real-world truths QA professionals are facing today:
1. Using AI to Generate Test Cases
Perfect scenario: We send our AI buddy a neat list of requirements and acceptance criteria. It helps us generate test cases and even suggests additional scenarios. Magic! We save time, reduce manual effort, and breeze through the administrative parts of test planning. Life’s good.
Reality: Not quite so perfect. If the requirements we feed it are incomplete, vague, or contradictory, AI will still generate something — but it might be wrong. You could end up with incomplete or misleading test cases. And edge cases? Complex user flows? AI often misses those. That’s why human testers are still critical. We need to validate, refine, and adapt the AI’s output to ensure the test suite is truly complete.
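To make that “AI drafts, humans validate” loop concrete, here’s a minimal sketch. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and sample requirements are illustrative placeholders, not a recommendation of any particular tool.

```python
# A minimal sketch of the "AI drafts, humans validate" loop described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment. The model name, prompt wording, and sample
# requirements are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

def draft_test_cases(requirements: str) -> str:
    """Ask the model for draft test cases. The output is a starting point
    for a tester to review, never a finished test suite."""
    prompt = (
        "Write test cases (title, steps, expected result) for the following "
        "requirements and acceptance criteria. Include negative and boundary "
        f"scenarios where they apply:\n\n{requirements}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works; this one is arbitrary
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    requirements = (
        "Login form: a user signs in with email and password.\n"
        "AC1: valid credentials redirect to the dashboard.\n"
        "AC2: invalid credentials show an inline error."
    )
    drafts = draft_test_cases(requirements)
    # The human step the paragraph above insists on: a tester validates,
    # refines, and checks these drafts against edge cases before any of
    # them enter the real test suite.
    print("DRAFT (needs tester review):\n", drafts)
```

Notice that the script deliberately labels its output as a draft; the value comes from pairing the generated cases with the tester’s judgment about what the model missed.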
2. Lack of Trust
Perfect scenario: AI can provide fast answers, predictions, calculations, and so on. We ask AI to help us cut expenses. Seconds later, it spits out a financial plan that’ll supposedly make us rich. We’re thrilled. Starting tomorrow, we’re budgeting like pros and dream-buying beach houses.
Reality: Do we know how AI arrived at this prediction, or is there bias hidden in the algorithm? AI predictions come from analyzing large datasets collected from various sources: historical sales, social media behavior, market trends, customer experiences, demand and supply data, even demographics, all following patterns in past data. And while that’s powerful, it doesn’t mean it’s always right. The question is: can we trust our buddy’s conclusions when they’re drawn from such a complex, heavily processed dataset?
3. Security & Privacy Risk
Scenario: Not long ago, I needed to write a formal bank application. Writing official documents isn’t really my strong suit, so guess who I turned to for help. It helped me craft the perfect letter: clear, formal, and professional. I was thrilled.
Reality: By the time I was done, my AI account knew more about me than my mother.
AI tools can also be misused – for phishing, spreading fake news, or even launching cyberattacks. If not properly secured, your sensitive info might be at risk. That’s why privacy and security need to be built in at every level. We can’t afford to treat them as afterthoughts.
4. Too Much Trust in the Tools
Scenario: You set up your AI testing tool, hit “Run,” and you’re good to go. No need to double-check: your coding assistant even created a custom export format and picked your color theme. Nice.
Reality: AI is a great assistant, but it can’t replace a tester. It can’t ask critical questions, challenge assumptions, or think like a user. Relying too much on automation can cause unexpected issues and bugs. Tools need guidance, not blind trust.
5. Ethical Values
Scenario: You’ve trained your AI model, your tests run smoothly, and everything seems neutral. After all, AI doesn’t take sides, doesn’t do harm, and doesn’t create social divides.
Reality: Real-life examples show that AI can be unethical. Companies have trained AI to scrape data without consent, or to automate hiring, insurance, and loan decisions in ways that reflect societal inequalities.
Making sure AI is ethical takes work: clear rules, inclusive datasets, and regular audits. Experts agree that when AI makes decisions that affect people, a human should always be in the loop. As we gradually hand over more responsibility to AI, we need to make sure we don’t lose sight of human values along the way.
6. Cost
Scenario: You bring in AI testing tools, and suddenly everything’s cheaper and faster. Tests run on their own, your team does less manual work, and the company can offer more competitive prices, capture a larger market share, and, most importantly, crush the competition.
Reality: AI can speed things up and save money over time, but not overnight. There’s still setup, training, maintenance, and sometimes fixing the tests that don’t perform as expected. It reduces costs, but only when paired with smart planning and human oversight.
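To see why the savings arrive gradually, here’s a back-of-the-envelope sketch. Every number in it is a made-up assumption; swap in your own tool cost, team rate, and effort estimates.

```python
# Rough break-even math for adopting an AI testing tool.
# All figures below are invented assumptions for illustration only.
TOOL_LICENSE_PER_MONTH = 500      # assumed subscription, USD
SETUP_AND_TRAINING_HOURS = 120    # assumed one-off adoption effort
MAINTENANCE_HOURS_PER_MONTH = 10  # fixing tests that misbehave
HOURS_SAVED_PER_MONTH = 60        # manual regression effort avoided
HOURLY_RATE = 50                  # assumed blended engineer rate, USD

one_off_cost = SETUP_AND_TRAINING_HOURS * HOURLY_RATE
monthly_net_saving = (
    (HOURS_SAVED_PER_MONTH - MAINTENANCE_HOURS_PER_MONTH) * HOURLY_RATE
    - TOOL_LICENSE_PER_MONTH
)

# With these numbers: 6000 / 2000 = 3.0 months before the tool pays off.
months_to_break_even = one_off_cost / monthly_net_saving
print(f"Break-even after about {months_to_break_even:.1f} months")
```

Under these invented figures, the tool only pays for itself after about three months, and that already assumes the net monthly saving stays positive; if maintenance effort creeps up, the break-even point drifts further out.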
TBC…