The evolution of software testing
eTestware is part of theICEway ecosystem, alongside CRIBB Cyber Security and ICE. Together we deliver complete digital solutions for our clients. CRIBB is turning 5 this month and ICE will turn 15 next May, which has made us all here feel a little nostalgic. So, for our latest blog we decided to take a look at the evolution of software testing…
It is often said that the amount of software in a device doubles roughly every 18 months. That is a remarkable statistic and, we believe, compelling evidence of why testing is so important. Just think of all those applications within which errors and issues are waiting to occur, potentially harming the system as a whole. But what was software testing like 10 years ago? Or 15? Or even 20? Today we will attempt to shed light on those questions.
The different software testing eras
In order to explore the evolution of software testing, we must first establish a timeline covering its key eras:
- In the beginning -> During the programming and debugging phase, testing was equated with finding errors while debugging
- 1957 to the late 1970s -> Testing is seen as a discipline used to ensure that software meets its specified requirements; it is then extended to finding errors
- 1980s to ~1994 -> Testing evolves into a measure of quality. It grows in importance, becoming a clearly defined and managed process within the software development life cycle (SDLC)
- Mid-1990s onwards -> The testing process gains its own life cycle, the software testing life cycle (STLC)
The programmer and tester era
Here, testing and development were mutually independent activities. Developers announced that the software was ready and then passed it to the testing team for verification. Testers were hampered by a lack of insight into end user requirements and expectations, depending instead on feedback from the developers and on documentation. Testing was ad hoc and most certainly not comprehensive.
The exploration & manual testing era
This era welcomed manual testing methodologies such as exploratory testing and agile testing, supported by detailed test cases and test plans. Testers enjoyed the freedom to really probe and break software. As the software development process grew, more comprehensive ways of testing were required. This is where agile testing helped, ultimately paving the way for the automation of repetitive tests.
The automation testing era
The 2000s welcomed new approaches to software testing. Testing was recognised as a crucial part of every step of the SDLC, and quality assurance (QA) also gained importance. Automation elevated testing hugely and enabled testers to increase their efficiency. It particularly helped with regression testing and sanity testing, infusing both with greater speed and accuracy. Up-scaling then became a necessity: crowdsourcing and cloud testing injected even more speed into the testing process and reduced the investment in resources required.
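To make the idea of automated regression testing a little more concrete, here is a minimal illustrative sketch in Python using pytest. The `discount_price` function and its expected behaviour are purely hypothetical examples invented for this post, not part of any real product.

```python
# Hypothetical function under test - a simple pricing rule used only for illustration.
def discount_price(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Automated regression tests: run on every change to catch reintroduced
# defects quickly and consistently, instead of re-checking by hand.
def test_discount_applies_correctly():
    assert discount_price(200.0, 25) == 150.0


def test_zero_discount_leaves_price_unchanged():
    assert discount_price(99.99, 0) == 99.99


def test_invalid_discount_is_rejected():
    import pytest
    with pytest.raises(ValueError):
        discount_price(100.0, 150)
```

Once tests like these exist, a single command (for example `pytest`) re-runs the whole suite in seconds, which is exactly the speed and accuracy boost automation brought to regression and sanity testing.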
The continuous testing era
End users began expecting intermediate working models of the end product, and the demand for frequent, intermediate software releases increased. Greater connectivity was achieved through improved network infrastructure, enabling faster testing across multiple platforms. ‘Continuous Integration’ and ‘Continuous Deployment’ grew in popularity, and continuous testing gained importance alongside them: with DevOps and CI/CD shortening delivery cycles, risk assessment needed to take place at every stage of the SDLC. Continuous testing helped enormously here, enabling bugs to be found and managed before every software release. However, demands for intermediate releases continued to grow, meaning that continuous testing had to keep evolving.
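As a simple illustration of a continuous-testing gate, the sketch below runs the automated test suite on every commit and only lets the pipeline continue if it passes. The commands and the `tests/` directory are assumptions for illustration, not a description of any specific CI/CD platform.

```python
# Illustrative continuous-testing gate: run the automated tests and block
# the pipeline stage if any of them fail.
import subprocess
import sys


def run_test_suite() -> int:
    """Run the project's automated tests (assumed to live in tests/) and return the exit code."""
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode


if __name__ == "__main__":
    exit_code = run_test_suite()
    if exit_code != 0:
        print("Tests failed - blocking this release candidate.")
        sys.exit(exit_code)
    print("All tests passed - safe to promote to the next pipeline stage.")
```

In a real CI/CD setup this kind of gate runs automatically on every push, which is what allows risk to be assessed at every stage rather than only just before release.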
The artificial intelligence (AI) testing era
Most people have an idea of what AI means. Essentially, it describes a machine which can imitate human behaviour through perception, understanding and learning. AI-based testing is a technique which uses AI and Machine Learning (ML) algorithms to test a software product. It applies predictive analysis to data, helping significantly with unit testing, API testing, UI testing, visual testing and more. The objective of AI testing is to make the testing process smarter and more effective.
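One common way ML-based predictive analysis is used is test prioritisation: ranking tests by how likely they are to fail for a given change, so the riskiest ones run first. The sketch below is a toy example using scikit-learn; the features, the historical data and the test names are all invented for illustration only.

```python
# Toy sketch of ML-assisted test prioritisation: train a model on
# (invented) historical test-run data, then rank tests by predicted failure risk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical history per test run:
# [lines changed in covered code, past failure rate, runtime in seconds]
X_history = np.array([
    [120, 0.30, 45],
    [  5, 0.02, 10],
    [ 80, 0.25, 60],
    [  2, 0.01,  5],
    [ 60, 0.15, 30],
    [ 10, 0.05,  8],
])
y_history = np.array([1, 0, 1, 0, 1, 0])  # 1 = the test failed on that run

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

# For a new commit, estimate which tests are most at risk and run them first.
candidate_tests = {
    "test_checkout_flow":  [90, 0.20, 50],
    "test_login_page":     [ 3, 0.01,  7],
    "test_search_results": [40, 0.10, 25],
}
features = np.array(list(candidate_tests.values()))
failure_risk = model.predict_proba(features)[:, 1]

for name, risk in sorted(zip(candidate_tests, failure_risk), key=lambda p: -p[1]):
    print(f"{name}: predicted failure risk {risk:.2f}")
```

Commercial AI testing tools go much further than this, of course, but the principle is the same: let historical data guide where testing effort is spent.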
The evolution of software testing certainly makes for an interesting discussion. Hopefully it also makes for an interesting read, and one which shows just how complex and important testing is. If you need help with software testing, contact the professionals at eTestware and we’ll be glad to assist. We’d also love to hear your thoughts on this article, so please leave your comments below.