Software testing overview (a brief history)
In any software or web development project, testing is an essential element. Today we look back at its history with this software testing overview. You can read a similar piece we published in November 2021, ‘The evolution of software testing’, here. Alternatively, read on for a slightly different take.
eTestware approaches its 8th birthday in February 2022, whilst our colleagues at ICE will be celebrating reaching 15 in May. It's no surprise, then, that we're all feeling very nostalgic; hence ICE's series of blog posts looking back through their history, and this one from us today!
What is software testing?
It is the process used to verify whether software products do what they are meant to do. Its benefits are numerous, including the prevention of bugs and improved performance. There are many different types of software testing solutions, which you can read about here.
Testing should start from the original requirements, with exploratory testing then deployed to discover the scenarios nobody predicted. The importance of software testing cannot be overstated: without it, how can you launch a product and expect it to be ready for the end user?
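To make that concrete, here is a minimal sketch of requirements-based testing in Python. The `apply_discount` function and its 10% discount rule are hypothetical, invented purely for illustration; the point is that each test traces back to a stated requirement.

```python
# Hypothetical requirement (for illustration only):
# orders of 100 or more receive a 10% discount.
def apply_discount(total):
    """Return the order total after any applicable discount."""
    if total >= 100:
        return round(total * 0.9, 2)
    return total

# Each test verifies the software against the original requirement.
def test_discount_applied_at_threshold():
    assert apply_discount(100) == 90.0

def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99
```

Run with a test runner such as pytest, these checks confirm the behaviour the requirement describes; exploratory testing would then probe the scenarios the requirement never mentions.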
Testing has been around for much longer than you may think. In fact, it dates back to the 1950s. Many testers, including some of our own, break its history down into the following key areas.
Debugging
During the early 1950s, testing and debugging were considered one and the same. Developers wrote code, analysed errors as they appeared, and then debugged them; the key objective was simply to fix bugs.
Demonstration
Between the late 1950s and the late 1970s, testing and debugging came to be seen as separate activities. The key objective was no longer just to find and fix bugs: testing now also demonstrated that the software satisfied its original requirements.
Destruction
In 1979, Glenford J. Myers formally separated debugging from testing. He framed testing as a destructive activity, with testers deliberately trying to break the software, for example by completing fields incorrectly, in order to uncover new errors.
This move was in line with the software engineering community's desire to bring recognition to software verification and to strip out development activities such as debugging. The prevention of defects was not yet a consideration. Ultimately, the destruction-based approach failed because software would never get released: there were always more bugs to find, and even the fixes could introduce new ones.
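To illustrate the destructive mindset, here is a small hedged sketch in Python. The `register_user` function and its validation rule are hypothetical; the tests deliberately complete the email field incorrectly and expect the software to fail loudly rather than accept bad data.

```python
import pytest

# Hypothetical registration function with a required email field.
def register_user(email):
    """Register a user; reject obviously malformed email addresses."""
    if "@" not in email or email.strip() == "":
        raise ValueError("invalid email address")
    return {"email": email}

# Destructive tests: complete the field incorrectly on purpose
# and confirm the software rejects the input rather than failing silently.
def test_rejects_email_without_at_sign():
    with pytest.raises(ValueError):
        register_user("not-an-email")

def test_rejects_blank_email():
    with pytest.raises(ValueError):
        register_user("   ")
```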
Evaluation
From 1983 to 1987, the quality of the software became the key focus. Testing improved confidence that the software was working as intended, and testers worked towards an acceptable release point at which the rate of newly detected bugs had fallen far enough.
Prevention
The late 1980s to 2000 then saw a new approach adopted. Tests were now based upon three key objectives:
- Verifying that software meets its specification
- Detecting faults
- Preventing defects
Identifying which testing technique to apply became crucial. The 1990s also welcomed the advent of exploratory testing, where testers explored software in greater depth.
2000 and beyond
The early 2000s saw new testing concepts emerge, such as test-driven development and behaviour-driven development. Then, from 2004, the introduction of test automation and API testing tools marked huge turning points in testing's history. Nowadays, software testers are moving towards artificial intelligence and cross-browser testing tools.
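As a flavour of test-driven development, here is a minimal sketch in Python; the `slugify` function is our own invented example. In TDD the test is written first and fails, and only then is just enough code written to make it pass.

```python
# Test-driven development in miniature: this test is written first.
# Run before slugify exists, it fails; that failure drives the code below.
def test_slugify_lowercases_and_replaces_spaces():
    assert slugify("Software Testing Overview") == "software-testing-overview"

# Step two: write just enough implementation to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")
```

The cycle then repeats: a new failing test, a small change to pass it, and a tidy-up, keeping the tests green throughout.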
We hope that this software testing overview has given you a taste of the world of testing. Far from being a recent innovation, software testing has kept testers plying their trade for many years. Our own experts have been exercising their test muscles for the best part of a decade, and they are keen to raise awareness of how important the discipline is. If you need help with software testing, contact us today and we'll be glad to help.