- 12 September 2024
- Asa Sargeant
Cryptography Failures and How Testing Could Have Prevented Them
Cryptography is the practice of encoding information so that only its intended recipient can read it. Imagine sending a very personal message to a friend. You wouldn’t be happy if someone intercepted it and could understand what was said. So, you scramble the message into a secret code that only your friend can unlock and read. However, despite the robust nature of cryptographic algorithms, failures in implementation or testing can lead to catastrophic breaches. This blog will examine some of the most high-profile cryptography failures, explore what went wrong, and suggest how more thorough testing could have prevented them.
1. Heartbleed (2014)
The infamous Heartbleed vulnerability was a severe flaw in the OpenSSL cryptographic library that affected millions of websites. OpenSSL is a widely used open-source library that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, which are fundamental to internet security. A flaw in the heartbeat extension allowed an attacker to read up to 64KB of the server’s process memory per request, potentially exposing sensitive data such as passwords and private keys.
What was the problem?
The main problem was that the server never checked whether the length field in a heartbeat request matched the amount of data actually sent. Because of this, if someone sent a specially crafted message claiming a larger payload than it carried, the server replied with extra bytes from its own memory. Consequently, private data may have been revealed. The worst part is that the bug went unnoticed for roughly two years before it was discovered.
How Testing Could Have Prevented It:
- Fuzz Testing: Fuzz testing involves throwing random or malformed data at a system to see if it breaks. Had the heartbeat feature been fuzzed with oversized or inconsistent length fields, the bug might have been caught before it could leak data.
- Boundary Testing: Boundary testing verifies that inputs at and beyond the stated limits are handled safely. Tests that sent requests claiming far more data than they carried would likely have revealed that the server was sending back too much information.
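To make this concrete, here is a minimal, hypothetical sketch of the same class of bug and the kind of fuzzing that catches it. This is not OpenSSL’s actual code; the handler, the “adjacent memory”, and the fuzzer are all simplified stand-ins for illustration.

```python
import os
import random

def heartbeat_flawed(payload: bytes, claimed_length: int) -> bytes:
    """Heartbleed-style flaw: trusts the sender's claimed length
    instead of the actual payload size."""
    memory = payload + os.urandom(64)   # stand-in for adjacent process memory
    return memory[:claimed_length]      # no bounds check on claimed_length

def heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
    """Patched version: rejects requests whose claimed length exceeds
    the data actually received."""
    if claimed_length > len(payload):
        raise ValueError("claimed length exceeds actual payload size")
    return payload[:claimed_length]

def fuzz_oversized_lengths(handler, rounds: int = 1000) -> bool:
    """Boundary-focused fuzzing: random payloads paired with deliberately
    over-sized length claims. Returns True if the handler ever replies
    with more bytes than it was sent (i.e. it leaked memory)."""
    for _ in range(rounds):
        payload = os.urandom(random.randint(1, 16))
        claimed = len(payload) + random.randint(1, 48)  # always too large
        try:
            reply = handler(payload, claimed)
        except ValueError:
            continue  # handler rejected the malformed request: good
        if len(reply) > len(payload):
            return True  # leaked bytes beyond the payload
    return False
```

Running the fuzzer against the flawed handler reports a leak almost immediately, while the fixed handler rejects every malformed request. The point is that the test never needs to know what the bug is, only that a reply should never be longer than the request.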
2. ROCA Vulnerability (2017)
The ROCA vulnerability is a security issue that affected millions of cryptographic keys generated by Infineon hardware security chips. These chips were used in smart cards, laptops, and government ID cards to create secure keys for encrypting information. However, a flaw in how the keys were generated meant attackers could break the encryption and access the supposedly secure data.
What was the problem?
The method used to create the keys was flawed. The chips generated RSA primes with a special, predictable structure, presumably to speed up key creation. That structure made the resulting keys practical to factorise using Coppersmith’s method, an advanced mathematical technique.
How Testing Could Have Prevented It:
- Peer Review and Cryptographic Expertise: Having external cryptographic experts review the implementation process might have led to identifying the deterministic flaw.
- Algorithm Robustness Testing: Simulating attacks against the keys generated by the hardware to see how they held up might have revealed the problem before they were widely distributed.
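The published ROCA research exploited exactly this predictability: because vulnerable moduli are built from powers of 65537 modulo a fixed product of small primes, they leave a statistical fingerprint that a random modulus almost never has. The sketch below is a simplified, illustrative version of that idea, not the real detection tool; the small-prime list is an assumption chosen for demonstration.

```python
def subgroup_of(g: int, p: int) -> set:
    """All residues reachable as g^k mod p, i.e. the multiplicative
    subgroup generated by g modulo the prime p."""
    residues, x = set(), 1
    while x not in residues:
        residues.add(x)
        x = (x * g) % p
    return residues

# Illustrative prime list; the real ROCA test uses a larger, fixed set.
SMALL_PRIMES = [11, 13, 17, 19, 37]

def looks_roca_like(n: int) -> bool:
    """Simplified ROCA-style fingerprint: a modulus built from powers of
    65537 lands in the subgroup generated by 65537 modulo every small
    prime. A random modulus fails at least one of these checks with
    high probability."""
    return all(n % p in subgroup_of(65537, p) for p in SMALL_PRIMES)
```

A pure power of 65537 passes every check, while an arbitrary number almost certainly fails one. Robustness testing of this kind, attacking the structure of the generated keys rather than the cipher itself, is what eventually exposed ROCA.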
3. WEP Encryption (1990s – 2000s)
The Wired Equivalent Privacy (WEP) protocol was developed in the 1990s as a standard for wireless network security. However, it didn’t take long to realise the protocol was severely flawed, and it was ultimately replaced with the more secure WPA (Wi-Fi Protected Access) protocol. The main problem with WEP was its use of the RC4 stream cipher combined with short, frequently repeated initialisation vectors, which made it easy for attackers to break.
What was the problem?
WEP prefixed each packet’s key with a 24-bit initialisation vector (IV). With only about 16.7 million possible values, IVs repeated quickly, especially when the network was busy. Since the same IV reuses the same RC4 key stream, attackers could collect enough repeated IVs to recover the key stream and read messages.
How Testing Could Have Prevented It:
- Stress Testing Under High Traffic: If developers had tested WEP under busy real-world traffic, they would have seen IVs repeating within seconds and realised that a 24-bit value was far too small to be safe.
- Penetration Testing: Testing WEP using penetration testing tools that simulated actual attack methods would have revealed weaknesses in the encryption scheme. This could have led to earlier warnings about the protocol’s security.
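The birthday paradox makes the 24-bit IV problem easy to quantify, and a stress test could have surfaced these numbers directly. The short sketch below estimates the probability of at least one IV repeat after a given number of frames, using the standard birthday-bound approximation (the frame rate in the comment is an illustrative assumption).

```python
import math

IV_BITS = 24
IV_SPACE = 2 ** IV_BITS  # 16,777,216 possible initialisation vectors

def iv_collision_probability(frames: int, space: int = IV_SPACE) -> float:
    """Birthday-bound estimate of the chance that at least two frames
    reuse the same randomly chosen IV after `frames` transmissions.
    P(collision) ~= 1 - exp(-frames * (frames - 1) / (2 * space))."""
    return 1.0 - math.exp(-frames * (frames - 1) / (2 * space))

# A busy access point might send a few thousand frames per second:
# after only 5,000 frames the odds of a repeated IV already exceed 50%.
```

In other words, on a saturated network an IV collision is expected within seconds, which is exactly the behaviour the published WEP attacks exploited.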
Conclusion
Cryptographic failures are often the result of flawed implementations, inadequate testing, or a lack of thorough validation. In the case studies discussed above—Heartbleed, ROCA, and WEP—better testing and review processes could have prevented widespread exploitation. By incorporating the suggested practices into the development lifecycle, companies can significantly reduce the risk of cryptographic vulnerabilities. Remember, the resilience of these systems is what holds the digital world together; without it, chaos will ensue! If you’d like to catch up on some of our other blogs, please visit www.etestware.com/blog/.