Top 10 Misunderstandings Regarding Information Security


These are the ten misconceptions I most often find myself helping people at many levels, from executive to developer, to understand. Knowing them can help you achieve your security goals and be a smarter user or customer of security products and services.

1. It's encrypted, so it's secure.


Encryption is not security. Take a simple example like email. Most people turn on the SSL/TLS settings in their email clients, then proceed under the false assumption that their messages are secure. The data sits at rest in the clear on the clients and servers, however, and is far from safe. Most Web sites use SSL for "secure" transactions, yet Web server hacking and SQL injection attacks continue unabated. Here, SSL may actually help the attacker hide from defense systems, which cannot always see through the encryption. Bottom line: it is possible to design a secure system without encryption, and it is all too easy to design a highly vulnerable system that employs strong encryption.
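To make the email example concrete, here is a minimal Python sketch (the hostname and credentials are placeholders, not real values) of the gap between encryption in transit and data at rest: the message travels over an encrypted IMAP session, then lands on disk in the clear.

    import imaplib

    # Transport is encrypted: IMAP over SSL/TLS on port 993.
    # The hostname and credentials below are placeholders.
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "app-password")
    conn.select("INBOX")

    # Fetch a message over the encrypted channel.
    _, ids = conn.search(None, "ALL")
    _, msg_data = conn.fetch(ids[0].split()[0], "(RFC822)")

    # But the data at rest is NOT encrypted: the message lands on
    # disk in plaintext, readable by anything with file access --
    # exactly like the copies already sitting on the mail servers.
    with open("inbox_copy.eml", "wb") as f:
        f.write(msg_data[0][1])

    conn.logout()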

2. Get it working, then make it secure.


This comes from the old software adage that it is simpler to make something that already works more efficient than it is to make it efficient up front - and that's usually true. Security, however, must be part of the requirements and design process, not added on later. Leaving security out of any phase of the product cycle is like saying you need a vehicle to haul 500,000 tons of iron ore, then building a VW Bug for your prototype and saying, "We can add more capacity once we cover the basics." Secure designs can be complex, layered, and coupled to the core architecture of a product. The sooner you begin and the more often you checkpoint, the better your chances of meeting your security posture goals.

3. The more tests, the better.


Sadly, this is not the case with security. In basic quality testing, it is often acceptable to say that you tested all valid inputs and the system performed as expected for those inputs. But invalid and unexpected input is a primary attack vector for finding and exploiting security weaknesses. Quality, therefore, cannot be judged by the quantity of tests, or even by the percent coverage of those tests. Security testing using inputs known to be invalid is critically important, but so are theoretical design analysis, examination of usage assumptions, and the other components of a security review. Bottom line: if static analysis and fuzzing could find all the security bugs, why do those bugs still exist even in environments that use those techniques and more?
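To illustrate the difference, here is a toy fuzz harness in Python; parse_length_prefixed is a hypothetical stand-in for the code under test. The valid-input tests all pass, while the random malformed inputs immediately surface the unchecked assumptions.

    import os
    import random

    def parse_length_prefixed(buf: bytes) -> bytes:
        """Hypothetical code under test: returns the payload of a
        one-byte-length-prefixed record. Note the missing bounds
        check: it trusts the declared length."""
        length = buf[0]
        return buf[1:1 + length]   # may silently truncate -- a latent bug

    def test_valid_inputs():
        # "Quality" testing: every valid input behaves as expected.
        assert parse_length_prefixed(b"\x03abc") == b"abc"
        assert parse_length_prefixed(b"\x00") == b""

    def fuzz(iterations: int = 10_000):
        # Security testing: hammer the parser with malformed input.
        for _ in range(iterations):
            buf = os.urandom(random.randint(0, 16))
            try:
                payload = parse_length_prefixed(buf)
                # Property check: declared length must match reality.
                if buf and len(payload) != buf[0]:
                    print(f"length mismatch on input {buf!r}")
            except IndexError:
                # Empty input crashes the parser -- another finding.
                print(f"crash on input {buf!r}")

    if __name__ == "__main__":
        test_valid_inputs()
        fuzz()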

4. Open source is more secure because more people have looked at it.


I refuse to be drawn into a debate on open-source vs. commercial software, but this misunderstanding is critically wrong. Recent studies have shown that, on average, the open-source community can fix security-related defects faster, but that open-source projects also contain more of them. Personally, I'm not buying those stats, but in any case, in my experience it's a level playing field. Security testing and analysis is a specialized field, and saying that open source is better based on the number of people who have looked at the source code is like saying that you are better off going to Mardi Gras in New Orleans for your prostate exam than to a proctologist, because there are more people at Mardi Gras. It sounds silly, because it is (and, ahem, only one of those is fun).

5. Algorithm X is better than algorithm Y.


The truth is that different algorithms have different applications, and so different strengths and weaknesses. 3DES, for example, is designed so that the same key works for both encrypting and decrypting, and it is very efficient in hardware. RSA is designed so that you need one key to encrypt and another to decrypt, but it is computationally expensive in either hardware or software. Your typical Web system uses both: RSA for key exchange and a symmetric cipher for encrypting the data. Neither is better or stronger or faster than the other; they are both good at doing their very different tasks.
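Here is a minimal sketch of that division of labor in Python, using the pyca/cryptography package and substituting AES-GCM in the symmetric role (3DES is considered obsolete today, but the pattern is identical): the asymmetric algorithm moves a small session key, and the symmetric algorithm does the bulk work.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Asymmetric pair: encrypt with one key, decrypt with the other.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Symmetric key: one key both encrypts and decrypts, cheap for bulk data.
    session_key = AESGCM.generate_key(bit_length=256)

    # Key exchange: RSA protects the small session key in transit.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Bulk encryption: the symmetric cipher protects the actual data.
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual payload", None)

    # Receiver unwraps the session key, then decrypts the payload.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"the actual payload"

Swapping the roles would be hopeless: RSA is far too slow to encrypt the payload itself, and a symmetric cipher alone cannot solve the key-distribution problem.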

6. No one knows my algorithm, so it must be secure.


I think what you meant to say is "No one knows my algorithm yet." Precious few algorithms have survived the trials of reverse engineering and cryptanalysis. Take, for example, the key generators for pirated software, or the satellite TV hacking tribulations. You are indeed safer as long as no one knows your software code and algorithms, but if you rely on that situation as your only line of defense, then you are doomed. Secrecy, obscurity, and obfuscation are valid tools, but only when used in concert with a secure design.

7. Once a piece of code is deemed secure in one system, it is secure for use everywhere.


The best example I ever heard of why this is false comes from an old manager and compatriot in the war against bad security-related assumptions. He described a simple keypad interface driver in which keypresses were buffered in memory to let you enter the numbers in sequence. Think of a garage door keypad where your code is 1111. It will usually let you type 09811119876 or 11122221111 or similar, as long as 1111 is in there somewhere. No problem here. Now take that same piece of code and put it in a networked security entrance panel, or an ATM. Oops. You have inadvertently left the code in memory for a hacker to steal, along with some other more subtle problems. Organizations that develop "secure" libraries that have been tested for use in "secure" systems across multiple product lines often undertake such efforts in vain. Use cases and assumptions must be reconsidered, and code must be reviewed and re-tested in each product, no matter how similar the products may seem.
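Here is a hypothetical Python sketch of that keypad driver. The rolling buffer that makes the garage door forgiving is exactly what becomes the vulnerability in the new context.

    class KeypadDriver:
        """Hypothetical keypad driver: buffers keypresses and opens
        when the access code appears anywhere in the recent history."""

        def __init__(self, code: str, history: int = 32):
            self.code = code
            self.history = history
            self.buffer = ""   # rolling keystroke buffer

        def press(self, digit: str) -> bool:
            self.buffer = (self.buffer + digit)[-self.history:]
            # Forgiving match: 09811119876 opens a door coded 1111.
            return self.code in self.buffer

    door = KeypadDriver("1111")
    opened = False
    for digit in "09811119876":
        opened = door.press(digit) or opened
    assert opened

    # Fine for a garage door. Drop the same code unchanged into an
    # ATM or a networked entry panel, and door.buffer still holds
    # every recently entered code in the clear, waiting in memory
    # (or a crash dump) for an attacker to find.
    print(door.buffer)   # '09811119876'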

8. There are dozens of random number generators out there, all of them adequate.


I include this misconception because all too often developers simply call whatever rand() or Math.random() or "get random into x" feature their platform supports, without considering it a critical piece of their security design. Recent vulnerabilities in Microsoft Windows and OpenSSL have underscored the need to take a closer look at where your random (or pseudo-random) data comes from, and how obtaining it in one task, thread, or process can affect the others. My site has links to several resources on this topic, and I promise I will try to dedicate a full blog entry to it in the near future.
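A minimal Python illustration of why the choice matters: the default random module is a Mersenne Twister, fine for simulations but predictable once an attacker has observed enough output, while the secrets module draws from the operating system's cryptographic generator.

    import random
    import secrets

    # Convenient, but NOT for security: Mersenne Twister state can be
    # reconstructed from 624 consecutive outputs, after which every
    # "random" token it produces is predictable.
    weak_token = "%08x" % random.getrandbits(32)

    # Seeding from the clock makes it worse: the whole sequence is
    # guessable by anyone who can bound the start time.
    random.seed(1234567890)            # e.g., a timestamp
    predictable = random.getrandbits(32)

    # For keys, tokens, and nonces, use the OS cryptographic RNG.
    strong_token = secrets.token_hex(16)   # wraps os.urandom()
    print(weak_token, predictable, strong_token)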

9. Red teams need access to the code to do their jobs.


While access to code will help, red teams and tiger teams, as well as hackers, have an ever-increasing array of tools to analyze and reverse engineer code from executables, firmware images, or right out of RAM. Further, most application-level hacking these days can be done irrespective of the underlying code and platform. Consider that hackers have found vulnerabilities in Windows and Windows products for years, all with very little visibility into the underlying source.
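As a small taste of how far you can get with no source at all, here is a minimal Python version of the classic "strings" technique: pulling printable text (paths, URLs, queries, the occasional hard-coded secret) straight out of a compiled binary.

    import re
    import sys

    def extract_strings(path: str, min_len: int = 6):
        """Minimal 'strings' clone: find runs of printable ASCII in
        a binary -- often enough to map out file paths, URLs, SQL
        queries, and embedded secrets without seeing the source."""
        with open(path, "rb") as f:
            data = f.read()
        pattern = rb"[\x20-\x7e]{%d,}" % min_len
        return [m.decode("ascii") for m in re.findall(pattern, data)]

    if __name__ == "__main__":
        for s in extract_strings(sys.argv[1]):
            print(s)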

10. If it wasn't broken into, it must be safe.


While logs, IDSs, and other defenses very often do detect attacks, there is really no telling how often they do not. If a very skilled attacker compromises your system with any intent other than taking it out of service, you may never know of their presence. A medical database intrusion, for example, might only be detected after an investigation reveals that a sudden wave of identity theft victims all share a common insurance provider. In a physical theft, the fact that an event took place is often clear, because the stolen item is gone. In information theft, the thief may simply copy, view, or modify the information in a way that is difficult or impossible to detect.

It is my sincere hope that clearing up some of these misconceptions helps you in your security efforts, whether by giving you a third-party source for debunking these myths internally, or by helping you understand why some of these beliefs are false.