This article landed in my inbox this morning.


http://www.smh.com.au/it-pro/security-it/security-breaches-the-resu...


After reading that, what are your thoughts?


I personally think this article couldn't have gotten it more wrong; it stinks of the "Testers should test every scenario" and "Blame the Tester because the system has bugs" mentality.


Replies to This Discussion

Of course it is the testers' fault! They create the security bugs, they do the installation, they set the project deadlines, they define the priorities of testing, they should achieve 100% coverage of every possible scenario, they should check that routers are not misconfigured, they should check that all security patches are installed immediately, and so on.


No. Seriously - security is more than just testing. Testing is just a small subset. Security starts with the requirements, skilled developers, good maintenance practices and so on. Just look at the Secure Development Lifecycle to see what kinds of things it covers. It's quite a nice process model for developing highly secure applications and systems.

Yes, totally agree; it's another classic example of quality IT journalism talking to an informed 'IT expert'. There was an article a while back on a security breach at a bank. The issue was manipulating URL strings on a certain website, which allowed you to get to an unauthorised part of the site. As I remember, a so-called security expert commented on how this was a very clever type of attack, and the journalist lapped it up. The article says a lot about that profession: poor research, a lack of quality informed sources, a general lack of understanding of science and IT, and a willingness to print anything just to get reactions and sensationalise.
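To put that 'very clever attack' in perspective, a basic forced-browsing check is about as simple as testing gets. Here is a sketch in Python with the requests library; the base URL and restricted paths are invented for illustration:

import requests

# Paths that should require authorisation - invented examples.
RESTRICTED_PATHS = [
    "/admin/reports",
    "/accounts/1234/statement",
]

def check_forced_browsing(base_url):
    session = requests.Session()  # anonymous session, no login performed
    for path in RESTRICTED_PATHS:
        response = session.get(base_url + path, allow_redirects=False)
        # A denial (401/403) or a redirect to a login page is acceptable;
        # anything else means the URL alone was enough to reach the page.
        assert response.status_code in (301, 302, 401, 403), (
            "%s returned %s - possible forced-browsing hole"
            % (path, response.status_code)
        )

check_forced_browsing("https://example.test")

Hardly the work of a criminal mastermind.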

I don't agree that the writer couldn't have got it more wrong. The Swoose Partnership focusses on IT management, risk and security; Cynthia Karena is a journalist.

It's a question of a security expert with valid concerns, but sketchy answers, having his views filtered, and probably inadvertently distorted by a journalist with only a superficial knowledge of the subject. I should think Milton Baar cringed at the sight of "any hacker worth their salt is smarter than the black box".

Also the sentence "after testing for every expected scenario, testers need to look for any unexecuted code" seems to misunderstand what Baar was saying about the problem of merely testing the expected scenarios.

It would have been a far more interesting article if they'd got a tester in to discuss Milton Baar's criticisms. 

I don't think that SQL injection attacks really count as an unexpected threat. It's a pretty basic threat, and failing to test for it would be naive or negligent. However, I do agree with him that testers often focus too much on testing to see if the intended functionality works, without considering what the unintended functionality might be. I'm talking about legitimate users abusing the application to make it do the wrong things.
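For instance, a test for that sort of abuse might look something like this. It's a hypothetical sketch in Python; the checkout function and its behaviour are invented for illustration:

# Probing legitimate-looking inputs that abuse the business rules
# rather than the parser. "checkout" is an invented API.
def test_checkout_rejects_abusive_but_valid_input(checkout):
    # A negative quantity is syntactically valid but, if accepted,
    # quietly turns a purchase into a refund.
    result = checkout(item_id="SKU-1", quantity=-3, unit_price=9.99)
    assert result.rejected, "negative quantity should not create a credit"

    # The same single-use voucher applied twice in one basket.
    result = checkout(item_id="SKU-1", quantity=1,
                      vouchers=["SAVE10", "SAVE10"])
    assert result.rejected, "a single-use voucher should not stack"

Both inputs are perfectly 'legitimate' as far as the input validation is concerned; the abuse is in the business logic.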

However, he's gone for exactly the wrong solution. He seems to think the answer is more scripting, which forces people to think along tramlines, rather than thinking creatively about how the application could be exploited.

Again, with code inspections I think he has a point, but I'm not sure what he's offering. He seems to be advocating formal inspections by testers after the code's been slung over the wall. It would be far more effective to do it as early as possible, whether by testers working alongside the coders, or implicitly by pair programming.

However, Baar seems to think Agile makes the problem of bad code even worse. Reusing inefficient, insecure or plain crap code has always been a problem. When I started back in the 80s nobody ever wrote a program from scratch. We always cloned as much as we could, and used existing shared routines as much as possible. I'm far from convinced that Agile has made this problem any worse.

At the end of the article there seems to be a call for a professional body to maintain standards. That comes from a security consultant, but the call seems to apply to all testers. I wonder if he meant that. There's a different, and stronger, argument for certification for technical security testers, the ethical hackers, than for "ordinary" testers, where the argument is weak to the point that it's hardly credible.

I agree. The problem is that traditionally testing has been based on requirements. It's "what the application should do". Bugs and priorities are based on those. Unfortunately, applications do a lot more than that. Exploratory testing addresses this, but because it breaks with tradition, it doesn't sit well with most project managers and product owners. They learned what testing is (scripted, based on requirements) at university, the same way most testers did.

The problem with exploratory testing and security is that it takes a good understanding of security to grasp what real risks a lack of input validation carries, and it takes knowledge of how to work around basic input validation. If the tester can't describe the real risks, the bug gets the wrong priority. E.g. a tester submits a bug report: "a quote (') mark in the name resulted in a page saying 'unexpected error'". The project manager notices it and thinks "oh well... nobody has a quote in their name, so this is a low-priority, low-severity bug". A security person, on the other hand, might realise that it's likely SQL injection, and after some quick testing (and a database structure review) find that through the SQL injection an attacker can forge e.g. timestamps, and later, through search, view other users' secret information. Now the small bug has become a high-priority, high-severity bug.
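To make that concrete, here is a small sketch in Python with sqlite3; the table and queries are invented for the example. A bare quote in the name breaks the SQL syntax (that's the 'unexpected error' page), and a crafted value exploits it:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top secret')")

name = "x' OR '1'='1"  # the attacker's "name"

# Vulnerable: string concatenation. The quote breaks out of the string
# literal and the OR clause matches every row - that is the injection.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + name + "'"
).fetchall()
print(rows)  # [('top secret',)] - data leaked

# Safe: a parameterised query treats the whole input as plain data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)
).fetchall()
print(rows)  # [] - no match, nothing leaked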

But how did that input validation mistake end up in the code? By mistake, no matter whether it's a slip by a competent developer or an error by an incompetent one. Either way, it's there. And there wasn't a proper change review process, which is supposed to exist for many applications that are open to the Internet. This might even be a problem at the architectural level: instead of centralised data validation, every field has its own validation, and some fields are missing it. Or the whole problem can come from the configuration of the production environment, which differs from the developers' environment.
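The centralised idea, roughly, looks like this. It's a sketch only; the field rules are invented:

import re

# One shared whitelist of field rules, instead of ad hoc checks
# scattered across the codebase where some fields get missed.
FIELD_RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{1,32}"),
    "order_id": re.compile(r"[0-9]{1,10}"),
}

def validate(form):
    errors = {}
    for field, value in form.items():
        rule = FIELD_RULES.get(field)
        if rule is None:
            errors[field] = "unknown field"   # whitelist, not blacklist
        elif not rule.fullmatch(value):
            errors[field] = "invalid value"
    if errors:
        raise ValueError(errors)
    return form

validate({"username": "alice", "order_id": "42"})   # passes
# validate({"username": "x' OR '1'='1"})            # raises ValueError

When every request goes through one validate(), a missing per-field check can't silently slip through.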

Security is a difficult thing. There are so many moving parts that can leave the application vulnerable.

Security and usability are the areas where I feel strongly that traditional development and testing approaches are inadequate. 

If the testers work away in the background on test scripts based on the requirements and then wheel into action once the application is built, then they are too late to be effective, and they are not even looking for the right things.

Testers have to be proactive, forcing their way into the development much earlier and trying to ensure that weaknesses are detected before they are built in.

Testers should be familiar with OWASP. My more detailed thoughts on this subject can be found here.
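As a taste of what familiarity with OWASP buys you, a crude smoke test built from the classes of payload it catalogues might look like this. The URL and field name below are placeholders, and Python with the requests library is assumed:

import requests

# Classic probes for issue classes OWASP documents.
PAYLOADS = [
    "<script>alert(1)</script>",   # reflected XSS
    "' OR '1'='1",                 # SQL injection
    "../../etc/passwd",            # path traversal
]

def smoke_test_field(url, field):
    for payload in PAYLOADS:
        response = requests.post(url, data={field: payload})
        assert payload not in response.text, (
            "payload echoed back unescaped: %r" % payload
        )

smoke_test_field("https://example.test/search", "q")

It proves very little on its own, but it's a start that a purely requirements-driven script would never include.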

Well, the article is written poorly, to be sure. It appears Baar isn't criticizing testers so much as he's attacking quality control methods. He specifically speaks to code reviews, which (in my experience) are typically left to developers, not testers. To his point, security testing is a different beast entirely and requires a specific skill set. I wouldn't presume to suggest that most testers do, don't, or should have this skill, but if security is a concern, the company should appropriately allocate the time and money for the resources and people to perform the task effectively.

There are several unknowns about this particular scenario, so it's hard to make a generalization as to who or what is responsible, but I don't have issues with a customer or client coming to me as a tester or test manager and asking me to explain how defects escaped into a production environment.  It's called accountability and I should be able to explain this to a certain degree.  They may not like the answers, however.  Especially if it's because they loaded me up with unqualified people in a remote location and gave me a week to do what should have taken a month. 

Adam, my thoughts are that, in my experience in the environment where I currently work, security bugs are like any other bugs and are prioritized like other bugs. In other words, unlike other engineering environments where I have worked, not all security bugs are automatically given high priority here. In our base product, I have found many licensing holes which are, in my opinion, security bugs and should be addressed, especially since we sell source code. When I reported them, because they are tied to licensing, they were deemed not worthy of being logged in the defect tracking system, much less fixed.


Additionally, we support 3 platforms. The same code behaves differently in 2 of the 3 platforms. In one platform, with restricted settings in place, users display; in the other, they don't. I was sure that one would get escalated & fixed, but it has been 3 years now, and it hasn't. To that end, I humble myself and my ego, and say hmmm, maybe the decision makers *do* know something I don't ;)


Security 'beauty' is in the eye of the beholder. If the beholder (management or other decision maker) does not deem the security bug worthy of being fixed, it won't get fixed until a customer reports it. Because of this, I find the article presented here to be lacking in holistic integrity.


Someone commented to the effect that security breaches are partly due to so much focus on requirements. Now, that is funny. I haven't tested off requirements in 6 years. I don't want to brag, but in spite of the lack of requirements, our customers, who are a rather devoted bunch, rate our product and support (which includes Engineering calls with customers) very highly, year after year.

When I'm reporting a security issue, I try to write a clear description of how it has serious implications for users. I usually do that in the form of a story in which money is lost, privacy is lost, or something else very serious happens. I have even written a blog entry about that. It helps non-security people understand why something should be fixed. Otherwise the bugs are too often left open.

I think that, to keep this kind of thing from happening, the tester must think IN and OUT of the box... we all know as testers that it's impossible to cover ALL possible cases, simply because it IS impossible, but if the tester thinks out of the box (and I'm not talking about black-box or white-box testing...), maybe some of these kinds of situations could be taken into consideration and sometimes avoided...


/// Off topic: I didn't know that Wolverine was a tester too... =P

Definitely a poor choice of headline for the article... 'Sloppy testing' gets your back up right from the get-go. As a tester, you're then reading the rest of the article with a negative and defensive outlook (well, I was anyway).


It's great to see some testers (my assumption) replying to the article. Fingers crossed people don't just read the article but take the time to read the comments following it.


Having just finished a piece of work where the security testing was deemed 'too expensive' when compared to the risk level, this one bites a little too hard!


Pff... I just heard that Steam was hacked (or its forum... I didn't understand it well), but the thing is that the hackers could have the credit card numbers of the users... mine could be one of those accounts... what do you think? Great hackers? Or bad testing work?

Very likely a bad development process is part of it, but another part is good hackers. There is no such thing as 100% security. There are always too many moving parts that can lead to a security vulnerability. The attack can be e.g. a targeted malware attack, where the attacker sends malware to the workers of the target company. That happened to SalesForce in 2007, when attackers managed to hijack SalesForce's customer database.

As long as they don't tell us exactly what went wrong, we don't know what the problem is. But never blame testing alone. Insecurity is an issue for the whole organisation, from bottom to top.
