Haven’t been to Riga yet? Want to attend a cheap test conference? Attending the 11th test conference in Riga this summer is free. Language: English. If you are interested in speaking, the call for papers has just been announced.
One of the topics: post-agile testing
I’ve been speaking at this conference for the last 8 years already. One thing intrigued me this year: the call for papers recommends (among others) the topic of post-agile testing. I… Continue
Added by Ainars Galvans on January 26, 2010 at 12:48 —
I just spilled my coffee on the table… what a shame. But wait a bit: I’m the only one in the office, and if I clean up before anyone notices, the shame would be all gone, wouldn’t it?
If we fix a bug in the sprint where it was introduced, then our customer doesn’t need to know about the bug, do they? We are lucky to have the JIRA defect reporting system with its ability to mark some bugs confidential, hiding them from customers, aren’t we?
There is but one problem when you hide… Continue
Added by Ainars Galvans on January 21, 2010 at 9:00 —
Management asks you to estimate testing. You try hard, but your estimate is declined because it significantly extends what they had in mind before asking you... Ever been in such a situation?
There is nothing wrong with that! The goal of test estimation isn’t always the estimate. “Test estimation” in a lot of cases is actually a negotiation about testing (service) quality – how much will we test; i.e. if you estimate twice as much, you know you have to test less carefully than… Continue
Added by Ainars Galvans on January 15, 2010 at 10:00 —
I’ve been skeptical about articles describing the special challenges of SOA performance testing. My last project showed I was wrong. There is something specific, though it is not directly related to SOA itself: the systems exchanging information must be emulated along with the user load. Some of them can create unpredictable load on your system. Besides, synchronization performance may vary a lot depending on the size of the delta between systems.
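The post doesn’t show this in code, but the point about peer systems can be sketched roughly as follows. Everything here is my own illustration: the function names, the numbers, and the simple burst model are made up, not taken from the project described.

```python
import random

def user_load(n_users, reqs_per_minute):
    """Requests per minute generated by the simulated users themselves."""
    return n_users * reqs_per_minute

def peer_system_load(base, burst_chance, burst_size, rng):
    """Requests per minute from an emulated peer system; unlike users, it may
    suddenly push a large synchronization delta onto the system under test."""
    return base + (burst_size if rng.random() < burst_chance else 0)

rng = random.Random(42)
total = user_load(n_users=100, reqs_per_minute=5) \
      + peer_system_load(base=200, burst_chance=0.3, burst_size=5000, rng=rng)
```

The point of the sketch: the load profile seen by the system is not the user load alone, and the peer-system component can dwarf it unpredictably, which is why those systems have to be emulated in the test.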
In general SOA performance testing… Continue
Added by Ainars Galvans on January 7, 2010 at 13:11 —
Blogging is my way of thinking about testing and learning more about testing. I’ve been wondering why I haven’t been blogging for almost a month. Is not learning a problem for me? Is not learning a problem for an Exploratory Tester?! Yes it is, because learning is part of Exploratory Testing. Read on for more of the conclusions I’ve reached today.
I’ve seen blogs which basically say “I’ve not been blogging because there is too much work to do”. I have just the same… Continue
Added by Ainars Galvans on December 11, 2009 at 13:00 —
Three years ago I wrote that for functional testing, excuses like “we can’t do good tests without detailed requirements” or “the code freeze came too late” don’t work any more.
But there is an excuse that still seems to work: “we don’t have the appropriate tools purchased” or “we don’t have time dedicated to test automation”. A lot of effort is put into manual regression tests and not enough into testing new… Continue
Added by Ainars Galvans on November 12, 2009 at 15:30 —
I’ve seen companies that differentiate salary based on a list of “responsibilities”. If you only execute written test cases, the role is junior and the salary is low. If you design tests, you are intermediate; if you are able to define the approach/strategy/plan, you are senior.
However, doing Exploratory Testing requires a single person to do everything. So everyone deserves the same salary, right?! Wrong!
What’s the difference between a junior and a senior exploratory tester?
The difference is skill, right? Now what difference does it make to the… Continue
Added by Ainars Galvans on November 10, 2009 at 11:21 —
No, not again! I was told by the customer’s management to provide more visibility into the test process. Recently I blogged a series on providing visibility into functional testing, but I was so busy with functional testing that I forgot about the non-functional side. I’m still working on ways to provide that visibility, so today I only wanted to blog about how essential it is to be able to provide it.
Who is your Dr Watson?
We learn a lot about Sherlock Holmes from… Continue
Added by Ainars Galvans on October 5, 2009 at 8:40 —
This is yet another attempt of mine to understand how to manage Exploratory Testing. I try to learn from how I would “manage” exploring a territory.
This blog contains details of managing Requirements-Based Exploratory Testing in my current project. I use “test cases” (and the tool is JIRA + Confluence wiki). I’m happy to have customized Test Case… Continue
Added by Ainars Galvans on September 1, 2009 at 7:00 —
So as a tester I’m a big proponent of Exploratory Testing. But as a test manager I know it’s hard to manage. Even harder to describe what’s been done. Yet harder: to understand and describe what’s left. The hardest: to do it so that both developers and customers would understand. I don’t fool myself anymore hoping they understand the QA language. I know everyone has their own interpretation of the test case and bug statistics I’m providing. I’ve realized that my real problem is translation between… Continue
Added by Ainars Galvans on August 24, 2009 at 8:30 —
In my last blog I promised to describe my test management approach: an alternative to the well-known best practice of managing it using “test cases”. I try to keep promises, especially those I give to myself. However, I realized it’s quite a topic, so I split it into several parts. In this blog I’ll introduce my approach by metaphor.
Metaphor: shopping list
I usually take a list when going to the supermarket. The list is a shopping requirement: it tells me what I must put into my… Continue
Added by Ainars Galvans on August 13, 2009 at 7:06 —
Why do we write test cases? To manage testing? Yes, I used to believe it is easier to manage defined test cases than people. I’ve developed a different approach; however, you will have to wait for my next blog to read about it. In this blog I want to share my (2 years of) experience with managing test cases in JIRA.
Hint 1: JIRA is for management, not for content
First of all, we don’t store test case content (sometimes referred to as test steps, though I don’t like this… Continue
Added by Ainars Galvans on July 20, 2009 at 7:30 —
It seems to me that the word “feedback” was adopted by the software development field so quickly that it became a buzzword, with the typical issue of being used loosely or carelessly. Am I wrong? Analyzing cases where the word was used turned out to be so exciting that I want to share my observations.
Why do I do so? I don’t care about term usage; I care that we are careless about the bigger story behind the term. And there is a big story. Are you able to give and receive meaningful feedback? I’m still… Continue
Added by Ainars Galvans on June 30, 2009 at 12:38 —
writes that we (software testers) lack a body of knowledge that is passed from wizard to apprentice. I may agree with that. However, knowledge (spell books) alone is not enough.
It is dangerous, very dangerous, to assume that it is enough for wizards to write the spell books so that apprentices could read the spells from them.
That’s actually how a lot stories in fantasy… Continue
Added by Ainars Galvans on June 26, 2009 at 7:23 —
I associate verification and validation with the process of comparing documented requirements with code. However, if we want to create software that users are willing to pay for (instead of simply protecting ourselves from being sued for low-quality software), then perhaps testers should do something more… Let me tell you what I do and why.
The gap between requirements and user needs
I believe everyone has seen the tire… Continue
Added by Ainars Galvans on June 17, 2009 at 12:30 —
10th Annual Conference for Software Testing Professionals
in Riga, Latvia, is a somewhat unique event, as it is free of charge (venue, proceedings, coffee breaks, handouts), thanks to the sponsors and enthusiasts organizing it! This year I tried to speak English, as many other Latvians did. We hope to attract more testers from the Baltic region in the future.…
Added by Ainars Galvans on June 8, 2009 at 9:00 —
I have been wondering for years why some tester cultures keep copy-pasting text from requirements into their test cases (sometimes adding the words “validate that” or something like that), and later spend a lot of time updating the test cases because the requirements keep changing. I have a better idea, which seems so natural to me that I can’t understand: either this has already been discovered by other people (any links, please) or I’m just so blind that I don’t see the potential issues with such an approach…… Continue
Added by Ainars Galvans on June 1, 2009 at 6:30 —
I had trouble performance-testing GWT. Some of the problems were solved by correcting the char-set and removing the encoding, i.e. turning off data zipping. So I’m successfully testing GWT using The Grinder now... See details below (they are described for The Grinder, but I believe the same could apply to JMeter and other tools).
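As a minimal sketch of the char-set and zipping fixes mentioned above: the two headers below are standard HTTP, and `text/x-gwt-rpc; charset=utf-8` is the content type GWT-RPC uses, but the helper function itself is my own construction, not code from the post.

```python
def gwt_rpc_headers():
    """Headers for recording/replaying GWT-RPC calls in a load-test script."""
    return {
        # GWT-RPC payloads are plain text with an explicit UTF-8 charset;
        # getting the charset wrong corrupts the recorded payload.
        "Content-Type": "text/x-gwt-rpc; charset=utf-8",
        # Ask the server not to gzip the response, so the script can read
        # and correlate values in the body as plain text.
        "Accept-Encoding": "identity",
    }

headers = gwt_rpc_headers()
```

In The Grinder these would be set as `NVPair` headers on the `HTTPRequest`; the dictionary form above is tool-neutral.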
Moreover, I want to share the story of why client-side performance becomes an important part of performance testing in the case of GWT.
What’s so special about… Continue
Added by Ainars Galvans on May 29, 2009 at 9:00 —
As I’ve just discussed, when creating performance test scripts we sometimes need to do what is called “correlation” in LoadRunner. LoadRunner has a nice feature where you can define a correlation rule and it will do the magic for you during recording. In other tools you may need to do it manually. I’ve attached a sample Grinder script (Python) that does the job.
In this case I know that… Continue
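The attached script isn’t reproduced here, but the core of manual correlation usually boils down to a boundary extraction like the sketch below. The `jsessionid` parameter and the sample HTML are made up for illustration; real left/right boundaries come from your own recorded traffic.

```python
import re

def correlate(response_body, left, right):
    """Return the text between the `left` and `right` boundaries, or None."""
    match = re.search(re.escape(left) + r"(.*?)" + re.escape(right), response_body)
    return match.group(1) if match else None

# Extract the dynamic session id from one response...
response = '<a href="/app;jsessionid=ABC123XYZ?page=1">next</a>'
session_id = correlate(response, "jsessionid=", "?")

# ...and inject it into the next request instead of the recorded value.
next_request = f"/app;jsessionid={session_id}?page=2"
```

This is essentially what LoadRunner’s correlation rules generate for you: a left boundary, a right boundary, and a substitution into subsequent requests.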
Added by Ainars Galvans on May 27, 2009 at 8:09 —
No matter which performance test tool you use, if it has capture-playback capabilities you may run into a problem which in LoadRunner is solved by a feature called correlation. It is used to process things like session ids, viewstate, object ids, etc. A colleague asked me why a script should correlate anything, so I decided to describe the basics first.
Have you ever read a discussion about Stateful vs Stateless Session (beans)? Do you know why cookies were introduced? Basically the problem… Continue
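The basics can be shown with a toy model of my own (not from the post): a stateful server hands out a fresh session id on every login, so an id hard-coded into the script during recording is rejected on playback, while an id extracted from the current run is accepted.

```python
import itertools

class StatefulServer:
    """Toy server: every login issues a fresh session id; stale ids are rejected."""
    _ids = itertools.count(1)

    def __init__(self):
        self.active = set()

    def login(self):
        sid = f"session-{next(self._ids)}"
        self.active.add(sid)
        return sid

    def request(self, sid):
        return "200 OK" if sid in self.active else "401 stale session"

recording_run = StatefulServer()
recorded_sid = recording_run.login()         # id captured into the script while recording

playback_run = StatefulServer()              # a later run starts with fresh server state
replayed = playback_run.request(recorded_sid)                 # hard-coded id is rejected
correlated = playback_run.request(playback_run.login())       # id taken from this run works
```

That rejection is exactly why capture-playback scripts need correlation: every dynamic value the server generates per session must be re-extracted at run time rather than replayed from the recording.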
Added by Ainars Galvans on May 27, 2009 at 7:30 —