Saturday 17 September 2011

Smoke Testing

Smoke testing is non-exhaustive software testing that covers the most crucial functionalities in the system to ascertain that they work as expected. The focus is on covering these identified crucial functionalities with very few sample test cases, without bothering with finer details. Smoke testing is effectively applied to build testing, where changed code is validated before that change is checked into the product's configuration library. In other words, the smoke test here is restricted to testing and validating the changed code rather than the entire product built from the configuration library, which includes that changed code. In this way, smoke tests are designed to confirm that code changes are implemented and work as expected, and that a change does not destabilize the entire build.
A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process. Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.
BENEFITS. This simple process produces several significant benefits.
It minimizes integration risk. One of the greatest risks that a team project faces is that, when the different team members combine or "integrate" the code they have been working on separately, the resulting composite code does not work well. Depending on how late in the project the incompatibility is discovered, debugging might take longer than it would have if integration had occurred earlier, program interfaces might have to be changed, or major parts of the system might have to be redesigned and reimplemented. In extreme cases, integration errors have caused projects to be cancelled. The daily build and smoke test process keeps integration errors small and manageable, and it prevents runaway integration problems.
It reduces the risk of low quality. Related to the risk of unsuccessful or problematic integration is the risk of low quality. By minimally smoke-testing all the code daily, quality problems are prevented from taking control of the project.
It supports easier defect diagnosis. When the product is built and tested every day, it's easy to pinpoint why the product is broken on any given day. If the product worked on Day 17 and is broken on Day 18, something that happened between the two builds broke the product.
It improves morale. Seeing a product work provides an incredible boost to morale. It almost doesn't matter what the product does. Developers can be excited just to see it display a rectangle! With daily builds, a bit more of the product works every day, and that keeps morale high.
Check for broken builds. For the daily-build process to work, the software that's built has to work. If the software isn't usable, the build is considered to be broken and fixing it becomes top priority.
Each project sets its own standard for what constitutes "breaking the build." The standard needs to set a quality level that's strict enough to keep showstopper defects out but lenient enough to disregard trivial defects, an undue attention to which could paralyze progress.
At a minimum, a "good" build should
·         compile all files, libraries, and other components successfully;
·         link all files, libraries, and other components successfully;
·         not contain any showstopper bugs that prevent the program from being launched or that make it hazardous to operate; and
·         pass the smoke test.
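For illustration, here is a minimal Python sketch of a build gate that automates these checks. The build and launch commands ("make compile", "make link", "./app --version") and the smoke_test.py script are hypothetical placeholders; a real project would substitute its own build system and smoke test.

import subprocess
import sys

def run(step, command):
    # Run one build step; any non-zero exit code means the build is broken.
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        print("BUILD BROKEN at step: " + step)
        sys.exit(1)

# Hypothetical commands covering the four "good build" criteria above.
run("compile", "make compile")              # compile all files, libraries, components
run("link", "make link")                    # link them into an executable
run("launch check", "./app --version")      # the program must at least start
run("smoke test", "python smoke_test.py")   # and pass the daily smoke test

print("Good build: stable enough for further testing.")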
Smoke test daily. The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly.

The smoke test must evolve as the system evolves. At first, the smoke test will probably test something simple, such as whether the system can say, "Hello, World." As the system develops, the smoke test will become more thorough. The first test might take a matter of seconds to run; as the system grows, the smoke test can grow to 30 minutes, an hour, or more.
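As a sketch of what such an evolving smoke test might look like, the Python script below simply checks that a handful of end-to-end paths respond at all. The base URL and the endpoint list are hypothetical; the idea is that each new capability adds one more entry to CHECKS.

import sys
import urllib.request

BASE_URL = "http://localhost:8080"   # hypothetical location of the daily build

CHECKS = [
    "/health",       # the early "Hello, World" stage: does the system start?
    "/login",        # added once authentication exists
    "/orders/new",   # added once order entry works end to end
]

def smoke_test():
    for path in CHECKS:
        try:
            # Any successful response is good enough; a smoke test is not exhaustive.
            with urllib.request.urlopen(BASE_URL + path, timeout=10):
                pass
        except OSError as err:   # covers connection failures and HTTP error statuses
            print("SMOKE TEST FAILED on " + path + ": " + str(err))
            return 1
    print("Smoke test passed: build is stable enough for deeper testing.")
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())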

Mutation Testing

In structured testing, effective testing means better test coverage and a larger number of reported errors, which demands larger test suites. However, it is difficult to measure the accuracy of a test suite. The method of mutation testing was introduced to measure the accuracy of test suites.
In mutation testing, we perform the following activities:
Step 1: We consider a program and its corresponding test suite, both assumed to be "perfect". That is, we select a test suite that covers all possible scenarios and a program that passes this test suite.
Step 2: We change the code of the program. This activity is called mutating, and the resulting program containing the change is referred to as a mutant.
Step 3: Execute the test suite on the mutant.
Step 4: Observe the result. If the change affects the program's results and the test suite detects it, the mutant is called a 'killed mutant'. If the program's results do not change and the test suite therefore cannot detect the mutation, the mutant is called an 'equivalent mutant'.
Step 5: Continue creating more mutants and running the test suite against them.
Step 6: Count the total number of mutants and the total number of killed mutants.
Step 7: Calculate the ratio of killed mutants to total mutants (the mutation score). This value indicates the accuracy of the test suite; a small sketch of the whole procedure follows these steps.
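The hand-rolled Python sketch below walks through these steps. Real mutation testing tools generate mutants automatically; the function and the two mutants here are invented purely for illustration.

def original(a, b):
    return a + b                 # the "perfect" program under test

def mutant_1(a, b):
    return a - b                 # mutation: '+' changed to '-'

def mutant_2(a, b):
    return b + a                 # mutation: operands swapped, behaves identically

def test_suite(func):
    # The test suite passes only if every expected result is produced.
    return func(2, 3) == 5 and func(0, 7) == 7

assert test_suite(original)      # Step 1: the program passes its test suite

mutants = [mutant_1, mutant_2]                            # Step 2: create mutants
killed = sum(1 for m in mutants if not test_suite(m))     # Steps 3-5: run the suite; a failing
                                                          # suite kills the mutant (mutant_2
                                                          # survives: an equivalent mutant)

print("killed", killed, "of", len(mutants), "mutants")    # Step 6
print("mutation score =", killed / len(mutants))          # Step 7: ratio of killed to total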

Ad Hoc Testing

This means impromptu testing using test cases that are developed without the support of project-related documents. The technique requires creative thinking and often draws on knowledge gleaned in the past about the programmers or the product. Ad hoc testing is less comprehensive and often inventive, requiring less time, effort and documentation.

Thursday 3 March 2011

Primary and Secondary Qualities

A requirements specification is a "representational" perceptual model built on the inner "ideas", "impressions" or "sense data" of an observer (a requirements study team member) and his inferences. As such, what is real and what is represented always differ, and there will likewise be differences between what is expressed and what is represented. If we could find a way to establish a direct link between the observer's inner world and the external object, we could have a better representation of requirements. However, the major hurdle to achieving this is the unreliability of our perceptions.
To address this unreliability, and thereby reduce the gap between the perceptual model of our inner ideas and the outer objects, we need to understand that there are two types of qualities, namely, Primary Quality (Absolute Quality) and Secondary Quality (Relative Quality).
The color of the User Interface is not a property of the screen itself but a product of the interaction of various factors: certain physical attributes of the screen such as power supply and resolution, the peculiarities of our own sensory system, and the environmental conditions prevailing at the time of observation. None of these properties belongs to the screen as such; they are extrinsic. Such properties are said to be "Secondary Qualities". These qualities vary with time and conditions, and as such they define relative quality.
At the same time, a screen has certain true properties which are intrinsic, such as its size and shape, which do not depend on the conditions under which the screen is observed or on the existence of the viewer. These are defined as the "Primary Qualities" of the screen. Primary qualities also help us explain, and develop an experience of, the secondary qualities. Unlike secondary qualities, the ideas we develop in our mind about primary qualities closely resemble the physical object itself. Thus, primary qualities of physical objects define absolute quality.

We can extend these concepts of primary and secondary qualities to requirements specification. While capturing requirements, always think about the primary qualities of the client's wants and needs. If you can identify such primary qualities, then you can have concrete requirements that are beyond skepticism. Requirements that can be represented using primary qualities are implementable and measurable. For example, requirements such as the accuracy of numbers, the length of a text field, the number of permissible users, and the number of transactions the system must support are primary qualities.

On the other hand, secondary qualities of requirements cannot be concrete, and as such they are a basis for skepticism. Secondary qualities cannot be implemented to perfection, nor can they be measured. For example, requirements such as "the system shall be user friendly", "the system shall have a recoverability feature" and "user interfaces shall be pleasing" are secondary qualities.
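To make the contrast concrete, here is a small, hypothetical Python sketch. The limits (a 50-character name field, 100 concurrent users) are invented for illustration, but because they are primary qualities they can be asserted directly, whereas a secondary quality such as "pleasing" cannot.

# Primary-quality requirements are measurable, so they translate into direct checks.
MAX_NAME_LENGTH = 50          # "the name field shall accept at most 50 characters"
MAX_CONCURRENT_USERS = 100    # "the system shall support 100 concurrent users"

def name_length_ok(name: str) -> bool:
    return len(name) <= MAX_NAME_LENGTH

def user_capacity_ok(active_users: int) -> bool:
    return active_users <= MAX_CONCURRENT_USERS

assert name_length_ok("Anita")
assert user_capacity_ok(87)

# A secondary-quality requirement such as "the user interface shall be pleasing"
# offers no intrinsic value to assert against, so no equivalent check can be written.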

Thus, if we focus on primary qualities while arriving at the requirements specification, our requirements will be concrete. Otherwise, the requirements will be a representation full of skepticism.

Wednesday 2 March 2011

Random thoughts that rule the world - My first posting

My friend Srini Kulkarni - Software Testing Generalist, Systems thinker, Skeptic (on Twitter @shrinik) - asked me while tweeting why I was not yet sharing my thoughts through a blog, and urged me to create one. Now I have one, Srini, and I have named it www.testexclusive.blogspot.com. I also promise that I will share my ideas. Here we start, and from the very point that Srini and I were discussing.


"Now put together these two things "quality" and "requirements" - what we get?" Shrini asked me.
My answer was - "Gap between them represents defects. Fulfillment results in acceptance"
Since we had space restrictions we could not go further in our tweeting too far. Here I continue.
Gap always exists and also,  in practice, acceptance also takes place may be with certain conditions which need to be addressed within a client specified time. And life goes on! However I am not very much concerned with this "business drama" but want to show some way to address them.  
As a step forward, here are some billion dollar questions- 
"Why there are always gaps between what was agreed upon (requirements) and what is delivered (product)?" and "Why we need to live with these defects and their impacts?"
My answer has two folds-
------------------------------
1. It is because requirements engineering is the least mature phase in software engineering, and thus the requirements specification that emerges from it is not concrete. We use such requirements as the basis for development and testing, so neither the development nor the testing will be good. Good output always requires good input. Thus what we require as a starting point is quality requirements, meaning requirements that are clear, complete, unambiguous, verifiable, and traceable.
In summary, we need to have quality requirements, and we need to measure the clarity, completeness, unambiguity, verifiability, and traceability of requirements in order to define their quality. But how? I will answer this in the near future.
------------------------------
2. The definition of quality cannot be generic. The meaning of quality is always specific to the project and product; in other words, quality is always relative. The meaning of quality is highly diversified, since stakeholders and their requirements and expectations differ and are often conflicting. Thus we need to arrive at a definition of quality specific to the product and project before we start the project. But, again, how? I will provide insights on this as well.
----------------------------
Now I look forward to your thoughts and points...