Tuesday, March 25, 2014

Slides from Defect Sampling Presentation at ASTQB Conference.

Hi all,

Thanks to everyone at my presentation at the ASTQB Conference on defect sampling.

Here are my slides from today's presentation:


Here is the video from Gold Rush:



Wednesday, March 19, 2014

Webinar Video and Slides - What to Automate First?

Hi Folks,

Thanks for attending the webinar today on What to Automate First - The Low-Hanging Fruit of Test Automation. If you missed it, here is the video:


Here are the slides:


I hope you enjoy the content!

Tuesday, March 04, 2014

Is Software Testing Once Again a "Concept Sale?"

In marketing there is a thing known as a "concept sale." That is, before you can even discuss features or price, you have to make the case for WHY the item or service is needed.

Back in the day (1990 - 1995), test automation tools were a concept sale. People had to be shown in very clear and compelling terms what test automation was and how it was helpful. After a few years, people understood the need and value, so the questions started moving toward, "Which type of test tool(s) do we need?"

Based on what I see in terms of software quality overall, and what I hear in conversations with C-level execs, I get the feeling that software testing is going down the same path as other quality-related practices, such as Six Sigma, TQM, requirements engineering, and standards. Yes, I fear that software testing is quickly becoming a "concept sale."

What used to be considered an essential part of the software development life cycle is now in danger of becoming a secondary and optional part of it. It's certainly not that way in all companies, but I see a clear trend emerging: companies are willing to take huge risks by releasing largely untested software.

It seems that management knows testing is needed, but sees it as a luxury. The attitude seems to be that defects found "in the wild" can be fixed more cheaply and quickly than before release. Risk is secondary to release dates.

Does that mean that "testing is dead" after all? Well, it reminds me of the scene from Monty Python and the Holy Grail, where some people are trying to put a man on a cart, but he keeps saying "I'm not quite dead yet."

I have no easy answers on this. All I can say is: 1) keep showing your value by being a provider of information beyond defects and failures, 2) be professional and continue to work on your skills (get better at providing feedback, learn to notice details, become a creative thinker, etc.), and 3) keep delivering the message that testing is a lifecycle activity, integrated into the development process. (I do think it is significant that the concept of a software lifecycle is also a confusing and elusive topic for many companies.)

The particular method of developing software isn't the issue. The same thinking goes for architecture, requirements, design and all the other disciplines that make solid software. The magic is not in the method. The magic is to understand user needs, deliver to those needs in value-added ways, and know the solution actually works by testing it!

Don't assume that everyone understands why testing is needed.  Be prepared to show the value of your testing at a moment's notice. Dashboards are a good way to do that. In fact, I have a presentation on this posted on YouTube and Slideshare.net.

You never know when you'll have to make a concept sale for testing!

I would be interested in hearing your opinions on this.



Thursday, February 20, 2014

Tester to Developer Webinar Slides Posted

Hi everyone,

Thanks for attending the webinar today and for all the great conversation. Here are the slides:


The video will be posted very shortly.



Wednesday, February 12, 2014

Metrics for User Acceptance Testing

Recently I responded to a question at Quora.com about UAT metrics.

"What user acceptance testing metrics are most crucial to a business?"

Here is an expanded version of my answer, with some caveats.

The leading caveat is that you have to be very careful with metrics because they can drive the wrong behavior and decisions. It's like the unemployment rate. The government actually publishes several rates, each with different meanings and assumptions. The one we see on TV is usually the lowest one, which doesn't factor in the people who have given up looking for work. So the impression might be that the unemployment situation is getting better, while the reality is that a lot of people have left the workforce or may be under-employed.

Anyway, back to testing...

If we see metrics as items on a dashboard to help us drive the car (of testing and of projects), that's fine as long as we understand that WE have to drive the car and things happen that are not shown on the dashboard.

Since UAT is often an end-of-project activity, all eyes are on the numbers to know if the project can be deployed on time. So there may be an effort by some stakeholders to make the numbers look as good as possible, as opposed to reflecting reality.

With that said...

One metric I find very telling is how many defects are being found per day or week. You might think of this as the defect discovery velocity. These counts must be analyzed in terms of severity: 10 new minor defects may be more acceptable than 1 critical defect. As the deadline nears, the number of new critical defects gains even more importance.
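To make the idea concrete, here is a minimal sketch of tallying defect discovery velocity per week, split by severity. The defect records and severity labels are made up for illustration; in practice you would pull these from your defect tracker.

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (date found, severity).
defects = [
    (date(2014, 2, 3), "minor"),
    (date(2014, 2, 4), "critical"),
    (date(2014, 2, 10), "minor"),
    (date(2014, 2, 11), "minor"),
    (date(2014, 2, 12), "critical"),
]

# Tally new defects per ISO week, kept separate by severity so that
# one critical defect is never hidden among many minor ones.
velocity = Counter()
for found, severity in defects:
    week = found.isocalendar()[1]
    velocity[(week, severity)] += 1

for (week, severity), count in sorted(velocity.items()):
    print(f"week {week}: {count} new {severity} defect(s)")
```

The point of keeping severity in the key is exactly the caveat above: a flat "defects per week" number would show 2 then 3, while the severity-split view shows a critical defect arriving in each week.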

Another important metric is the number of resolved/unresolved defects. These must also be balanced by severity and should be reflected in the acceptance criteria. Be aware, though, that it is common (and not good) practice to reclassify critical defects as "moderate" to release the system on time. Also, keep in mind that you can "die the death of a thousand paper cuts." In other words, it's possible to have no critical issues, but many small issues that render the application useless.
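A simple way to express that balance is a release gate that checks severities separately, so the "thousand paper cuts" case blocks release even when no single defect is critical. The severity names and the minor-defect threshold here are assumptions for illustration; real values would come from your acceptance criteria.

```python
# Hypothetical open-defect snapshot: severity -> count still unresolved.
open_defects = {"critical": 0, "moderate": 3, "minor": 42}

MAX_MINOR = 25  # assumed threshold from the project's acceptance criteria

def release_ready(counts, max_minor=MAX_MINOR):
    """Gate that balances severity: zero critical defects is necessary
    but not sufficient; too many small issues also blocks release."""
    if counts.get("critical", 0) > 0:
        return False
    if counts.get("minor", 0) > max_minor:
        return False  # "death of a thousand paper cuts"
    return True

print(release_ready(open_defects))  # no critical defects, but too many minor ones
```

Making the gate explicit like this also makes reclassification games visible: moving a defect from "critical" to "moderate" changes the gate's answer, and that change is auditable.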

Acceptance criteria coverage is another key metric, identifying which criteria have and have not been tested. Of course, proceed with great care on this metric as well. Just because a criterion has been tested doesn't mean it was tested well, or even that it passed the test. In my Structured User Acceptance Testing course, we place much of the focus of testing on business processes, not just a list of acceptance criteria. That gives a much better idea of validation and whether or not the system will meet user needs in the real world.
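A coverage report along these lines might be sketched as follows. The criteria names and results are invented for the example; the key point is that "tested" and "passing" are tracked as two different numbers, reflecting the caveat that a tested criterion has not necessarily passed.

```python
# Hypothetical criteria status: criterion -> list of test results
# ("pass"/"fail"); an empty list means the criterion is untested.
criteria = {
    "UC-01 place order": ["pass", "pass"],
    "UC-02 cancel order": ["pass", "fail"],
    "UC-03 refund order": [],
}

tested = [c for c, results in criteria.items() if results]
passed = [c for c, results in criteria.items()
          if results and all(r == "pass" for r in results)]

print(f"tested:  {len(tested)}/{len(criteria)}")   # tested:  2/3
print(f"passing: {len(passed)}/{len(criteria)}")   # passing: 1/3
```

Even this two-number view says nothing about test quality, which is why the course ties coverage back to business processes rather than a checklist.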

Finally, stakeholder acceptance is the ultimate metric: how many of the original acceptance criteria have been formally accepted vs. not accepted? It may be the case that just one key issue holds up the entire project.

As far as business value is concerned, a business must see the value in UAT and in the system to be released. Here is an article I wrote that addresses the value of software quality: The Cost of Software Quality - A Powerful Tool to Show the Value of Software Quality.

I hope this helps and I would love to hear about any metrics for UAT you have found helpful.



Monday, February 10, 2014

Tester to Developer Ratio Survey

I have a new survey posted for the Tester to Developer Ratio topic. If you have 5 minutes to answer it, I would appreciate it!