Wednesday, April 6, 2016

How much does Test Automation Outsourcing Save?

It Depends on the Opportunity...

As a software professional who's studying finance for my MBA, I thought I'd try to apply what I've learned to building a financial model of when to outsource test automation.  I've worked in the past with companies that outsourced heavily, but I've also seen many companies bring their operations back onshore.  One thing is common: no one really knew how much they were saving by sending work offshore, or by bringing it back onshore.

Assumptions

To perform this analysis, I used assumptions about marketplace rates and about the productivity difference between onshore and offshore teams.  From a 2009 study by Compass Consulting (quoted on sourcingfocus), I took two key numbers: a 60% drop in productivity and a 20% increase in management overhead.  I've also assumed a 23.1% time-to-market difference for outsourced IT projects.

Calculations

Using Glassdoor national averages (2016), I got US salary figures for SDETs and for the project managers who need to absorb the extra management overhead.

Cost Savings                                     Outsourced       In House US
  SDET Salary (based on $20/hr outsourced
    IT agency rates)                             $40,000.00       $109,000.00
  FICA / Employment tax overhead                                  20%
  Productivity Difference                        60%
  Project Mgmt Salary Onshore                    $91,000.00
  Increased Costs due to Management Overhead     20%
  Adjusted Annual Cost                           $118,200.00      $130,800.00
  Total Cost Savings                             $12,600.00

As you can see, the average savings is roughly $12k per SDET per year, for roughly the same quantity of output.
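
For transparency, here's a minimal sketch of the arithmetic behind the table, in C#, using the assumptions exactly as stated above:

using System;

class OutsourcingSavings
{
    static void Main()
    {
        const double outsourcedSdetSalary = 40000;   // based on $20/hr agency rates
        const double inHouseSdetSalary    = 109000;  // Glassdoor US average
        const double pmSalaryOnshore      = 91000;   // Glassdoor US average
        const double productivityDrop     = 0.60;    // Compass Consulting
        const double mgmtOverheadIncrease = 0.20;    // Compass Consulting
        const double employmentTaxRate    = 0.20;    // FICA / employment tax

        // A 60% productivity drop means 1 / (1 - 0.60) = 2.5x the spend
        // to get the same output from the outsourced team.
        double outsourcedCost = outsourcedSdetSalary / (1 - productivityDrop)
                              + pmSalaryOnshore * mgmtOverheadIncrease;    // $118,200
        double inHouseCost = inHouseSdetSalary * (1 + employmentTaxRate);  // $130,800

        Console.WriteLine("Savings per SDET: {0:C0}", inHouseCost - outsourcedCost); // $12,600
    }
}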

The next step of the calculation is the opportunity cost of the 23.1% slower time to market.  To calculate it, I estimated the percentage of total development time taken by testing, then ran a sensitivity analysis to find the break-even enterprise value at which the opportunity cost of the delay equals the cost savings.

Opportunity Costs:
  Percent of project time taken by testing              30%
  Speed difference due to offshoring                    23.1%
  Percent time delay to market (30% x 23.1%)            7%
  Enterprise Value of the Feature                       $186,490.85
  Expected deferred sales from feature delay            $12,923.82
  Risk-free discount rate (based on 10yr Treasury)      2.57%
  Time Value of Money of Delayed Sales                  $12,600.00
  Estimated Savings (cost savings less opportunity cost)$0.00
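
The break-even enterprise value can also be computed in closed form: it's the value at which the discounted deferred sales equal the $12,600 cost savings.  A minimal sketch of that calculation:

using System;

class BreakEven
{
    static void Main()
    {
        const double costSavings   = 12600.0;  // per SDET, from the table above
        const double testingShare  = 0.30;     // testing's share of project time
        const double offshoreDelay = 0.231;    // 23.1% slower time to market
        const double riskFreeRate  = 0.0257;   // based on 10yr Treasury

        double delayFraction = testingShare * offshoreDelay;  // ~6.93%

        // Solve: enterpriseValue * delayFraction / (1 + riskFreeRate) = costSavings
        double breakEvenValue = costSavings * (1 + riskFreeRate) / delayFraction;

        Console.WriteLine("Break-even enterprise value: {0:C2}", breakEvenValue); // ~$186,490
    }
}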

Conclusion

So the answer to the question: for projects valued under roughly $186k per SDET needed, you should outsource; for projects valued above that, keep the work in house.

I'd love to hear your thoughts and suggestions on how I can improve these calculations.

References:

Outsourcing all IT can lead to productivity drop, says Compass. (n.d.). sourcingfocus.com. Retrieved April 06, 2016, from http://www.sourcingfocus.com/site/newsitem/outsourcing_all_it_can_lead_to_productivity_drop_says_compass/

Fariñas, J. C., López, A., & Martín-Marcos, A. (2011, July). Offshoring, domestic outsourcing and productivity: A production function approach. Retrieved from http://www.leuphana.de/fileadmin/user_upload/PERSONALPAGES/Fakultaet_2/Wagner_Joachim/files/Farinas_Lopez_Martin-Marcos.pdf

Thursday, March 24, 2016

Three Amigos Meeting - Getting QA, Dev, and Business on the same page.


Back in the days when I worked in RUP (Rational Unified Process), we were taught to do the thinking up front, because mistakes become costlier the further down the line they're found.  As many shops have moved toward Agile methodologies, the up-front thinking has been reduced to, at best, example scenarios, and at worst, one-liners describing what the product owner wanted.

Recently, I came across the Three Amigos meeting.  It promises to be a simple way of getting BAs, QA, and Devs on the same page.

There are some things I like about it:

  • It is easy to implement, and it's a simple process.  Just tack a one-hour time-boxed meeting onto the calendar sometime before sprint planning for the three parties to sit together.
  • It promotes better understanding between the three parties.  Things like technical difficulties, edge cases, and business drivers are often lost in the simple tickets being cranked out.
  • It gives a good structure and approach for doing more up-front homework before code is written.  Generally, when I see code that doesn't conform to the realities of the business needs, I'll see developers try to keep the existing code structure while bending over backwards to make things work.  For example, a schema might be designed, and then, at the point the mistake is realized, a developer may write additional adapters to massage the data into the desired form before it reaches the database.
Here are a few things I don't like about it:
  • Because this works in the context of a single agile team, while a company may have many, a lot of useful information is lost to those outside the agile team but within the greater product / service team.
  • In the information age, with modern frameworks, the time actually spent writing code is small compared to the time spent on research and learning how to approach the problem.  A single one-hour workshop every two weeks is not enough when putting up-front effort into design matters more than simply typing code.
Given these shortfalls, I still think this is a good idea.  Most shops have very little interaction between Devs, QA, and BAs.  And especially when Devs see only a small part of the entire product, these short workshops can pay off big by promoting business-level and quality-level understanding.

Wednesday, October 7, 2015

Don't collapse all programming concepts into Selenium.

As a test developer using Selenium, you'll often need to access things that are outside the browser's control.  For example,

  • How do I open an excel file?
  • How do I write test results to file?
  • How do I run my test in Jenkins?
  • etc...
Beginners might search Google for "How do I open Excel files in Selenium?", but the results might not match expectations, or there might be no results at all.

Let's try to understand the test ecosystem and architecture so you can better phrase your search queries next time you need to do something.

How it all fits together

In your Selenium WebDriver or RC test, you have these three main concepts:
  1. Your programming language - your basic building block.
  2. Your test framework - the context for how your tests run, report results, and get executed.
  3. Selenium WebDriver - the interface your test scripts/programs use to access the browser's functions; more specifically, to access web elements within the browser.

Asking your question

Now when you try to ask the question, you'll want to go through these backwards.

Let's take the example, "How do I read test data from an Excel file?".

Going through the steps, we ask ourselves the following questions:
  1. Are we interacting with a web element or the browser at this point? No, so we don't need to search for a Selenium/WebDriver solution to this problem.
  2. Are we trying to interact with the test environment? Yes, we need to feed in data.
  3. Is there something I'm trying to do that's not part of Selenium/WebDriver or the test framework? Possibly; we may need to figure out how to open the file.
Given that 2 and 3 are yes, you'll probably want to search along the lines of:
  • How do I feed Excel data into (insert your test framework here)
  • Reading Excel data into (insert your test framework here)
  • etc...
If that fails, fall back to the language and break the problem down into smaller pieces.  Here we'll want to read the Excel file into some sort of data structure, then feed that data structure into the test framework (see the sketch after this list):
  • How do I open an Excel file in C#, Java, etc...
  • How can I read an Excel file in ...
  • How do I feed array data into (insert test framework here)
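
To make that breakdown concrete, here's a minimal sketch in C# with NUnit 3.  The file path, column layout, and assertions are made-up assumptions, and the spreadsheet is assumed to have been exported to CSV so the standard library can read it; a library like EPPlus or ClosedXML could read .xlsx files directly.

using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

public class UserFormTests
{
    // Step 1 (language level): read the file into a plain data structure.
    public static IEnumerable<TestCaseData> UserRows()
    {
        return File.ReadLines("testdata/users.csv")
            .Skip(1)                               // skip the header row
            .Select(line => line.Split(','))
            .Select(cols => new TestCaseData(cols[0], cols[1]));
    }

    // Step 2 (framework level): feed that data structure into the test framework.
    [TestCaseSource(nameof(UserRows))]
    public void FormShouldAcceptUser(string username, string password)
    {
        // Step 3 (Selenium level) would start here, driving the browser
        // with the data.  For this sketch, just assert the data arrived.
        Assert.That(username, Is.Not.Empty);
        Assert.That(password, Is.Not.Empty);
    }
}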

Conclusions

When you find yourself needing a solution, remember to first figure out where in your test architecture the solution belongs, and then break the problem down.

Tuesday, October 6, 2015

Thoughts on C# and reimplementing the Abstract PageObject Pattern with C#

Quick Thoughts on C#

I've started a new job in a C#/.NET shop after working in Python for a year and a half.  One of the things I'm doing now is rewriting in C# some of the framework pieces I've written over the years.  There are some things I like and don't like about C# versus Python.

Likes

  • I like strongly typed languages in that the tools for linting, syntax checking, and refactoring tend to be much better.


Don't likes

  • Closures are a bit trickier to deal with.  In Python, it was very easy to go crazy with lambdas, which I did very liberally, especially in wait and exception-handling wrappers.
  • Decorators don't seem as straightforward to replicate.
  • REPL - in scripting languages you can evaluate nearly anything in the REPL.  In C# you can do some things, but, say, importing new classes gets tricky.

Re-implementing Abstract PageObject pattern in C#

Thought I'd share this, since I've reimplemented it in C#.  The original implementation I did in Java can be found here: http://engineeringquality.blogspot.com/2012/06/creating-generic-advanced-page-factory.html.

For C#, most of it follows the Java version very closely.

using System;
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;
 
namespace web
{
    internal class AbstractPageFactory
    {
        /// <summary>
        /// An advanced version of Selenium's PageFactory.InitElements that uses reflection to infer subtypes.
        /// </summary>
        /// <typeparam name="T">PageObject base class or interface</typeparam>
        /// <param name="searchContext">Selenium WebDriver or WebElement</param>
        /// <returns>An initialized instance of the first concrete subtype of T that constructs successfully.</returns>
        public static T InitElements<T>(ISearchContext searchContext)
        {
            var type = typeof(T);
            var subtypes = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(s => s.GetTypes())
                .Where(type.IsAssignableFrom);
 
            Exception lastException = null;
            foreach (var subtype in subtypes.Where(subtype => subtype.IsClass).Where(subtype => !subtype.IsAbstract))
            {
                try
                {
                    var pageObj = Activator.CreateInstance(subtype, searchContext);
                    PageFactory.InitElements(searchContext, pageObj);
 
                    return (T)pageObj;
                }
                catch (Exception e)
                {
                    lastException = e;
                }
            }
 
            if (lastException != null)
                throw new NoSuchWindowException("Could not find a matching PageObject. Last exception:" + lastException);
 
            throw new NoSuchWindowException("No matching PageObjects found.");
        }
    }
}
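
For completeness, here's how it might be invoked.  This usage is a sketch: ILoginPage and LogIn are hypothetical names, and each concrete page's constructor is assumed to throw when its landmark elements are missing, so only the page actually on screen constructs successfully.

// Requires OpenQA.Selenium and OpenQA.Selenium.Chrome.
IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://example.com/login");

// Returns whichever concrete ILoginPage implementation constructs without throwing.
ILoginPage loginPage = AbstractPageFactory.InitElements<ILoginPage>(driver);
loginPage.LogIn("user", "password");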

Thoughts on what should be automated

It's been traditional thinking to...

  1. Keep 1-to-1 parity between manual tests and automated tests.
  2. Try to have as much requirements and test case coverage as possible.
  3. Ensure automated tests are reproducible manually.
But over the years, I've found that having thousands of test cases tends to cause even more problems.

1. Test code is harder to maintain than production code.

Besides technical aspects, like having to bend over backwards to accommodate changes in the software under test, there's also the lack of support from business and development for test code.  For example, very few shops will hold up a release or delay development if there are broken or flaky tests.  The default is generally to comment those tests out.

2. The need for quicker feedback.

What tends to happen with thousands of test cases is that many of them get moved into some sort of nightly or weekend stage.  But instead of bugs being caught the moment they're checked in, they're caught much later.  What ends up happening is you first have to investigate to make sure the issue isn't a flaky test, then reproduce the failure across different environments to make sure it's not an existing bug.  By the time you've gone through all that work, the developer has probably already moved on to the next task or story, and even worse, some other developer has probably already pulled or rebased the bad code into their branch.

3. When writing many tests, you may not have spent as much effort writing them for debuggability.

Tests are generally hard to debug.  Any number of things can cause them to fail, and from the test's point of view all it sees is an unexpected element appearing or an expected element missing.  But the cause could be a slow render, a misstep in an earlier setup, an upgrade window getting in the way of the test, a tree falling on the data center, etc...


Solution? 

  1. Write fewer tests.
  2. Focus on performing complete workflows over single actions.  Testing single actions in isolation leads to the number of tests creeping up; testing whole workflows keeps the suite at a smoke-test level.
  3. Write code with the intention of debugging every single line.  Pay close attention to your logging and exception handling, and make sure a failure at any point returns a reason (see the sketch below).
  4. Use good programming patterns and test architecture to make refactoring easier.
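
On point 3, here's a minimal sketch of a wait wrapper that attaches context to a failure, so a timeout tells you what was being waited for and why.  The helper name and default timeout are my own; the Selenium calls are standard WebDriverWait usage.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class Waits
{
    // Wraps a wait so a timeout reports what was being waited for and why,
    // instead of surfacing a bare timeout with no context.
    public static IWebElement WaitFor(IWebDriver driver, By locator,
                                      string reason, int timeoutSeconds = 10)
    {
        try
        {
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
            return wait.Until(d => d.FindElement(locator));
        }
        catch (WebDriverTimeoutException e)
        {
            // Re-throw with the locator and the caller's intent attached.
            throw new WebDriverTimeoutException(
                string.Format("Timed out after {0}s waiting for {1} ({2})",
                              timeoutSeconds, locator, reason), e);
        }
    }
}

A call like Waits.WaitFor(driver, By.Id("submit"), "submit button after saving the form") then fails with a message that points straight at the step that broke.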

Thursday, June 11, 2015

My Favorite Non Automation Tools

Every once in a while, I'll share some of the tools I use beyond just test automation.  In this post, I'm going to share five tools I use on an everyday basis.

Jing by Techsmith


Jing is a very handy tool for capturing screenshots and videos.  I've been using it to capture and annotate screenshots to attach to my bug reports, and occasionally to record short videos demonstrating bugs I've observed.  In fact, many of the screenshots in this post were taken using Jing.

PostMan Rest Client


Postman is one of my favorite REST clients.  What I love about Postman is that it's a Chrome extension web app.  Since I'm usually debugging web apps in Chrome, I like that I can launch a REST client in a new browser tab.  I use it to debug Ajax requests, switching between the Chrome developer tools and Postman to copy over request parameters and replay the requests in Postman.

1Password by Agilebits


There are many password managers out there, but I've been using 1Password for some time now.  What I like about it: it has plugins for various browsers, it offers a service that warns you to change your password if the corresponding site is known to have been compromised, and you can sync your password store to Dropbox (and a few other cloud services), which makes it easy to use 1Password across different computers.

ng-Inspector


I'm currently working in an AngularJS shop.  I find it useful to examine the data models behind the scenes when I'm troubleshooting a bug in our web apps.  What I love about ng-Inspector is that it's very lightweight, easy to get to, and does a good job of displaying the data in the $scope of the currently loaded AngularJS web app.

ConEmu


ConEmu is a good way to organize all your console windows.  Instead of having separate windows for your cmd console, git shell, PowerShell, and Developer Command Prompt for Visual Studio, ConEmu lets you unify them under one window, in separate tabs, keeping your workspace neat and organized.  Beyond organization, it gives you some shortcuts, such as easily launching new consoles and searching through the text in existing open consoles.

Friday, May 22, 2015

Avoid typos in your [Category] attributes by extending CategoryAttribute

NUnit categories are a great way to categorize your tests so you can stage your test runs.  For example, I can have Smoke and Regression sets of test cases, and additionally tag them by feature as well.  That way, when I run my NUnit tests, I can filter for the set I want and run just the tests for the features I'm testing.

The Problem

However, one problem with the Category attribute is that if you're on a team where each person types the same category strings over and over again, sooner or later someone will make a typo that the compiler cannot catch.

   public class SomeTest  
   {  
     [Category("TestSuite.Smoke")]  
     [Category("Feature.UserInfoForm")]  
     [Category("Speed.Fast")]  
     [Description("Test survey form fields are sticky.")]  
     public void PartiallySubmittedFormShouldRemainSticky()  
       ....  

Imagine having hundreds of tests, where that same string gets typed over and over again.

Possible Solution

One solution is to use constants contained in a static class (a sketch of the constants class follows the snippet):

   public class SomeTest  
   {  
     [Category(TestType.Category.Smoke)]  
     [Category(TestType.Duration.Short)]  
     [Category(TestType.Stability.Stable)]  
     [Description("Test survey form fields are sticky.")]   
     public void PartiallySubmittedFormShouldRemainSticky()  
       ....  
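
The constants class itself isn't shown above; a minimal sketch of what it might look like (the names and string values are assumptions):

   public static class TestType  
   {  
     // Each nested class groups one dimension of categorization,  
     // so the compiler catches any typo at the usage site.  
     public static class Category  
     {  
       public const string Smoke = "TestType.Category.Smoke";  
       public const string Regression = "TestType.Category.Regression";  
     }  
     public static class Duration  
     {  
       public const string Short = "TestType.Duration.Short";  
     }  
     public static class Stability  
     {  
       public const string Stable = "TestType.Stability.Stable";  
     }  
   }  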

A More Elegant Way

We can extend CategoryAttribute like this, and use it in a more expressive way that's easier on the eyes:
   public static class TestCategory  
   {  
     public class SmokeAttribute : CategoryAttribute  
     {  
       public SmokeAttribute() : base("TestCategory.Smoke")  
       {  
       }  
     }  
     public class RegressionAttribute : CategoryAttribute  
     {  
       public RegressionAttribute()  
         : base("TestCategory.Regression")  
       {  
       }  
     }  
   }  

It looks a bit cleaner in usage, and makes your test code look less improvised.  (The Stability and Speed attributes below would be defined the same way as the TestCategory attributes above.)

   public class SomeTest  
   {  
     [TestCategory.Smoke]  
     [Stability.Stable]  
     [Speed.Short]  
     [Description("Test survey form fields are sticky.")]  
     public void PartiallySubmittedFormShouldRemainSticky()  
       ....