Anand Chakravarty, Microsoft
With the rapid proliferation of web-services, architected in many different flavors, offering users a diverse range of features, and supporting traffic at scales far higher than traditional applications, verifying server-side components has become an essential deliverable for many test organizations. The test scenarios for web-services, along with their priorities, execution, and verification, necessarily differ from those used for desktop/standalone applications. Efficiently verifying web-services requires an understanding of the key differences between these two classes of products and a test approach based on that understanding. The transition from the waterfall model of product development to the agile model also requires a recalibration of how test cases are defined, developed, and executed.
In this paper, we share our experiences in shipping a set of web-services that use large volumes of data and Artificial-Intelligence algorithms to translate text between human languages. These web-services are part of Microsoft’s Machine Translation products, shipped by an incubation team under Microsoft Research.
The areas of performance and stress testing yield bugs that present unique challenges in how they are found, investigated, fixed, and regression-tested. In the early days of testing services, much of the necessary infrastructure and tooling was written by individual teams and geared towards the perceived unique characteristics of each service. Over time, these have evolved into publicly available automation libraries that handle many commonly performed functions, such as results summarization and performance monitoring. While these tools are beneficial and add value to a test team's quality coverage, it is essential to use them in a targeted manner: the priority is the measurement of quality metrics, and the tools are a means to obtain them. Using the MSR Machine Translation system as a case study, we present a test approach based on real-world scenarios and practical solutions that have helped ship a set of web-services serving millions of users a day, with continuously improving quality and performance.
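To make "targeted" automation concrete, the following minimal sketch (in Python) illustrates the kind of latency probe with results summarization described above. The endpoint URL, query parameters, and sample count are hypothetical placeholders for illustration only; they are not the actual Machine Translation service API, nor the team's real tooling.

    import time
    import statistics
    import urllib.parse
    import urllib.request

    # Hypothetical endpoint and parameter names, for illustration only.
    ENDPOINT = "http://example.com/translate"

    def time_request(text, source_lang, target_lang):
        """Issue one translation request and return its latency in seconds."""
        query = urllib.parse.urlencode(
            {"text": text, "from": source_lang, "to": target_lang}
        )
        start = time.perf_counter()
        with urllib.request.urlopen(f"{ENDPOINT}?{query}", timeout=10) as resp:
            resp.read()  # drain the body so the full response time is measured
        return time.perf_counter() - start

    def summarize(latencies):
        """Reduce raw timings to the few metrics a test report actually needs."""
        ordered = sorted(latencies)
        return {
            "count": len(ordered),
            "median_s": statistics.median(ordered),
            "p95_s": ordered[int(0.95 * (len(ordered) - 1))],
            "max_s": ordered[-1],
        }

    if __name__ == "__main__":
        samples = [time_request("Hello, world", "en", "de") for _ in range(100)]
        print(summarize(samples))

The design point is that the harness reports only the metrics that matter to the test team (median, 95th percentile, maximum latency), keeping the quality metrics as the priority and the tool as merely the means of obtaining them.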
A big lesson from our experience has been the value of "keeping it simple", regardless of the complexity of the systems under test. A proper understanding of user scenarios, which allows test cases to be prioritized correctly, combined with focused automation, helps achieve the goals of software quality and agile development schedules, leading to successful products and passionate users.