Friday, April 11, 2008

Web Terminologies: Useful for web application testers

This article will help you learn basic web terminology. While testing web applications it is essential to know these web technologies; this knowledge increases test coverage and the capabilities of a web application tester.
This web terminology article is compiled by Meenakshi M. She works as a Test Engineer and has 3+ years of experience in manual and automation (QTP) testing.
This article basically covers following terminologies:
What is: Internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web servers, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Thick client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes


Web technology Guide
If you are working on web application testing, you should be aware of the different web terminologies. This page will help you learn the basic and advanced web terms that will definitely help you test your web projects.


•Internet
–A global network connecting millions of computers.

•World Wide Web (the Web)
–An information-sharing model built on top of the Internet. It uses the HTTP protocol and browsers (such as Internet Explorer) to access Web pages formatted in HTML and linked via hyperlinks. The Web is only a subset of the Internet; other uses of the Internet include email (via SMTP), Usenet, instant messaging and file transfer (via FTP).

•URL (Uniform Resource Locator)
–The address of documents and other content on the Web. A URL consists of a protocol, a domain and a file: the protocol can be HTTP, FTP, Telnet, News, etc.; the domain name is the DNS name of the server; and the file can be a static HTML page, DOC, JPEG, etc. In other words, URLs are strings that uniquely identify resources on the Internet.
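As a quick illustration, Python's standard `urllib.parse` module splits a URL into exactly these parts (the URL below is a made-up example):

```python
from urllib.parse import urlparse

# Split a URL into its protocol, domain and file parts.
url = "http://www.example.com/docs/index.html"
parts = urlparse(url)
print(parts.scheme)   # the protocol: "http"
print(parts.netloc)   # the domain (DNS name of the server): "www.example.com"
print(parts.path)     # the file: "/docs/index.html"
```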

•TCP/IP
–The protocol suite used to send data over the Internet. TCP/IP consists of four layers: the Application layer, Transport layer, Network layer and Link layer.


Internet Protocols:
Application Layer - DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, BitTorrent, RTP, rlogin.

Transport Layer - TCP, UDP, DCCP, SCTP, IL, RUDP.

Network Layer - IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP.

Link Layer - Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, DTM, Frame Relay, SMDS.

•TCP (Transmission Control Protocol)
–Enables two devices to establish a connection and exchange data.

–In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol below it, and an application above it. Applications often need reliable pipe-like connections to each other,
whereas the Internet Protocol does not provide such streams, but rather only unreliable packets. TCP does the task of the transport layer in the simplified OSI model of computer networks.

–It is one of the core protocols of the Internet protocol suite. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange data in packets. The protocol guarantees reliable and in-order delivery of data from sender to receiver. TCP also distinguishes data for multiple concurrent applications (e.g. a Web server and an e-mail server) running on the same host.
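A minimal sketch of such a connection, using Python's standard `socket` module over the loopback interface (the payload and one-shot echo server are invented for the demo):

```python
import socket
import threading

# Loopback demo of a TCP connection: a server accepts one connection
# and echoes whatever bytes it receives back to the client.
def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)          # TCP delivers these bytes reliably, in order

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,)).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"hello tcp")
reply = client.recv(1024)
client.close()
server.close()
print(reply)   # b'hello tcp'
```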

•IP

–Specifies the format of data packets and the addressing scheme. The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network layer protocol in the Internet protocol suite. The two main aspects of IP are addressing and routing. Addressing refers to how end hosts are assigned IP addresses; IP routing is performed by all hosts, but most importantly by internetwork routers.

•IP Address

–A unique number assigned to each connected device. Often assigned dynamically to users by an ISP on a session-by-session basis - a dynamic IP address. Increasingly becoming dedicated, particularly with always-on broadband connections - a static IP address.
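Python's standard `ipaddress` module can classify such addresses (the address below is just an example from a private range):

```python
import ipaddress

# Classify an example IPv4 address (a made-up private-range value).
ip = ipaddress.ip_address("192.168.1.10")
print(ip.version)      # 4
print(ip.is_private)   # True: reserved for private networks, not the public Internet
```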

•Packet
–A portion of a message sent over a TCP/IP network. It contains the content and the destination address.
•HTTP (Hypertext Transfer Protocol)
–Underlying protocol of the World Wide Web. Defines how messages are formatted and transmitted over a TCP/IP network for Web sites. Defines what actions Web servers and Web browsers take in
response to various commands.
–HTTP is stateless. The advantage of a stateless protocol is that hosts don't need to retain information about users between requests, but this forces the use of alternative methods for maintaining users' state - for example, when a host would like to customize content for a user who has visited before. The common method for solving this problem is sending and requesting cookies. Other methods include session control, hidden variables, etc.

–Example: when you enter a URL in your browser, an HTTP command is sent to the Web server telling it to fetch and transmit the requested Web page. The main HTTP methods are:

oGET: Requests a representation of the specified resource. By far the most common method used on the Web today.

oHEAD: Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers without having to transport the entire content.

oPOST: Submits user data (e.g. from an HTML form) to the identified resource. The data is included in the body of the request.

oPUT: Uploads a representation of the specified resource.

oDELETE: Deletes the specified resource (rarely implemented).

oTRACE: Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.

oOPTIONS: Returns the HTTP methods that the server supports. This can be used to check the functionality of a web server.

oCONNECT: For use with a proxy that can change to being an SSL tunnel.
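The difference between GET and HEAD can be seen with Python's standard `http.client` module against a tiny throwaway local server (the handler and page content below are invented for the demo):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server so the demo needs no network access.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def _send_headers(self, body):
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
    def do_GET(self):                       # GET: headers plus the body
        body = b"<html>hello</html>"
        self._send_headers(body)
        self.wfile.write(body)
    def do_HEAD(self):                      # HEAD: the same headers, no body
        self._send_headers(b"<html>hello</html>")
    def log_message(self, *args):
        pass                                # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection(*server.server_address)
conn.request("HEAD", "/")
head = conn.getresponse()
head_body = head.read()                     # b"" - meta-information only
conn.request("GET", "/")
get_body = conn.getresponse().read()        # b'<html>hello</html>'
conn.close()
server.shutdown()
print(head.status, head_body, get_body)
```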


•HTTP pipelining
–appeared in HTTP/1.1. It allows clients to send multiple requests at once, without waiting for an answer. Servers can also send multiple answers without closing their socket. This results in fewer
roundtrips and faster load times. This is particularly useful for satellite Internet connections and other connections with high latency as separate requests need not be made for each file. Since it
is possible to fit several HTTP requests in the same TCP packet, HTTP pipelining allows fewer TCP packets to be sent over the network, reducing network load. HTTP pipelining requires both the
client and the server to support it. Servers are required to support it in order to be HTTP/1.1 compliant, although they are not required to pipeline responses, just to accept pipelined requests.

•HTTP-Tunnel
–A technology that allows users to perform various Internet tasks despite the restrictions imposed by firewalls. This is made possible by sending data through HTTP (port 80). Additionally, HTTP-Tunnel technology is very secure, making it indispensable for both average and business communications. The HTTP-Tunnel client is an application that runs in your system tray acting as a SOCKS server, managing all data transmissions between the computer and the network.

•HTTP streaming
–It is a mechanism for sending data from a Web server to a Web browser in response to an event. HTTP Streaming is achieved through several common mechanisms. In one such mechanism the
web server does not terminate the response to the client after data has been served. This differs from the typical HTTP cycle in which the response is closed immediately following data transmission.
The web server leaves the response open such that if an event is received, it can immediately be sent to the client. Otherwise the data would have to be queued until the client's next request is made
to the web server. The act of repeatedly queuing and re-requesting information is known as a polling mechanism. Typical uses for HTTP Streaming include market data distribution (stock tickers),
live chat/messaging systems, online betting and gaming, sport results, monitoring consoles and Sensor network monitoring.

•HTTP referrer
–It signifies the webpage that linked to a new page on the Internet. By checking the referrer, the new page can see where the request came from. Referrer logging is used to allow websites and web servers to identify where visitors are coming from, for promotional or security purposes. Since the referrer can easily be spoofed (faked), however, it is of limited use in this regard except on a casual basis.
–A dereferrer is a means to strip the details of the referring website from a link request, so that the target website cannot identify the page which was clicked on to originate the request.
–"Referer" is a common misspelling of the word "referrer". It is so common, in fact, that it made it into the official specification of HTTP - the communication protocol of the World Wide Web - and has therefore become the standard industry spelling when discussing HTTP referers.
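Because the referrer is just a request header the client sets, it is trivial to supply (or spoof) one - here with Python's standard `urllib.request` (both URLs are placeholders):

```python
import urllib.request

# Build a request carrying a Referer header, as a browser does when you
# follow a link. Any value can be set here, which is why referrer data
# cannot be trusted for security decisions.
req = urllib.request.Request(
    "http://www.example.com/target-page",
    headers={"Referer": "http://www.example.com/linking-page"},
)
print(req.get_header("Referer"))   # the page the visitor "came from"
```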

•SSL (Secure Sockets Layer)
–A protocol for establishing a secure connection for transmission; it uses the HTTPS convention.
–SSL provides endpoint authentication and communications privacy over the Internet using cryptography. In typical use, only the server is authenticated (i.e. its identity is ensured) while the client remains unauthenticated; mutual authentication requires public key infrastructure (PKI) deployment to clients. The protocols allow client/server applications to communicate in a way designed to prevent eavesdropping, tampering, and message forgery.

–SSL involves a number of basic phases:
oPeer negotiation for algorithm support
oPublic key encryption-based key exchange and certificate-based authentication
oSymmetric cipher-based traffic encryption

During the first phase, the client and server negotiate which cryptographic algorithms will be used. Current implementations support the following choices:
oFor public-key cryptography: RSA, Diffie-Hellman, DSA or Fortezza
oFor symmetric ciphers: RC2, RC4, IDEA, DES, Triple DES or AES
oFor one-way hash functions: MD5 or SHA
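In Python, the standard `ssl` module drives these phases for you; a sketch (the hostname is a placeholder, and the connecting helper needs network access, so it is only defined here):

```python
import socket
import ssl

# A default context verifies the server's certificate and negotiates
# the strongest mutually supported protocol and cipher.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True: server is authenticated

def negotiated_cipher(hostname="www.example.com", port=443):
    # Perform the TLS handshake and report the agreed cipher suite,
    # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256).
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.cipher()
```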

•HTTPS
–is a URI scheme which is syntactically identical to the http: scheme normally used for accessing resources using HTTP. Using an https: URL indicates that HTTP is to be used, but with a different default port and an additional encryption/authentication layer between HTTP and TCP. This system was invented by Netscape Communications Corporation to provide authentication and encrypted communication and is widely used on the Web for security-sensitive communication, such as payment transactions.
•HTML (Hypertext Markup Language)
–The authoring language used to create documents on the World Wide Web

–Hundreds of tags can be used to format and layout a Web page’s content and to hyperlink to other Web content.

•Hyperlink
–Used to connect a user to other parts of a web site and to other web sites and web-enabled services.

•Web server
–A computer, connected to the Internet, that hosts Web content and is configured to share that content.
–A Web server is responsible for accepting HTTP requests from clients, known as Web browsers, and serving them Web pages, which are usually HTML documents and linked objects (images, etc.).

•Examples:
oApache HTTP Server from the Apache Software Foundation.
oInternet Information Services (IIS) from Microsoft.
oSun Java System Web Server from Sun Microsystems, formerly Sun ONE Web Server, iPlanet Web Server, and Netscape Enterprise Server.
oZeus Web Server from Zeus Technology
•Web client

–Most commonly in the form of Web browser software such as Internet Explorer or Netscape

–Used to navigate the Web and retrieve Web content from Web servers for viewing.

•Proxy server
–An intermediary server that provides a gateway to the Web (e.g., employee access to the Web most often goes through a proxy)
–Improves performance through caching and filters the Web
–The proxy server will also log each user interaction.

•Caching
–Web browsers and proxy servers save a local copy of the downloaded content – pages that display personal information should be set to prohibit caching.

•Web form
–A portion of a Web page containing blank fields that users can fill in with data (including personal info) and submit for the Web server to process.
•Web server log
–Every time a Web page is requested, the Web server may automatically log the following information:
oThe IP address of the visitor
oThe date and time of the request
oThe URL of the requested file
oThe URL the visitor came from immediately before (the referrer URL)
oThe visitor's Web browser type and operating system
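These are the fields of the common log format most servers write; a short Python sketch that parses one (made-up) log line back into those fields:

```python
import re

# Parse one line of a common-log-format entry (the line itself is invented),
# recovering the fields a web server typically records for each request.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+) "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)
line = ('192.0.2.7 - - [11/Apr/2008:10:15:32 +0000] "GET /index.html HTTP/1.1" '
        '200 2326 "http://www.example.com/home" "Mozilla/4.0 (Windows)"')
entry = LOG_RE.match(line).groupdict()
print(entry["ip"], entry["status"], entry["referrer"])
```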

•Cookies
–A small text file provided by a Web server and stored on a user's PC. The text can be sent back to the server every time the browser requests a page from the server. Cookies are used to identify a user as they navigate through a Web site and/or return at a later time, and they enable a range of functions including personalization of content.
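Python's standard `http.cookies` module shows both halves of the exchange - parsing a server's Set-Cookie header and producing the Cookie header the browser sends back (the cookie name and values are invented):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header as a server might send it.
cookie = SimpleCookie()
cookie.load('sessionid=abc123; Path=/; Max-Age=3600')
morsel = cookie["sessionid"]
print(morsel.value)            # abc123
print(morsel["max-age"])       # 3600: expiry makes this a persistent cookie
# What the browser sends back on its next request to the same server:
print(cookie.output(attrs=[], header="Cookie:"))
```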

•Session vs. persistent cookies
–A Session is a unique ID assigned to the client browser by a web server to identify the state of the client because web servers are stateless.

–A session cookie is stored only while the user is connected to the particular Web server – the cookie is deleted when the user disconnects

–Persistent cookies are set to expire at some point in the future – many are set to expire a number of years forward

•Socket
–A socket is a network communications endpoint.

•Application Server
–An application server is a server computer in a computer network dedicated to running certain software applications. The term also refers to the software installed on such a computer to facilitate the
serving of other applications. Application server products typically bundle middleware to enable applications to intercommunicate with various qualities of service, such as reliability, security, non-repudiation, and so on. Application servers also provide an API to programmers, so that they don't have to be concerned with the operating system or the huge array of interfaces required of a modern web-based application. Communication occurs through the web in the form of HTML and XML, as a link to various databases, and, quite often, as a link to systems and devices ranging from huge legacy applications to small information devices, such as an atomic clock or a home appliance.

–An application server exposes business logic to client applications through various protocols, possibly including HTTP. The server exposes this business logic through a component API, such as the EJB (Enterprise JavaBeans) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging.

–Examples: JBoss (Red Hat), WebSphere (IBM), Oracle Application Server 10g (Oracle Corporation) and WebLogic (BEA).

•Thin Client
–A thin client is a computer (client) in a client-server architecture network which has little or no application logic, so it depends primarily on the central server for processing activities. It is designed to be especially small so that the bulk of the data processing occurs on the server.

•Thick client
–A client that performs the bulk of any data processing operations itself, and relies on the server it is associated with primarily for data storage.


•Daemon
–A computer program that runs in the background, rather than under the direct control of a user; daemons are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log. Daemons typically do not have any existing parent process, but reside directly under init in the process hierarchy. Programs usually become daemons by forking a child process and then making the parent process exit, thus making init adopt the child; this practice is commonly known as "fork off and die." Systems often start ("launch") daemons at boot time: they often serve the function of responding to network requests, hardware activity, or other programs by performing some task. Daemons can also configure hardware (like devfsd on some Linux systems), run scheduled tasks (like cron), and perform a variety of other tasks.

•Client-side scripting
–Generally refers to the class of computer programs on the web that are executed client-side, by the user's web browser, instead of server-side (on the web server). This type of computer programming is an important part of the Dynamic HTML (DHTML) concept, enabling web pages to be scripted; that is, to have different and changing content depending on user input, environmental conditions (such as the time of day), or other variables.
–Web authors write client-side scripts in languages such as JavaScript (client-side JavaScript) or VBScript, which are based on several standards:
oHTML scripting
oHTTP
oDocument Object Model

•Client-side scripts are often embedded within an HTML document, but they may also be contained in a separate file, which is referenced by the document (or documents) that use it. Upon request, the necessary files are sent to the user's computer by the web server (or servers) on which they reside. The user's web browser executes the script, then displays the document, including any visible output from the script. Client-side scripts may also contain instructions for the browser to follow if the user interacts with the document in a certain way, e.g., clicks a certain button. These instructions can be followed without further communication with the server, though they may require such communication.

•Server-side Scripting
–It is a web server technology in which a user's request is fulfilled by running a script directly on the web server to generate dynamic HTML pages. It is usually used to provide interactive web sites that
interface to databases or other data stores. This is different from client-side scripting where scripts are run by the viewing web browser, usually in JavaScript. The primary advantage to server-side scripting is the ability to highly customize the response based on the user's requirements, access rights, or queries into data stores.
O ASP: A Microsoft-designed solution allowing various languages (though generally VBScript is used) inside an HTML-like outer page, mainly used on Windows but with limited support on other platforms.
O Cold Fusion: Cross platform tag based commercial server side scripting system.
O JSP: A Java-based system for embedding code in HTML pages.
O Lasso: A data-source-neutral interpreted programming language and cross-platform server.
O SSI: A fairly basic system which is part of the common Apache web server. Not a full programming environment by far, but still handy for simple things like including a common menu.
O PHP: Common open source solution based on including code in its own language into an HTML page.
O Server-side JavaScript: A language generally used on the client side but also occasionally on the server side.
O SMX: A Lisp-like open source language designed to be embedded into an HTML page.




•Common Gateway Interface (CGI)
–is a standard protocol for interfacing external application software with an information server, commonly a web server. This allows the server to pass requests from a client web browser to the external application. The web server can then return the output from the application to the web browser.

•Dynamic Web pages:
–can be defined as: (1) Web pages containing dynamic content (e.g., images, text, form fields, etc.) that can change/move without the Web page being reloaded, or (2) Web pages that are produced on-the-fly by server-side programs, frequently based on parameters in the URL or from an HTML form. Web pages that adhere to the first definition are often called Dynamic HTML or DHTML pages; client-side languages like JavaScript are frequently used to produce them. Web pages that adhere to the second definition are often created with the help of server-side languages such as PHP, Perl, ASP/.NET and JSP. These server-side languages typically use the Common Gateway Interface (CGI) to produce dynamic web pages.
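A server-side dynamic page in miniature - a hypothetical Python CGI script whose output depends on the URL's query string (the `name` parameter is made up, and a real deployment needs the web server configured for CGI):

```python
#!/usr/bin/env python3
# Minimal CGI program: the web server passes the request to this script
# and returns whatever it prints to the browser.
import os

def render(query_string):
    # Dynamic content: the page differs depending on the URL parameters.
    name = "world"
    for pair in query_string.split("&"):
        if pair.startswith("name="):
            name = pair.split("=", 1)[1]
    return f"<html><body>Hello, {name}!</body></html>"

if __name__ == "__main__":
    body = render(os.environ.get("QUERY_STRING", ""))
    print("Content-Type: text/html")   # CGI header, then a blank line
    print()
    print(body)
```

Requesting `/cgi-bin/hello.py?name=tester` would then produce a page greeting "tester" rather than the static default.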

•Digital Certificates
In cryptography, a public key certificate (or identity certificate) is a certificate which uses a digital signature to bind together a public key with an identity - information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual. In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature will be of either the user (a self-signed certificate) or of other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.

Certificates enable the large-scale use of public-key cryptography: securely exchanging secret keys amongst users becomes impractical to the point of effective impossibility for anything other than quite small networks, and public key cryptography provides a way to avoid this problem. In principle, if Alice wants others to be able to send her secret messages, she need only publish her public key. Anyone possessing it can then send her secure information. Unfortunately, David could publish a different public key (for which he knows the related private key) claiming that it is Alice's public key. In so doing, David could intercept and read at least some of the messages meant for Alice. But if Alice builds her public key into a certificate and has it digitally signed by a trusted third party (Trent), anyone who trusts Trent can merely check the certificate to see whether Trent thinks the embedded public key is Alice's. In typical Public-key Infrastructures (PKIs), Trent will be a CA, who is trusted by all participants. In a web of trust, Trent can be any user, and whether to trust that user's attestation that a particular public key belongs to Alice will be up to the person wishing to send a message to Alice.

In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA — if both use employer CAs, different employers would produce this result), so Bob's certificate may also include his CA's public key signed by a "higher level" CA2, which might be
recognized by Alice. This process leads in general to a hierarchy of certificates, and to even more complex trust relationships. Public key infrastructure refers, mostly, to the software that manages certificates in a large-scale setting. In X.509 PKI systems, the hierarchy of certificates is always a top-down tree, with a root certificate at the top, representing a CA that is 'so central' to the scheme that it
does not need to be authenticated by some trusted third party. A certificate may be revoked if it is discovered that its related private key has been compromised, or if the relationship (between an entity and a public key) embedded in the certificate is discovered to be incorrect or has changed; this
might occur, for example, if a person changes jobs or names. A revocation will likely be a rare occurrence, but the possibility means that when a certificate is trusted, the user should always check its validity. This can be done by comparing it against a certificate revocation list (CRL) — a list of revoked or cancelled certificates.
Ensuring that such a list is up-to-date and accurate is a core function in a centralized PKI, one which requires both staff and budget and one which is therefore sometimes not properly done. To be effective, it must be readily available to anyone who needs it whenever it is needed and must be updated frequently. The other way to check a certificate's validity is to query the certificate authority using the Online Certificate Status Protocol (OCSP) for the status of a specific certificate.
Both of these methods appear to be on the verge of being supplanted by XKMS.

This new standard, however, is yet to see widespread implementation.
A certificate typically includes:

The public key being signed.
A name, which can refer to a person, a computer or an organization.
A validity period.
The location (URL) of a revocation center.
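With Python's standard `ssl` module you can pull these fields off a live server's certificate during the handshake (the hostname is a placeholder, and the function needs network access, so it is only defined here):

```python
import socket
import ssl

def certificate_fields(hostname="www.example.com", port=443):
    # Handshake with the server, then report the typical certificate
    # fields: the certified name, the issuing CA, and the validity period.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {
        "subject":   cert["subject"],     # the name being certified
        "issuer":    cert["issuer"],      # the CA that signed the certificate
        "notBefore": cert["notBefore"],   # start of the validity period
        "notAfter":  cert["notAfter"],    # end of the validity period
    }
```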
The most common certificate standard is ITU-T X.509, which is being adapted to the Internet by the IETF PKIX working group.

Classes
VeriSign introduced the concept of three classes of digital certificates:
oClass 1 for individuals, intended for email;
oClass 2 for organizations, for which proof of identity is required; and
oClass 3 for servers and software signing, for which independent verification and checking of identity and authority is done by the issuing certificate authority (CA).

•List of HTTP status codes
1xx Informational
Request received, continuing process.
100: Continue
101: Switching Protocols
2xx Success
The action was successfully received, understood, and accepted.
200: OK
201: Created
202: Accepted
203: Non-Authoritative Information
204: No Content
205: Reset Content
206: Partial Content
3xx Redirection
The client must take additional action to complete the request.
300: Multiple Choices
301: Moved Permanently
302: Moved Temporarily (HTTP/1.0)
302: Found (HTTP/1.1)
303: See Other (HTTP/1.1)
304: Not Modified
305: Use Proxy
Many HTTP clients (such as Mozilla and Internet Explorer) don't correctly handle responses with this status code.
306: (no longer used, but reserved)
307: Temporary Redirect
4xx Client Error
The request contains bad syntax or cannot be fulfilled.
400: Bad Request
401: Unauthorized
Similar to 403/Forbidden, but specifically for use when authentication is possible
but has failed or not yet been provided. See basic authentication scheme and
digest access authentication.
402: Payment Required
403: Forbidden
404: Not Found
405: Method Not Allowed
406: Not Acceptable
407: Proxy Authentication Required
408: Request Timeout
409: Conflict
410: Gone
411: Length Required
412: Precondition Failed
413: Request Entity Too Large
414: Request-URI Too Long
415: Unsupported Media Type
416: Requested Range Not Satisfiable
417: Expectation Failed
5xx Server Error
The server failed to fulfill an apparently valid request.
500: Internal Server Error
501: Not Implemented
502: Bad Gateway
503: Service Unavailable
504: Gateway Timeout
505: HTTP Version Not Supported
509: Bandwidth Limit Exceeded (not an official HTTP status code)
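Python's standard `http` module carries these codes and their reason phrases, which is handy when asserting on responses in test scripts:

```python
from http import HTTPStatus

# Map a numeric code to its class (the first digit) and standard phrase.
def status_class(code):
    return {1: "Informational", 2: "Success", 3: "Redirection",
            4: "Client Error", 5: "Server Error"}[code // 100]

print(HTTPStatus(200).phrase, "-", status_class(200))   # OK - Success
print(HTTPStatus(404).phrase, "-", status_class(404))   # Not Found - Client Error
print(HTTPStatus(503).phrase, "-", status_class(503))   # Service Unavailable - Server Error
```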

What you need to know about BVT (Build Verification Testing)

What is BVT?
Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core-functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).
A new build is checked mainly for two things:
Build validation
Build acceptance
Some BVT basics:
It is a subset of tests that verify main functionalities.
BVTs are typically run on daily builds, and if the BVT fails the build is rejected and a new build is released after the fixes are done.
The advantage of BVT is that it saves the effort of the test team to set up and test a build when major functionality is broken.
Design BVTs carefully enough to cover basic functionality.
Typically a BVT should not run for more than 30 minutes.
BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity and whether all the modules are integrated properly. Module integration testing is very important when different teams develop the project modules. I have heard of many cases of application failure due to improper module integration. In the worst cases, the complete project gets scrapped due to failure in module integration.
What is the main task in a build release? Obviously, file 'check-in', i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check whether all the new and modified files are included in the release, all file formats are correct, and every file's version, language and flags are consistent. These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a very tricky decision to make before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include in it.
Here are some simple tips for including test cases in your BVT automation suite:
Include only critical test cases in BVT.
All test cases included in BVT should be stable.
All the test cases should have known expected result.
Make sure all included critical functionality test cases are sufficient for application test coverage.
Also, do not include modules that are not yet stable in BVT. For under-development features you can't predict expected behavior, as these modules are unstable and you might have some known failures in these incomplete modules. There is no point using such modules or test cases in BVT.
You can simplify this critical-functionality test case inclusion task by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.
Example: Test cases to be included in BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test cases for writing something into the text editor.
3) Test case for copy, cut, paste functionality of the text editor.
4) Test case for opening, saving, deleting a text file.
These are some sample test cases which can be marked as 'critical'; for every minor or major change in the application these basic critical test cases should be executed. This task can be easily accomplished by BVT.
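A sketch of how such a BVT suite might look in Python's `unittest` (plain file operations stand in for the hypothetical text editor's create/save/delete actions):

```python
import os
import tempfile
import unittest

# Each BVT test is a critical, stable check with a known expected result.
class TextEditorBVT(unittest.TestCase):
    def setUp(self):
        self.path = os.path.join(tempfile.mkdtemp(), "note.txt")

    def test_create_and_write(self):
        with open(self.path, "w") as f:
            f.write("hello")
        self.assertTrue(os.path.exists(self.path))

    def test_open_and_read_back(self):
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

    def test_delete(self):
        open(self.path, "w").close()
        os.remove(self.path)
        self.assertFalse(os.path.exists(self.path))

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TextEditorBVT)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # A build passes BVT only if every critical test passes.
    print("BVT passed" if result.wasSuccessful() else "BVT failed: reject build")
```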
The BVT automation suite needs to be maintained and modified from time to time, e.g. to include test cases in BVT when new stable project modules become available.
What happens when the BVT suite runs: say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result of the BVT.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information with failure logs is sent to the respective developers.
5) The developer, on an initial diagnosis, replies to the team about the failure cause: whether this is really a bug, and if it is, what the bug-fixing scenario will be.
6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes BVT, it is passed to the test team for further detailed functionality, performance and other tests.
This process gets repeated for every new build.
Why does BVT or a build fail?
BVT breaks sometimes. This doesn't mean that there is always a bug in the build. There are other reasons for a build to fail, like a test case coding error, an automation suite error, an infrastructure error, hardware failures, etc. You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.
Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detail as possible to diagnose a BVT pass or fail. This helps the development team debug and quickly find the cause of a failure.
3) Select stable test cases for the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This reduces the probability of frequent build failures due to new, unstable modules and test cases.
4) Automate the BVT process as much as possible - from the build release process to the BVT result, automate everything.
5) Have some penalty for breaking the build: chocolates or a team coffee party from the developer who breaks it will do.
Conclusion: A BVT is nothing but a set of regression test cases executed for each new build; it is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. A BVT can be run by a developer or a tester, its result is communicated throughout the team, and immediate action is taken to fix the bug if it fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included, and these should ensure application test coverage. A BVT is very effective for daily as well as long-term builds; it saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.
If you have experience with the BVT process, please share it.

Manual and Automation testing Challenges

Software testing has a lot of challenges, in manual as well as in automation testing. In a typical manual testing scenario, developers throw the build over to the test team, assuming the responsible tester will pick it up and come asking what the build is about. This is the case in organizations not following so-called 'processes'. The tester is the middleman between the development team and the customers, handling pressure from both sides. And I assume most of our readers are smart enough to handle this pressure. Aren't you?
It is not always the other side's fault, though. Sometimes testers themselves complicate the testing process through an unskilled way of working. In this post I have collected the main testing challenges created by testing staff, development staff, testing processes and wrong management decisions. So here we go with the top challenges:


1) Testing the complete application: Is it possible? I think not. There are millions of test combinations, and it's not possible to test each and every one, in manual or in automation testing. If you try all these combinations you will never ship the product.

2) Misunderstanding of company processes: Sometimes you just don't pay proper attention to what the company-defined processes are and what purposes they serve. Some testers hold the myth that they should always follow company processes, even when those processes are not applicable to their current testing scenario. This results in incomplete and inappropriate application testing.

3) Relationship with developers: A big challenge. It requires a very skilled tester to handle this relationship positively while still getting the work done the tester's way. There are simply hundreds of excuses developers or testers can make when they disagree on some point. This also requires good communication, troubleshooting and analytical skills from the tester.

4) Regression testing: As the project expands, the regression testing workload simply becomes uncontrollable: pressure to handle the current functionality changes, checks on previously working functionality, and bug tracking.

5) Lack of skilled testers: I will call this a 'wrong management decision' in selecting or training testers for the project task at hand. Unskilled testers may add more chaos than they remove, resulting in incomplete, insufficient and ad-hoc testing throughout the testing life cycle.

6) Testing always under time constraints: "Hey tester, we want to ship this product by the weekend, are you ready?" When this order comes from the boss, the tester simply focuses on task completion and not on test coverage or quality of work. There is a huge list of tasks to complete within the specified time: writing, executing, automating and reviewing the test cases.

7) Which tests to execute first? If you face the challenge in point 6, how will you decide which test cases to execute, and with what priority? Which tests are more important than others? This requires good experience of working under pressure.
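One common way to make that prioritization decision is a simple risk score per test case (business impact times failure likelihood). The sketch below uses hypothetical test names and ratings purely for illustration:

```python
# Hypothetical test inventory: (name, business impact 1-5, failure likelihood 1-5).
test_cases = [
    ("login", 5, 3),
    ("export report", 3, 2),
    ("payment", 5, 5),
    ("help page", 1, 1),
]

def prioritize(cases):
    """Order test cases by risk score (impact x likelihood), highest first."""
    return sorted(cases, key=lambda c: c[1] * c[2], reverse=True)

# When time runs out, you execute from the top of this list downward.
for name, impact, likelihood in prioritize(test_cases):
    print(f"{name}: risk {impact * likelihood}")
```

With the sample data, "payment" comes first and "help page" last; the point is that the cut-off under schedule pressure falls on the lowest-risk tests, not on whatever happened to be written last.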

8) Understanding the requirements: Sometimes testers are responsible for communicating with customers to understand the requirements. What if the tester fails to understand them? Will he be able to test the application properly? Definitely not! Testers require good listening and comprehension abilities.
9) Automation testing: This brings many sub-challenges. Should we automate the testing work at all? To what level should automation be done? Do we have sufficient and skilled resources for automation? Does the schedule permit automating the test cases? The decision between automation and manual testing needs to weigh the pros and cons of each approach.
10) Deciding when to stop testing: A very difficult decision. It requires sound judgment of the testing processes and the importance of each, as well as 'on the fly' decision-making ability.


11) One test team under multiple projects: It is challenging to keep track of every task, and communication suffers. Many times this results in the failure of one or both projects.

12) Reuse of test scripts: Application development methods are changing rapidly, making it difficult to maintain test tools and test scripts. Test script migration or reuse is essential but difficult.

13) Testers focusing on finding easy bugs: If the organization rewards testers based on the number of bugs found (a very bad way to judge a tester's performance), some testers concentrate only on finding easy bugs that don't require deep understanding and testing. Hard or subtle bugs remain unnoticed under such an approach.

14) Coping with attrition: Rising salaries and benefits cause many employees to leave the company at very short career intervals, and management struggles to cope with the attrition rate. The resulting challenges: new testers require project training from the beginning, complex projects are difficult to understand, and shipping dates slip!
These are some top software testing challenges we face daily. Project success or failure depends largely on how you address these basic issues.
For further reference and detailed solutions to these challenges, see the book "Surviving the Top Ten Challenges of Software Testing" by William E. Perry and Randall W. Rice.

Many of you are working in the manual and/or automation testing field.
I want your views on handling these software testing challenges. Feel free to express your views in comment section below.

Check your eligibility for CSTE certification by taking this sample CSTE examination.

Here is one more 'sample exam questions' article on CSTE certification. The CSTE certification is the basic certification for checking a tester's skill and understanding of software testing theory and practice.
If you are applying for CSTE certification, check whether you can answer at least 75% of the following test questions. The four-and-a-half-hour CSTE exam consists of 4 parts: two multiple choice parts and two essay parts.
Below you will find 20 multiple choice questions covering all skill categories. There are 10 skill categories and I have included 2 questions from each.
Skill categories:
Software Testing Principles and Concepts
Building the Test Environment
Managing the Test Project
Test Planning
Executing the Test Plan
Test Reporting Process
User Acceptance Testing
Testing Software Developed by Contractors
Testing Internal Control
Testing New Technologies
These are the latest sample questions from the CSTE CBOK.
Mark your answers somewhere so that you can check your score at the end of the test.
1. The customer’s view of quality means:
a. Meeting requirements
b. Doing it the right way
c. Doing it right the first time
d. Fit for use
e. Doing it on time


2. The testing of a single program, or function, usually performed by the developer is called:
a. Unit testing
b. Integration testing
c. System testing
d. Regression testing
e. Acceptance testing

3. The measure used to evaluate the correctness of a product is called the product:
a. Policy
b. Standard
c. Procedure to do work
d. Procedure to check work
e. Guideline

4. Which of the four components of the test environment is considered to be the most important:
a. Management support
b. Tester competency
c. Test work processes
d. Testing techniques and tools


5. Effective test managers are effective listeners. The type of listening in which the tester is performing an analysis of what the speaker is saying is called:
a. Discriminative listening
b. Comprehensive listening
c. Therapeutic listening
d. Critical listening
e. Appreciative listening

6. To become a CSTE, an individual has a responsibility to accept the standards of conduct defined by the certification board. These standards of conduct are called:
a. Code of ethics
b. Continuing professional education requirement
c. Obtaining references to support experience
d. Joining a professional testing chapter
e. Following the common body of knowledge in the practice of software testing

7. Which of the following are risks that testers face in performing their test activities:
a. Not enough training
b. Lack of test tools
c. Not enough time for testing
d. Rapid change
e. All of the above

8. All of the following are methods to minimize loss due to risk. Which one is not a method to minimize loss due to risk:
a. Reduce opportunity for error
b. Identify error prior to loss
c. Quantify loss
d. Minimize loss
e. Recover loss

9. Defect prevention involves which of the following steps:
a. Identify critical tasks
b. Estimate expected impact
c. Minimize expected impact
d. a, b and c
e. a and b

10. The first step in designing a use case is to:
a. Build a system boundary diagram
b. Define acceptance criteria
c. Define use cases
d. Involve users
e. Develop use cases


11. The defect attribute that would help management determine the importance of the defect is called:
a. Defect type
b. Defect severity
c. Defect name
d. Defect location
e. Phase in which defect occurred


12. The system test report is normally written at what point in software development:
a. After unit testing
b. After integration testing
c. After system testing
d. After acceptance testing

13. The primary objective of user acceptance testing is to:
a. Identify requirements defects
b. Identify missing requirements
c. Determine if software is fit for use
d. Validate the correctness of interfaces to other software systems
e. Verify that software is maintainable

14. If IT establishes a measurement team to create measures and metrics to be used in status reporting, that team should include individuals who have:
a. A working knowledge of measures
b. Knowledge in the implementation of statistical process control tools
c. A working understanding of benchmarking techniques
d. Knowledge of the organization’s goals and objectives
e. All of the above

15. What is the difference between testing software developed by a contractor outside your country, versus testing software developed by a contractor within your country:
a. Does not meet people needs
b. Cultural differences
c. Loss of control over reallocation of resources
d. Relinquishment of control
e. Contains extra features not specified

16. What is the definition of a critical success factor:
a. A specified requirement
b. A software quality factor
c. Factors that must be present
d. A software metric
e. A high cost to implement requirement

17. The condition that represents a potential for loss to an organization is called:
a. Risk
b. Exposure
c. Threat
d. Control
e. Vulnerability

18. A flaw in a software system that may be exploited by an individual for his or her advantage is called:
a. Risk
b. Risk analysis
c. Threat
d. Vulnerability
e. Control

19. The conduct of business over the Internet is called:
a. e-commerce
b. e-business
c. Wireless applications
d. Client-server system
e. Web-based applications

20. The following is described as one of the five levels of maturing a new technology into an IT organization’s work processes. The “people-dependent technology” level is equivalent to what level in SEI’s Capability Maturity Model:
a. Level 1
b. Level 2
c. Level 3
d. Level 4
e. Level 5

Answers

1. (d) Fit for use

2. (a) Unit testing

3. (b) Standard

4. (a) Management support

5. (d) Critical listening

6. (a) Code of ethics

7. (e) All of the above

8. (c) Quantify loss

9. (d) a, b and c

10. (a) Build a system boundary diagram

11. (b) Defect severity

12. (c) After system testing

13. (c) Determine if software is fit for use

14. (e) All of the above

15. (b) Cultural differences

16. (c) Factors that must be present

17. (a) Risk

18. (d) Vulnerability

19. (b) e-business

20. (a) Level 1
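If you marked your answers as suggested, a few lines of Python can score them against the key above. The `mine` dict below is a hypothetical candidate's answer sheet, shown only to demonstrate the usage:

```python
# Answer key transcribed from the article above.
key = {1: "d", 2: "a", 3: "b", 4: "a", 5: "d", 6: "a", 7: "e",
       8: "c", 9: "d", 10: "a", 11: "b", 12: "c", 13: "c", 14: "e",
       15: "b", 16: "c", 17: "a", 18: "d", 19: "b", 20: "a"}

def score(answers):
    """Return (number correct, percentage) for a dict of question -> choice."""
    correct = sum(1 for q, a in answers.items() if key.get(q) == a)
    return correct, 100.0 * correct / len(key)

# Hypothetical example: a candidate who got the first 15 questions right.
mine = {q: key[q] for q in range(1, 16)}
print(score(mine))  # (15, 75.0) - exactly the 75% bar suggested above
```

Anything at or above 75.0 meets the self-check threshold mentioned at the start of this article.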


In coming articles I will focus more on sample CSTE essay papers and on how to answer the multiple choice and essay questions.

How Much Does H1b cost?

If you are wondering how much an H1B will cost an organization, USCIS has published the details of H1B filing fees for H1B 2009. Applications will be accepted from the 1st of April, 2008.
Link to USCIS Page
Base Filing Fee : $320
ACWIA Fee: $750 (1-25 Employees) or $1500 (more than 25 Employees)
Fraud Fee: $500
Premium Processing: $1000
Note that these are the fees to be paid to USCIS. Many organizations will incur other expenses for the filing, such as attorney fees.
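Putting the fee schedule above into a small calculator makes the totals concrete (USCIS fees only, excluding attorney fees; the function name is ours, not USCIS terminology):

```python
def h1b_uscis_fees(employees, premium=False):
    """Total USCIS filing fees using the FY2009 figures quoted above."""
    base = 320                                  # base filing fee
    acwia = 750 if employees <= 25 else 1500    # ACWIA fee by employer size
    fraud = 500                                 # fraud prevention fee
    total = base + acwia + fraud
    if premium:
        total += 1000                           # optional premium processing
    return total

print(h1b_uscis_fees(10))                 # small employer, regular processing: 1570
print(h1b_uscis_fees(100, premium=True))  # large employer with premium: 3320
```

So a small employer pays $1,570 to USCIS for a regular filing, while a large employer opting for premium processing pays $3,320, before any attorney fees.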
USCIS has also offered helpful information for Organizations that are looking to file for H1B petitions this year.
Helpful Hints for Filing a FY 2009 H-1B Cap Case: Quick Tips
What are the main errors in an H-1B petition that can cause USCIS to have to reject or deny the petition?
Frequently Asked Questions on Completing and Submitting a FY 2009 H-1B Cap Case
Condensed Q and A For Completing and Submitting a FY 2009 H-1B Cap Case (56KB PDF)
Though many expect a rush similar to last year's, going by initial indications from top consulting companies we may not see such a mad rush this year. By this time last year, many firms were not accepting any new applications beyond February; this year, firms are still looking for prospective professionals.
Related Posts:
Thought Garage Tops Google Search for H1B 2009 - Encore!
H1B 2009 : Day 1 of 5 Will 2007 repeat in 2008?
H1B 2009 : H1B Visa Cap Reached, Lottery Again in 2008 !!

Indian tech professionals to benefit from increased H1B visas in USA

US immigration has decided to allocate the 65,000 new H1B visas by lottery instead of the traditionally followed quota system, as demand far outstrips supply. A majority of them is again expected to be cornered by Indian high-tech professionals, according to immigration attorneys.
The US Citizenship and Immigration Services (USCIS) is expected to hold the lottery within a week, but the anxious wait for applicants may continue for months as the department starts returning unsuccessful applications and sending receipts for the others. Those who get the three-year visa for skilled professionals can start work from October. While there were about 124,000 applications last year, the number this year may cross 150,000. Many will also be vying for the 20,000 H-1B visas meant for foreigners with US-earned masters' or higher degrees. Since 60-70 percent of all applications are expected to be on behalf of Indians, they are sure to benefit from the proposed lottery system. Moreover, this flow is unaffected by the recent downturn in the US economy or by improved economic opportunities in India.
There is a crying need in the US to raise the H-1B cap, as American businesses benefit from hiring highly skilled foreign workers. A bill has already been introduced in Congress to raise the cap to 195,000, and another bill seeks to boost the cap as well as exempt foreigners educated at US institutions from the quota. But no progress is expected before the next president takes over in early 2009. Earlier there was criticism that H1B visas snatch away local technology jobs by providing cheap labour, but a recent study by the National Foundation for American Policy found that on average every foreign national on an H1B visa generates another five to 7.5 jobs.

Indian outsourcing companies have also attracted criticism recently, when the federal government released data showing that they accounted for nearly 80 percent of the visa petitions approved last year for the top 10 participants in the H1B programme. Infosys had 4,559 and Wipro 2,567 approved visa petitions in the programme, which was originally set up to allow companies in the US to import the best and brightest in technology, engineering and other fields when such workers are in short supply in America.

IT recession plagues IITians

Mumbai: The recession in IT seems to be having a roundabout effect. Just when you thought the only sufferers were the employees given pink slips, the IITs too have started feeling the brunt of it. The dip in IIT campus recruitment figures for major Indian and foreign IT firms has fuelled concerns over the industry slowdown, reported Business Standard. While hiring by India's major IT services providers TCS, Wipro and Infosys dropped substantially, firms like IBM, HCL, Hughes Software and CSC simply opted out of placements this year.
"While many companies say they have a particular number in mind and would recruit accordingly, our alumni network at these companies informs us that these IT giants are exercising restraint in recruiting trainees due to a slowdown," said a placement official from IIT Roorkee. Recruitment by IT companies at IIT Kanpur has gone down from 130 students in 2007 to 72 in 2008. A placement official from IIT Kanpur confirmed this: "Like every year, the institute offered the regular number of students to these IT companies for placements, but they did not pick as many students." "Clients of IT firms are increasingly utilizing their bench strength," said Monisha Advani, managing director of Randstad India, an HR consultancy firm.

Good News for Adjustment of Status Applicants

In the last few days, there have been two developments that will make many applicants for Adjustment of Status quite happy.
As many of you know, over the past few years there have been many Adjustment petitions that have been held up simply because the FBI has not been able to provide a timely “name check” report to USCIS.
On February 4th, USCIS issued a memorandum directing officers to approve I-485 petitions if they are otherwise approvable and the name check has been pending for more than 180 days. Quite simply, this means that if the petition is approvable, the priority dates are current and the FBI check has been pending for more than 180 days, officers may now go ahead and approve the petition without waiting for the FBI name check. The memorandum then goes on to explain that if adverse information is later received from the FBI, USCIS will determine whether rescission of the approval or deportation is in order.
USCIS has already ordered its offices to search for files that are approvable but for the FBI name check. Therefore, if you have been held up by the name check, there is a good chance that your petition will be adjudicated in the near future.
The other bit of good news for Adjustment of Status applicants concerns the priority dates. If you have not looked, you should take a look, because the priority dates for March of 2008 have moved forward quite a bit. You may view the most recent priority dates at the following URL: http://travel.state.gov/visa/frvi/bulletin/bulletin_3953.html

Important H-1B Proposals Introduced in Congress

Rep. Smith (R-TX) introduced H.R. 5642 which would increase the numerical limitation with respect to H–1B non-immigrants for fiscal years 2008 and 2009.

Rep. Kennedy (D-RI) introduced H.R. 5634, which would exempt from numerical limitations any alien who has received a Ph.D. from an institution of higher education within the 3-year period preceding the alien's petition for special immigrant status.

Announcement on new H-1B rules to take effect shortly

USCIS has announced that it has transmitted a rule to the Federal Register that would accomplish the following:

Change from 2 days to 5 days the period of time during which cap-subject H-1Bs can be received to be included in any "lottery" that would occur if, as expected, the number of petitions exceeds the quota.
Prohibit multiple filings from the same employer for the same employee, even if the filings are for different jobs. The one exception would be that related employers could file separate petitions for the same employee.
Result in the denial or revocation (without refund of fees) of any petition found to have been a multiple filing.
Change the lottery system so that the 20,000 U.S. advanced degree cap cases are selected first. If any advanced degree cases are left after that process, they would go into the overall 65,000 pool.
State that no refunds will be made on cases where someone incorrectly claims a cap exemption.

USCIS also indicated that it will continue to accept letters from authorized officials of schools indicating that a student has completed the requirements for a degree (i.e., all papers, exams, etc.) and is merely awaiting official conferral of the degree.


The H-1B Cap and the New OPT Rule

A couple of very important developments have occurred in the past few days that affect students on OPT and those who just recently filed for an H-1B petition for the upcoming fiscal year.
The first development is that USCIS has just announced that the H-1B cap has been reached. Although that is not a surprise, the big surprise is that the Masters cap of 20,000 has also been reached. At this point, USCIS has not released any numbers as to exactly how many regular petitions and how many Masters petitions were received.
USCIS will be running a random lottery for the Masters cases. Once the 20,000 Masters cases are selected, the remaining Masters cases will be added to the pool of petitions in the regular H-1B cap. At that time, USCIS will then run a second lottery for the general H-1B quota of 65,000 (it is actually less than 65,000 due to some set aside numbers for Chile and Singapore.) This effectively means that Masters degree holders will have two shots at an H-1B number. Again, at this point we have no idea as to how many petitions were received, but based upon the fact that the Masters cap was reached so fast, we can only guess that the total number of H-1B petitions filed is probably substantially higher than the 130,000 petitions received by USCIS last year.
The second important development occurred on Friday, April 4th. On that date, the government suddenly issued a rule that may extend the OPT and solve the cap-gap problem for some individuals. For those of you who do not know what the "cap-gap" is: it happens when the OPT expires prior to October 1st. The period between the expiration of the OPT (plus the 60-day grace period) and October 1st is known as the cap-gap.
Here is what this new rule accomplishes:
Extension of OPT for certain students: students with degrees in Computer Science Applications, Actuarial Science, Engineering, Engineering Technologies, Life Sciences, Mathematics, Military Technologies and Physical Sciences will be able to apply for a 17-month extension of the OPT.
In order to be eligible for this extension, two further requirements must be met: (1) the student’s employer must be enrolled in the E-Verify program; and (2) the student must apply for the OPT extension at least 90 days before the current-post completion OPT expires.
Two quick comments concerning the above:
If the employer is not enrolled in the E-Verify program (a program intended to help ensure that illegal aliens are not hired), it is unlikely that an attorney would recommend that a corporation enroll in E-Verify simply to help its OPT students. The reason is that the E-Verify program is a flawed program that adds to a company's legal exposure and liabilities. Companies will make their own decisions on this, but you should know that most attorneys would not be in favor of a company enrolling in E-Verify only to enable its OPT employees to extend the OPT. Obviously, each company will come to its own conclusion depending on its needs, but as an OPT, you should be aware of the difficult choice faced by companies.
The requirement that the OPT extension be filed at least 90 days in advance will probably put the extension out of reach for many whose OPT expires in the very near future.
Cap-gap Extension – all students on post-completion OPT who have filed a change of status petition for the upcoming fiscal year may continue to stay in the U.S. and work once the H-1B cap has been reached.
Once the lotteries are conducted, there are two outcomes. If the student’s H-1B is not selected under the cap, the automatic extension of the OPT terminates. If the H-1B petition is selected during one of the lotteries, the student may remain in the U.S. and continue working until the H-1B becomes effective.
Comments concerning the Cap-gap:
Notice that this applies to all fields; it is not limited to those with Computer Science Applications, Actuarial Science degrees, etc.
As of this moment, the rule does not appear to allow cap-gap relief for students who filed petitions for consular notification. In other words, those of you who could not remain in the U.S. until October 1st because your OPT expired, and who did not want to re-enroll, do not appear to benefit from the cap-gap relief at this time. Currently, the American Immigration Lawyers Association and other groups are trying to get the government to provide a solution for individuals in this situation. That being said, if you filed for a change of status and the time to leave the U.S. has come, you should follow through with your plans. In other words, do not overstay your grace period hoping that the rule will cover you.
only students who have maintained their status are eligible for the automatic extension of the OPT during cap-gap period.
This is a highly fluid situation and there are many clarifications that the government will need to make. For those of you who would like to read the
FAQ from USCIS, please follow this link: http://www.usvisanews.com/downloads/faq_on_new_opt_rule.pdf. For those of you who would like to read the entire rule, follow this link: http://www.usvisanews.com/downloads/text_of_new_opt_rule.pdf.
Test plan template, based on IEEE 829 format
Test Plan Identifier (TPI).
References
Introduction
Test Items
Software Risk Issue
Features to be Tested
Features not to be Tested
Approach
Item Pass/Fail Criteria
Entry & Exit Criteria
Suspension Criteria and Resumption Requirements
Test Deliverables
Remaining Test Tasks
Environmental Needs
Staffing and Training Needs
Responsibilities
Planning Risks and Contingencies
Approvals

Test plan identifier
For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0"
Some type of unique company generated number to identify this test plan, its level and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a Master plan, a Level plan, an integration plan or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.
Keep in mind that test plans are like other software documentation, they are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers.
You may want to include author and contact information, including the revision history, as part of either the identifier section or the introduction.

References
List all documents that support this test plan.
Documents that are referenced include:
Project Plan.
System Requirements specifications.
High Level design document.
Detail design document.
Development and Test process standards.
Methodology.
Low level design.

Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that contain information relevant to this project/process.
Identify the objective or scope of the plan in relation to the software project plan it belongs to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
The intention of the project should also be included.

Test items (functions)
These are the things you intend to test within the scope of this test plan: essentially, a list of what is to be tested. It can be developed from the software application inventories as well as from other sources of documentation and information.
This can be controlled through a local Configuration Management (CM) process if you have one. The information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.
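As a sketch, a test-items inventory of the kind described above can be kept as simple structured data; the item names, versions and flags below are hypothetical, not from any real plan:

```python
# Hypothetical test-item inventory for the Test Items section of a plan.
# Each entry records what will be tested, at which version, plus any
# configuration notes and delivery-schedule flags for critical elements.
test_items = [
    {"item": "mass_storage_driver", "version": "3A1.0",
     "config": "USB 2.0 host", "critical_delivery": True},
    {"item": "install_utility", "version": "1.2",
     "config": "default", "critical_delivery": False},
]

# Pull out the items with delivery-schedule issues for tracking.
critical = [t["item"] for t in test_items if t["critical_delivery"]]
print(critical)  # ['mass_storage_driver']
```

Keeping the inventory in a structured form like this (rather than free prose) makes it easy to cross-check against the configuration management records the section mentions.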

Software risk issues.
Identify what software is to be tested and what the critical areas are, such as:
Delivery of a third party product.
New version of interfacing software.
Ability to use and understand a new package/tool, etc.
Extremely complex functions.
Modifications to components with a past history of failure.
Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
Safety.
Multiple interfaces.
Impacts on Client.
Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several [brainstorming] sessions.
Start with ideas, such as, what worries me about this project/application.
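The defect-clustering observation above can also be applied mechanically: feed the unit-testing defect counts into a simple filter to flag the modules most likely to stay defect-prone. The module names, counts and threshold below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical defect counts per module, gathered during unit testing.
unit_defects = {"parser": 42, "ui": 7, "reports": 19, "auth": 3}

def risky_modules(defects, threshold=10):
    """Flag modules whose unit-test defect count suggests future risk,
    on the principle that defects tend to cluster and clump together."""
    return sorted((m for m, n in defects.items() if n >= threshold),
                  key=lambda m: -defects[m])

print(risky_modules(unit_defects))  # ['parser', 'reports']
```

A list like this is a useful starting point for the brainstorming sessions: the flagged modules are concrete answers to "what worries me about this application?".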

Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USER'S view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as High, Medium and Low(H, M, L). These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.
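A feature risk register of this kind can be kept as a simple table mapping each user-visible feature to its agreed rating. A sketch (the feature names and ratings are hypothetical):

```python
# Hypothetical feature risk register: each user-visible feature is
# assigned a High/Medium/Low rating agreed with the users.
FEATURE_RISK = {
    "User login": "H",
    "Order checkout": "H",
    "Order history report": "M",
    "Help pages": "L",
}

def features_at_or_above(register, level):
    """Return features rated at or above `level` (H > M > L)."""
    order = {"L": 0, "M": 1, "H": 2}
    return sorted(f for f, r in register.items() if order[r] >= order[level])

print(features_at_or_above(FEATURE_RISK, "M"))
# ['Order checkout', 'Order history report', 'User login']
```

Keeping the register in one place makes it easy to justify, per feature, why a particular level was chosen.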
Sections 4 and 6 are very similar; the only true difference is the point of view. Section 4 is a technical description including version numbers and other technical information, while Section 6 is from the user's viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.

Features not to be tested
This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons:
Not to be included in this release of the Software.
Low risk, has been used before and was considered stable.
Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified. Study what a test plan must contain, ideally under the guidance of someone experienced, before attempting to create a strategy of your own.
Are any special tools to be used and what are they?
Will the tool require special training?
What metrics will be collected?
Which level is each metric to be collected at?
How is Configuration Management to be handled?
How many different configurations will be tested?
Hardware
Software
Combinations of HW, SW and other vendor packages
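The number of configurations grows multiplicatively with each hardware, software, and vendor-package option, which is why this question matters for the plan. A sketch of enumerating the matrix (the option names are illustrative):

```python
# Sketch: enumerating hardware/OS/vendor-package combinations to see
# how many configurations would need to be tested exhaustively.
from itertools import product

hardware = ["x86 server", "ARM server"]
operating_systems = ["Windows", "Linux"]
vendor_packages = ["DB vendor A", "DB vendor B"]

configurations = list(product(hardware, operating_systems, vendor_packages))
print(len(configurations))  # 2 * 2 * 2 = 8 configurations
```

When the full matrix is too large to test, the plan should state which subset will be covered and why.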
What levels of regression testing will be done and how much at each test level?
Will regression testing be based on severity of defects detected?
How will elements in the requirements and design that do not make sense or are untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
Only the full component will be tested.
A specified segment or grouping of features/components must be tested together.
Other information that may be useful in setting the approach includes:
MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
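When MTBF data is available, the basic calculation is total operating time divided by the number of failures observed in that time. A minimal sketch (the figures are invented for illustration):

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures = operating time / number of failures."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined with zero observed failures")
    return total_operating_hours / failure_count

# Example: 1,200 hours of test-lab operation with 4 failures observed.
print(mtbf(1200, 4))  # 300.0 hours between failures, on average
```

As the text notes, this is only worth including when the measurement is valid for the test involved and the data actually exists.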
How will meetings and other organizational processes be handled?


Item pass/fail criteria
Specify the criteria to be used to determine whether each test item has passed or failed, including show-stopper issues. Show-stopper severity requires definition within each testing context.

Entry & exit criteria
Specify the criteria to be used to start testing and how you know when to stop the testing process.

Suspension criteria & resumption requirements
Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities while resumption criteria specify when testing can resume after it has been suspended.
Unavailability of external dependent systems during execution.
When a defect is introduced that prevents any further testing.
Critical path deadline is missed so that the client will not accept delivery even if all testing is completed.
A specific holiday shuts down both development and testing.
System Integration Testing in the Integration environment may be resumed under the following circumstances:
When the external dependent systems become available again.
When a fix is successfully implemented and the Testing Team is notified to continue testing.
The contract is renegotiated with the client to extend delivery.
The holiday period ends.
Suspension criteria assume that testing cannot go forward and that going backward is also not possible. A failed build would not suffice, as you could generally continue to use the previous build. Most major or critical defects would also not constitute suspension criteria, as other areas of the system could continue to be tested.

Test deliverables
List the documents, reports, and charts that will be presented to stakeholders on a regular basis during testing and when testing has been completed.

Remaining test tasks
If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.

Environmental needs
Are there any special requirements for this test plan, such as:
Special hardware such as simulators, static generators etc.
How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
How much testing will be done on each component of a multi-part feature?
Special power requirements.
Specific versions of other supporting software.
Restricted use of the system during testing.

Staffing and training needs
Training on the application/system.
Training for any test tools to be used.
The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the testing and training.

Responsibilities
Who is in charge?
Assign responsibility for the test plan to someone with prior test-planning experience; an inexperienced owner puts the whole test effort at risk.
This issue includes all areas of the plan. Here are some examples:
Setting risks.
Selecting features to be tested and not tested.
Setting overall strategy for this level of plan.
Ensuring all required elements are in place for testing.
Providing for resolution of scheduling conflicts, especially if testing is done on the production system.
Who provides the required training?
Who makes the critical go/no go decisions for items not covered in the test plans?
Who is responsible for this risk?

Planning risks and contingencies
What are the overall risks to the project with an emphasis on the testing process?
Lack of personnel resources when testing is to begin.
Lack of availability of required hardware, software, data or tools.
Late delivery of the software, hardware or tools.
Delays in training on the application and/or tools.
Changes to the original requirements or designs.
Complexities involved in testing the applications
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the following actions will be taken:
The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
The number of tests performed will be reduced.
The number of acceptable defects will be increased.
These two items could lower the overall quality of the delivered product.
Resources will be added to the test team.
The test team will work overtime (this could affect team morale).
The scope of the plan may be changed.
There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals
Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
The audience for a unit test level plan is different from that of an integration, system or master level plan.
The levels and type of knowledge at the various levels will be different as well.
Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
Users may have varying levels of business acumen and very little technical skills.
Always be wary of users who claim high levels of technical skills and programmers who claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

What Is The Difference Between Quality Assurance, Quality Control, And Testing?

Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts. Since all three are necessary to effectively manage the risks of developing and maintaining software, it is important for software managers to understand the differences. They are defined below:
Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.


Quality Control:
A set of activities designed to evaluate a developed work product.

Testing:
The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.) QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail. In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements. Testing is one example of a QC activity, but there are others such as inspections. Both QA and QC activities are generally required for successful software development.
Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation, but Mosaic's experience suggests that:

While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.
The amount of external QA/QC should be a function of the project risk and the process maturity of an organization. As organizations mature, management and staff will implement the proper QA and QC approaches as a matter of habit. When this happens only minimal external guidance and review are needed.

Identify the 5 valid roles in an inspection

Author
Moderator
Reader
Recorder
Inspector


Author(s)
Person or persons primarily responsible for creating a work product. The member of the inspection team that provides information about the work product during all stages of the inspection process and corrects defects during the rework stage. (Also known as (AKA) "Owner(s)".)


Inspection Team
A small group of peers who have a vested interest in the quality of the inspected work product and perform the inspection. This group usually ranges in size from 3 to 8 people and can be selected from various areas of the development life cycle (requirements, design, implementation, testing, quality assurance, user, etc.). Selected members of the inspection team fulfill the roles of moderator, author, reader, and recorder.


Inspector
A person whose responsibilities include reviewing work products created by others. All members should be considered inspectors in an inspection team.


Moderator
Person who is primarily responsible for facilitating and coordinating an inspection. When there is no "Reader" in the inspection process, the moderator also controls the pace of review of the work product during the inspection meeting.


Reader
Person who guides the team during the inspection meeting by reading or paraphrasing the work product. The role of the reader is usually fulfilled by a member of the inspection team other than the author(s). Not all inspection methods use this role.

Recorder
Person who records, in writing, each defect found and its related information (severity, type, etc.) during the inspection meeting. (AKA "Scribe".)


Process
Formal Inspection
An inspection with the following characteristics:
Performed routinely and according to established procedures and schedules, with the expectation that all major defects found will be addressed.
Inspection data is collected and used for project management, quality evaluation, and process improvement.
Checklists are used to facilitate finding of defects and to help in classifying defects.
Inspection team members have received training in the inspection process.
Company or project meeting rate guidelines are followed in accordance with the given type of inspection.


Inspection
Two definitions currently in use are:
A static analysis technique that relies on visual examination of development products to detect defects, violations of development standards, and other problems.
An inspection is a formal review of a work product by the work product owner and a team of peers looking for errors, omissions, inconsistencies, and areas of confusion in the work product.


Inspection Stages
The sequential periods of time that break an inspection into component tasks. An inspection can include the following stages:
Planning Stage
Overview Meeting Stage (AKA Kickoff Stage)
Preparation Stage
Inspection Meeting Stage
Third Hour Stage (some processes use the Causal Analysis Stage instead)
Rework Stage
Follow-up Stage


Planning Stage
Period of time in which details for an inspection are decided and necessary arrangements are made. These usually include: checking to ensure that entry criteria have been met, selection of an inspection team, finding a time and place for the inspection meeting, and deciding whether an overview meeting is needed.


Overview Meeting Stage
Meeting where the author(s) present background information on the work product for the inspection team. An overview meeting is held only when the inspection team needs background information to efficiently and effectively examine the work product. (AKA Kickoff Stage.)


Kickoff Stage
The kickoff stage is used to brief the inspection team on the contents of the inspection packet, inspection objective(s), inspector's defect finding role(s), logistics for the inspection meeting, recommended preparation time and preparation stage data to be collected. The moderator can elect to hold a short (5 to 30 minutes) meeting or may use any other method that will accomplish briefing the team.


Preparation Stage
Period of time in which inspectors individually study and examine the work product. Usually a checklist is used to suggest potential defects in the work product.


Inspection Meeting Stage
Meeting where the work product is examined for defects by the entire inspection team. The results of this meeting are recorded in a defect list (defect log).


Third Hour Stage
Time allotted for members of the inspection team to resolve open issues and suggest solutions to known defects (or Causal Analysis Stage).


Causal Analysis Stage
Time allotted for the inspection team to analyze defect causes and/or inspection process problems and, if possible, to determine solutions for those problems.


Rework Stage
Time allotted for the author(s) to correct defects found by the inspection team.


Follow-Up Stage
Short meeting between the author(s) and moderator to verify that defects found during the inspection meeting have been corrected and that exit criteria have been met. When exit criteria have not been met, the inspection team repeats the inspection stages described under "Process".

Inspections During the Software Life Cycle

Formal inspections are in-process peer reviews conducted within the phase of the life cycle in which the product is developed. The software life cycle (the period of time that starts when a software product is conceived and ends when the product is no longer available for use) traditionally includes the following eight phases:
Concept and Initiation, Requirements, Architectural Design, Detailed Design, Implementation, Integration and Test, Acceptance and Delivery, and Sustaining Engineering and Operations.
This tutorial emphasizes inspections in the phases of the requirements, design, and implementation in software development and suggests products that may be inspected during each phase. The software life cycle used is the NASA standard waterfall model. The following sections describe the inspections to be conducted in the three phases.

What are paybacks from inspections

The key reason for inspections is to obtain a significant improvement in software quality, as measured by the defects found in the product when it is used. A project example from AT&T, the Integrated Corporate Information System (ICIS), shows the inspection results for a portion of ICIS.
Project:
Data base accounting
14 persons
3 month duration
20 modules; 7,000 LOC
Inspection Results:
Analysis: 23 defects
High-level design: 83 defects
Detail design: 85 defects
Code: 77 defects
Totals: 268 defects found in 37 inspections

Evaluation:
Product delivered on-time, within budget
Only 4 defects found in product
Stability Index* of 0.2% vs. 15% expected
Development personnel very favorable
where Stability Index = LOC maintained / total LOC, expressed as a percentage.
The evaluation indicates the high quality of the resulting product. Moreover, as Mike Fagan pointed out in 1976, inspections shorten the development schedule, and productivity increases when inspections are performed formally. In addition, the development timescale, testing cost, and lifetime cost can be expected to fall, and the manageability of the development process to improve.
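The Stability Index defined above can be computed directly. A sketch using the ICIS figures (7,000 total LOC; the maintained-LOC figure of 14 is back-calculated from the reported 0.2% and is an assumption, not from the source):

```python
def stability_index(loc_maintained, total_loc):
    """Stability Index = LOC maintained after delivery / total LOC, as a percent."""
    return 100.0 * loc_maintained / total_loc

# ICIS: 7,000 LOC total; 14 maintained LOC yields the reported 0.2%,
# far below the 15% that was expected.
print(stability_index(14, 7000))  # 0.2
```

The gap between the 0.2% achieved and the 15% expected is what makes the inspection payback argument concrete.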

What are the differences among inspections, walkthroughs and reviews

Among quality control methods, inspection is a mechanism that has proven extremely effective for the specific objective of product verification in many development activities. It is a structured method of quality control: it must follow a specified series of steps that define what can be inspected, when it can be inspected, who can inspect it, what preparation is needed, how the inspection is to be conducted, what data is to be collected, and what the follow-up to the inspection is. Inspections therefore give a project close procedural control and repeatability. Reviews and walkthroughs, by contrast, have less structured procedures and can take many purposes and formats. Reviews can be used to form decisions and resolve issues of design and development; they can also serve as a forum for information swapping or brainstorming. Walkthroughs are used to resolve design or implementation issues. Both methods can range from formalized, following a predefined set of procedures, to completely informal, so they lack the close procedural control and repeatability of inspections.

Why use inspections

Every effort is expected to contribute, without waste, to increased quality, productivity, and customer satisfaction. Waste, by contrast, is the cost of doing things over, possibly several times, until they are done correctly. The costs of lost time, lost productivity, lost customers, and lost business are real costs with no return on them. The cost of quality, however, is not such a negative cost. Experience with inspections shows that the time added to the development cycle to accommodate the inspection process is more than gained back in the testing and manufacturing cycles, and in the cost of redevelopment that no longer needs to be done [STRAUSS].