Friday, April 11, 2008

Web Terminologies: Useful for web application testers

This article will help you learn basic web terminology. While testing web applications it is important to understand the underlying web technologies; this knowledge increases test coverage and improves the capabilities of a web application tester.
This web terminology article was compiled by Meenakshi M. She works as a Test Engineer and has 3+ years of experience in manual and automation (QTP) testing.
This article basically covers following terminologies:
What is: Internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web servers, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Thick client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status codes


Web technology Guide
If you are working on web application testing, you should be aware of common web terminology. This page covers basic and advanced web terms that will help you test your web projects.

Web terminologies covered in this page are:
What is internet, www, TCP/IP, HTTP protocol, SSL (Secure socket layer), HTTPS, HTML, Web server, Web client, Proxy server, Caching, Cookies, Application server, Thin client, Daemon, Client side scripting, Server side scripting, CGI, Dynamic web pages, Digital certificates and list of HTTP status
codes.

•Internet
–A global network connecting millions of computers.

•World Wide Web (the Web)
–An information-sharing model built on top of the Internet. It uses the HTTP protocol and browsers (such as Internet Explorer) to access Web pages formatted in HTML and linked via hyperlinks. The Web is only a subset of the Internet; other uses of the Internet include email (via SMTP), Usenet, instant messaging and file transfer (via FTP).

•URL (Uniform Resource Locator)
–The address of documents and other content on the Web. A URL consists of a protocol, a domain and a file: the protocol can be HTTP, FTP, Telnet, News, etc.; the domain name is the DNS name of the server; and the file can be static HTML, DOC, JPEG, etc. In other words, URLs are strings that uniquely identify resources on the Internet.
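As a quick illustration (a minimal Python sketch; the URL shown is a made-up example), the standard urllib.parse module splits a URL into exactly these parts:

```python
from urllib.parse import urlparse

# Hypothetical example URL used only for illustration.
url = "http://www.example.com/docs/index.html?lang=en"

parts = urlparse(url)
print(parts.scheme)   # 'http'             -> the protocol
print(parts.netloc)   # 'www.example.com'  -> the domain (DNS name of the server)
print(parts.path)     # '/docs/index.html' -> the file/resource
print(parts.query)    # 'lang=en'          -> optional query string
```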

•TCP/IP
–The protocol suite used to send data over the Internet. TCP/IP consists of four layers: the Application layer, Transport layer, Network layer and Link layer.


Internet Protocols:
Application Layer - DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, BitTorrent, RTP, rlogin

Transport Layer - TCP, UDP, DCCP, SCTP, IL, RUDP

Network Layer - IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP

Link Layer - Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, DTM, Frame Relay, SMDS

•TCP (Transmission Control Protocol)
–Enables two devices to establish a connection and exchange data.

–In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol below it, and an application above it. Applications often need reliable pipe-like connections to each other,
whereas the Internet Protocol does not provide such streams, but rather only unreliable packets. TCP does the task of the transport layer in the simplified OSI model of computer networks.

–It is one of the core protocols of the Internet protocol suite. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange data or packets. The protocol guarantees reliable and in-order delivery of data from sender to receiver. TCP also distinguishes data for multiple, concurrent applications (e.g. a Web server and an e-mail server) running on the same host.
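For a concrete feel of "establish a connection and exchange data", here is a minimal Python sketch using the standard socket module (the host name is only an example); TCP handles ordering and retransmission underneath:

```python
import socket

# Connect to a web server's TCP port 80 (example.com is used purely for illustration).
with socket.create_connection(("example.com", 80), timeout=10) as sock:
    # Send a minimal HTTP request by hand, just to show raw data exchange over TCP.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = sock.recv(4096)   # TCP delivers the bytes reliably and in order
        if not chunk:
            break
        response += chunk

print(response.decode("latin-1").splitlines()[0])  # e.g. 'HTTP/1.1 200 OK'
```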

•IP

Specifies the format of data packets and the addressing scheme. The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network layer protocol in the Internet protocol suite. The main aspects of IP are addressing and routing: addressing refers to how end hosts are assigned IP addresses, while IP routing is performed by all hosts, but most importantly by internetwork routers.

•IP Address

–A unique number assigned to each connected device. It is often assigned dynamically to users by an ISP on a session-by-session basis (a dynamic IP address), but is increasingly becoming dedicated, particularly with always-on broadband connections (a static IP address).

•Packet
–A portion of a message sent over a TCP/IP network. It contains the message content and its destination address.
•HTTP (Hypertext Transfer Protocol)
–Underlying protocol of the World Wide Web. Defines how messages are formatted and transmitted over a TCP/IP network for Web sites. Defines what actions Web servers and Web browsers take in
response to various commands.
–HTTP is stateless. The advantage of a stateless protocol is that hosts do not need to retain information about users between requests, but this forces the use of alternative methods for maintaining users' state, for example when a host would like to customize content for a user who has visited before. The most common method of solving this problem is the use of cookies. Other methods include session control, hidden variables, etc.

–Example: when you enter a URL in your browser, an HTTP command is sent to the Web server telling it to fetch and transmit the requested Web page. The main HTTP methods are listed below (a short request sketch follows this list):

oHEAD: Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers without having to transport the entire content.

oGET: Requests a representation of the specified resource. By far the most common method used on the Web today.

oPOST: Submits user data (e.g. from an HTML form) to the identified resource. The data is included in the body of the request.


oPUT: Uploads a representation of the specified resource.

oDELETE: Deletes the specified resource (rarely implemented).

oTRACE: Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.

oOPTIONS: Returns the HTTP methods that the server supports. This can be used to check the functionality of a web server.

oCONNECT: For use with a proxy that can switch to being an SSL tunnel.
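The sketch below (Python's standard http.client; the host is a placeholder) shows two of these methods in action – a GET returns headers plus a body, while a HEAD returns the same headers with no body:

```python
import http.client

# example.com is a placeholder host used only for illustration.

# GET: request a representation of the resource (the response has a body).
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)            # e.g. 200 OK
print(len(resp.read()), "bytes of body")
conn.close()

# HEAD: same headers as the GET above, but no response body is transferred.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type"))
print(len(resp.read()), "bytes of body")   # 0 bytes for HEAD
conn.close()
```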


•HTTP pipelining
–Appeared in HTTP/1.1. It allows clients to send multiple requests at once, without waiting for an answer. Servers can also send multiple answers without closing their socket. This results in fewer round trips and faster load times. It is particularly useful for satellite Internet connections and other connections with high latency, as separate requests need not be made for each file. Since it is possible to fit several HTTP requests in the same TCP packet, HTTP pipelining allows fewer TCP packets to be sent over the network, reducing network load. HTTP pipelining requires both the client and the server to support it. Servers are required to accept pipelined requests in order to be HTTP/1.1 compliant, although they are not required to pipeline their responses.

•HTTP-Tunnel
–A technology that allows users to perform various Internet tasks despite the restrictions imposed by firewalls. This is made possible by sending data through HTTP (port 80). Additionally, HTTP-Tunnel technology is very secure, making it indispensable for both personal and business communications. The HTTP-Tunnel client is an application that runs in your system tray, acting as a SOCKS server and managing all data transmissions between the computer and the network.

•HTTP streaming
–It is a mechanism for sending data from a Web server to a Web browser in response to an event. HTTP Streaming is achieved through several common mechanisms. In one such mechanism the
web server does not terminate the response to the client after data has been served. This differs from the typical HTTP cycle in which the response is closed immediately following data transmission.
The web server leaves the response open such that if an event is received, it can immediately be sent to the client. Otherwise the data would have to be queued until the client's next request is made
to the web server. The act of repeatedly queuing and re-requesting information is known as a polling mechanism. Typical uses for HTTP streaming include market data distribution (stock tickers),
live chat/messaging systems, online betting and gaming, sport results, monitoring consoles and Sensor network monitoring.
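A very rough sketch of the client side of such a streamed response (Python http.client; the host and path are hypothetical) – the loop keeps reading as long as the server keeps the response open and pushes events:

```python
import http.client

# Hypothetical streaming endpoint; a real server would keep this response open
# and push an update whenever an event (e.g. a price tick) occurs.
conn = http.client.HTTPConnection("stream.example.com", 80, timeout=60)
conn.request("GET", "/live-updates")
resp = conn.getresponse()

while True:
    chunk = resp.read(1024)   # blocks until the server sends more data
    if not chunk:             # an empty read means the server finally closed the response
        break
    print("received update:", chunk.decode("utf-8", errors="replace").strip())

conn.close()
```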

•HTTP referrer
–It identifies the webpage that linked to a new page on the Internet. By checking the referrer, the new page can see where the request came from. Referrer logging is used to allow websites and web servers to identify where people are visiting them from, for promotional or security purposes. Since the referrer can easily be spoofed (faked), however, it is of limited use in this regard except on a casual basis. A dereferrer is a means to strip the details of the referring website from a link request so that the target website cannot identify the page which was clicked on to originate a request. "Referer" is a common misspelling of the word referrer; it is so common, in fact, that it made it into the official specification of HTTP – the communication protocol of the World Wide Web – and has therefore become the standard industry spelling when discussing HTTP referrers.
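For illustration only (Python http.client; the host and pages are hypothetical), the referrer is just an ordinary request header set by the client, which is also why it is so easy to spoof:

```python
import http.client

conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
# The Referer header (note the historical spelling) tells the server which page
# linked to the requested one. Because the client sets it, it can be faked.
conn.request("GET", "/landing-page.html",
             headers={"Referer": "http://www.example.com/previous-page.html"})
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()
```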

•SSL (Secure Sockets Layer)
–A protocol for establishing a secure connection for transmission; it is used via the HTTPS convention.
–SSL provides endpoint authentication and communications privacy over the Internet using cryptography. In typical use, only the server is authenticated (i.e. its identity is ensured) while the client remains unauthenticated; mutual authentication requires public key infrastructure (PKI) deployment to clients. The protocols allow client/server applications to communicate in a way designed to prevent eavesdropping, tampering, and message forgery.

–SSL involves a number of basic phases:
oPeer negotiation for algorithm support
oPublic key encryption-based key exchange and certificate-based authentication
oSymmetric cipher-based traffic encryption
–During the first phase, the client and server negotiate which cryptographic algorithms will be used. Current implementations support the following choices:
oFor public-key cryptography: RSA, Diffie-Hellman, DSA or Fortezza
oFor symmetric ciphers: RC2, RC4, IDEA, DES, Triple DES or AES
oFor one-way hash functions: MD5 or SHA

•HTTPS
–A URI scheme that is syntactically identical to the http: scheme normally used for accessing resources via HTTP. Using an https: URL indicates that HTTP is to be used, but with a different default port (443) and an additional encryption/authentication layer between HTTP and TCP. This system was invented by Netscape Communications Corporation to provide authentication and encrypted communication, and is widely used on the Web for security-sensitive communication such as payment transactions.
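A minimal sketch of an HTTPS connection in Python (the host is illustrative): the standard ssl module performs the negotiation, key exchange and certificate verification described above, after which the application speaks plain HTTP over the encrypted channel:

```python
import socket
import ssl

hostname = "www.example.com"            # placeholder host for illustration
context = ssl.create_default_context()  # uses the system's trusted CA certificates

with socket.create_connection((hostname, 443), timeout=10) as raw_sock:
    # wrap_socket performs the SSL/TLS handshake: algorithm negotiation,
    # key exchange and server certificate verification.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("negotiated cipher:", tls_sock.cipher())   # (name, protocol, bits)
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + hostname.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode("latin-1").splitlines()[0])
```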
•HTML (Hypertext Markup Language)
–The authoring language used to create documents on the World Wide Web

–Hundreds of tags can be used to format and layout a Web page’s content and to hyperlink to other Web content.

•Hyperlink
–Used to connect a user to other parts of a web site and to other web sites and web-enabled services.

•Web server
–A computer that is connected to the Internet. Hosts Web content and is configured to share that content.
–A Web server is responsible for accepting HTTP requests from clients (Web browsers) and serving them Web pages, which are usually HTML documents and linked objects (images, etc.).

•Examples:
oApache HTTP Server from the Apache Software Foundation.
oInternet Information Services (IIS) from Microsoft.
oSun Java System Web Server from Sun Microsystems, formerly Sun ONE Web Server, iPlanet Web Server, and Netscape Enterprise Server.
oZeus Web Server from Zeus Technology
•Web client

–Most commonly in the form of Web browser software such as Internet Explorer or Netscape

–Used to navigate the Web and retrieve Web content from Web servers for viewing.

•Proxy server
–An intermediary server that provides a gateway to the Web (e.g., employee access to the Web most often goes through a proxy)
–Improves performance through caching and filters the Web
–The proxy server will also log each user interaction.

•Caching
–Web browsers and proxy servers save a local copy of the downloaded content – pages that display personal information should be set to prohibit caching.

•Web form
–A portion of a Web page containing blank fields that users can fill in with data (including personal information) and submit for the Web server to process.
•Web server log
–Every time a Web page is requested, the Web server may automatically log the following information:
oIP address of the visitor
oDate and time of the request
oURL of the requested file
oURL the visitor came from immediately before (referrer URL)
oThe visitor's Web browser type and operating system

•Cookies
–A small text file provided by a Web server and stored on a user's PC. The text can be sent back to the server every time the browser requests a page from the server. Cookies are used to identify a user as they navigate through a Web site and/or return at a later time. Cookies enable a range of functions, including personalization of content.

•Session vs. persistent cookies
–A session is a unique ID assigned to the client browser by a web server to identify the state of the client, because web servers themselves are stateless.

–A session cookie is stored only while the user is connected to the particular Web server – the cookie is deleted when the user disconnects

–Persistent cookies are set to expire at some point in the future – many are set to expire a number of years in the future.
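As a small sketch (Python standard library; the URL is a placeholder), cookies arrive in Set-Cookie response headers and are returned automatically on later requests; a cookie with no expiry date is a session cookie, while one with an expiry is persistent:

```python
import http.cookiejar
import urllib.request

# CookieJar stores cookies handed out by the server and sends them back automatically.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Placeholder URL for illustration; a real site would answer with Set-Cookie headers.
opener.open("http://www.example.com/")

for cookie in jar:
    kind = "session cookie" if cookie.expires is None else "persistent cookie"
    print(cookie.name, "=", cookie.value, "->", kind)
```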

•Socket
–A socket is a network communications endpoint.

•Application Server
–An application server is a server computer in a computer network dedicated to running certain software applications. The term also refers to the software installed on such a computer to facilitate the
serving of other applications. Application server products typically bundle middleware to enable applications to intercommunicate with various qualities of service — reliability, security, non-repudiation, and so on. Application servers also provide an API to programmers, so that they don't have to be concerned with the operating system or the huge array of interfaces required of a modern web-based application. Communication occurs through the web in the form of HTML and XML, as a link to various databases, and, quite often, as a link to systems and devices ranging from huge legacy applications to small information devices, such as an atomic clock or a home appliance

–An application server exposes business logic to client applications through various protocols, possibly including HTTP. The server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging.

–Examples: JBoss (Red Hat), WebSphere (IBM), Oracle Application Server 10g (Oracle Corporation) and WebLogic (BEA).

•Thin Client
–A thin client is a computer (client) in a client-server architecture that has little or no application logic, so it depends primarily on the central server for processing activities. It is designed to be especially small so that the bulk of the data processing occurs on the server.

•Thick Client
–A client that performs the bulk of any data processing operations itself and relies on the server it is associated with primarily for data storage.


•Daemon
–A computer program that runs in the background, rather than under the direct control of a user; daemons are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log. Daemons typically do not have any existing parent process, but reside directly under init in the process hierarchy. Programs usually become daemons by forking a child process and then making the parent process exit, so that init adopts the child. This practice is commonly known as "fork off and die." Systems often start (or "launch") daemons at boot time: they often serve the function of responding to network requests, hardware activity, or other programs by performing some task. Daemons can also configure hardware (like devfsd on some Linux systems), run scheduled tasks (like cron), and perform a variety of other tasks.
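The "fork off and die" idea can be sketched in a few lines of Python (a simplified, Unix-only illustration, not a production daemonizer; a complete one would also redirect file descriptors, handle signals and write a PID file):

```python
import os
import sys
import time

def daemonize():
    # Fork a child and let the parent exit, so init adopts the child (Unix only).
    if os.fork() > 0:
        sys.exit(0)          # parent "dies"
    os.setsid()              # child starts a new session, detached from the terminal
    os.chdir("/")            # avoid holding any directory in use
    if os.fork() > 0:        # second fork: the daemon can never re-acquire a terminal
        sys.exit(0)

if __name__ == "__main__":
    daemonize()
    while True:              # the background task, e.g. answering requests or scheduled jobs
        time.sleep(60)
```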

•Client-side scripting
–Generally refers to the class of computer programs on the web that are executed client-side, by the user's web browser, instead of server-side (on the web server). This type of computer programming is an important part of the Dynamic HTML (DHTML) concept, enabling web pages to be scripted; that is, to have different and changing content depending on user input, environmental conditions (such as the time of day), or other variables.
–Web authors write client-side scripts in languages such as
JavaScript (Client-side JavaScript) or VBScript, which are based on several standards:

oHTML scripting
oHTTP
oDocument Object Model

•Client-side scripts are often embedded within an HTML document, but they may also be contained in a separate file which is referenced by the document (or documents) that use it. Upon request, the necessary files are sent to the user's computer by the web server (or servers) on which they reside. The user's web browser executes the script, then displays the document, including any visible output from the script. Client-side scripts may also contain instructions for the browser to follow if the user interacts with the document in a certain way, e.g., clicks a certain button. These instructions can be followed without further communication with the server, though they may require such communication.

•Server-side Scripting
–It is a web server technology in which a user's request is fulfilled by running a script directly on the web server to generate dynamic HTML pages. It is usually used to provide interactive web sites that
interface to databases or other data stores. This is different from client-side scripting where scripts are run by the viewing web browser, usually in JavaScript. The primary advantage to server-side scripting is the ability to highly customize the response based on the user's requirements, access rights, or queries into data stores.
oASP: A Microsoft-designed solution allowing various languages (though generally VBScript is used) inside an HTML-like outer page; mainly used on Windows, with limited support on other platforms.
oColdFusion: A cross-platform, tag-based commercial server-side scripting system.
oJSP: A Java-based system for embedding code in HTML pages.
oLasso: A datasource-neutral interpreted programming language and cross-platform server.
oSSI: A fairly basic system which is part of the common Apache web server. Not a full programming environment by far, but still handy for simple things like including a common menu.
oPHP: A common open-source solution based on including code in its own language in an HTML page.
oServer-side JavaScript: A language generally used on the client side but occasionally on the server side as well.
oSMX: A Lisp-like open-source language designed to be embedded into an HTML page.




•Common Gateway Interface (CGI)
–A standard protocol for interfacing external application software with an information server, commonly a web server. It allows the server to pass requests from a client web browser to the external application. The web server can then return the output from the application to the web browser.
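A classic CGI program is simply an executable that the web server runs once per request: it reads the request from environment variables and standard input, and writes headers plus a body to standard output. A minimal Python sketch (the query parameter name is made up for illustration):

```python
#!/usr/bin/env python3
# Minimal CGI script: the web server sets environment variables such as
# QUERY_STRING and passes our standard output back to the browser.
import os
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = params.get("name", ["world"])[0]   # 'name' is a hypothetical query parameter

print("Content-Type: text/html")          # CGI headers, then a blank line, then the body
print()
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")
```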

•Dynamic Web pages:
–Can be defined as: (1) Web pages containing dynamic content (e.g., images, text, form fields, etc.) that can change/move without the Web page being reloaded, or (2) Web pages that are produced on the fly by server-side programs, frequently based on parameters in the URL or from an HTML form. Web pages that adhere to the first definition are often called Dynamic HTML (DHTML) pages; client-side languages like JavaScript are frequently used to produce these types of dynamic web pages. Web pages that adhere to the second definition are often created with the help of server-side languages such as PHP, Perl, ASP/.NET, JSP and other languages. These server-side languages typically use the Common Gateway Interface (CGI) to produce dynamic web pages.

•Digital Certificates
In cryptography, a public key certificate (or identity certificate) is a certificate which uses a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual. In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or of other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.

Certificates enable the large-scale use of public-key cryptography. Securely exchanging secret keys amongst users becomes impractical to the point of effective impossibility for anything other than quite small networks. Public key cryptography provides a way to avoid this problem. In principle, if Alice wants others to be able to send her secret messages, she need only publish her public key. Anyone possessing it can then send her secure information. Unfortunately, David could publish a different public key (for which he knows the related private key) claiming that it is Alice's public key. In so doing, David could intercept and read at least some of the messages meant for Alice. But if Alice builds her public key into a certificate and has it digitally signed by a trusted third party (Trent), anyone who trusts Trent can merely check the certificate to see whether Trent thinks the embedded public key is Alice's. In typical public-key infrastructures (PKIs), Trent will be a CA, who is trusted by all participants. In a web of trust, Trent can be any user, and whether to trust that user's attestation that a particular public key belongs to Alice will be up to the person wishing to send a message to Alice.

In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA — if both use employer CAs, different employers would produce this result), so Bob's certificate may also include his CA's public key signed by a "higher level" CA2, which might be
recognized by Alice. This process leads in general to a hierarchy of certificates, and to even more complex trust relationships. Public key infrastructure refers, mostly, to the software that manages certificates in a large-scale setting. In X.509 PKI systems, the hierarchy of certificates is always a top-down tree, with a root certificate at the top, representing a CA that is 'so central' to the scheme that it
does not need to be authenticated by some trusted third party. A certificate may be revoked if it is discovered that its related private key has been compromised, or if the relationship (between an entity and a public key) embedded in the certificate is discovered to be incorrect or has changed; this
might occur, for example, if a person changes jobs or names. A revocation will likely be a rare occurrence, but the possibility means that when a certificate is trusted, the user should always check its validity. This can be done by comparing it against a certificate revocation list (CRL) — a list of revoked or cancelled certificates.
Ensuring that such a list is up-to-date and accurate is a core function in a centralized PKI, one which requires both staff and budget and one which is therefore sometimes not properly done. To be effective, it must be readily available to anyone who needs it whenever it is needed and must be updated frequently. The other way to check a certificate's validity is to query the certificate authority using the Online Certificate Status Protocol (OCSP) to learn the status of a specific certificate.
Both of these methods appear to be on the verge of being supplanted by XKMS.

This new standard, however, is yet to see widespread implementation.
A certificate typically includes:

The public key being signed.
A name, which can refer to a person, a computer or an organization.
A validity period.
The location (URL) of a revocation center.
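To see these fields on a real certificate, Python's ssl module can fetch and decode a server's certificate (the host below is only an example); the dictionary returned by getpeercert() carries the subject, issuer and validity period:

```python
import socket
import ssl

hostname = "www.example.com"    # placeholder server, for illustration only
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()   # decoded X.509 fields as a dictionary

print("subject:   ", cert.get("subject"))   # who the certificate identifies
print("issuer:    ", cert.get("issuer"))    # the signing certificate authority
print("valid from ", cert.get("notBefore"), "until", cert.get("notAfter"))
```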
The most common certificate standard is the ITU-T X.509. X.509 is being adapted to the Internet by the IETF PKIX working group.

Classes
VeriSign introduced the concept of three classes of digital certificates:
Class 1 for individuals, intended for email;
Class 2 for organizations, for which proof of identity is required; and
Class 3 for servers and software signing, for which independent verification and
checking of identity and authority is done by the issuing certificate authority (CA).

•List of HTTP status codes
1xx Informational
Request received, continuing process.
100: Continue
101: Switching Protocols
2xx Success
The action was successfully received, understood, and accepted.
200: OK
201: Created
202: Accepted
203: Non-Authoritative Information
204: No Content
205: Reset Content
206: Partial Content
3xx Redirection
The client must take additional action to complete the request.
300: Multiple Choices
301: Moved Permanently
302: Moved Temporarily (HTTP/1.0)
302: Found (HTTP/1.1)
303: See Other (HTTP/1.1)
304: Not Modified
305: Use Proxy
(Many HTTP clients, such as Mozilla and Internet Explorer, do not correctly handle responses with this status code.)
306: (no longer used, but reserved)
307: Temporary Redirect
4xx Client Error
The request contains bad syntax or cannot be fulfilled.
400: Bad Request
401: Unauthorized
Similar to 403/Forbidden, but specifically for use when authentication is possible
but has failed or not yet been provided. See basic authentication scheme and
digest access authentication.
402: Payment Required
403: Forbidden
404: Not Found
405: Method Not Allowed
406: Not Acceptable
407: Proxy Authentication Required
408: Request Timeout
409: Conflict
410: Gone
411: Length Required
412: Precondition Failed
413: Request Entity Too Large
414: Request-URI Too Long
415: Unsupported Media Type
416: Requested Range Not Satisfiable
417: Expectation Failed
5xx Server Error
The server failed to fulfill an apparently valid request.
500: Internal Server Error
501: Not Implemented
502: Bad Gateway
503: Service Unavailable
504: Gateway Timeout
505: HTTP Version Not Supported
509: Bandwidth Limit Exceeded (unofficial extension)
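When testing a web application it is useful to assert on these codes directly. Below is a small Python sketch (host and paths are placeholders for the application under test); note that http.client does not follow redirects, so 3xx codes stay visible:

```python
import http.client

# Placeholder host and paths for a hypothetical application under test.
HOST = "www.example.com"
checks = {
    "/":             200,   # home page should load
    "/old-page":     301,   # should redirect permanently
    "/no-such-page": 404,   # should report "Not Found"
}

for path, expected in checks.items():
    conn = http.client.HTTPConnection(HOST, 80, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    verdict = "PASS" if resp.status == expected else "FAIL"
    print(verdict, path, "->", resp.status, resp.reason, "(expected", expected, ")")
    conn.close()
```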

What you need to know about BVT (Build Verification Testing)

What is BVT?
Build Verification Test (BVT) is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to the developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).
A new build is checked mainly for two things:
Build validation
Build acceptance
Some BVT basics:
It is a subset of tests that verify main functionalities.
BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
The advantage of BVT is that it saves the test team the effort of setting up and testing a build when major functionality is broken.
Design BVTs carefully enough to cover basic functionality.
Typically BVT should not run more than 30 minutes.
BVT is a type of
regression testing, done on each and every new build.
BVT primarily checks project integrity, i.e. whether all the modules are integrated properly or not. Module integration testing is very important when different teams develop project modules. I have heard of many cases of application failure due to improper module integration; in the worst cases the complete project gets scrapped because of module integration failure.
What is the main task in a build release? Obviously file 'check-in', i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check whether all the new and modified files are included in the release, all file formats are correct, and every file's version, language and flags are right. These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a very tricky decision to take before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include in it.
Here are some simple tips to include
test cases in your BVT automation suite:
Include only critical test cases in BVT.
All test cases included in BVT should be stable.
All the test cases should have known expected result.
Make sure all included critical functionality test cases are sufficient for application test coverage.
Also, do not include modules in BVT that are not yet stable. For some under-development features you can't predict the expected behavior, as these modules are unstable and may have known failures even before testing. There is no point using such modules or test cases in BVT.
You can simplify the task of selecting critical functionality test cases by communicating with everyone involved in the project development and testing life cycle. Such a process should agree on the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.
Example: Test cases to be included in BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test cases for writing something into the text editor.
3) Test case for copy, cut, paste functionality of the text editor.
4) Test case for opening, saving, deleting a text file.
These are some sample test cases which can be marked as 'critical'; for every minor or major change in the application these basic critical test cases should be executed. This task can be easily accomplished by BVT automation.
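A BVT suite for such an application could look like the sketch below (Python unittest). The TextEditor class here is an in-memory stand-in included only so the example runs; a real BVT would drive the actual build through whatever automation tool the project uses (QTP, a GUI driver, etc.):

```python
import unittest


class TextEditor:
    """In-memory stand-in for the real application driver, for illustration only."""

    def __init__(self):
        self.files = {}
        self.current = None
        self.text = ""
        self.clipboard = ""

    def create_file(self, name):
        self.files[name] = ""
        self.current = name
        self.text = ""
        return True

    def type_text(self, text):
        self.text += text

    def get_text(self):
        return self.text

    def copy_all(self):
        self.clipboard = self.text

    def paste(self):
        self.text += self.clipboard

    def save(self):
        self.files[self.current] = self.text
        return True

    def open_file(self, name):
        self.text = self.files[name]
        return True

    def delete_file(self, name):
        del self.files[name]
        return True


class BuildVerificationTests(unittest.TestCase):
    """Critical, stable smoke tests only - run against every new build."""

    def setUp(self):
        self.editor = TextEditor()

    def test_create_text_file(self):
        self.assertTrue(self.editor.create_file("bvt.txt"))

    def test_write_text(self):
        self.editor.create_file("bvt.txt")
        self.editor.type_text("hello build")
        self.assertEqual(self.editor.get_text(), "hello build")

    def test_copy_paste(self):
        self.editor.create_file("bvt.txt")
        self.editor.type_text("hello")
        self.editor.copy_all()
        self.editor.paste()
        self.assertEqual(self.editor.get_text(), "hellohello")

    def test_open_save_delete(self):
        self.editor.create_file("bvt.txt")
        self.editor.type_text("content")
        self.assertTrue(self.editor.save())
        self.assertTrue(self.editor.open_file("bvt.txt"))
        self.assertTrue(self.editor.delete_file("bvt.txt"))


if __name__ == "__main__":
    unittest.main()
```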
The BVT automation suite needs to be maintained and modified from time to time, e.g. adding test cases to the BVT when new stable project modules become available.
What happens when the BVT suite runs: say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result of the BVT.
3) If the BVT fails, the BVT owner diagnoses the cause of failure.
4) If the failure cause is a defect in the build, all the relevant information with failure logs is sent to the respective developers.
5) The developer, based on an initial diagnosis, replies to the team about the failure cause: is this really a bug, and if so, what will the bug-fixing scenario be?
6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes BVT it is handed to the test team for further detailed functionality, performance and other tests.
This process gets repeated for every new build.
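A rough automation of that flow might look like this (Python; the suite name, mail server and recipient addresses are assumptions, not real project values):

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Assumed values - replace with the project's real suite, mail server and recipients.
BVT_COMMAND = ["python", "-m", "unittest", "bvt_suite", "-v"]
RECIPIENTS = ["dev-team@example.com", "test-team@example.com"]
SMTP_HOST = "mail.example.com"

def run_bvt_and_notify():
    # 1. Execute the BVT suite against the freshly released build.
    result = subprocess.run(BVT_COMMAND, capture_output=True, text=True)
    passed = (result.returncode == 0)

    # 2. Mail the result (including the failure log) to everyone on the project.
    msg = EmailMessage()
    msg["Subject"] = "BVT PASSED" if passed else "BVT FAILED - build rejected"
    msg["From"] = "bvt-owner@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(result.stdout + "\n" + result.stderr)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

    # 3. Only a passing build goes to the test team for detailed testing.
    return passed

if __name__ == "__main__":
    run_bvt_and_notify()
```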
Why does a BVT or build fail? A BVT breaks sometimes. This doesn't mean that there is always a bug in the build. There are other reasons for a build to fail, such as a test case coding error, an automation suite error, an infrastructure error, hardware failures, etc. You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.
Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This will help the developer team to debug and quickly find the failure cause.
3) Select stable test cases to include in BVT. For new features, if a new critical test case passes consistently on different configurations then promote it into your BVT suite. This will reduce the probability of frequent build failure due to new unstable modules and test cases.
4) Automate the BVT process as much as possible, right from the build release process to the BVT result.
5) Have some penalty for breaking the build; some chocolates or a team coffee party from the developer who breaks the build will do.
Conclusion: BVT is nothing but a set of regression test cases that are executed for each new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. BVT can be run by a developer or tester, and the BVT result is communicated throughout the team; immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. This saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.
If you have some experience with the BVT process, then please share it.

Manual and Automation testing Challenges

Software testing has a lot of challenges, both in manual and in automation testing. Generally, in a manual testing scenario, developers throw the build over to the test team, assuming that the responsible test team or tester will pick up the build and come and ask what the build is about. This is the case in organizations not following so-called 'processes'. The tester is the middleman between the development team and the customers, handling the pressure from both sides. And I assume most of our readers are smart enough to handle this pressure. Aren't you?
This is not always the case, though. Sometimes testers add complications to the testing process due to their unskilled way of working. In this post I have added the main testing challenges created by testing staff, development staff, testing processes and wrong management decisions. So here we go with the top challenges:


1) Testing the complete application: Is it possible? I think it is impossible. There are millions of test combinations. It's not possible to test each and every combination in either manual or automation testing. If you try all these combinations you will never ship the product.

2) Misunderstanding of company processes: Sometimes you just don't pay proper attention to what the company-defined processes are and what purposes they serve. There is a myth among testers that they should always follow company processes, even when those processes are not applicable to their current testing scenario. This results in incomplete and inappropriate application testing.

3) Relationship with developers: A big challenge. It requires a very skilled tester to handle this relationship positively while still getting the work done the tester's way. There are simply hundreds of excuses developers or testers can make when they do not agree on some point. For this, the tester also requires good communication, troubleshooting and analyzing skills.

4) Regression testing: When a project keeps expanding, the regression testing work simply becomes uncontrolled. There is pressure to handle current functionality changes, checks of previously working functionality, and bug tracking.

5) Lack of skilled testers: I will call this a 'wrong management decision' made while selecting or training testers for the project task at hand. These unskilled fellows may add more chaos than simplification to the testing work. This results in incomplete, insufficient and ad-hoc testing throughout the testing life cycle.

6) Testing always under time constraints: "Hey tester, we want to ship this product by this weekend, are you ready for completion?" When this order comes from the boss, the tester simply focuses on task completion and not on test coverage and quality of work. There is a huge list of tasks that you need to complete within the specified time, including writing, executing, automating and reviewing the test cases.

7) Which tests to execute first? If you are facing the challenge stated in point 6, then how will you decide which test cases should be executed, and with what priority? Which tests are more important than others? This requires good experience of working under pressure.

8) Understanding the requirements: Sometimes testers are responsible for communicating with customers to understand the requirements. What if the tester fails to understand the requirements? Will he be able to test the application properly? Definitely not! Testers require good listening and understanding capabilities.
9) Automation testing: This brings many sub-challenges. Should we automate the testing work? Up to what level should automation be done? Do we have sufficient and skilled resources for automation? Does time permit automating the test cases? The decision between automation and manual testing needs to address the pros and cons of each approach.
10) Deciding when to stop testing: When to stop testing? A very difficult decision. It requires sound judgment of the testing processes and the importance of each one, as well as the ability to make 'on the fly' decisions.


11) One test team under multiple projects: It is challenging to keep track of each task, and there are communication challenges. Many times this results in the failure of one or both projects.

12) Reuse of test scripts: Application development methods are changing rapidly, making it difficult to manage test tools and test scripts. Test script migration or reuse is a very essential but difficult task.

13) Testers focusing on finding easy bugs: If the organization rewards testers based on the number of bugs found (a very bad approach to judging testers' performance), then some testers concentrate only on finding easy bugs that don't require deep understanding and testing. Hard or subtle bugs remain unnoticed in such a testing approach.

14) Coping with attrition: Increasing salaries and benefits make many employees leave the company at very short career intervals. Management faces hard problems coping with the attrition rate. The challenges: new testers require project training from the beginning, complex projects are difficult to understand, and shipping dates get delayed!
These are some top software testing challenges we face daily. Project success or failure depends largely on how you address these basic issues.
For further reference and detailed solutions to these challenges, refer to the book "Surviving the Top Ten Challenges of Software Testing" by William E. Perry and Randall W. Rice.

Many of you are working in the manual and/or automation testing field.
I want your views on handling these software testing challenges. Feel free to express your views in comment section below.

Check your eligibility for CSTE certification. Take this sample CSTE examination

Here is one more 'sample exam questions' article on CSTE certification. The CSTE certification is a basic certification that checks a tester's skill and understanding of software testing theory and software testing practices.
If you are applying for CSTE certification, check whether you can answer at least 75% of the following test questions. The four-and-a-half-hour CSTE exam consists of 4 parts: two multiple-choice parts and two essay parts.
Below you will find 20 multiple choice questions from all skill categories. There are around 10 skill categories and I have included 2 questions from each category.
Skill categories:
Software Testing Principles and Concepts
Building the Test Environment
Managing the Test Project
Test Planning
Executing the Test Plan
Test Reporting Process
User Acceptance Testing
Testing Software Developed by Contractors
Testing Internal Control
Testing New Technologies
These are the latest sample questions from the CSTE CBOK.
Mark the answers somewhere so that you can check the score at the end of the test.
1. The customer’s view of quality means:
a. Meeting requirements
b. Doing it the right way
c. Doing it right the first time
d. Fit for use
e. Doing it on time


2. The testing of a single program, or function, usually performed by the developer is called:
a. Unit testing
b. Integration testing
c. System testing
d. Regression testing
e. Acceptance testing

3. The measure used to evaluate the correctness of a product is called the product:
a. Policy
b. Standard
c. Procedure to do work
d. Procedure to check work
e. Guideline

4. Which of the four components of the test environment is considered to be the most important component of the test environment:
a. Management support
b. Tester competency
c. Test work processes
d. Testing techniques and tools


5. Effective test managers are effective listeners. The type of listening in which the tester is performing an analysis of what the speaker is saying is called:
a. Discriminative listening
b. Comprehensive listening
c. Therapeutic listening
d. Critical listening
e. Appreciative listening

6. To become a CSTE, an individual has a responsibility to accept the standards of conduct defined by the certification board. These standards of conduct are called:
a. Code of ethics
b. Continuing professional education requirement
c. Obtaining references to support experience
d. Joining a professional testing chapter
e. Following the common body of knowledge in the practice of software testing

7. Which of the following are risks that testers face in performing their test activities:
a. Not enough training
b. Lack of test tools
c. Not enough time for testing
d. Rapid change
e. All of the above

8. All of the following are methods to minimize loss due to risk. Which one is not a method to minimize loss due to risk:
a. Reduce opportunity for error
b. Identify error prior to loss
c. Quantify loss
d. Minimize loss
e. Recover loss

9. Defect prevention involves which of the following steps:
a. Identify critical tasks
b. Estimate expected impact
c. Minimize expected impact
d. a, b and c
e. a and b

10. The first step in designing use case is to:
a. Build a system boundary diagram
b. Define acceptance criteria
c. Define use cases
d. Involve users
e. Develop use cases


11. The defect attribute that would help management determine the importance of the defect is called:
a. Defect type
b. Defect severity
c. Defect name
d. Defect location
e. Phase in which defect occurred


12. The system test report is normally written at what point in software development:
a. After unit testing
b. After integration testing
c. After system testing
d. After acceptance testing

13. The primary objective of user acceptance testing is to:
a. Identify requirements defects
b. Identify missing requirements
c. Determine if software is fit for use
d. Validate the correctness of interfaces to other software systems
e. Verify that software is maintainable

14. If IT establishes a measurement team to create measures and metrics to be used in status reporting, that team should include individuals who have:
a. A working knowledge of measures
b. Knowledge in the implementation of statistical process control tools
c. A working understanding of benchmarking techniques
d. Knowledge of the organization’s goals and objectives
e. All of the above

15. What is the difference between testing software developed by a contractor outside your country, versus testing software developed by a contractor within your country:
a. Does not meet people needs
b. Cultural differences
c. Loss of control over reallocation of resources
d. Relinquishment of control
e. Contains extra features not specified

16. What is the definition of a critical success factor:
a. A specified requirement
b. A software quality factor
c. Factors that must be present
d. A software metric
e. A high cost to implement requirement

17. The condition that represents a potential for loss to an organization is called:
a. Risk
b. Exposure
c. Threat
d. Control
e. Vulnerability

18. A flaw in a software system that may be exploited by an individual for his or her advantage is called:
a. Risk
b. Risk analysis
c. Threat
d. Vulnerability
e. Control

19. The conduct of business over the Internet is called:
a. e-commerce
b. e-business
c. Wireless applications
d. Client-server system
e. Web-based applications

20. The following is described as one of the five levels of maturing a new technology into an IT organization’s work processes. The “People-dependent technology” level is equivalent to what level in SEI’s Capability Maturity Model:
a. Level 1
b. Level 2
c. Level 3
d. Level 4
e. Level 5

Answers

1. (d) Fit for use

2. (a) Unit testing

3. (b) Standard

4. (a) Management support

5. (d) Critical listening

6. (a) Code of ethics

7. (e) All of the above

8. (c) Quantify loss

9. (d) a, b and c

10. (a) Build a system boundary diagram

11. (b) Defect severity

12. (c) After system testing

13. (c) Determine if software is fit for use

14. (e) All of the above

15. (b) Cultural differences

16. (c) Factors that must be present

17. (a) Risk

18. (d) Vulnerability

19. (b) e-business

20. (a) Level 1


In coming articles I will focus more on sample CSTE essay papers and how to answer multiple-choice and essay-type questions.

How Much Does an H1B Cost?

If you are wondering how much an H1B will cost an organization, USCIS has published the details of H1B filing fees for H1B 2009. Applications will be accepted from the 1st of April, 2008.
Link to USCIS Page
Base Filing Fee : $320
ACWIA Fee: $750 (1-25 Employees) or $1500 (more than 25 Employees)
Fraud Fee: $500
Premium Processing: $1000
Note that these are the fees to be paid to USCIS; many organizations will incur other expenses towards the filing, such as attorney fees.
USCIS has also offered helpful information for Organizations that are looking to file for H1B petitions this year.
Helpful Hints for Filing a FY 2009 H-1B Cap Case: Quick Tips
What are the main errors in an H-1B petition that can cause USCIS to have to reject or deny the petition?
Frequently Asked Questions on Completing and Submitting a FY 2009 H-1B Cap Case
Condensed Q and A For Completing and Submitting a FY 2009 H-1B Cap Case (56KB PDF)
Though many are expecting a rush similar to last year's, going by the initial indications from top consulting companies we may not see such a mad rush this year. At this time last year, many firms were not accepting any new applications beyond February, but this year firms are still looking out for prospective professionals.

Indian tech professionals to benefit from increased H1B visas in USA

US immigration has decided to allocate the 65,000 new H1B visas by lottery instead of the traditionally followed quota system, as demand far outstrips supply. A majority of them is again expected to be cornered by Indian high-tech professionals, according to immigration attorneys.
The US Citizenship and Immigration Services (USCIS) is expected to run the lottery within a week, but the anxious wait for the applicants may continue for months as the department starts returning unsuccessful applications and sending receipts for the others. Those who get the three-year visa for skilled professionals can start work from October. While there were about 124,000 applications last year, the number this year may cross 150,000. Many will also be vying for the 20,000 H-1B visas meant for foreigners with US-earned master's or higher degrees. Since about 60-70 percent of all applications are expected to be on behalf of Indians, they are sure to benefit from the proposed lottery system. Moreover, this flow is unaffected by the recent downturn in the US economy or by improved economic opportunities in India.
There is a crying need in the US to raise the H-1B cap, as American businesses benefit from hiring highly skilled foreign workers. A new bill has already been introduced in Congress aiming to raise the cap to 195,000, and another bill seeks to boost the cap as well as exempt foreigners educated at US institutions from the quota. But no progress is expected before the next president takes over in early 2009. Earlier there was criticism that H1B visas snatch away local technology jobs by providing cheap labour, but a recent study by the National Foundation for American Policy found that, on average, every foreign national on an H1B visa generates another five to 7.5 jobs.

Indian outsourcing companies have also attracted criticism recently, when the federal government released data showing that they accounted for nearly 80 percent of the visa petitions approved last year for the top 10 participants in the H1B programme. Infosys had 4,559 and Wipro 2,567 approved visa petitions in the programme, which was initially set up to allow companies in the US to import the best and brightest in technology, engineering and other fields when such workers are in short supply in America.

IT recession plagues IITians

Mumbai: The recession in IT seems to be having a roundabout effect. Just when you thought the only sufferers were the employees who have been given pink slips, IITs too have started feeling the brunt of it. The dip in the IIT campus recruitment figures of major Indian and foreign IT firms has fuelled concerns over the industry slowdown, reported Business Standard. While hiring by India's major IT services providers TCS, Wipro and Infosys dropped substantially, firms like IBM, HCL, Hughes Software and CSC simply opted out of placements this year.
"While many companies say they have a particular number in mind and would recruit likewise, our alumni network at these companies informs us that these IT giants are exercising restraint in recruiting trainees due to a slowdown," said a placement official from IIT Roorkee.Recruitment by IT companies at IIT Kanpur has gone down from 130 students in 2007 to 72 in 2008. A placement official from IIT Kanpur agreed to the fact and said, "Like every year, the institute offered the regular number of students to these IT companies for placements but they did not pick as many students.""Clients from IT firms are increasingly in the process of utilizing their bench strength," said Monisha Advani, managing director, Randstad India, an HR consultancy firm.