Monday, July 7, 2014

Dynamic parameterization

The problem is to pick the proxy name and credentials (username, password) dynamically based on the load generator machine the Vuser runs on. These machines may sit in different zones, each behind a different proxy server.

The following code retrieves the proxy details for the current machine. The same approach works whenever a substitution has to be picked from the parameter list based on some condition. One example is a search term: say we have a number of search terms and some of them are invalid; the requirement could be that if a search term is invalid, the search query is re-run for the same user with a different term (a sketch for that case follows the host-name code below).


int i;
char *current_host;

// Name of the load generator this Vuser is currently running on
current_host = lr_get_host_name();
lr_output_message("The actual host is %s", current_host);

// Run the loop based on the number of hosts you have in the parameter list
for (i = 0; i <= 20; i++)
{
    lr_output_message("Current host being verified is %s", lr_eval_string("{Host_Name}"));

    if (strcmp(current_host, lr_eval_string("{Host_Name}")) == 0)
    {
        lr_output_message("Setting the username (%s), password (%s) and domain (%s) for host (%s)",
            lr_eval_string("{User_Name}"), lr_eval_string("{Password}"),
            lr_eval_string("{Domain}"), lr_eval_string("{Host_Name}"));
        web_set_user("{User_Name}", "{Password}", "{Domain}");
        break;
    }
    else
    {
        // Not the matching row: advance Host_Name to the next value and check again
        lr_output_message("Current host evaluated (%s) does not match the actual host", lr_eval_string("{Host_Name}"));
        lr_advance_param("Host_Name");
    }
}
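
To illustrate the search-term case mentioned earlier, here is a minimal sketch of the same pattern. The parameter name {Search_Term}, the search URL and the "No results found" text are assumptions made for illustration only; replace them with whatever your application and parameter list actually use.

// Sketch only: retry the search with the next term from the parameter list
// until a term returns results (or we run out of attempts).
int attempt;

for (attempt = 0; attempt < 5; attempt++)
{
    // Hypothetical check: count occurrences of the "no results" text in the response
    web_reg_find("Text=No results found", "SaveCount=InvalidSearch", LAST);

    // Hypothetical search request using the current value of {Search_Term}
    web_url("Search",
        "URL=http://myapp.example.com/search?q={Search_Term}",
        LAST);

    if (atoi(lr_eval_string("{InvalidSearch}")) == 0)
    {
        // The term returned results; stop retrying
        lr_output_message("Search succeeded with term %s", lr_eval_string("{Search_Term}"));
        break;
    }

    // Invalid term: advance to the next search term and repeat for the same user
    lr_output_message("Term %s is invalid, trying the next one", lr_eval_string("{Search_Term}"));
    lr_advance_param("Search_Term");
}

The pattern is the same as the host-name code: evaluate the current parameter value, and keep calling lr_advance_param until the condition is met.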

Friday, July 4, 2014

Replay Engine in LoadRunner


LoadRunner supports two replay engines for replaying a script: Sockets and WinInet. This setting is controlled from Replay > Run Time Settings.


In the Run Time Settings there is an option to switch the replay choice between Sockets and WinInet. By default LoadRunner uses the Sockets engine, and we should stick with Sockets unless WinInet is the only option that works.
Sockets is the scalable approach: it uses LoadRunner's proprietary socket-level interface to communicate with the network, whereas the WinInet engine uses the WinInet API, the same API Internet Explorer uses. WinInet can help resolve playback issues, but it is not recommended for running a load test because it does not scale well.
Switching to the WinInet option is therefore a good troubleshooting step for replay errors.


Friday, August 5, 2011

Dealing with performance issue in Test Automation

Sometimes while running automated test cases we face performance issues with the AUT: a page or object fails to load before the timeout, which leads to a false failure. A page not loading within the timeout period is an issue in itself, but we do not want our test cases to fail just because of it.

The following code waits indefinitely for the page to load. Waiting indefinitely may not be a good approach in general, but it helps when we are experiencing serious problems with application performance.

while (true)
{
    try
    {
        // Wait up to 5 seconds; throws if the page has not finished loading
        selenium.WaitForPageToLoad("5000");
        break; // executed only once the page has finished loading
    }
    catch (Exception)
    {
        // ignore the timeout exception and keep waiting
    }
}

This could be enhanced by adding a Stopwatch object to capture the exact time the page takes to load and writing that time to the log/result file, so that a page taking too long to load does not go unnoticed.
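
As a rough sketch of that enhancement, assuming the Selenium RC .NET client's ISelenium interface is in use, the wait loop could be wrapped in a helper that returns the measured load time. The class and method names below are placeholders invented for illustration, not part of any existing API.

using System;
using System.Diagnostics;
using Selenium; // Selenium RC .NET client

public static class PageLoadTimer
{
    // Waits indefinitely for the page to load and returns how long it actually took,
    // so a slow page gets recorded instead of going unnoticed.
    public static long WaitForPageToLoadTimed(ISelenium selenium)
    {
        Stopwatch watch = Stopwatch.StartNew();
        while (true)
        {
            try
            {
                selenium.WaitForPageToLoad("5000");
                break; // page finished loading
            }
            catch (Exception)
            {
                // ignore the timeout exception and keep waiting
            }
        }
        watch.Stop();
        return watch.ElapsedMilliseconds;
    }
}

The returned value can then be written to the log or result file with whatever logging call your framework provides, e.g. long loadMs = PageLoadTimer.WaitForPageToLoadTimed(selenium);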

Thursday, May 13, 2010

Paired testing learnings from Ajay

On 22nd April I received an invitation for paired testing from Ajay. It was a pleasant surprise, as it came straight from one of the most eminent testers and contributors to the testing community. But we could not finalize a date/time for the exercise, and I was reluctant to approach Ajay about it for multiple reasons. On 9th May at 7:00 AM I received a message from Ajay asking for one hour of testing together, so we decided to meet on Skype at 5:00 PM IST.


I went online at 5 PM expecting the next hour to be a nice learning experience. Ajay was online too. We decided to test a desktop application, and since we were already on Skype we chose to test Skype itself. The plan was that Ajay would test and demonstrate for the first 25 minutes, and then it would be my turn to test for the next 25 minutes.

We picked Skype > Advanced features for testing. Ajay shared his screen so that I could watch a renowned tester at work. He opened up the feature and made a quick note of the functionality: Skype's advanced features let you back up your contacts in VCF file format. Ajay did not jump straight into testing, which I think most testers (including me) would have done. He browsed to www.fileinfo.com to get information about the VCF file format and got some test data ready (from Outlook). While testing he made a good observation about VCF file size that many of us might not have made.

After 25 minutes he wrapped up his testing with a few interesting observations, such as the inconsistent use of the terms "blocked people" and "blocked contacts", the file size, and the Save button being enabled when there was nothing to save.

My observations after witnessing Ajay's testing session were:

- Effective use of time: He completed his testing before the time limit, keeping time aside for Q&A. In our daily testing activities this spare time can be used to analyze our test results and identify any gaps.

- Perfect planning: Though he had only 30 minutes to test the feature, I think he did full justice to it by giving it the required coverage. This was possible because he had a plan in place before he began testing, rather than deciding things on the run.

Now it was my turn to begin testing. After offering me the choice of testing the same feature or a different one, Ajay gave me the "Contact > Search for Skype users" feature to test.

I noted the start time and jumped straight into testing, browsing the feature to get familiar with it.

I am always in favour of documenting while testing, so I made a note of what the Search feature is about and the different parameters through which it can be tested. I started with the idea of generating sanity test ideas for the feature and then seeing how stable it was.

I started testing the feature without paying attention to my test data. I assumed my username was ashish_maheshwari when it was actually "maheshwari_ashish", so I was testing with the wrong input for the whole session.

I spent around 30 minutes testing "Contact > Search for Skype users" and found one bug candidate.

Now for what made the hour most worthwhile, or I would say the most valuable learnings so far:

- I did not ask about the mission, assuming I was to do functional testing of "Contact > Search for Skype users". This was one of the biggest learnings for me.

- Test your tests first: Another blunder was not testing my own test. I kept testing with the wrong input for almost half an hour. Had I verified my input at the start, I could have avoided a most embarrassing moment.

- Smart documentation is the essence of smart testing: I am a firm believer in documentation, but I spent around 5 minutes on it, which is more than 15% of the total time. Had I followed a smarter documentation strategy, using phrases instead of complete sentences, avoiding unnecessary notes and documenting only what was necessary, I could have spent more time testing.

- Assumptions are the best tool for suicide: All the assumptions I made turned out to be invalid and were the main reason for my ineffectiveness in this particular testing session.

- Save your work in between: We have been told many times to save our work as we go, but we often forget, the editor title stays "Untitled", and when the software crashes or the power goes out we are helpless, courtesy of ourselves.

- Mention the time zone: This would have set the right tone and, who knows, could even act as evidence in court, removing any room for ambiguity about time.

All in all, this was the richest learning experience of my testing career so far. Thanks to Ajay for sharing these pearls with me.

Friday, September 4, 2009

Compiling a checklist for web-based applications

It's been a very long time since the last post here.
The testing task for this week is to prepare a checklist for web-based applications.

Friday, March 20, 2009

IE shortcut

The Keyboard Lover’s Guide to IE7

There is no harm in trying these shortcuts to see if something can be used in our day-to-day test activities.

Basic navigation

To do the following Press this
Go Back to the last page* Alt+Left Arrow
Go Forward to the next page* Alt+Right Arrow
Stop the page from loading** Escape (Esc)
Refresh the page*** F5 or Ctrl+F5
Go to your Homepage Alt+Home
Give focus to the Address Bar Alt+D
Add “www.” and “.com” to what you typed in the address bar before navigating**** Ctrl+Enter
Scroll down/up the web page Spacebar / Shift+Spacebar
Close the window Alt+F4

Others:

Some interesting hotkeys you cannot see by simply looking in the menus…

To do the following Press this
Immediately add this site to your favorites Ctrl+D
Open your favorites in a folder window Shift+Click on the “Organize Favorites” menu item
Put focus on the Information Bar Alt+N
Open a link in a new window Shift+Click
Open the right click ‘context’ menu for the currently selected item Shift+F10
Change the text size (will be Zoom in IE 7) Ctrl+Mouse wheel Up/Down

* Shift+Mouse wheel up/down also navigates forward and back, so does Backspace and Shift+Backspace
** Did you know that hitting the stop button (or Esc) will also stop background sounds?
*** If F5 doesn’t refresh all content try Ctrl+F5. This ensures no content is pulled from the cache.
**** In the Preview build we also added Ctrl+Shift+Enter when focus is in the address bar. This works like Ctrl+Enter from the address bar does today but will append a suffix of your choice to the end of the string instead of “.com” (.org, .edu, .co.uk, etc…). You can change the default suffix in the Internet Options control panel.

Note: In the Preview build we have changed the pop-up blocker override key from “Ctrl” to “Ctrl+Alt” in order to avoid conflicts with our new “Ctrl” tabbed browsing hotkeys.

New in Internet Explorer 7

Now that we have basic navigation down, let’s talk about some cool new shortcuts in IE 7. You will notice that for features that exist elsewhere (for example: Tabbed Browsing) we put effort into maintaining consistency where possible.

Tabs:

To do the following Press this
Open links in a new tab in the background Ctrl+Click
Open links in a new tab in the foreground Ctrl+Shift+Click
Open a new tab in the foreground Ctrl+T
Switch between tabs Ctrl+Tab / Ctrl+Shift+Tab
Close current tab (or current window when there are no open tabs) Ctrl+W
Open a new tab in the foreground from the address bar Alt+Enter
Switch to the n’th tab Ctrl+n (n can be 1-8)
Switch to the last tab Ctrl+9
Close other tabs Ctrl+Alt+F4
Open quick tabs Ctrl+Q

Zoom:

To do the following Press this
Increase zoom (+ 10%) Ctrl+(+)
Decrease zoom (-10%) Ctrl+(-)
Original size (100% zoom)* Ctrl+0

* If you are using the recent Windows Vista preview you might notice that the 100% zoom hotkey changed from Ctrl+(*) to Ctrl+0

Search:

To do the following Press this
Go to the Toolbar Search Box Ctrl+E
Open your search query in a new tab Alt+Enter
Bring down the search provider menu Ctrl+Down Arrow

Favorites Center:

To do the following Press this
Open Favorites Center to your favorites Ctrl+I
Open Favorites Center to your history Ctrl+H
Open Favorites Center to your feeds Ctrl+J

Great new mouse actions in IE7

Even with all these cool keyboard hotkeys we’ve introduced a few helpful shortcuts for mouse users as well.

To do the following with a mouse Press this
Open a link in a background tab Middle mouse button
Close a tab Middle mouse button on the tab
Open a new tab Double click on empty tab band space
Zoom the page in/out 10% Ctrl+Mouse wheel Up/Down

Monday, November 17, 2008

Planned Ad hoc Testing

Test execution can be categorized into two broad categories:
  1. Scripted test execution
  2. Non scripted test execution

Scripted test execution means executing pre-written test cases and test scripts, while non-scripted test execution includes exploratory and ad hoc testing.

Importance of Non Scripted testing

  • The majority of test planning work generally goes into writing test cases or test scripts, but the same cannot be said of where the bugs come from: the majority of bugs are found via non-scripted rather than scripted test execution.

  • Do we expect end users to just follow the test scripts written by a test engineer?

  • Test script execution can suffer from expected-result blindness, where the tester looks only for the expected result and may ignore other, apparent bugs.

  • Not all projects have enough documentation to begin test case writing.

Non-scripted testing, then, is the technique that does not require test documentation, tries to follow the end-user flow and tries to cover both apparent and non-apparent bugs. Hence the majority of bugs are found by non-scripted test execution.

If the majority of bugs are found via non-scripted test execution, then an effective testing process needs a planned ad hoc test approach. The following pointers can lead to one:

• Get user feedback from the previous release: User feedback from the customer care unit or from various groups and blogs can help focus ad hoc testing efforts in the right direction. We often ignore areas during test case creation that are part of the end-user flow. Sometimes this information is readily available with customer care, or we can find users discussing the problem areas in various groups, blogs and community sites.

• Feature swap: When planning a feature swap for exploratory testing, we get better results if features are swapped so that each tester tests a feature for which he was not executing the test scripts. This brings a fresh pair of eyes to the feature and eliminates the chance of the tester being influenced by the test cases he has already seen and executed.

• Bug hunt: A bug hunt is another approach to ad hoc testing. If done at the correct stage and with proper planning, a bug hunt can surface bugs that we normally do not see in regular test case execution.

• Bug density: If we find a bug in some area of the product, the chances of there being another undiscovered bug in the same area increase.