Sunday, 19 May 2013

Backup for SAP

This blog post discusses the different kinds of backup and why backup matters to an SAP system.

Storage is vital to any computer system. It is doubly vital to an ERP system, which serves as the memory for the business. Lose that memory and you’re out of business.

Backups are copies of all the important data on your system, taken and preserved in such a way that you can recover your data no matter what happens. Making backups, and being sure you have good ones, is a best business practice. Typically an SAP system will have one or more storage administrators to take care of storage and backup systems. However, as manager of the SAP system, it’s important that you understand the basics of backup because it is so intimately connected with running a successful SAP system.

Basically, ‘backup’ refers to three different things, only one of which is truly backup. There is short-term backup, which preserves a copy of the data for a short time, typically a week or two. There is true backup, which saves a copy for a year or so. Then there is archival backup, which saves important data permanently, or at least for five years.

These three serve very different purposes. Short term storage is designed to protect files from corruption or deletion, usually accidental and generally within a short time of creation. It is marked by very fast restore times and is the most commonly used form. True backup is designed to keep a copy of the files available for a longer period of time for restoring data. It may last for a year or even longer. Archival storage is designed to save important data permanently. Short term storage usually relies on storage on the active disk or disks. True backup uses longer term disk storage, often on slower, less expensive disks or on tape. Archival storage is typically on magnetic tape, although hard disk arrays are becoming more common. Archival material is often moved off site to a secure facility for permanent storage.
The three differ in how quickly they can recover the data committed to them. Short-term storage can usually get the data back in seconds. True backup takes longer, sometimes as much as a day or so if the backups have been taken off site. Archival storage may take a day or two to recover.

Backups have to balance speed, cost and security. Short term storage is on the server’s hard disk and has a very low cost, but also low security. True backup is stored on a separate disk array or magnetic tape cartridge, usually dedicated to backup. It takes somewhat longer to recover, even when the storage media are kept on site. It is more secure than the short term storage and more costly. Archival storage stores large quantities of data on magnetic tape or special disk arrays. It is the slowest to get back and the most secure. Although the tape drives and tape are expensive, they are the cheapest kind of storage on a per megabyte basis.

Short term storage is usually created in the background as the user works. It records data almost on a keystroke by keystroke basis so the information is available for restoration immediately. It typically comes as a component of operating systems like Windows.

True backup is slower to create, usually taking the storage in chunks and transferring it from the working disk to the backup disk or tape. It requires a dedicated backup disk or disks (in an array) or a tape drive. Often backups are taken only once a day, or even less frequently, because backups slow down the system quite noticeably.

Archival storage is done by disk to disk or disk to tape transfers in the background. Usually the files are automatically edited so that only the important ones are preserved. If tape is used, it is not unusual to take the tapes off site to protect them in some sort of secure storage.

All this is important to the SAP manager because it has a direct impact on how day-to-day operations are performed. Since SAP systems tend to be heavily used you want a backup system that puts the minimum performance penalty on your ERP server.

The rule of backup is to always make at least one copy of all important data – essentially everything – and to store it securely.

Thursday, 11 April 2013

Data Staging for SAP Conversion

Purpose

This white paper lists and describes the steps needed to perform data conversion for SAP from the staging point of view.

Goals and Objectives

The goal is to enable the staging developers during the data conversion process and help them verify the converted data before it goes into SAP.

Pre-requisites for Developers


Developers working on the staging part should have the following prerequisites.

  • Knowledge of Oracle
    • SQL
    • PL/SQL
    • SQL Loader Utility
  • Knowledge of MS Excel
  • Familiarity with TOAD
Overview

The data conversion process involves processing the data received from legacy applications/systems and transforming it into the desired SAP output structure, as per the specifications provided.

Functional specification documents provide details about the business rules that are to be applied to the data before it goes into SAP. The staging team applies those business rules by writing stored procedures in Oracle.

Pictorial representation of the data conversion process:

[Figure: Data conversion process]

Steps to be followed

  • Understanding and analyzing the requirements of conversion
  • Development of procedures for conversion
  • Import process
  • Processing data
  • Versioning
  • Exception Reports


Understanding and analyzing the requirements of conversion

  • Understanding the business rules (from the specs) to be applied, and analyzing the data received from legacy systems
  • Getting clarifications from the client on any doubts that arise from step 1
  • Preparing a mapping document (using MS Excel) listing all the input fields from the data received and the output fields as per the structure required for a particular conversion
  • Ensuring that the mapping is correct by getting it verified by a business user

Development of procedures for conversion

Once a mapping document is in place and has been verified by business users, the actual development of code and import process of data is initiated for a particular conversion.
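As a minimal sketch, a staging procedure of this kind might look like the following. The table names (legacy_customers, stg_customers_out), the columns, and the business rule shown are all hypothetical, chosen here only to illustrate the pattern of applying a functional-spec rule while mapping legacy fields to the SAP output structure:

```sql
-- Hypothetical staging procedure: applies a simple business rule
-- (skip records with no customer name) while moving legacy data
-- into the output structure expected by SAP.
CREATE OR REPLACE PROCEDURE convert_customers AS
BEGIN
  FOR rec IN (SELECT cust_id, cust_name, country FROM legacy_customers) LOOP
    IF rec.cust_name IS NOT NULL THEN
      INSERT INTO stg_customers_out (kunnr, name1, land1)
      VALUES (LPAD(rec.cust_id, 10, '0'), UPPER(rec.cust_name), rec.country);
    END IF;
  END LOOP;
  COMMIT;
END convert_customers;
/
```

A real conversion procedure would apply many more rules from the functional specs and would also write the rejected records, with a reason, to an exception table.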

Import process

The SQL*Loader utility is used to import data into Oracle tables. Data received from a legacy system can come in various forms: for example, it could be tab-delimited, CSV, or even an Excel spreadsheet. It is important to review that the data received is in the correct format and to ensure that no field or information is missing. The import process may differ depending upon the format in which the data has been received.

Steps that generally need to be performed for the import process:

In case the data received is in tab-delimited format, MS Excel can be used to prepare the file before it is used with the SQL*Loader tool. Although data can sometimes be imported into Oracle as is, the data received is often not directly importable and has to be converted into a format acceptable to Oracle.

  • Select the Import Data option available under the Data -> Import External Data menu option; this opens an Open File dialog window
  • Select the file (Excel, tab-delimited or CSV format) containing the data using the appropriate path and follow the steps of the Import wizard

Note: An important thing to keep in mind before importing the data is to set the format of all the cells to text. Otherwise, a large value or one with leading zeros may create a problem: leading zeros get trimmed, and a large value does not get displayed in the cell or exported properly.

After the import completes successfully, the file can be saved as a tab-delimited file through MS Excel. The tab-delimited file can then be used by the SQL*Loader utility to load the data into Oracle tables.
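As an illustration, a minimal SQL*Loader control file for such a tab-delimited extract might look like this. The file name, table name, and fields are hypothetical, not taken from any particular conversion:

```sql
-- customers.ctl: hypothetical control file for a tab-delimited extract.
-- Invoked with something like:
--   sqlldr userid=stg_user/stg_pwd control=customers.ctl log=customers.log
LOAD DATA
INFILE 'customers.txt'
INTO TABLE legacy_customers
FIELDS TERMINATED BY X'09'   -- tab character
TRAILING NULLCOLS
(cust_id   CHAR,
 cust_name CHAR,
 country   CHAR)
```

Loading every field as CHAR into VARCHAR2 staging columns also sidesteps the leading-zero trimming problem: the values arrive as text and any zero-padding is preserved for later processing.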

Processing data

Processing the imported data is one of the major steps in the data conversion process. In a real-life scenario, it is quite possible that changes will be required, during the course of conversion for a particular module, to the procedures created for processing the data.

Creating separate versions of code

One way to protect and maintain the code written for a particular type of conversion is to write a new stored procedure. That is mostly how we handled the changes we had to make to our code during the Wave I conversion. Maintaining separate versions of code like this is a tedious process, especially without a version management tool integrated with the development environment.


Example:

Here is an example from a data conversion process for one module (Sales Order), where there was a requirement to create a separate set of data for all the orders belonging to a plant in Canada. In this scenario, we created a separate procedure to process the records for the Canadian plant. During the conversion, a number of changes were made, quite frequently, to each of these two procedures, and the nature of the changes differed for each plant. In a real-life scenario, it becomes very difficult to maintain separate versions and also keep track of the changes being made on a continuous basis to these different sets of code.

Processed Data

Once the processing is complete, the data has to be delivered to SAP.

Data Version

Versions of the data are maintained as the data goes into different SAP environments. This is required for the following reasons:

  • Delta loads – when a delta load has to be sent for a particular version, it is important to know what data has already been sent, so that no duplicate records go into SAP
  • Identifying any incorrect data that may have been sent to a particular SAP environment. This helps in tracking whether the incorrect data was sent by the legacy system or by staging, or whether something went wrong at SAP’s end
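The delta-load check above can be sketched with a version column and a MINUS query. The table names (stg_customers_out, sent_log), columns, and version numbers here are hypothetical, for illustration only:

```sql
-- Records in the current staging output (version 2) that were not part
-- of any earlier version already delivered to SAP (hypothetical tables).
SELECT kunnr, name1, land1
  FROM stg_customers_out
 WHERE data_version = 2
MINUS
SELECT kunnr, name1, land1
  FROM sent_log;   -- everything delivered in earlier versions
```

MINUS keeps only the rows absent from the second query, which is exactly the set that still needs to be sent without creating duplicates in SAP.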

Generating exception record reports

Generating an exception report is one of the most crucial steps in the process of data conversion. An exception report is a log of those records which could not be processed due to the business rules applied as per the functional specs.

An exception report helps the staging team notify the users/developers of the legacy system, so that they can identify the problems at their end, resolve them, and re-send the data to the staging team for processing, after which it can be uploaded into SAP. A typical exception report is a collection of the raw data fields along with the reason why each record could not be processed, and it is usually sent in MS Excel format.
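A minimal sketch of how such exception records might be captured during processing follows. The exception_log table, its columns, and the reason text are hypothetical, intended only to show the shape of the data that later gets exported to Excel for the legacy team:

```sql
-- Hypothetical exception log: raw fields plus the reason the record
-- failed the business rules from the functional specs.
INSERT INTO exception_log (cust_id, cust_name, country, reject_reason)
SELECT cust_id, cust_name, country,
       'Customer name is missing'
  FROM legacy_customers
 WHERE cust_name IS NULL;
```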

Also, counts of the records processed and of the records that came in as raw data are maintained and communicated to the business users/legacy system team, to show how much data got loaded and how much failed to load.

Friday, 8 March 2013

Best Practices in Regression Testing

Practice 1: Regression can be used for all types of releases

Regression testing can be applied when:

• We need to measure the quality of the product between test cycles (both planned and need-based);
• We are doing a major release of a product, have executed all test cycles, and are planning a regression test cycle for defect fixes; and
• We are doing a minor release of a product (support packs, patches, and so on) containing only defect fixes, and we can plan regression test cycles to take care of those defect fixes.

Multiple cycles of regression testing can be planned for every release. This applies when defect fixes arrive in phases, or to take care of defect fixes that do not work with a specific build.

Practice 2: Mapping defect identifiers with test cases improves regression quality

When assigning a fail result to a test case during test execution, it is a good practice to record the defect identifier(s) from the defect tracking system along with it, so that you know which test cases to execute when a defect fix arrives. Please note that multiple defects can come out of a particular test case, and a particular defect can affect more than one test case.

Even though ideally one would like to have a mapping between test cases and defects, the choice of test cases that are to be executed for taking care of side effects of defect fixes may still remain largely a manual process as this requires knowledge of the interdependences amongst the various defect fixes.
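Because the mapping just described is many-to-many, it is naturally modelled with a link table. The schema below is a hypothetical sketch, not taken from any particular defect tracking system:

```sql
-- Hypothetical many-to-many mapping between defects and test cases.
CREATE TABLE defect_testcase_map (
  defect_id    VARCHAR2(20) NOT NULL,
  testcase_id  VARCHAR2(20) NOT NULL,
  CONSTRAINT pk_dtm PRIMARY KEY (defect_id, testcase_id)
);

-- Test cases to re-execute when a fix for defect DEF-101 arrives:
SELECT testcase_id
  FROM defect_testcase_map
 WHERE defect_id = 'DEF-101';
```

Such a query gives the starting set of test cases; the manual analysis of side effects and interdependences then widens that set as needed.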

As time passes and with each release of the product, the set of regression test cases to be executed grows. It has been found that some of the defects reported by customers in the past were due to last-minute defect fixes creating side effects. Hence, selecting test cases for regression testing is really an art, and not that easy. To add to this complexity, most people want maximum returns with minimum investment in regression testing.

Practice 3: Create and execute a regression test bed daily

To solve this problem, as and when there are changes made to a product, regression test cases are added or removed from an existing suite of test cases. This suite of test cases, called regression suite or regression test bed, is run when a new change is introduced to an application or a product. The automated test cases in the regression test bed can be executed along with nightly builds to ensure that the quality of the product is maintained during product development phases.

Practice 4: Ask your best test engineer to select the test cases

It was mentioned earlier that knowledge of the defects, of the product, and of their interdependences, together with a well-structured methodology, are all very important for selecting test cases. These points stress the need for selecting the right person for the right job. The most experienced or most talented person in the team may do a much better job of selecting the right test cases for regression than someone with less experience. Experience and talent bring knowledge of the fragile areas in the product and of the impact analysis of defects.

Practice 5: Detect defects, and protect your product from defects and defect fixes

Strategy 1: The tiger has been put in a cage to prevent harm to humankind.
Strategy 2: Some members of a family lie inside a mosquito net as prevention against mosquitoes.

Strategy 1 has to be adopted for regression. Like the tiger in the cage, all defects in the product have to be identified and fixed. This is what “detecting defects in your product” means.

Strategy 2 signifies “protecting your product from defects”. The strategy followed here is one of prevention.

Another aspect of regression testing is “protecting your product from defect fixes”. As discussed earlier, a defect classified as minor may have a major impact on the product when its fix goes into the code, much as a mosquito, despite its small size, can do real harm to humans. Hence, it is a good practice to analyze the impact of defect fixes, irrespective of their size and criticality, before they are incorporated into the code. Such impact analysis is difficult, due to lack of time and the complex nature of the product. Hence, it is also a good practice to limit the amount of change in the product close to the release date. This protects the product from defects that may seep in through the defect-fix route, just as mosquitoes can get into a mosquito net through a small hole. If you make a hole for a mosquito to get out of the net, you also open the door for new mosquitoes to come in. Fixing a problem without analyzing the impact can introduce a large number of defects into the product. Hence, it is important to insulate the product from defects as well as from defect fixes.

If defects are detected and the product is protected from defects and defect fixes, then regression testing becomes effective and efficient. Regression testing, in effect, provides the mosquito net.

Tuesday, 20 November 2012

Oracle R12 Applications Using LoadRunner

The Challenge

We recently load tested our first Oracle R12 release (all modules of Oracle ERP R12, deployed nationwide and internationally). The company was upgrading to R12 from 11.5.8 largely for performance reasons.

We knew we’d be “cutting new ground” with LoadRunner on R12. This became evident with our first test record-and-playback, which failed even after finding and fixing all the missing correlations. We raised a ticket with HP (SR# 4622615067), and with their initial help we overcame, step by step, all the nuances of coaxing VuGen to record successfully, and then creatively worked around its inability to recognize the full set of identifiers for a new Java ITEMTREE object.

This Practical Solution describes the challenges we encountered and how we overcame them. See the Technical Issues and Solutions section for the gory details.

Before You Start

As with 11i, remember that you must expose “Oracle Developer Names”* on the instance before recording. There’s an HP technote on this, but here are the steps:

  • Launch Oracle application
  • Login with Administrator privileges
  • Select the role “System Administrator”. If you don’t have this role, you don’t have the rights to do this
  • Select the option “Profile: System”
  • Search for the profile option "ICX: Forms Launcher" (search for a specific user to change the setting for only that user)
  • Append this string to the end of the URL: "?play=&record=names"
  • Save and close.

* Failure to do this will result in generic field names that make scripting much more difficult and harder to maintain.

Lessons Learned

  • HP Support can help
  • Oracle Metalink documents provide good info on R12 architecture and configuration
  • If VUGen stumbles on a new object, try the QTP Object Spy
  • R12 can be installed in two distinct communication modes; make sure it’s in the more efficient “socket mode” before you begin scripting
  • Porting scripts is a pretty common requirement – save crucial time at “show time” by parameterizing the instance-specific values up-front

Technical Issues and Solutions

 

Issue 1: Playback failure on nca_object_action statements

Following the nca_connect statement there are two statements of this form:

nca_java_action("SR_DUP_GRID_DUP_GRID_0",)

On playback, the script invariably fails at the first of these.

Practical Solution:

In each script directory, open the default.cfg file and in the [NCA_GENERAL] section add this line:

NCATimerWaitMode=0

Playback will now successfully process nca_object_action statements.

Issue 2: Java Runtime version and memory setting

Oracle R12 now loads the JInitiator file (the runtime program for executing the Oracle Java applet) from the Java JRE, instead of downloading it during the Oracle initialization process or installing it on your PC separately. Also, Forms 10g needs more desktop memory than Forms 6 to perform reasonably.

Practical Solution:

Upgrade the Java JRE to version 1.6.0_05 or later. Moreover, allocate at least 512 MB of memory using the Java Control Panel, and add the parameters:

-mx512m -Dcache.size=500000000

Issue 3: The correlation of icx_ticket is different from 11i

In 11i, the correlation rule for the key Oracle Forms session id, icx_ticket, yields a web_reg_save_param statement with these left and right boundaries:

web_reg_save_param("ICX_Ticket", "LB=icx_ticket=", "RB='", LAST);

In R12, a new suffix appears on the LB and there is a new RB:

web_reg_save_param("ICX_Ticket", "LB=icx_ticket&gv15=", "RB=&", LAST);

Practical Solution:

In the Recording Settings, modify the OracleApps ‘icx’ correlation rule with the above LB and RB to enable VuGen to correlate icx_ticket automatically.

Issue 4: Failure to recognize certain new Java objects on record

VuGen has issues recording an R12 object called an ITEMTREE, which is a list of expandable items that looks like this:

+40907824 ITEM1

+40907839 ITEM2

The following statements resulted after selecting an item during recording, but these would not play back:

nca_tree_select_item("ITEMTREE_ITEMTREE_0", "40907824 ITEM1 ");
nca_tree_activate_item("ITEMTREE_ITEMTREE_0", "40907824 ITEM1 ");

If you encounter this object, or any other new object that VuGen has trouble with, this tip may get you around the recording limitation.

Practical Solution:

We used QuickTest Pro’s ObjectSpy to examine the ITEMTREE object and found that each menu item has a reference number that is selectable and plays back correctly. We then modified the VuGen code with the correct reference numbers:

nca_tree_select_item("ITEMTREE_ITEMTREE_0", "409");
nca_tree_activate_item("ITEMTREE_ITEMTREE_0", "409");

The reference numbers are listed in the detail log files if you look carefully, but they were not obvious during recording. We recommend using ObjectSpy if you encounter any other objects that VuGen doesn’t handle correctly.

 

Issue 5: Porting scripts between environments

A common situation is that you must develop scripts in one environment but conduct testing in another. This requires that your scripts be portable across environments.

Practical Solution:

Several values need to be parameterized in order to make your scripts portable. Parameterize and test these as early as possible to avoid frantic porting at test execution time!

The values to parameterize are:

1. Base URL or app server IP
2. Web server port number, which you initially connect to for login authentication
3. Forms server port number, which nca_connect uses to connect to the Forms server
4. Configuration ("config" value in nca_connect)
5. Module (second 'path' value in nca_connect)

Examples of where these are used:

/* initial Oracle launch */
web_browser("UnysisERPR12.com:8050",
    DESCRIPTION,
    ACTION,
    "Navigate=http://{app_srv}:{port_web}/",
    LAST);

/* Forms server connect */
nca_connect_server("{app_srv}", "{port_forms}",
    "module=/ebstop/{module}/apps/apps_st/appl/fnd/12.0.0/forms/US/FNDSCSGN fndnam=APPS record=names config='{config}' icx_ticket='.{ICX_Ticket}..' resp='AR/CAN_CUSTOMER_MASTER_ADMIN' …");

Configuring Oracle Unified Directory (OUD) 11g as a Directory Server

I used Oracle Unified Directory (OUD) Version 11.1.1.5.0 during my test deployment locally here. I tried to collect as much information possible in this post for configuration.

Ideally, there are three possible configuration options for OUD:

  • as a Directory Server
  • as a Replication Server
  • as a Proxy Server

The Directory Server provides the main LDAP functionality in OUD. The Proxy Server can be used for proxying LDAP requests, and the Replication Server is used for replication from one OUD to another OUD, or even to an ODSEE (the earlier Sun Java Directory) server. You can read my previous posts on OUD here and here.

In this post, we will talk about configuring OUD after installation as a Directory Server. You can read about OUD installation in my previous post here.

Once the installation is complete, you will find the following files in the $ORACLE_HOME directory.

-rwxr-x---  1 oracle oracle 1152 May 17 11:16 oud-proxy-setup
-rwxr-x---  1 oracle oracle 1482 May 17 11:16 oud-proxy-setup.bat
-rwxr-x---  1 oracle oracle 1180 May 17 11:16 oud-replication-gateway-setup
-rwxr-x---  1 oracle oracle 1510 May 17 11:16 oud-replication-gateway-setup.bat
-rwxr-x---  1 oracle oracle 1141 Aug 10 16:50 oud-setup
-rwxr-x---  1 oracle oracle 1538 May 17 11:15 oud-setup.bat

In this listing, the .bat files are for Windows. On Linux (which is what I am using), we will use the following files.

  • oud-setup – to configure the Directory Server
  • oud-replication-gateway-setup – to configure the Directory Replication Server
  • oud-proxy-setup – to configure the Proxy Server

You can run the script shown below.

$ ./oud-setup
OUD Instance location successfully created - /u01/oracle/Middleware/Oracle_OUD1/../asinst_2
Launching graphical setup...
The graphical setup launch failed.  Check file /tmp/oud-setup-8836874387532698932.log for more details.
Launching command line setup...

Oracle Unified Directory 11.1.1.5.0
Please wait while the setup program initializes...

What would you like to use as the initial root user DN for the Directory Server? [cn=Directory Manager]:
Please provide the password to use for the initial root user:
Please re-enter the password for confirmation:

On which port would you like the Directory Server to accept connections from LDAP clients? [1389]: 389

ERROR:  Unable to bind to port 389.  This port may already be in use, or you may not have permission to bind to it.  On UNIX-based operating systems, non-root users may not be allowed to bind to ports 1 through 1024
On which port would you like the Directory Server to accept connections from LDAP clients? [1389]:
On which port would you like the Administration Connector to accept connections? [4444]:
Do you want to create base DNs in the server? (yes / no) [yes]:
Provide the base DN for the directory data: [dc=example,dc=com]:
Options for populating the database:

1)  Only create the base entry
2)  Leave the database empty
3)  Import data from an LDIF file
4)  Load automatically-generated sample data

Enter choice [1]: 1

Do you want to enable SSL? (yes / no) [no]: yes
On which port would you like the Directory Server to accept connections from LDAPS clients? [1636]:
Do you want to enable Start TLS? (yes / no) [no]: yes
Certificate server options:

1)  Generate self-signed certificate (recommended for testing purposes only)
2)  Use an existing certificate located on a Java Key Store (JKS)
3)  Use an existing certificate located on a JCEKS key store
4)  Use an existing certificate located on a PKCS#12 key store
5)  Use an existing certificate on a PKCS#11 token

Enter choice [1]:
Provide the fully-qualified host name or IP address that will be used to generate the self-signed certificate [ut1ef1]:
Do you want to start the server when the configuration is completed? (yes / no) [yes]:

Setup Summary
=============
LDAP Listener Port:            1389
Administration Connector Port: 4444
LDAP Secure Access:            Enable StartTLS
                               Enable SSL on LDAP Port 1636
                               Create a new Self-Signed Certificate
Root User DN:                  cn=Directory Manager
Directory Data:                Create New Base DN dc=example,dc=com.
Base DN Data: Only Create Base Entry (dc=example,dc=com)
Start Server when the configuration is completed

What would you like to do?

1)  Set up the server with the parameters above
2)  Provide the setup parameters again
3)  Print equivalent non-interactive command-line
4)  Cancel and exit

Enter choice [1]: 3

Equivalent non-interactive command-line to setup server:

oud-setup \
  --cli \
  --baseDN dc=example,dc=com \
  --addBaseEntry \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --rootUserDN cn=Directory\ Manager \
  --rootUserPassword ****** \
  --enableStartTLS \
  --ldapsPort 1636 \
  --generateSelfSignedCertificate \
  --hostName ut1ef1 \
  --no-prompt \
  --noPropertiesFile

What would you like to do?

1)  Set up the server with the parameters above
2)  Provide the setup parameters again
3)  Print equivalent non-interactive command-line
4)  Cancel and exit

Enter choice [1]: 4
No configuration performed.
OUD Instance directory deleted.
$

Then you need to run the oud-setup with the options provided for creating the directory server.

$ ./oud-setup \
  --cli \
  --baseDN dc=example,dc=com \
  --addBaseEntry \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --rootUserDN cn=Directory\ Manager \
  --rootUserPassword ****** \
  --enableStartTLS \
  --ldapsPort 1636 \
  --generateSelfSignedCertificate \
  --hostName ut1ef1 \
  --no-prompt \
  --noPropertiesFile

OUD Instance location successfully created - /u01/oracle/Middleware/Oracle_OUD1/../asinst_2

An error occurred while parsing the command-line arguments:
An unexpected error occurred while attempting to initialize the command-line arguments:
Argument "bat" does not start with one or two dashes and unnamed trailing arguments are not allowed

Here, the issue is with the rootUserPassword value. Since I put * there, the shell expanded it to all the files in the current directory, so the command failed. Replace it with the required password for "cn=Directory Manager" as shown below.

$ ./oud-setup \
  --cli \
  --baseDN dc=example,dc=com \
  --addBaseEntry \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --rootUserDN cn=Directory\ Manager \
  --rootUserPassword pass_t3st \
  --enableStartTLS \
  --ldapsPort 1636 \
  --generateSelfSignedCertificate \
  --hostName ut1ef1 \
  --no-prompt \
  --noPropertiesFile
OUD Instance location successfully created - /u01/oracle/Middleware/Oracle_OUD1/../asinst_2

Oracle Unified Directory 11.1.1.5.0
Please wait while the setup program initializes...

See /tmp/oud-setup-5822533240188214866.log for a detailed log of this operation.

Configuring Directory Server ..... Done.
Configuring Certificates ..... Done.
Creating Base Entry dc=example,dc=com ..... Done.
Starting Directory Server ......... Done.

To see basic server configuration status and configuration you can launch /u01/oracle/Middleware/asinst_2/OUD/bin/status
$ cd bin
$ ./status

>>>> Specify Oracle Unified Directory LDAP connection parameters

How do you want to trust the server certificate?

1)  Automatically trust
2)  Use a truststore
3)  Manually validate

Enter choice [3]: 1

Administrator user bind DN [cn=Directory Manager]:
Password for user 'cn=Directory Manager':

--- Server Status ---
Server Run Status:        Started
Open Connections:         1

--- Server Details ---
Host Name:                ut1ef1
Administrative Users:     cn=Directory Manager
Installation Path:        /u01/oracle/Middleware/Oracle_OUD1
Instance Path:            /u01/oracle/Middleware/asinst_2/OUD
Version:                  Oracle Unified Directory 11.1.1.5.0
Java Version:             1.6.0_26
Administration Connector: Port 4444 (LDAPS)

--- Connection Handlers ---
Address:Port : Protocol               : State
-------------:------------------------:---------
--           : LDIF                   : Disabled
0.0.0.0:161  : SNMP                   : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS                  : Enabled
0.0.0.0:1689 : JMX                    : Disabled

--- Data Sources ---
Base DN:     dc=example,dc=com
Backend ID:  userRoot
Entries:     1
Replication: Disabled
$

Now, your newly created OUD Directory Server is running in the machine. You can check this with the ldapsearch command.

$ ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w pass_t3st -s sub -b "dc=example,dc=com" "(objectclass=*)" cn
dn: dc=example,dc=com

$

The LDAP search command returns one entry, as shown above.

Here are some of my Observations:

  • If you want to use ports 389/636 for your Directory Server, you need to run the setup as the root user, and then the start-ds and stop-ds commands must also be run as root.
  • There are six scripts to set up OUD components (three for Unix/Linux and three for Windows environments).
  • You can set up a new TLS-based certificate as part of configuring a new Directory Server.

 

Okay, that’s all for now. We will meet in another post. Until then!

Monday, 19 November 2012

HP DIAGNOSTICS


Overview
Identifying and correcting availability and performance problems can be costly, time consuming and risky. IT organizations spend more time identifying an owner than resolving the problem.
HP Diagnostics helps to improve application availability and performance in pre-production and production environments. HP’s diagnostics software is used to drill down from the end user into application components and cross platform service calls to resolve the toughest problems. This includes slow services, methods, SQL, out of memory errors, threading problems and more.

How HP Diagnostics software works
During a performance test, HP Diagnostics software traces J2EE, .NET, ERP, and CRM business processes from the client side across all tiers of the infrastructure. The modules then break down each transaction's response time into time spent in the various tiers and within individual components.

  • An easy-to-use view of how individual tiers, components, memory, and SQL statements impact the overall performance of a business process under load. During or after a load test, you can inform the application team that the application is not scaling and provide actionable data to them.
  • The ability to triage and find problems effectively with business context, which enables you to focus on the problems impacting business processes.
Why? The Benefits
Diagnostics falls into the middle ground between Quality Assurance and Operations Performance Validation.
For developers, having Diagnostics means that tracing code doesn't have to be added to and later removed from the application; that alone is a significant benefit.
Diagnostics is the science of pinpointing the root cause of a problem. LoadRunner is the first load-testing tool to provide a set of Diagnostics modules that trace, time, and troubleshoot end-user transactions across all tiers of the system. These modules extend LoadRunner to provide a unified view of both end-user experience and application-component (method, SQL) level performance. The intuitive visual interface allows the user to drill down from a problematic business process all the way to the poorly performing component. This granularity of results ensures that every load test provides development with actionable results, thus reducing the cost and time required to optimize J2EE/.NET applications.
Diagnostics can be integrated with HP Business Availability Center software, HP LoadRunner, and HP Performance Center
Response times alone do not make a complete report; stakeholders (clients, developers, etc.) want to know where the bottlenecks are and why they occur. Identifying the root cause of a bottleneck, both where it is and why it happens, is part of performance engineering.

Any application framework we test has numerous lines of code. If we hand the team only response times, it is difficult for a developer to identify why the application responds slowly under load; they are left wondering which part of the code, and which methods, are causing the increased response time.

Supported platforms
• WebSphere, WebLogic, Oracle 10g, SAP Web Application Server, JBoss, Tomcat, Sun ONE, ATG, Borland ES, FUJITSU Interstage, TmaxSoft JEUS, .NET 1.1 to 3.5
• WebSphere Portal Server, WebLogic Portal Server, SAP Enterprise Portal, Oracle 12i applications

Consider a J2EE/.NET framework
Once probes are installed on each layer (web, application, and database), the Diagnostics tool collects metrics illustrating the behavior of each layer when a request is sent.
Key metrics of concern:
1. J2EE/.NET Framework: average method response time
2. J2EE/.NET Framework: server request response time
3. J2EE/.NET Framework: server method calls per second

When Diagnostics is invoked directly, we have the following metrics:
1. Average memory used
2. Average CPU used
3. JVM heap memory used
4. Connection pool, Thread pool
5. Collection leaks
6. EJB Methods /time
7. Server requests/time
8. Worst transaction
9. Worst SQL Queries
10. Network latency
11. Server request -exceptions

The report we consolidate states clearly, for each team, what to act on:
Developer: which method or part of the code to fix (methods and calls)
DBA: which query to tune (and whether indexes are used by the query)
Integration team: whether additional servers or CPU are necessary for scalability

Key Functions of Diagnostics:
Various metrics (such as JVM heap size, garbage collection frequency, method invocation counts, etc.) are gathered by probes, which pass the metric data to the Profiler web service (installed with, and running on the same server as, the probe). The Profiler produces pages in HTML or XML format that can be parsed dynamically by scripts running within LoadRunner, programmed to store the diagnostics values as user-defined values alongside the metrics LoadRunner itself maintains (such as the number of Vusers running concurrently).
The HP (Mercury) Tuning Console product tracks the impact of server configuration changes on these metrics.
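As a rough illustration of the kind of parsing such a script performs, a metric value can be pulled out of a page with a regular expression. The XML fragment and element names below are hypothetical; the real page layout depends on your Diagnostics version.

```javascript
// Hypothetical XML fragment of a Profiler metrics page (format is illustrative only)
const page = '<metrics><metric name="HeapUsed" value="512"/></metrics>';

// Extract the numeric value of a named metric, or null if it is absent
function readMetric(xml, name) {
  const m = xml.match(new RegExp('name="' + name + '"\\s+value="(\\d+)"'));
  return m ? Number(m[1]) : null;
}

console.log(readMetric(page, 'HeapUsed')); // 512
```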

When many app servers are involved, the Diagnostics add-in to LoadRunner displays metrics files obtained from the Diagnostics Server, also called the Commander, which stores data from the Collectors and Mediators that filter and aggregate data obtained from the probes on the app servers.
Probe Profiler Tabs
Below is a sample of the probe metric page; listed below are a few of the metrics.

Summary
Memory
Load
Shortest Requests
Hotspots
Slowest Methods
CPU Hotspots (Methods)
Slowest SQL
Metrics
System (Host) CPU, Memory Usage, PageInsPerSec, PageOutsPerSec, PageCutsPerSec, Disk, Network
JVM: Probe: HeapFree, HeapTotal, HeapUsed
Java Platform: Classes, GC, Threads
Mercury System
Web logic: EJB, Execute Queues, JDBC, etc.

In summary, this report plays a major role in making the application perform as desired by the user (fast and scalable). Response times can be brought down by fixing these issues.
Hence, diagnostics is the heart and soul of the performance engineering practice.

 

Performing Manual Correlation with Dynamic Boundaries in LR

What is Correlation: It is the process of handling dynamic values in a script. The dynamic value is replaced by a variable, which we assign or capture from the server response.

Ways to do correlation: There are two ways to do this Correlation.

They are as follows:

  • Auto-Correlation: the correlation engine in the LR package captures the value and replaces it with a parameter
  • Manual Correlation: this requires a good understanding of the script and its responses. Manual correlation can sometimes be complex, but it is always the preferred method for handling dynamic values in a script

Usually the Manual Correlation is done by capturing the dynamic value which is present in between the Static left and right Boundaries.
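Conceptually, boundary-based capture works like the following sketch. This is plain JavaScript for illustration only, not the actual LoadRunner implementation; the function name is our own.

```javascript
// Capture the text between a static left boundary and right boundary,
// which is what web_reg_save_param does conceptually
function extractBetween(body, lb, rb) {
  const start = body.indexOf(lb);
  if (start === -1) return null;          // left boundary not found
  const from = start + lb.length;
  const end = body.indexOf(rb, from);     // right boundary after the left one
  return end === -1 ? null : body.slice(from, end);
}

console.log(extractBetween('sessionId=ABC123;path=/', 'sessionId=', ';')); // "ABC123"
```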

Objective: The intention of this article is to present a method that is useful when we want to capture and handle dynamic values even when the left and right boundaries themselves are dynamic.

The solution can be quite simple: instead of pinning down exact string boundaries, we can use text flags.

Before Getting into the Topic we should know about the Text Flags:

Text flags are flags appended to the boundary text after a forward slash.

Some of the commonly known and used Text flags are:

  • /IC to ignore the case
  • /BIN to specify binary data
  • /DIG to interpret the pound sign (#) as a wildcard for a single digit
  • /ALNUM<case> to interpret the caret sign (^) as a wildcard for a single US-ASCII alphanumeric character

Case 1: Digit Value

Suppose the value you want sits between fixed boundaries, but the left boundary changes every time: you get the left boundary as axb, where x ranges between 0 and 9, as follows:
a0b=Boundaryrb
a1b=Boundaryrb
a2b=Boundaryrb
——–
——–

a9b=Boundaryrb

We can capture the desired string by putting the following correlation function in place, using the /DIG text flag in combination with Left Boundary:

web_reg_save_param("Corr_Param", "LB/DIG=a#b\=", "RB=rb", LAST);

The position that you expect to be dynamically filled with a digit should be replaced by a pound sign (#).

If there are multiple digits, we can use ‘##’.
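To see what the /DIG flag is doing, the match can be mimicked with an ordinary regular expression, where # corresponds to a single digit. This is only an illustration; the actual matching is performed internally by LoadRunner.

```javascript
// Server responses where the left boundary a<digit>b= varies each time
const responses = ['a0b=Boundaryrb', 'a5b=Boundaryrb', 'a9b=Boundaryrb'];

// LB/DIG=a#b\= with RB=rb behaves like this pattern: 'a', one digit, 'b=', value, 'rb'
const pattern = /a\db=(.*?)rb/;

for (const r of responses) {
  console.log(r.match(pattern)[1]); // "Boundary" in every case
}
```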

Case 2: Boundary is a string whose case varies (ignore case)

web_reg_save_param("Corr_Param", "LB/IC/DIG=a#b\=", "RB/IC=rb", LAST);

Case 3: A place that may be filled by either a digit or a letter

web_reg_save_param("Corr_Param", "LB/ALNUM=a^b\=", "RB/IC=rb", LAST);

HP Ajax TruClient – Overview with Tips and Tricks

Overview

  • In LoadRunner 11.5, TruClient for Internet Explorer was introduced. It is now possible to use TruClient on IE-only web applications.

Note: This still supports only HTML + JavaScript websites. It does not support ActiveX objects or Flash or Java Applets, etc.

  • TruClient IE was developed as an add-in for IE 9, so it will not work on earlier versions of IE. IE 9 was the first version to expose enough of the DOM to be usable by a TruClient-style Vuser. Note that your web application must support IE 9 in "standard mode".
  • Some features have also been added to TruClient Firefox. These include:
    • The ability to specify think time
    • The ability to set HTTP headers
    • URL filters
    • Event handlers, which can automatically handle intermittent pop-up windows, etc.
  • Web page breakdown graphs have been added to TruClient (visible in LoadRunner Analysis). Previously they were only available for standard web Vusers.

Tips and Tricks

NTLM authentication -

Scenario: Some applications, when accessed in Mozilla Firefox, demand NTLM authentication. These authentication steps do not get recorded; hence, during replay, due to the absence of these steps, the application fails to perform the intended transactions.

Solution: To avoid a situation in which an application asks for NTLM authentication while recording and replaying, one has to specify the application as a trusted NTLM resource. To do that, follow these steps.

  • Open the file “user.js” located in “%lr_path%\dat\LrWeb2MasterProfile”.
  • Locate the preference setting "network.automatic-ntlm-auth.trusted-uris".
  • Specify the URL of the trusted resource as the value of this setting.
  • Save the file “user.js”
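Putting the steps together, the resulting line in user.js would look like the following. The URL here is a placeholder; substitute your application's actual address.

```javascript
// user.js entry (Firefox preference file) marking the application as a
// trusted NTLM resource; the URL below is a hypothetical example
user_pref("network.automatic-ntlm-auth.trusted-uris", "http://myapp.example.com");
```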

These changes are made only on the machine where VuGen is used to develop the script. They are saved with the script and are applied on different machines during load tests.

Disable pop-ups during recording -

Scenario: The occurrence of unwanted pop-ups creates hurdles during script development.

Solution: To disable the pop-ups, we can do it by following the below mentioned steps –

  • In the Firefox address bar, enter 'about:config' and click the 'I'll be careful, I promise!' button
  • In the filter field, enter disable_open_during_load
  • Right-click on 'disable_open_during_load' and select 'Toggle'. The value changes to 'false'
  • Record initial Navigation step again
  • Your pop-ups will be disabled

Displaying the value in a parameter or variable -

Scenario: To understand the value that gets stored in a parameter while replaying the script.

Solution: This can be achieved using the alert() function.

Example:

var x = "Good Morning";

window.alert(x);

Calculating number of text occurrences -

Scenario: Scripting most modern internet applications, with their many dynamic features, demands this requirement: be it checking for the presence of a text string on the web page, or counting the number of tickets generated in the application at run time, you need to calculate text occurrences and use that count in the right logical code.

Solution: In Ajax TruClient, using JavaScript functions, we can achieve this objective. This can be done as follows:

  • Drag 'Evaluate JavaScript code' from the toolbox
  • In the arguments section, add the following code:
    var splitBySearchWord = document.body.textContent.split('Text to search for');
  • Then display the total number of occurrences of the text using the alert() method (splitting produces one more piece than there are occurrences):
    window.alert(splitBySearchWord.length - 1);
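The counting logic can be sketched standalone as follows. A plain string is used here as input; in TruClient the text would come from document.body.textContent, and the function name is our own.

```javascript
// Count occurrences of a word: splitting the text on the word yields
// one more piece than there are matches
function countOccurrences(text, word) {
  return text.split(word).length - 1;
}

console.log(countOccurrences('ticket A, ticket B, ticket C', 'ticket')); // 3
```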

 

 

Inserting random think time -

Scenario: End-user behavior is unpredictable, and as performance testers, our aspiration while executing a performance test should always be to get as close as possible to the real-world scenario. Some end users may spend only 2 seconds before navigating to the next page, while many others may think longer. Hence, in many test scenarios it is not ideal to insert a fixed think-time value before a web request; rather, one must use a random think time.

Solution: The above scenario can be achieved using advanced JavaScript functionality, as follows:

  • From ‘Toolbox’, copy a wait function and paste it before the web request
  • In the argument section, replace the interval value '3' with 'Math.floor(11*Math.random() + 5);'

The above expression will return a random integer between 5 and 15.

Math.floor() rounds a number downwards to its nearest integer (e.g., the output of 'Math.floor(1.8);' is 1). Math.random() returns a random number between 0 and 1, so '11*Math.random() + 5' yields a value in the range [5, 16), which Math.floor() turns into an integer between 5 and 15.
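The same expression can be checked on its own; wrapped in a helper function of our own naming, it always yields an integer in the 5 to 15 range:

```javascript
// Random think time: Math.random() is in [0, 1), so the argument to
// Math.floor() is in [5, 16), giving an integer from 5 to 15 inclusive
function randomThinkTime() {
  return Math.floor(11 * Math.random() + 5);
}

for (let i = 0; i < 5; i++) {
  console.log(randomThinkTime()); // each value is between 5 and 15
}
```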

Handling browser cache -

Scenario: You may wish to manage the cache handling features of the browser to replicate different types of test scenarios.

Solution: This can be achieved by following these steps -

  • Open the Script under Interactive mode.
  • Go to VUser > Run-Time Settings > General > Load mode Browser Settings
  • In the Settings frame, open the Advanced options
  • Select the option "Compare the page in the cache to the page on the network" and choose one of the four values below according to your test requirements

0 = Once per session

1 = Every time the page is accessed

2 = Never

3 = When the page is out of date (Default value)

Conclusion

In Hexaware, we have used the TruClient protocol to record many applications for different clients. Some of the benefits we reaped are as follows. The HP TruClient protocol works with many frameworks like jQuery, Ajax, YUI, GWT, JS, etc., so rich internet applications developed on Web 2.0 technologies can be easily scripted and replayed. Script development is interactive, with the script flow on one side of the window and the application open in the browser on the other; this makes scripting with the Ajax TruClient protocol easier and faster. Object-identification features minimize the use of complex correlations and make scripts more dynamic, so the scripts become more resilient to back-end changes. Complex client-side events like mouse-overs, slider bars, calendar items, dynamic lists, etc. can be very easily scripted, customized, and replayed; thus the testing cycle is much shorter with Ajax TruClient than with other web protocols. Using Ajax TruClient, API + GUI response time can be obtained, as opposed to other protocols that provide only API response time.

 
