Testing Concepts

Introduction to Software Testing
Software testing is the process of evaluating a system or program to ensure that it meets its required results. Software applications can fail in different ways, and identifying all of the failures is generally not possible. But by applying different techniques in different types of testing, an attempt can be made to identify critical defects. Hence testing can be used as a process to

  • Show, as far as possible, that the program is free of critical errors
  • Ensure that the software performs its intended functions as per the expectations.
  • Discover errors before the software is delivered to the customer.
  • Execute a program with the intent of finding errors.
  • Demonstrate that the software is working according to the specifications and that behavioral and performance requirements are met.
  • Perform operation on a system or application under controlled conditions (normal and abnormal) and evaluating the results.
  • Validate an application to check whether it
       · Does what it is expected to do
       · Does not do what it is not expected to do
    (a minimal code sketch of these two checks follows this list)
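As a minimal sketch of those two checks (the calculate_discount function and its rules are hypothetical examples, not from the original post):

    # Hypothetical function under test.
    def calculate_discount(price, percent):
        """Return the discounted price; reject out-of-range input."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    # Does what it is expected to do: a valid discount is applied.
    assert calculate_discount(200.0, 10) == 180.0

    # Does not do what it is not expected to do: invalid input is
    # rejected instead of silently producing a wrong price.
    try:
        calculate_discount(200.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an invalid percent")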

 There are three important testing concepts:


  1. Structural Versus Functional Testing
   Structural testing, also known as white box testing, uncovers errors introduced while “coding” the program. It ensures that the implementation of a function is sufficiently tested.
    Functional testing, also known as black box testing, uncovers errors that occur while “executing” the program. It ensures that the requirements are properly satisfied. (A short sketch contrasting the two follows this list.)
 
  2. Static Versus Dynamic Testing
 Static testing is performed without executing the code; methods like code review, walkthrough, and inspection are used instead, e.g. a code walkthrough.
 Dynamic testing is performed by executing the code and is also known as program testing, e.g. system testing.

  3. Manual Versus Automated Testing
 Manual testing is performed by human testers without tool support.
Automated testing is performed with the help of test automation tools.
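The sketch below illustrates the structural-versus-functional distinction from point 1 with a hypothetical classify_age function: the black box tests are derived from the specification alone, while the white box tests are written by inspecting the code so that every branch is exercised.

    # Hypothetical function under test.
    def classify_age(age):
        if age < 0:
            raise ValueError("age cannot be negative")
        if age < 18:
            return "minor"
        return "adult"

    # Functional (black box) tests: based on the specification only.
    assert classify_age(30) == "adult"
    assert classify_age(5) == "minor"

    # Structural (white box) tests: based on the code's branches,
    # covering the error branch and the boundary at 18.
    assert classify_age(17) == "minor"
    assert classify_age(18) == "adult"
    try:
        classify_age(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for a negative age")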

 Tips for Bug Submission (Bug Submission & Report)

 The following points help us in preparing an effective bug report:

  • Analyze the error and describe it with the minimum number of steps needed to reproduce the problem easily.
  • Write a report that is complete and easy to understand.
  • Write bug reports as soon as a bug is found; a delay in reporting a bug can make its description incomplete.
  • While submitting a bug, it is always better to attach a snapshot of the sequence of steps. This helps the development team analyze the bug faster and saves the testing team unnecessary explanation time.
Functional Testing:
1. Black box testing:   
• Black box testing (also known as closed box testing) is used in computer programming, software engineering, and software testing to check that the outputs of a program, given certain inputs, conform to the functional specification of the program.
• The term black box indicates that the internal implementation of the program being executed is not examined by the tester. For this reason black box testing is not normally carried out by the programmer.
• In most real-world engineering firms, one group does design/coding work while a separate group does the testing of the functionalities.
• This testing is based on the analysis of the specification of a piece of software without reference to its internal logic.
• The goal is to test how well the component conforms to the published requirements for the component.


2. Sanity/Smoke Testing
• A quick test to verify that the major functions of a piece of software work. It is also used as a checkpoint for accepting a build of an application for testing.
• This testing is done to determine the suitability of an application or function for further testing.
• For example, if new software is
– crashing systems every 5 minutes,
– bogging the system down to a crawl, or
– destroying databases,
then it may not be in a condition to warrant further testing.
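A minimal sketch of such a checkpoint, assuming a hypothetical App object whose methods stand in for the build's major functions:

    class App:
        def start(self):
            return True            # pretend the application launches
        def login(self, user, password):
            return user == "demo"  # pretend a known account works
        def db_ping(self):
            return True            # pretend the database answers

    def smoke_test(app):
        """Run the major functions once; any failure rejects the build."""
        checks = {
            "application starts": app.start(),
            "known user can log in": app.login("demo", "demo"),
            "database is reachable": app.db_ping(),
        }
        failed = [name for name, ok in checks.items() if not ok]
        if failed:
            raise SystemExit("build rejected, failed checks: %s" % failed)
        print("smoke passed: build accepted for further testing")

    smoke_test(App())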

3. Integration Testing
• Phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
• It takes as its input modules that have been checked by unit testing, groups them into larger aggregates, applies tests defined in an integration test plan, and delivers as its output the integrated system ready for system testing.
• This type of testing is especially relevant to client/server and distributed systems.
• The purpose of integration testing is to detect inconsistencies between the integrated software units (called assemblages) or between any of the assemblages and the hardware.
There are two methods in integration testing
1. Incremental
2. Big bang

Incremental
• Continuous testing of an application as new functionality is added.
• The program is constructed and tested in small increments.
• Requires the various aspects of an application's functionality to be independent enough to work separately before all parts of the program are completed.
• Requires test drivers to be developed as needed, by programmers or by testers.
• A systematic test approach can be applied.

Big bang
• All components are combined in advance.
• In the big bang approach, all the modules or builds are constructed and tested independently of each other; when they are finished, they are all put together at the same time.
• Correction is difficult because isolation of causes is complicated.

Two integration strategies are usually used

Top-Down Strategy
• Starts at the top of the program hierarchy and travels down its Branches
• It is an incremental approach
• This can be done either depth-first or breadth-first
• Stubs are used until the actual modules are ready

Bottom-Up Strategy
• Process starts with low level modules first
• Cluster approach to test them properly
• Test drivers are used (a stub/driver sketch follows the Sandwich Strategy below)
• No need for program stubs
• The critical modules are built first
• Often works well in less structured shops
• Usually finds errors in critical routines earlier than the top-down approach

Sandwich Strategy
• Combines the top-down and bottom-up methods
• Instead of going completely top-down or bottom-up, a layer is identified in between and integration proceeds in both directions from there.
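The stub and driver roles mentioned above are illustrated below with hypothetical modules: in top-down integration a stub answers for an unfinished low-level module, while in bottom-up integration a driver exercises a finished low-level module before its real callers exist.

    # Top-down: checkout is real, the tax module is not finished yet,
    # so a stub returns canned answers in its place.
    def tax_service_stub(amount):
        return amount * 0.1          # canned answer, not real tax logic

    def checkout(amount, tax_service):   # real high-level module
        return amount + tax_service(amount)

    assert checkout(100.0, tax_service_stub) == 110.0

    # Bottom-up: tax_service is real, but its callers are not written
    # yet, so a test driver invokes it directly.
    def tax_service(amount):             # real low-level module
        return round(amount * 0.1, 2)

    def driver():
        """Test driver: plays the part of the missing caller."""
        assert tax_service(100.0) == 10.0
        assert tax_service(0.0) == 0.0

    driver()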

4.  Usability Testing
     • Testing the ease with which users can learn and use the application.
     • This testing is performed from the point of view of the end user
     • User interviews, surveys, video recording of user sessions can be used to perform this testing.

5.  System Testing
     • Refers to testing of the entire system.
     • It is tested as a whole against the Business Requirement Specification(s) (BRS) and/or the System Requirement Specification (SRS).
     • It is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
     • As a rule, System testing takes, as its input, all of the "integrated" software components that have successfully passed Integration testing.
     • This phase is more investigatory: the focus is an almost destructive attitude, testing not only the design but also the behavior and even the believed expectations of the customer.
     • It is intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
     • It is the final destructive testing phase before Acceptance testing.
     • New outputs (responses) are compared with old ones (baselines)

     6. Regression Testing
• Regression testing refers to the repeated testing of an application for each new release.
• It is done to ensure that the application still behaves properly after fixes or modifications to the software or its environment, and that no additional defects were introduced by the fix.
• Regression testing ensures that reported product defects have been corrected for each new release and that no new problems were introduced.
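One common way to mechanise this, sketched here with a hypothetical response set and file name, is to compare each new release's outputs against stored baselines (the comparison mentioned under System Testing above):

    import json, os

    BASELINE_FILE = "baseline.json"   # hypothetical baseline store

    def get_responses():
        """Stand-in for running the suite against the new release."""
        return {"total_orders": 42, "status": "OK", "version": "2.1"}

    current = get_responses()
    if not os.path.exists(BASELINE_FILE):
        # First run: record the known-good outputs as the baseline.
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f)
        print("baseline recorded")
    else:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        # "version" is expected to change between releases; skip it.
        diffs = {k: (baseline[k], current[k]) for k in baseline
                 if k != "version" and baseline[k] != current[k]}
        if diffs:
            raise SystemExit("regression found, outputs changed: %s" % diffs)
        print("no regressions against baseline")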

     7. User Acceptance Testing
     • User acceptance testing (UAT) is the last phase of a software project and is often performed before a new system is accepted by the customer.
     • Users of the system will perform these tests which ideally are derived from the User Requirements Specification, to which the system should conform.
     • The focus is on a final verification of the required business function and flow of the system.
     • The idea is that if the software works as intended and without issues during a simulation of normal use, it will work just the same in production.

    Types of UAT:
    Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

     Beta Testing: Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in the hope that users will buy the final product when it is released.

    Importance of UAT:
     UAT is important to protect an organisation from trouble and to address the various risks involved in a change to the organisation. Risks can relate to any of the following:
Reputation: Customers, suppliers, or legal authorities perceive a problem with the organisation and decide not to deal with it.
Legal: The system could break laws, exposing the stakeholder to legal proceedings.
Time: If the system does not meet key business timelines, there could be a loss of business for the customer and a loss of reputation for the service provider.
Resource: A poor understanding of the system could add cost in terms of human, software, and hardware resources.

8. Globalization Testing
          • The objective of globalization testing is to detect potential problems in application design that could inhibit globalization.
          • It makes sure that the code can handle all international input without breaking functionality that would cause either data loss or display problems.
          • Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.
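As a minimal sketch of the data-loss check described above (the save/load pair is a hypothetical stand-in for the application's storage path), international input is pushed through a round trip and must survive unchanged:

    def save(text, path):
        with open(path, "wb") as f:
            f.write(text.encode("utf-8"))

    def load(path):
        with open(path, "rb") as f:
            return f.read().decode("utf-8")

    samples = [
        "Grüße aus München",   # Latin script with umlauts
        "こんにちは世界",        # Japanese
        "مرحبا بالعالم",        # Arabic (right-to-left)
        "Здравствуй, мир",     # Cyrillic
    ]
    for text in samples:
        save(text, "i18n_probe.txt")
        assert load("i18n_probe.txt") == text, "data loss for %r" % text
    print("all international samples survived the round trip")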

     9. Localisation Testing 
     • Checks the quality of a product's localization for a particular target culture/locale.
     • This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale.
     • Can be executed only on the localized version of a product

     10. Compatibility Testing:
     Testing to validate how well software performs in a particular
     • Hardware
     • Software
     • Operating system
     • Environment

     11. Data Migration Testing
     Data migration testing is done to validate the migration of the source data to the new platform. Data migration testing and implementation are practically inseparable. Data migration testing should be started as soon as possible, ideally before the Design and Build phases of the core project.

     12. Data Conversion Testing
           Data conversion testing is done to validate the conversion of the source data to the target data. Data conversion testing and implementation are practically inseparable. The data conversion test plan should confirm the following:
     • Has the source data type been converted to the target data type?
     • Is there any loss of data?
     • Is data integrity maintained?
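The three questions above can be automated along these lines; the sketch uses two in-memory SQLite databases to stand in for the real source and target platforms, and the table layout is hypothetical:

    import sqlite3

    src = sqlite3.connect(":memory:")   # stand-in for the source platform
    tgt = sqlite3.connect(":memory:")   # stand-in for the target platform
    src.execute("CREATE TABLE orders (id INTEGER, amount TEXT)")
    tgt.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    src.executemany("INSERT INTO orders VALUES (?, ?)",
                    [(1, "10.50"), (2, "99.99")])
    tgt.executemany("INSERT INTO orders VALUES (?, ?)",
                    [(1, 10.50), (2, 99.99)])

    # 1. No loss of data: row counts must match.
    n_src = src.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    n_tgt = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert n_src == n_tgt, "row count mismatch: %d vs %d" % (n_src, n_tgt)

    # 2/3. Type conversion and integrity: each source value, converted
    # to the target type, must equal what the target actually stores.
    rows = zip(src.execute("SELECT id, amount FROM orders ORDER BY id"),
               tgt.execute("SELECT id, amount FROM orders ORDER BY id"))
    for (sid, samount), (tid, tamount) in rows:
        assert sid == tid
        assert float(samount) == tamount, "value drift for id %d" % sid
    print("conversion checks passed")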

     13. Security Testing
     Security testing is performed to assess the sensitivity of the system to unauthorized internal or external access. Testing is done to ensure that unauthorized persons or systems are not given access to the system. Areas covered include
               • Programs that check for access to the system
               • Session hijacking
               • Session replay
               • SQL injection
               • Hidden file manipulation
               • The elements of security testing: authentication, authorization, confidentiality, integrity, and non-repudiation
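As one concrete instance of the SQL injection check above, the sketch below probes a hypothetical login query built by string concatenation, then shows that a parameterised query resists the same probe:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('admin', 'secret')")

    INJECTION = "' OR '1'='1"   # classic probe string

    # Vulnerable: user input is concatenated into the SQL text.
    def login_unsafe(name, password):
        q = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
        return db.execute(q).fetchone()[0] > 0

    # Hardened: parameters are bound, never interpolated.
    def login_safe(name, password):
        q = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
        return db.execute(q, (name, password)).fetchone()[0] > 0

    # The probe authenticates against the vulnerable query (a finding
    # the test would report) but not against the parameterised one.
    assert login_unsafe("admin", INJECTION) is True
    assert login_safe("admin", INJECTION) is False
    print("injection probe behaved as expected")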

     Skills required for Security Testing:
               • Ability to think like a hacker
               • Being aware of all known vulnerabilities and exploits
               • Thorough understanding of runtime environment
               • Identification of criticality and sensitivity of data assets

     14. Install/Uninstall Testing
     • Installation testing (in software engineering) can simply be defined as any testing that occurs outside the development environment.
     • Such testing will frequently occur on the computer system the software product will eventually be installed on.
     • Full, partial, and upgrade installations are covered.
     • The installation test for a release is conducted with the objective of demonstrating production readiness.
     • Includes the inventory of configuration items.

     15. Scalability Testing:
• Scalability is the capability of the system to continue to expand or contract as needs change and to provide acceptable service as the load increases or decreases. Scalability can mean expansion or contraction of various aspects such as load, database volume, number of users, and network size, together with performance optimization.
• The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application, and to establish the comfort level of the application in terms of user load, end user experience, and system tolerance levels.
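A rough sketch of such a workload probe, with a hypothetical handle_request standing in for one user request: the same operation is timed under increasing concurrency, and the mean time per request should stay within the tolerance the requirements define.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        time.sleep(0.01)   # simulated service time for one request

    def mean_time_per_request(users, requests_per_user=5):
        start = time.time()
        with ThreadPoolExecutor(max_workers=users) as pool:
            futures = [pool.submit(handle_request)
                       for _ in range(users * requests_per_user)]
            for f in futures:
                f.result()
        return (time.time() - start) / (users * requests_per_user)

    baseline = mean_time_per_request(1)
    for users in (5, 10, 20):
        t = mean_time_per_request(users)
        print("users=%2d  mean time/request=%.4fs" % (users, t))
        # The factor of 10 is illustrative; real tolerances come from
        # the stated performance requirements.
        assert t < baseline * 10, "throughput collapsed at %d users" % users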

     16. Recovery Testing:
     • Recovery testing checks how well a system recovers from adverse events, which can include a shortage of disk space, unexpected loss of communication, or power-out conditions.
     • It is desirable to have a system capable of recovering quickly with minimal human intervention.

     17. Exploratory Testing:
     • Informal test that is not based on formal test plans or test cases.
     • Testers will not know much on the software and will be learning the software as they test it.

     18. Ad hoc Testing:
     • Informal test that is not based on formal test plans or test cases.
     • Testers should have significant understanding of the software before they test it.

     19. Mutation Testing:
     • In mutation testing, small changes (mutations) are deliberately introduced into the program's code and the existing tests are run again; tests that still pass have failed to detect the change.
     • It helps in finding out how effective the test cases are and where additional tests are needed.
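A hand-rolled miniature of what mutation tools automate (function, mutant, and tests are all hypothetical): one operator is changed, and a good test suite is expected to fail against, i.e. kill, the mutant.

    def is_adult(age):              # original implementation
        return age >= 18

    def is_adult_mutant(age):       # mutant: >= changed to >
        return age > 18

    def test_suite(fn):
        """Return True if every test passes against fn."""
        return fn(30) is True and fn(5) is False and fn(18) is True

    # The suite passes on the original implementation...
    assert test_suite(is_adult)
    # ...and fails on the mutant, so the mutant is "killed". A mutant
    # that survived would reveal a missing test (here, the boundary 18).
    assert not test_suite(is_adult_mutant)
    print("mutant killed: the boundary test detected the change")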

     20. Comparison Testing:
     Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

     21. Conformance Testing:
     Verification of implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

     22. Disaster Recovery Testing:
      This is carried out to ensure that the application and its data in a particular environment and operating system can be recovered successfully within a reasonable time in case of a disaster.

     23. Reliability Testing:
     The purpose of reliability testing is to ensure that the installed product meets the reliability requirements specified. Using this, the probability of a software failure, or the rate at which software errors occur, can be identified. Assumptions made and data collected about software defects as a function of time can be used to model and compute software reliability metrics. These metrics attempt to indicate and predict the probability of failure during a particular time interval, or the mean time to failure (MTTF) and mean time between failures (MTBF).
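The MTTF and MTBF figures mentioned above reduce to simple arithmetic over observed failure data; the sketch below computes both from a hypothetical log, using the common convention that MTTF averages operating time while MTBF averages the full failure-to-failure cycle of a repairable system.

    # Hypothetical observations over a 1000-hour test window.
    failure_times = [120.0, 410.0, 770.0]   # hours into the test
    repair_hours = [2.0, 3.0, 1.0]          # downtime per failure
    window = 1000.0

    n = len(failure_times)
    uptime = window - sum(repair_hours)

    mttf = uptime / n    # mean operating time before a failure
    mtbf = window / n    # mean time between successive failures

    print("observed failures: %d over %.0f hours" % (n, window))
    print("MTTF = %.1f hours, MTBF = %.1f hours" % (mttf, mtbf))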

     Tips:
• Testing is based on the specifications, so the testing life cycle should always be based on the specifications.
• Define the expected output or result.
• Each test result should be inspected completely.
• Test cases should include both valid and invalid scenarios.
• Test planning should not be done on the assumption that no errors will be found.

Scheduling ETLs in ODI

Step 1: Start agent.bat or agent.sh under ODI\oracledi\bin.

Step 2: Launch Topology Manager -> Physical Architecture -> Agents -> Insert Agent

Provide the necessary information and click Test. You should get a pop up: Agent Test Successful.

If the test fails with an error message instead, it means you have not started agent.bat or agent.sh. Please do so and test again.

[Note: In case you wish to start an agent on another port, say 20911, go to oracledi/bin and type agent.bat "-port=20911", then click Test for a successful connection. By default, agent.bat or agent.sh communicates on port 20910 only.]

Step 3: Topology Manager -> Logical Architecture -> Agents -> Insert Agent
Link the Physical and Logical Agent with the required Context.

Step 4: Edit the parameters below in odiparams.bat or odiparams.sh to update the repository connection information.
ODI_SECU_DRIVER=oracle.jdbc.driver.OracleDriver
ODI_SECU_URL=jdbc:oracle:thin:@10.177.145.195:1521:PLMDM32
ODI_SECU_USER=ODIMASTER
ODI_SECU_ENCODED_PASS=hpfXf7qMPh.tZnQtoCaNKCCNj
ODI_SECU_WORK_REP=WORKREP
ODI_USER=SUPERVISOR
ODI_ENCODED_PASS=LELKIELGLJMDLKMGHEHJDBGBGFDGGH

Step 5: Run the agent in agentscheduler mode from ODI\oracledi\bin.

Step 6: Operator -> Scenarios -> Scenario -> Scheduling -> Insert Scheduling

Set the Context, Agent, and Log Level, and set up the schedule according to how you want to run the ETL. Restart agent.bat and agentscheduler.bat. This will start the ETL as per the schedule you specified.

Note: Keep agent.bat and agentscheduler.bat running during the scheduled time you set.

Running Multiple Agents:

  1. For example, create another agent, AGENT2, on port 20911.
  2. Run agent2 with the port number you plan to use, e.g. agent2.bat "-port=20911".
  3. Launch the Topology Manager and insert the physical and logical agent, i.e. AGENT2.
  4. To create an agent on a different server, make a copy of agent, odiparams, and agentscheduler, renamed for example to agent2.bat, odiparams2.bat, and agentscheduler2.bat.
  5. Modify the odiparams2.bat repository connection to point to the server where you want to run AGENT2.
  6. Modify agent2.bat and agentscheduler2.bat to call “%ODI_HOME%\bin\odiparams2.bat” instead of “%ODI_HOME%\bin\odiparams.bat”.

Upload Testcases from Excel to QC
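The screenshots for this walkthrough are missing here, so the sketch below is only a rough, unverified outline of the idea: read test names from an Excel sheet with openpyxl and push them to Quality Center through its OTA COM interface via pywin32. The workbook name, column layout, server URL, credentials, folder path, and field name are all assumptions, and the OTA calls should be checked against your QC version before use.

    # Hedged sketch only; verify every OTA call against your QC version.
    import openpyxl                 # pip install openpyxl
    import win32com.client          # pip install pywin32 (Windows only)

    wb = openpyxl.load_workbook("testcases.xlsx")   # assumed workbook
    sheet = wb.active                               # assumed: first sheet

    td = win32com.client.gencache.EnsureDispatch("TDApiOle80.TDConnection")
    td.InitConnectionEx("http://qcserver:8080/qcbin")  # assumed URL
    td.Login("qc_user", "qc_password")                 # assumed credentials
    td.Connect("DOMAIN", "PROJECT")                    # assumed names

    folder = td.TreeManager.NodeByPath("Subject\\Imported")  # assumed path
    tests = folder.TestFactory

    # Assumed layout: row 1 is a header; column A = name, B = description.
    for name, description in sheet.iter_rows(min_row=2, max_col=2,
                                             values_only=True):
        if not name:
            continue
        test = tests.AddItem(None)
        test.Name = name
        # SetField is pywin32's usual setter for OTA's parameterised
        # Field property; the field name varies by project.
        test.SetField("TS_DESCRIPTION", description or "")
        test.Post()

    td.Disconnect()
    td.Logout()
    td.ReleaseConnection()
    print("upload finished")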

Equivalence partitioning: a systematic process that identifies a set of interesting classes of input conditions to be tested, where each class is a representative of a large set of other possible tests.