
Are there any tools or methods to test the performance of communication software?

Refer to this article.

Spark.lee's Mobile Phone Software Test

As a specialized consumer electronics product, a mobile phone needs the following tests: reliability test (RQT; applied to hardware, and, for software, conducted as a field trial); standard compliance test (FTA); interoperability test (IOT); safety test; stress test; and so on.

Some of these tests, such as FTA, have strict standards (GSM, 3GPP, etc.) that define the function points to be tested. The tester's job is to embody those function points in test cases, create the running environments the cases require, execute them, and report the results. For performance testing, however, no such specification exists. Which cases a performance test should cover, and what threshold a case must meet to pass, are therefore highly flexible and depend largely on the tester's experience, the project's resources, and schedule pressure. Finding a balance among resources, schedule, and quality is a problem for the product lead. Testers can improve product quality by defining the scope of performance testing, specifying the design requirements it implies (to improve the testability of the product), and running performance tests more effectively with automated tools.

First, the scope of mobile phone performance testing

Performance testing emphasizes long-running, repetitive, or high-intensity operations to verify how a product behaves under extreme conditions. In software-testing terms it belongs to system testing: it exercises the performance behavior of software running in the integrated system, with the goal of identifying and eliminating architecture-related performance bottlenecks as early as possible. Analyzing the test data and logs can also uncover hidden defects in the system under test. For a mobile communication terminal, performance testing is mainly tied to the device's functions and can be roughly divided into the following categories:

1. Time-related tests.

Time-related performance testing divides into long-duration maintenance tests and time-limited response tests.

The long-duration maintenance test checks the terminal's ability to perform a given function stably over a long period. It mainly covers long-term standby, long-term CS-domain service maintenance, long-term PS-domain service maintenance, and long-term combined-service maintenance. The long-term standby test lets the phone stand by for a period matched to its battery capacity (for example, 4 days) and then verifies that it can still originate and receive calls; success as the called party shows that the terminal remained in a normal state during standby and could receive pages while in sleep mode. The long-term CS-domain test keeps a voice or video call up continuously for a period matched to battery capacity (for example, 2 hours) and checks whether image and sound stay continuous and clear, whether one-way audio occurs, and whether the phone's board overheats during the call. The long-term PS-domain test sustains a data service, such as WWW browsing, FTP transfer, or streaming media, for a period (for example, 2 hours) and mainly checks whether the uplink and downlink data rates stay stable, whether web pages render smoothly, and whether streaming playback is continuous. The long-term combined-service test maintains CS-domain and PS-domain services simultaneously for a period to verify the terminal's ability to carry combined services over the long term.
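The long-duration maintenance tests above share one shape: keep checking a session's health until a time budget runs out. A minimal sketch, with hypothetical names — `check` stands in for a real probe such as "is the call still up?" or "is the data rate stable?":

```python
import time

def long_hold_test(check, duration_s, interval_s):
    """Repeatedly verify check() until duration_s elapses; count failures."""
    deadline = time.monotonic() + duration_s
    checks = failures = 0
    while time.monotonic() < deadline:
        checks += 1
        if not check():
            failures += 1
        time.sleep(interval_s)
    return checks, failures

# Tiny intervals for illustration; a real run would use hours and real probes.
checks, failures = long_hold_test(lambda: True, duration_s=0.05, interval_s=0.01)
```

A real harness would log each failed check with a timestamp so that intermittent drops can be correlated with the terminal's logs afterwards.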

The time-limited response test checks the terminal's ability to respond to a user operation and deliver the result within a specified time. It mainly covers power-on delay, power-off delay, CS-domain service access delay, PS-domain service access delay, and local application delay. Power-on delay is the total time from the user pressing the power key, through terminal boot, system boot, task startup, network search, and location update, to the terminal reaching the standby screen and indicating that normal service is available. Power-off delay is the total time from the user pressing the power key, through network detach and writing modified data from RAM back to flash, to the terminal shutting down completely. CS-domain access delay is the total time, when placing a voice or video call, from pressing the dial key to hearing the other party ring. Because this process requires resource allocation on the network side, results can be affected by current network load; for example, requesting a CS 64k service under high load may force the network to reorganize or merge radio resources, which takes comparatively long. PS-domain access delay is the total time from starting a data service to the service being usable. Local application delay is the time required to complete local operations such as opening the phone book, finding a contact, saving a new contact, saving a short message, saving a multimedia file, opening the browser, or starting playback of a multimedia file. If these delays are too long, user satisfaction drops sharply.
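Each of these delay measurements reduces to timing one operation against a budget. A minimal sketch, assuming a hypothetical `operation` callable standing in for "open the phone book", "press the dial key", and so on:

```python
import time

def measure_latency(operation, threshold_s):
    """Run `operation` once; return (elapsed_seconds, within_budget)."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= threshold_s

# Stand-in workload with an illustrative 0.5 s budget.
elapsed, ok = measure_latency(lambda: sum(range(1000)), threshold_s=0.5)
```

In practice the same operation would be timed many times and the distribution examined, since a single sample hides jitter.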

2. Frequency-related tests.

The frequency-related (repetition) performance test checks the terminal's ability to perform a given function repeatedly and stably. It covers the power-on/off success rate, initial cell search, cell reselection, CS-domain service, PS-domain service, combined service, handover, local applications, and so on. Because repeated operation creates and releases many objects over and over, it can expose latent memory leaks and similar defects. The power-on/off success-rate test mainly checks whether, after many power cycles, the physical layer still receives the initial cell-search command correctly; an incomplete power-off may also cause the next power-on to fail, and in some cases the system hangs and can only be recovered by removing and reinserting the battery. The CS-domain success-rate test makes or receives a set number of calls, counts the failures, classifies the causes, and analyzes whether any failure cause can be traced to the terminal. The PS-domain, combined-service, and handover success-rate tests follow the same method. The local-application success-rate test repeatedly stores and deletes files, contacts, short messages, and so on, or repeatedly opens an application and performs operations in it, to check the application's stability and locate its bottlenecks.
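The success-rate tests described here all follow the same pattern: repeat an attempt many times, tally the failures, and classify the causes. A minimal sketch under the assumption that `attempt` returns `None` on success or a failure-reason string (both names are hypothetical):

```python
def run_repetition_test(attempt, iterations):
    """Repeat `attempt`; return (success_rate, failure counts by reason)."""
    failures = {}
    for _ in range(iterations):
        reason = attempt()
        if reason is not None:
            failures[reason] = failures.get(reason, 0) + 1
    total_failed = sum(failures.values())
    return (iterations - total_failed) / iterations, failures

# Simulated call attempts: every 10th call "drops".
calls = iter(range(100))
rate, reasons = run_repetition_test(
    lambda: "call_drop" if next(calls) % 10 == 9 else None, 100)
```

Classifying by reason, rather than only counting, is what lets the analysis separate terminal-side failures from network-side ones.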

3. Concurrency tests.

The concurrency test checks the terminal's processing ability when it runs several services at the same time, for example carrying a CS-domain voice call and a PS-domain download simultaneously, or browsing the web while playing MP3s, so as to test how well the protocol stack, operating system, and processor support concurrent services.
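The essence of such a test can be sketched with two simulated services launched in parallel; the service names and durations below are purely illustrative, and the wall-clock check confirms the two really overlapped rather than running one after the other:

```python
import threading
import time

results = {}

def service(name, duration_s):
    """Stand-in for one running service (voice call, download, ...)."""
    time.sleep(duration_s)  # pretend the service is busy for a while
    results[name] = "ok"

threads = [
    threading.Thread(target=service, args=("cs_voice", 0.2)),
    threading.Thread(target=service, args=("ps_download", 0.2)),
]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start  # ~0.2 s if truly concurrent, ~0.4 s if serial
```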

4. Load test.

The load test verifies the system's ability to work under load: with the system configuration unchanged, it observes the terminal's performance and behavior under high load for a period of time. For example, several FTP downloads can be run at once so that the downlink rate approaches its limit, while observing whether the terminal keeps working normally.
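A minimal sketch of the multiple-downloads idea, with a simulated transfer in place of a real FTP session (all names are illustrative): several sessions run at once so their individual transfers add up to the aggregate load:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_download(chunk_bytes, chunks):
    """Stand-in for one FTP session: move data chunk by chunk."""
    moved = 0
    for _ in range(chunks):
        time.sleep(0.001)  # pretend per-chunk network delay
        moved += chunk_bytes
    return moved

# Four sessions at once push the aggregate rate toward the link's limit.
with ThreadPoolExecutor(max_workers=4) as pool:
    totals = list(pool.map(lambda _: simulated_download(1024, 10), range(4)))
total_bytes = sum(totals)
```

In a real load test the interesting observations are made while this runs: does the UI stay responsive, does memory stay bounded, does any session stall?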

Second, the method of mobile phone performance test

Mobile phone performance-testing methods divide, by degree of automation, into manual testing and automated testing.

Manual testing relies on testers operating the phone by hand, with monitoring instruments and tools to verify its performance. But a phone has many functions and performance testing is a heavy workload: if a single test engineer executes every case by pressing buttons manually, it consumes a great deal of test time, quickly leaves the engineer fatigued or even bored, and easily causes cases to be missed. Much of phone testing is repetitive work, such as sending a number of short messages or multimedia messages to verify success rate and stability, placing calls repeatedly, adding to and deleting from the file system many times, or running conflict and limit tests under multiple tasks and processes. Executing all of this by hand is time-consuming and laborious; an automatic execution mechanism would greatly improve test efficiency.

An automated testing tool for mobile phones is therefore needed. MMI functional testing of a phone differs from MMI testing of PC software, which can rely on the PC platform itself. Powerful, general-purpose automated testing tools such as WinRunner, Robot, and LoadRunner are widely available, but they are usually incompatible with embedded systems like mobile phones, so testers must do secondary development on the current platform to meet the needs of automated testing.

Automatic performance testing of mobile phones is generally divided into the following steps:

1. System analysis

Translate the system's performance indicators into concrete performance-test targets. This step usually requires analyzing the structure of the system under test and drawing up a concrete performance-test implementation plan against those indicators, which in turn requires the tester to fully understand the structure and implementation of the system under test.

2. Create a virtual user script

Convert business processes into test scripts, often called virtual-user scripts or simply virtual users. A virtual user simulates a real user by driving the real client program. In this step, every business process to be tested should be confirmed and recorded from beginning to end; understanding the processes helps in analyzing the details and timing of each step so they can be converted into scripts accurately. The process resembles building a robot that imitates human behavior and movements. This step matters greatly: the more faithfully a single real-world user's behavior is translated into the scripting language, the better. If real-world behavior is distorted so that the script no longer reflects reality, the performance test loses its validity and its point.
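A virtual-user script is, at bottom, an ordered replay of recorded user steps. A minimal sketch with hypothetical step names modeled on placing a call; the callables passed in would drive the real client program:

```python
log = []  # record of what the virtual user did, in order

def step(name, action):
    """Execute one recorded user action and log it."""
    action()
    log.append(name)

def virtual_user_make_call(dial, wait_ring, hang_up):
    """Replay the recorded 'make a voice call' business process."""
    step("press_dial", dial)
    step("wait_for_ring", wait_ring)
    step("hang_up", hang_up)

# No-op actions here; real ones would send key events to the terminal.
virtual_user_make_call(lambda: None, lambda: None, lambda: None)
```

The log doubles as evidence that the replay matched the recorded flow, which is exactly the fidelity the paragraph above insists on.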

3. Create test scenarios according to user performance indicators.

Based on the real business scenario, the generated test scripts are replicated, parameterized, and combined into a test-case set that matches the performance-test indicators. In this step, rules and constraints are laid down for script execution, concretely the settings of parameters such as service type, concurrency, and execution order. The scenario acts like a command center from which the scripts are run. This step is critical and usually needs careful analysis against the user's performance indicators.
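One way to picture "replicating and controlling" scripts is a scenario builder that stamps out copies of a script with scheduling rules attached. A minimal sketch with hypothetical fields (`copies`, `stagger_s` for a staggered ramp-up):

```python
def build_scenario(script, copies, stagger_s):
    """Replicate one virtual-user script into a scheduled scenario.

    Each copy gets a start offset so concurrency ramps up under control
    instead of all users starting at once.
    """
    return [
        {"user": i, "start_at": i * stagger_s, "script": script}
        for i in range(copies)
    ]

scenario = build_scenario("make_voice_call", copies=5, stagger_s=2.0)
```

A scenario runner would then launch each entry at its `start_at` offset, which is where the concurrency and ordering constraints from this step actually take effect.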

4. Run the test scenario and monitor the application performance synchronously.

During a performance test, real-time monitoring lets testers see how the application is performing at any moment. Every component of the system should be monitored: the protocol stack, the MMI application, memory occupancy, driver status, and so on. Real-time monitoring can surface performance bottlenecks early in test execution.
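Synchronous monitoring can be sketched as a background sampler that polls a metric while the scenario runs. The metric reader here is a stand-in; a real one might read memory occupancy or a protocol-stack counter:

```python
import threading
import time

samples = []

def monitor(read_metric, interval_s, stop_event):
    """Poll read_metric() at a fixed interval until told to stop."""
    while not stop_event.is_set():
        samples.append(read_metric())
        time.sleep(interval_s)

stop = threading.Event()
counter = iter(range(1000))  # fake, monotonically increasing metric
t = threading.Thread(target=monitor, args=(lambda: next(counter), 0.01, stop))
t.start()
time.sleep(0.1)  # the test scenario would run here
stop.set()
t.join()
```

Because the sampler runs alongside the scenario, a bottleneck shows up in the sample timeline while the test is still executing, not only in post-run analysis.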

5. Analyze the results and evaluate performance.

Combine the test data to analyze the system's performance behavior and locate its bottlenecks precisely. In this step, mathematical treatment of the large volume of data, computing and summarizing statistics, makes the results more objective. Note that a performance-test plan that can be executed is not necessarily a successful one: success or failure hinges on whether it accurately simulates the real world.
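A common statistical treatment of latency data is to summarize it with percentiles, since tail values matter more to user experience than the mean. A minimal sketch using the nearest-rank definition, on made-up sample data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value below which ~p% of samples fall."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

# Illustrative call-setup delays in milliseconds.
latencies_ms = [120, 95, 300, 110, 105, 98, 101, 97, 250, 102]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
```

Here the median is unremarkable while the 95th percentile exposes the slow outliers, which is typically where the bottleneck hunt begins.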

Throughout the performance-test process, the choice of automated test tool only affects how simple or complicated the execution is; it is the tester's analysis and thinking that directly determine whether the performance test succeeds. This article has therefore focused on organizing the ideas behind performance testing; for an introduction to the tools themselves, refer to material on automated testing tools.