What kind of mobile phone card is the best?

Although the mobile operator is shameless, I still want to use it because it offers so many services. 3G refers to the third-generation mobile communication technology; what most of us use today is the "second-and-a-half" generation (2.5G). So what exactly is 3G?

3G is the abbreviation of "3rd Generation", that is, the third-generation mobile communication system (IMT-2000), an industry term in the field of high-speed mobile data communication. In the history of mobile communication, analog mobile phones are called the "first generation" and digital mobile phones the "second generation"; the technology developed after them is called the "third generation". At present, many first- and second-generation systems coexist around the world, which has become an obstacle to the worldwide adoption of a single communication terminal. Accordingly, the biggest challenges facing 3G are standardizing the system and supporting the universal use of a single terminal device. 3G is designed to support fully mobile multimedia systems with flexible support for multiple data rates, carrying not only voice but also video as needed. With a 3G network we can transmit applications that need high bandwidth: for example, full-motion video, video conferencing, high-quality voice, and Web data services, anytime and anywhere.

In Japan, there are currently two 3G systems: NTT DoCoMo and Vodafone use W-CDMA, while au uses cdma2000 1x.

3G-324M protocol

The technical standards of the third-generation mobile communication system IMT-2000 are formulated by the ITU-R and ITU-T organizations, which accept and evaluate proposals (draft standards) submitted by national and regional standards bodies. The main standards organizations involved in drafting include ARIB (Association of Radio Industries and Businesses, Japan), TTC (Telecommunication Technology Committee, Japan), ETSI (Europe), T1 (USA) and TTA (South Korea). The Third Generation Partnership Project (3GPP) is composed of the above standards organizations, and its goal is to produce globally applicable draft standards. 3G-324M (*1) is a framework standard formulated by 3GPP, based on ITU-T H.324/M and other international standards, to support real-time multimedia services over wireless circuit-switched networks. The sub-protocol standards it includes are: multiplexing and demultiplexing of voice, video, user data and control data (H.223), and in-band call control (H.245). It defines the functional components and end-to-end procedures that support audio-visual communication applications.

(*1) 3G-324M: H.323 is a protocol standard formulated by ITU-T for communication systems and terminal equipment on the Internet and local area networks, while SIP is the well-known multimedia communication protocol standard formulated by the IETF. A communication network needs the support of such protocol standards, and 3G-324M systems interconnect with SIP and H.323 systems through gateways. H.324/M is the protocol standard aimed specifically at mobile communication, and 3G-324M is a further development of H.324/M used to support IMT-2000.

The 3G-324M standard is technically very similar to H.324/M, but it specifies H.263 as the mandatory baseline standard and MPEG-4 as the recommended standard for video coding, with AMR as the mandatory standard for audio coding. H.223 establishes how multiple audio and video channels are multiplexed onto a single mobile communication channel, and H.245 establishes the control-message exchange at each stage of a call. 3G-324M also specifies effective transmission methods for error-prone networks: Level 2 (defined in Appendix B of H.223) is the mandatory multiplexing protocol level, providing enhanced error resilience.

The protocol configuration of the 3G-324M standard is described below.

3G-324M Media Coding Set

3G-324M defines the coding standards, mandatory and optional, for each media type: video, audio and data.

(1) Video coding

3G-324M specifies H.263 as the mandatory baseline protocol (excluding the extensions in its appendices), while MPEG-4 is the recommended video coding standard. As an older coding standard, H.263 is still used in existing H.323 systems, so keeping it preserves compatibility. MPEG-4 is more flexible than baseline H.263 and provides more advanced error detection and correction methods.

Both coding sets generally adopt the QCIF (Quarter Common Intermediate Format) input image format. MPEG-4 adds a set of tools to improve error resilience: data partitioning, reversible variable-length codes (RVLC), resynchronization markers and HEC (Header Extension Code).

Data partitioning separates the discrete cosine transform (DCT) coefficients from the motion vector parameters with markers, so that an error in one set of data does not corrupt the decoding of the other. For example, if a DCT coefficient error is detected in a given macroblock, the decoder can still conceal the coefficient error and reconstruct the macroblock from the correct motion vector information. Compared with replacing the damaged macroblock by the co-located macroblock of the previous frame, this yields noticeably better decoded video quality.
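As a rough illustration, here is a minimal Python sketch of this concealment step; the array-based frame handling and the function name are hypothetical, not taken from any particular H.263 or MPEG-4 decoder:

```python
# Illustrative motion-vector-based error concealment: the macroblock's DCT
# residual partition failed its check, but the motion vectors survived, so
# we rebuild the block by motion compensation alone instead of repeating
# the co-located block from the previous frame.

import numpy as np

MB = 16  # macroblock size in pixels

def conceal_macroblock(prev_frame: np.ndarray, mb_row: int, mb_col: int,
                       motion_vector: tuple[int, int]) -> np.ndarray:
    dy, dx = motion_vector
    y = mb_row * MB + dy
    x = mb_col * MB + dx
    # Clamp so the reference block stays inside the previous frame.
    y = max(0, min(y, prev_frame.shape[0] - MB))
    x = max(0, min(x, prev_frame.shape[1] - MB))
    # The (lost) residual is simply dropped; the prediction alone is shown.
    return prev_frame[y:y + MB, x:x + MB].copy()
```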

The RVLC method allows a given data block to be decoded from the front (forward) or from the end (backward). This improves the probability of recovering the data surrounding an error.
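The idea can be seen with a toy reversible code; the palindromic codebook below is purely illustrative (the real MPEG-4 RVLC table differs):

```python
# Toy reversible variable-length code: every codeword is a palindrome, so
# one table decodes in both directions.

RVLC = {"0": "A", "11": "B", "101": "C", "1001": "D", "10001": "E"}
MAX_LEN = max(len(c) for c in RVLC)

def decode_forward(bits: str) -> list[str]:
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in RVLC:
            out.append(RVLC[buf])
            buf = ""
        elif len(buf) > MAX_LEN:   # nothing can match any more: stop here
            break
    return out

def decode_backward(bits: str) -> list[str]:
    # Palindromic codewords: the reversed stream uses the same table.
    return list(reversed(decode_forward(bits[::-1])))

stream = "0" + "10001" + "11" + "101"     # encodes A E B C
corrupt = stream[:5] + ("0" if stream[5] == "1" else "1") + stream[6:]
print(decode_forward(corrupt))    # ['A']: the head, up to the error
print(decode_backward(corrupt))   # [..., 'B', 'C']: the tail after it
# Spurious symbols near the error are discarded using the expected count.
```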

Resynchronization markers are codes inserted into the bit stream that help the decoder resynchronize the decoding process after an error.

HEC supports more efficient resynchronization of the decoding process; the extended resynchronization marker it accompanies also carries timing information.

(2) Speech coding

The ITU-T standard imposes no mandatory voice codec; only the IMT-2000 voice service makes AMR (Adaptive Multi-Rate) coding mandatory for 3G-324M equipment. G.723.1 is an older optional codec recommended by 3GPP, which provides compatibility with H.323 and other standards.

The highest rate of AMR speech coding is 12.2 kbps, and the actual rate varies from 4.75 kbps to 12.2 kbps depending on distance from the base station, signal interference and traffic conditions. AMR also supports comfort noise generation (CNG) and discontinuous transmission (DTX). The processing rate and error control can be adjusted dynamically to provide the best voice quality in the current channel environment.
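A sketch of what such rate adaptation might look like, using the eight standard AMR rates but purely illustrative channel thresholds and DTX handling:

```python
# Hypothetical AMR mode adaptation: a clean channel gets the full 12.2 kbps
# speech mode; a noisy one trades speech bits for channel protection by
# dropping to a lower rate. The thresholds are illustrative only.

AMR_MODES_KBPS = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]

def select_amr_mode(carrier_to_interference_db: float) -> float:
    thresholds = [1, 4, 7, 9, 11, 13, 16]   # dB, illustrative only
    idx = sum(carrier_to_interference_db > t for t in thresholds)
    return AMR_MODES_KBPS[idx]

def frame_to_send(speech_active: bool, ci_db: float):
    if not speech_active:
        return ("SID", None)                 # DTX: comfort-noise descriptor
    return ("SPEECH", select_amr_mode(ci_db))

print(frame_to_send(True, 15.0))   # ('SPEECH', 10.2)
print(frame_to_send(False, 15.0))  # ('SID', None)
```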

AMR coding also supports unequal error detection and protection (UED/UEP). This approach classifies the bits of a frame according to how strongly they affect decoded quality. If an error is detected in the most important class of bits, the frame is discarded and concealed; errors confined to the less important bits still allow the frame to be decoded, with only a gradual loss of quality.
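A minimal sketch of the UEP idea, assuming a hypothetical class split and checksum (the normative per-mode bit classes live in the 3GPP specifications, not in this code):

```python
# Illustrative UED/UEP check: only class A (the perceptually vital bits)
# carries a checksum; class B/C errors merely degrade quality.

import zlib

def receive_frame(class_a: bytes, class_b: bytes, class_a_crc: int) -> str:
    if (zlib.crc32(class_a) & 0xFF) != class_a_crc:
        return "BAD_FRAME: discard and conceal from the previous frame"
    return "DECODE: class B/C errors only degrade quality"

frame_a, frame_b = b"\x12\x34", b"\x56\x78\x9a"
print(receive_frame(frame_a, frame_b, zlib.crc32(frame_a) & 0xFF))
```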

(3) Data communication protocol

T.120 is the data communication protocol recommended for data conferencing applications, but no data protocol is currently mandatory, so it remains optional.

H.245 call control

H.245 is a general call control standard defined for H.324, H.310, H.323 and V.75. Unlike other ITU-T recommendations, which are revised on a fixed cycle, H.245 is revised whenever required, mainly because it is used by so many systems that its functions must be enhanced quickly to keep pace with their rapid development.

H.245 messages are carried over the Simple Retransmission Protocol (SRP) or SRP with the numbering option (NSRP). H.245 also establishes the control channel segmentation and reassembly layer (CCSRL), which ensures reliability in error-prone environments; whether SRP, NSRP and CCSRL are used is determined by negotiation. H.245 defines its message structures in ASN.1 (Abstract Syntax Notation One), and message data is binary-encoded according to the PER (Packed Encoding Rules).
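As a rough illustration of the numbered-retransmission idea, here is a stop-and-wait sketch; the frame layout, retry count and class names are assumptions, not the normative NSRP procedure:

```python
# NSRP-flavoured sketch: H.245 control messages ride on numbered frames,
# and a lost frame is resent until it is acknowledged.

import itertools

class NsrpSender:
    def __init__(self, send_fn, max_retries: int = 4):
        self.send_fn = send_fn                  # callable(bytes) -> ack or None
        self.seq = itertools.cycle(range(256))  # 8-bit sequence numbers
        self.max_retries = max_retries

    def send_message(self, payload: bytes) -> None:
        seq = next(self.seq)
        frame = bytes([seq]) + payload          # sequence number + H.245 PDU
        for _ in range(self.max_retries):
            ack = self.send_fn(frame)           # blocks until ack or timeout
            if ack == seq:
                return                          # receiver confirmed this frame
        raise TimeoutError("NSRP: no acknowledgement, control channel failed")
```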

Before two parties start an H.245 session, one problem must be solved: if there is a protocol conflict between the endpoint devices, which endpoint is responsible for resolving it, i.e. which one plays the leading role. Different endpoints may differ in H.223 multiplexing/demultiplexing, video and audio coding, data sharing and so on. H.245 therefore provides capability exchange, which lets the two devices negotiate a common capability set.
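A toy sketch of such negotiation, with illustrative capability names and a stand-in for the real master-slave determination procedure:

```python
# Capability negotiation sketch: each terminal advertises what it can do,
# and the session is built on the intersection of the two sets.

local_caps  = {"video:H.263", "video:MPEG-4", "audio:AMR", "mux:level2"}
remote_caps = {"video:H.263", "audio:AMR", "audio:G.723.1", "mux:level2"}

common = local_caps & remote_caps
print(common)   # contains 'video:H.263', 'audio:AMR', 'mux:level2'

# H.245 additionally runs master-slave determination so that, when both
# endpoints make conflicting channel requests, one side's choice wins.
# A toy tie-break on random numbers stands in for the real procedure here.
import random
local_n, remote_n = random.getrandbits(24), random.getrandbits(24)
master = "local" if local_n > remote_n else "remote"
```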

Media and data streams are carried over logical channels, which need corresponding control support. H.245 uses logical channel signaling to open and close channels and to exchange their parameters. Under H.245, the sender chooses the coding set and parameters for the session from the capability set advertised by the receiver; if the receiver has specific requirements, it can signal them to the transmitter through a mode request.

Finally, H.245 provides a set of call control commands and indications for data flow control, user input indication, video coding control, and signaling of jitter and distortion.

The user input indication (UII) of H.245 plays an important role in all application services that require user interaction. For video messaging, typical UII uses include selecting user preferences, recording and retrieving messages, and general mailbox management. H.245 provides a reliable signaling protocol that guarantees delivery of such messages (for example DTMF tones). H.245 UII supports two levels of user input: plain character indications, and indications that also carry signal duration, for example how long the user held down a particular key.
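A sketch of the two indication levels, with hypothetical field names rather than the ASN.1 definitions from H.245:

```python
# Two flavours of user input indication: a plain character indication, and
# one that also carries how long the key was held.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserInputIndication:
    key: str                           # e.g. a DTMF digit '0'-'9', '*', '#'
    duration_ms: Optional[int] = None  # present only in the second level

press_short = UserInputIndication(key="5")                    # character only
press_long  = UserInputIndication(key="#", duration_ms=1200)  # held 1.2 s
```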

H.223 multiplexing and demultiplexing

To provide different levels of error resilience, 3G-324M defines multiple levels of H.223 transmission. In the H.223 multimedia multiplexing protocol, the adaptation layer provides QoS for the logical channels, while the multiplex/demultiplex layer merges multiple logical channels into a single channel. It supports both time-division and packet multiplexing, providing the flexibility, efficiency and low delay that applications require.

Multimedia communication over circuit-switched networks needs multiplexing to support mixed, synchronized transmission of video, voice and data. The multiplexer assigns each media type a logical channel, combining the bit streams from the different media sources into one bit stream transmitted on a single channel.

Different media types place different QoS requirements on their logical channels. For data transmission, the delay requirement is generally not strict, but delivery must be completely error-free. Speech, by contrast, has strict delay limits, yet acceptable quality can still be achieved at a bit error rate of about 10^-3. Video combines the demands of both, needing the timeliness of audio together with a large volume of data. The multiplexer therefore needs to give each logical channel QoS control appropriate to its media coding requirements.

(1) Multiplexing and demultiplexing layer

Level 0 (H.223 basic protocol)

As the basic level of H.223, Level 0 provides synchronization and bit stuffing. It offers 16 different multiplexing patterns and supports mixed transmission of media, control information and data packets; the pattern in use is negotiated between the communicating endpoints. Level 0 has very limited error tolerance: bit errors may break HDLC (High-level Data Link Control) flag detection and corrupt the bit stuffing, so that payload data is mistaken for framing flags.
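Bit stuffing itself is easy to illustrate; the following sketch shows the classic five-ones rule that Level 0 relies on (framing and flag handling are omitted):

```python
# HDLC-style bit stuffing: after five consecutive 1-bits in the payload a
# 0-bit is inserted, so the 01111110 flag can never appear inside data.
# A single bit error can break exactly this guarantee.

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0 that must follow
            run = 0
        i += 1
    return "".join(out)

payload = "0111111011111100"
assert bit_unstuff(bit_stuff(payload)) == payload
```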

Level 1 (H.223 Appendix A)

Appendix A of H.223 defines Level 1, whose synchronization mechanism markedly improves transmission over error-prone channels. To improve MUX-PDU synchronization, the 8-bit HDLC flag used to frame each MUX-PDU is replaced by a 16-bit PN (pseudo-noise) sequence: a more robust framing pattern with a longer identifier. A PN sequence is a set of 0/1 bits that is statistically random, yet generated by a known structure, so the receiver can predict what the next symbol of the sequence should be.

Multiplex frames no longer use bit stuffing; instead they are byte-aligned (the start of a frame falls on a byte boundary; 1 byte = 8 bits), and the receiver searches for the synchronization flag byte by byte.

As a result, even on a low-rate, transparent channel, emulation of the synchronization flag by payload data can no longer be ruled out entirely. However, detection of the synchronization pattern in the presence of bit errors improves significantly.
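A sketch of error-tolerant sync detection; the 16-bit pattern below is illustrative, not the actual Appendix A sequence:

```python
# Why a 16-bit sync word beats an 8-bit flag on a noisy channel: the
# receiver correlates against the known pattern and accepts a match within
# a small Hamming distance instead of requiring an exact hit.

SYNC = 0b1011011100010110          # 16-bit illustrative PN-like pattern

def find_sync(bits: int, length: int, max_errors: int = 2) -> int:
    """Return the first bit offset where SYNC matches within max_errors."""
    for off in range(length - 16 + 1):
        window = (bits >> (length - 16 - off)) & 0xFFFF
        if bin(window ^ SYNC).count("1") <= max_errors:
            return off
    return -1

stream = (0b101 << 16) | SYNC      # 3 junk bits, then the sync word
print(find_sync(stream, 19))       # -> 3
```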

Level 2 (H.223 Appendix B)

H.223 Appendix B defines Level 2, a further enhancement of Level 1 that provides more robust multiplex protocol data unit (MUX-PDU) frames.

Level 3 (H.223 Appendix C)

Appendix C of H.223 defines Level 3, which provides the most robust transmission scheme. It includes improved multiplexing and adaptation layers, and provides forward error correction (FEC) and a retransmission mechanism (ARQ).

(2) Adaptation layer

According to the media type carried by the upper layer (data, voice or video), the protocol defines three adaptation layers (AL1, AL2 and AL3). An AL-SDU data unit from the upper layer becomes an AL-PDU data unit when handed to the MUX layer. AL1 is designed for data transmission, mainly carrying user data and H.245 control messages; it relies on upper-layer protocols for error control and handling. AL2 provides an 8-bit CRC and optional sequence numbering for detecting packet loss; it supports variable-length AL-SDUs and is the natural adaptation layer for audio. AL3 is designed mainly for video, providing a 16-bit CRC and optional sequence numbering; it supports variable-length AL-SDUs and provides an optional retransmission mechanism.
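A sketch of AL2-style framing; the CRC-8 polynomial shown (x^8 + x^2 + x + 1) is a common choice used here for illustration, and the exact field layout is defined in H.223, not by this code:

```python
# AL2-style framing sketch: an optional sequence number plus an 8-bit CRC
# are appended to each AL-SDU so the receiver can detect loss and damage.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def al2_pdu(sdu: bytes, seq: int, use_seq: bool = True) -> bytes:
    body = (bytes([seq & 0xFF]) if use_seq else b"") + sdu
    return body + bytes([crc8(body)])

def al2_check(pdu: bytes) -> bool:
    return crc8(pdu[:-1]) == pdu[-1]   # receiver drops PDUs that fail this

assert al2_check(al2_pdu(b"audio-frame", seq=7))
```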

Brief introduction to media conversion

The technologies behind multimedia mobile communication (such as 3G) can give users multimedia access anytime and anywhere, from any networked multimedia terminal. The open problem is how to deliver multimedia content and services to the full range of terminal devices in a format each can accept, given that these devices differ widely in computing power, display, network access and bandwidth.

Media conversion dynamically adapts content, including image size, coding format and the organization of the multimedia material, keeping the converted content as faithful as possible to the source. Conversion decisions are based on the functions the terminal supports and on user preferences. The components involved in media conversion include the following (a decision sketch follows the list):

A multimedia message model (including hierarchical descriptors for the different forms of content) that supports the display and transmission of multimedia content.

A conversion strategy decision component that analyzes the characteristics of the content and computes and selects an appropriate conversion strategy.

Media processing technology that supports manipulation, translation, transcoding and reorganization of multimedia content.
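As referenced above, here is a toy strategy-decision sketch; every capability field, threshold and strategy name in it is hypothetical:

```python
# Illustrative strategy decision: pick a conversion plan from the terminal's
# capabilities and the source content's properties.

def choose_strategy(device: dict, content: dict) -> list[str]:
    plan = []
    if content["codec"] not in device["codecs"]:
        plan.append(f"transcode:{content['codec']}->{device['codecs'][0]}")
    if content["width"] > device["max_width"]:
        plan.append("downscale:QCIF")
    if content["bitrate_kbps"] > device["bandwidth_kbps"]:
        if device["bandwidth_kbps"] < 20:    # too slow even for QCIF video
            plan.append("cross-media:key-frames+audio")
        else:
            plan.append("reduce-frame-rate")
    return plan

phone = {"codecs": ["MPEG-4", "H.263"], "max_width": 176, "bandwidth_kbps": 40}
clip  = {"codec": "H.263", "width": 720, "bitrate_kbps": 384}
print(choose_strategy(phone, clip))  # ['downscale:QCIF', 'reduce-frame-rate']
```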

MPEG-7 and the newer MPEG-21 standards define multimedia information models that support media conversion. Telecom operators generally formulate workable conversion strategies according to the media processing and transmission resources available.

Media conversion processing technology

Media conversion support technologies can be divided into at least two categories: intra-media conversion and cross-media conversion.

Intra-media conversion adapts content within the coding scheme of a given medium. For video, for example, conversion can work on the compression characteristics, such as frame rate, image format, and intra-frame and inter-frame quality, to reach a target data size or format. In the same way, content can be adapted for terminal devices with limited bandwidth, and transcoding can be provided according to the codecs each terminal supports.
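One simple intra-media conversion is temporal downsampling; the sketch below drops frames to reach a lower frame rate (the frame type and rates are illustrative):

```python
# Minimal intra-media conversion: reduce the frame rate by dropping frames,
# trading motion smoothness for bitrate without changing the codec.

def reduce_frame_rate(frames: list, src_fps: int, dst_fps: int) -> list:
    """Keep roughly every (src_fps/dst_fps)-th frame."""
    if dst_fps >= src_fps:
        return frames
    step = src_fps / dst_fps
    kept, next_keep = [], 0.0
    for i, frame in enumerate(frames):
        if i >= next_keep:
            kept.append(frame)
            next_keep += step
    return kept

clip_30fps = list(range(30))                 # one second of dummy frames
clip_10fps = reduce_frame_rate(clip_30fps, 30, 10)
assert len(clip_10fps) == 10
```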

Services based on 3G-324M need transcoding between H.263 and MPEG-4, its two standard video formats. This mode of conversion has an inherent limitation: its lower bound is the minimum perceptible reception quality of the medium concerned. Cross-media conversion can overcome this limitation.

Cross-media conversion replaces a given media type with a "semantically equivalent" one, minimizing the impact on what the user receives.

For example, a video clip in TV format (720 x 480 pixels) can be converted into a series of still key images, extracted only when the scene changes or moves noticeably, reduced in size to QCIF format (176 x 144), and synchronized with low-rate audio encapsulated in NMS messages. In this way, video clips can be delivered to 2.5G mobile phones.
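A sketch of that cross-media pipeline, with a crude frame-difference scene detector and nearest-neighbour downscaling; the threshold and array handling are illustrative:

```python
# Cross-media conversion sketch: keep a frame only when the scene changes
# noticeably, then downscale it toward QCIF (176 x 144).

import numpy as np

def key_frames(frames: list, threshold: float = 12.0):
    """Yield frames whose mean absolute difference from the last kept
    frame exceeds the threshold (a crude scene-change detector)."""
    last = None
    for f in frames:
        if last is None or np.abs(f.astype(int) - last.astype(int)).mean() > threshold:
            last = f
            yield f

def downscale(frame: np.ndarray, out_h: int = 144, out_w: int = 176) -> np.ndarray:
    """Nearest-neighbour resize, e.g. from 480x720 toward QCIF."""
    ys = np.arange(out_h) * frame.shape[0] // out_h
    xs = np.arange(out_w) * frame.shape[1] // out_w
    return frame[ys][:, xs]

clip = [np.zeros((480, 720), np.uint8), np.full((480, 720), 200, np.uint8)]
stills = [downscale(f) for f in key_frames(clip)]   # two 144x176 key images
```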

Because display capability and network bandwidth are clearly limited in the mobile environment, intra-media and cross-media conversion will both play an important role in delivering content for mobile video services.

Applications based on 3G mobile phones

On October 1, 2001, NTT DoCoMo launched "FOMA", the world's first third-generation mobile service. Compared with its second-generation service "mova", FOMA provides high-speed data communication and a videophone function. The videophone supports face-to-face communication, a major breakthrough over traditional voice calls.

When the service was first launched in 2001, operators had high expectations for subscriber growth, since it was the world's first third-generation mobile service. However, because handset size, battery life and network coverage did not fully meet users' needs, FOMA subscriptions grew less quickly than expected. As FOMA handset performance, size, weight and battery life improved, take-up accelerated in 2004. On July 20, 2004, NTT DoCoMo announced that FOMA subscribers had passed 5 million, with 1.1 million added in the preceding two months alone. According to TCA (Telecommunications Carriers Association) data, subscribers had reached 7.57 million by October 30. NTT DoCoMo has announced a target of 25 million FOMA subscribers by March 2007, which would mean half of DoCoMo's mobile users subscribing to FOMA.

According to NTT DoCoMo's announced strategy, mature application services (such as video calling and video/text messaging) will be developed and launched as terminal functions improve, and the product line will keep expanding to further grow the "FOMA" services that support high-speed, high-capacity data transmission.

The video capabilities of mobile phones will keep developing, supporting a wide range of multimedia applications. With 3G, video messaging (video mail) and video streaming can deliver content to mobile users; a 3G video gateway connects 3G users with PC users reached over broadband and other access lines; and video conferencing has become a basic mobile video service. Even systems that do not adopt 3G will need to develop the application services above.

Developing application systems using the videophone function

Application services using the videophone function include:

Real-time/non-real-time applications

Unidirectional/bidirectional applications

Point-to-point/multipoint applications

Personal/business applications

Different application systems need corresponding operating platforms (development/implementation environments). NMS provides a "video access" development platform that supports building all of the application types above.

Video message (video mail) application

When traveling away from the office or home, you can shoot video of the trip and send the images to friends, family and colleagues. With the video messaging service, you can send a message from a FOMA phone to a remote FOMA phone, or to an H.323 device connected to a network (such as an IP network). Combining the gateway with video recording/playback and video conversion functions, various messaging systems can be built. The call terminates on a video server, so the server needs transcoding capability to store and play back the video.

Visual communication/video conference application

Point-to-point and multipoint video conferences are both available today. A video conference can be created between FOMA and IP clients, so FOMA users can join online meetings that previously only personal computers could attend. Even away from home and office, you can hold video conversations with friends and colleagues over the Internet.

Fire brigades and police stations can set up emergency conferences and share video of fires, accidents and crime scenes with headquarters.

With recording and playback added, video conferences can be consulted later; likewise, business applications can use these functions to build their own video call centers. A video server configured with video mixing and transcoding can support multi-party conferencing applications.

Video streaming application

Video streaming transmits clips stored on a streaming server to FOMA phones through the gateway. Newsletters, travel information, product introductions, advertisements and even movies can be prepared as video and delivered to users, conveying far more visual, intuitive information than traditional voice alone.

Video streaming can also carry live broadcasts of concerts and sports, relayed to FOMA phones through a remote server. With a FOMA phone you can also watch real-time video from remote monitoring cameras anytime and anywhere; application fields include disaster areas, tourist attractions, transportation, hospitals, schools and major events. The same technology can help us watch over children and pets at home. These are basic one-way real-time applications.

Using the 3G-324M protocol, compressed video (MPEG-4), compressed voice (AMR) and signaling can be carried at a rate of 64 kbps. Compared with video transmission over packet networks, this supports more efficient delivery of video and voice. The service requires a video gateway equipped with a transcoder to process the streams.
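A back-of-envelope budget for such a call; only the 64 kbps bearer rate and the top AMR rate come from the text, and the mux/control overhead figure is an assumption:

```python
# Illustrative bit budget for a 64 kbps 3G-324M call: what remains after
# the AMR voice stream and an assumed share for H.223 framing and H.245
# control is what the video codec can use.

BEARER_KBPS = 64.0
amr_kbps = 12.2                   # highest AMR speech mode
overhead_kbps = 4.0               # assumed mux/control overhead
video_kbps = BEARER_KBPS - amr_kbps - overhead_kbps
print(f"video budget = {video_kbps:.1f} kbps")   # about 47.8 kbps
```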

For more information, please visit NMS www.nmscommunications.com.cn.