Gigalight 100G Optical Modules Passed the Connectivity Test of Multiple Cloud Service Providers

Shenzhen, China, May 19, 2018 – Gigalight announced that its 100G series optical transceiver modules have passed connectivity tests with multiple cloud service providers. The Gigalight 100G series products include 100G QSFP28 SR4 multimode VCSEL optical modules and 100G QSFP28 CWDM4 single-mode WDM optical modules. The interconnection tests covered mainstream cloud equipment from major vendors as well as optical transceiver modules from Gigalight's partners.

Qualified 100G Series Optical Transceiver Modules

Gigalight has long been among the top 10 optical interconnect companies in the world, known for its active optical cables and deep innovation. At its core, however, Gigalight is an integrated solution provider of optical transceiver modules and optical network devices. Gigalight ships large volumes of 10G multimode, 10G single-mode, and 40G multimode SR4 optical modules worldwide. In the field of 40G single-mode optical modules, Gigalight's main customers include global Tier 1 equipment vendors. Cloud service providers have been directly verifying Gigalight's 100G optical modules since the end of 2017, and the successful interconnection results so far have greatly strengthened Gigalight's confidence in deploying 100G optical modules at scale in the cloud.

Global Data Center Infrastructure Ecosystem


Gigalight has a deep optical interconnect product line. Within it, the multimode products based on VCSEL technology are a traditional strength, including the cost-effective, reliable, and widely compatible 100G QSFP28 SR4 optical modules. The single-mode 100G short-reach optical modules, developed in 2016, have now passed full-brand compatibility and interoperability testing, following earlier optical-design and reliability verification. They will keep pace as the industry strides forward in 2018.

As a global optical interconnect design innovator, Gigalight has prepared the best 100G optical modules for industry users.

About Gigalight:

Gigalight is a global optical interconnection design innovator. We design, manufacture, and supply various kinds of optical interconnect products, including optical transceivers, passive optical components, active optical cables, GIGAC™ MTP/MPO cablings, and cloud programmers & checkers. These products are designed for three main applications: Data Center & Cloud Computing, Metro & Broadcast Network, and Wireless & 5G Optical Transport Network. Gigalight leverages its exclusive designs to provide customers with one-stop optical network devices and cost-effective products.


What is Data Center Interconnect/Interconnection?

Data Center Interconnection refers to the deployment of Data Center Interconnect (DCI) technology. As DCI technology has advanced, better and cheaper options have become available, and this has created a lot of confusion. The confusion is compounded by the fact that many companies are trying to enter this market because there is a lot of money to be made. This article is written to straighten out some of that confusion.

According to the different applications, there are two parts of data center interconnections. The first is intra-Data Center Interconnect (intra-DCI) which means connections within the data center. It can be within one building or between data center buildings on a campus. Connections can be a few meters up to 10km. The second is inter-Data Center Interconnect (inter-DCI) which means connections between data centers from 10km up to 80km. Of course, connections can be much longer but most of the market activity for inter-DCI is focused on 10km to 80km. Longer connections are considered Metro or Long-haul. For reference, please see the table below.

DCI         Distance         Fiber Type   Optics Technology   Optical Transceivers
intra-DCI   300m             MMF          NRZ/PAM4            QSFP28 SR4
            500m             SMF                              QSFP28 PSM4
            2km              SMF                              QSFP28 CWDM4
            10km             SMF                              QSFP28 LR4
inter-DCI   10km             SMF          Coherent            QSFP28 4WDM-10
            20km             SMF                              QSFP28 4WDM-20
            30km to 40km     SMF                              QSFP28 4WDM-40
            80km to 2000km   SMF                              CFP2-ACO


The big bottlenecks are in the intra-DCI space, and therefore the highest volume of optical transceivers is sold here, generating the most revenue; however, it is low-margin revenue because there is so much competition. In this space, many of the connections are shorter than 300m, and Multi-Mode Fiber (MMF) is frequently used. MMF has a thicker core, and components are cheaper because the tolerances are looser, but the light disperses as it bounces around inside the thick core. Therefore, 300m is the limit for many types of high-speed transmission over MMF. There are data center transceivers with a transmission distance of up to 100m over OM4 MMF, for example.

Gigalight 100GBASE-SR4 100m QSFP28 Optical Transceiver

100G QSFP28 SR4 for MMF up to 100m

In a data center, everything is connected to servers by routers and switches. Sometimes a data center is one large building bigger than a football field; other times data centers are built on a campus of many buildings spanning many blocks. In the case of a campus, the fiber is brought to one hub and the connections are made there. Even if the building you want to connect to is only 200m away, the fiber runs to a hub that can be more than 1km away, so this type of routing increases the fiber distance. Some of the distances between buildings can be 4km, requiring Single-Mode Fiber (SMF), which has a much narrower core, making it more efficient but also increasing the cost of all related components because the tolerances are tighter. Therefore, as data centers grow and connections within them get longer, the need for SMF grows too. With SMF you have the option to drive high bandwidth with coherent technology, and we'll see more of this in the future. Previously coherent was used only for longer distances, but with cost reductions and greater efficiency versus other solutions, coherent is now being used for shorter reaches in the data center.

Gigalight 100GBASE-LR4 Lite 4km QSFP28 Optical Transceiver

100G QSFP28 LR4L for SMF up to 4km

500m is a new emerging market, and because the distance is shorter, a new technology is emerging: silicon photonics modulators. EMLs (Externally Modulated Lasers) perform modulation within the laser package, but with silicon photonics the modulator sits outside the laser, and it is a good solution for 500m distances. In an EML, the modulator is integrated into the same chip but outside the laser cavity, hence "external". In silicon photonics, the laser and modulator are on different chips and usually in different packages. Silicon photonics modulators are based on the CMOS manufacturing process, which offers high volume and low cost. A continuous-wave laser with silicon photonic modulation is very well suited to 500m applications, while EMLs are more suitable for longer reaches, such as 2-10km.

100GE PSM4 2km QSFP28 Optical Transceiver

100G QSFP28 PSM4 for SMF up to 500m/2km

100GE CWDM4 2km QSFP28 Optical Transceiver

100G QSFP28 CWDM4 for SMF up to 2km

100GBASE-LR4 10km QSFP28 Optical Transceiver

100G QSFP28 LR4 for SMF up to 10km


Inter-DCI is typically between 10km and 80km, including 20km and 40km links. Before we talk about data center connectivity, let's discuss why data centers are set up the way they are and why 80km is such an important connection distance. While it is true that a data center in New York might back up to tape in a data center in Oregon, that is considered regular long-haul traffic. Some data centers are geographically situated to serve an entire continent, while others are focused on a specific metro area. Currently, the throughput bottleneck is in the metro, and this is where data centers and connectivity are most needed.

100GE 4WDM-20 20km QSFP28 Optical Transceiver

100G QSFP28 4WDM-20 for SMF up to 20km

100GE 4WDM-40 40km QSFP28 Optical Transceiver

100G QSFP28 4WDM-40 for SMF up to 40km

Say you have a Fortune 100 retailer running thousands of transactions per second. The farther away a data center is, the more secure the data is, because the data center is separated from natural disasters; but with increased distance, more "in flight" transactions are at risk of being lost due to latency. Therefore, for online transactions there might be a primary data center that is central to the retail locations and a secondary data center around 80km away. It's far enough away not to be affected by local power outages, tornadoes, and so on, but close enough that the propagation latency is only a fraction of a millisecond; therefore, in the worst case only a small number of transactions would be at risk.
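For a sense of scale, the propagation delay over 80km of fiber can be estimated in a few lines of Python. This is a sketch assuming the typical group index of standard single-mode fiber (~1.468), which gives roughly 5 µs of delay per kilometer:

```python
# Rough latency estimate for an 80 km DCI link.
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
GROUP_INDEX = 1.468           # typical SMF group index; an assumption here

def one_way_latency_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    speed_km_s = C_VACUUM_KM_S / GROUP_INDEX
    return distance_km / speed_km_s * 1000

print(f"{one_way_latency_ms(80):.3f} ms one-way")        # ~0.392 ms
print(f"{one_way_latency_ms(80) * 2:.3f} ms round trip") # ~0.783 ms
```

So even the full round trip at 80km stays well under a millisecond, which is what makes 80km a workable distance for a synchronously updated secondary site.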

In another example of inter-DCI, if a certain video is getting a lot of views, the video is not only kept in its central location; copies are pushed to metro data centers, where access is quicker because the content is stored closer to the user and the traffic doesn't tie up long-haul networks. Metro data centers can grow to a certain size until their sheer size becomes a liability with no additional scale advantage, at which point they are broken up into clusters. Once again, to guard against natural disasters and power outages, data centers should be far apart; counterbalancing this, they need low-latency communication between them, so they shouldn't be too far apart. The compromise, the magic distance, is 80km for a secondary data center, so you'll hear about 80km data center interconnect a lot.

It used to be that on-off keying could provide sufficient bandwidth between data centers, but now, with 4K video and metro bottlenecks, coherent transmission is being used over shorter and shorter distances. Coherent is likely to take over the 10km DCI market; it has already taken over the 80km market, but it may take time before coherent reaches 2km. The typical data center bottlenecks are at 500m, 2km, and 80km. As coherent moves to shorter distances, this is where the confusion arises.

The optical transceiver modules that were only used within the data center are gaining reach, and they’re running up against coherent solutions that were formerly only used for long distances. Due to the increasing bandwidth and decreasing cost, coherent is being pulled closer into the data center.

The other thing to think about is installing fiber between data centers. Ideally this is already done, because digging is the fixed cost: once you dig, you lay as many fibers as you can, and digging just to install fiber is extremely expensive. France pairs fiber deployment with another economic driver: whenever train tracks go in, fiber goes in at the same time, even if it is not yet needed, because the ground is already open. Fibers are leased to data centers one at a time; therefore, data centers try to get as much bandwidth as possible onto each fiber (this is also a major theme in the industry). You might ask, why not own your own fiber? You need a lot of content to justify owning your own fiber; the cost is prohibitive, and to make the fiber network function, all the nodes need to use the same specification, which is hard. Therefore, carriers are usually the ones to install the full infrastructure.

Article Source: John Houghton, a Silicon Valley entrepreneur, technology innovator, and head of MobileCast Media.


Gigalight Launches 40km 100G Single Receiver Optical Modules for DPI

Shenzhen, China, May 8, 2018 − Recently, Gigalight has launched two 40km single receiver optical modules, the 100G QSFP28 ER4 Lite Receiver and the 100G CFP2 ER4 Receiver, as a new solution for Deep Packet Inspection (DPI) applications.

With the rapid development of IP networks (cloud computing, etc.) and rapid bandwidth growth, IP-based applications such as monitoring, auditing, and traffic analysis have become increasingly complex. The optical interfaces of front-end traffic collection and distribution equipment have been upgraded from 10GE to 100GE ports. When the traffic collection equipment cannot be placed in the same room as the monitored equipment, and long-distance fiber transmission is needed after the signal is split, users face excessive attenuation of the optical signal. The usual practice is to add optical amplifiers such as EDFAs to the optical link, but this brings cost and maintenance problems. The traffic collection equipment, however, only needs to receive optical signals. In response, Gigalight has developed two 40km 100G single-receiver optical modules in different packages to address different customer needs.

The first is the single-receiver 100G QSFP28 ER4 Lite optical module, with a power dissipation of less than 2.5W. It uses a high-sensitivity APD receiver (ROSA) with a sensitivity of -15dBm per channel (1E-12 BER at 25G). This module increases the user's optical power budget and supports optical transmission (fiber directly connected, without splitting) up to 40km when the Forward Error Correction (FEC) function on the system side is enabled.

Single Receiver High-Sensitivity 100G QSFP28 ER4 Lite Optical Module

The second is the single-receiver 100G CFP2 ER4 optical module, with a power dissipation of less than 3.5W. It uses a PIN photodetector (ROSA) together with a miniaturized Semiconductor Optical Amplifier (SOA). It also adopts an SOA closed-loop adaptive gain control algorithm developed by Gigalight, which quickly locks the SOA's working current and adjusts its amplification, ensuring a receiver sensitivity as high as -21.4dBm per channel (1E-12 BER at 25G). Even with the system-side FEC disabled, it supports optical transmission (fiber directly connected, without splitting) up to 40km, fully compliant with the IEEE 802.3ba 100GBASE-ER4 standard and the more stringent ITU-T G.959.1 OTU4 (4L1-9C1F) standard.
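To see what a -21.4dBm sensitivity buys in practice, here is a hypothetical link-budget sketch. The launch power, fiber loss, and connector loss below are illustrative assumptions, not Gigalight specifications; only the sensitivity figure comes from the text:

```python
# Hypothetical link-budget sketch for a 40 km receive-only link.
def link_margin_db(tx_power_dbm, rx_sensitivity_dbm,
                   distance_km, fiber_loss_db_per_km=0.25,
                   connector_loss_db=1.0):
    """Remaining optical margin after fiber and connector losses."""
    path_loss = distance_km * fiber_loss_db_per_km + connector_loss_db
    budget = tx_power_dbm - rx_sensitivity_dbm
    return budget - path_loss

# Example: assumed 0 dBm per-lane launch power into the SOA-assisted PIN Rx
margin = link_margin_db(tx_power_dbm=0.0, rx_sensitivity_dbm=-21.4,
                        distance_km=40)
print(f"margin: {margin:.1f} dB")  # 21.4 dB budget - 11.0 dB loss = 10.4 dB
```

A comfortable positive margin like this is what lets the link absorb splitter loss or aging without an inline amplifier.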

Single Receiver High-Sensitivity 100G CFP2 ER4 Optical Module

With the high-sensitivity advantage of Gigalight's 40km 100G single-receiver modules, users do not need to spend more on relay optical amplification equipment, thereby reducing operating costs and providing an economical solution for long-distance 100GE connections between machine rooms.

Gigalight has built a comprehensive portfolio of single-receiver 100GE optical modules, including 100G QSFP28 LR4 Rx only, 100G CFP2 LR4 Rx only, 100G QSFP28 ER4 Lite Rx only (new), and 100G CFP2 ER4 Rx only (new). These products greatly expand customers' choices within the 100G product family. At the same time, through technological innovation, self-developed 100G ROSA components have been adopted to achieve a cost advantage, bringing practical benefits to manufacturers of traffic collection and distribution DPI equipment.

About Gigalight:

Gigalight is a global optical interconnection design innovator. Its optical interconnect products include optical transceivers, passive optical components, active optical cables, GIGAC™ MTP/MPO cablings, and cloud programmers & checkers. Three applications are mainly covered: Data Center & Cloud Computing, MAN & Broadcast Video, and Mobile Network & 5G Optical Transmission. Gigalight takes advantage of its exclusive designs to provide clients with one-stop optical network devices and cost-effective products.

Article Source:


The Trend of DSP Applications in the Data Center

Data center 100G has begun to be deployed at scale, and the next-generation 400G is expected to enter commercial use by 2020. For 400G applications, the biggest difference is the introduction of a new modulation format, PAM-4, which doubles the transmission rate at the same baud rate (device bandwidth). For example, the single-lane rate of DR4, used for transmissions up to 500m, needs to reach 100Gb/s. To realize such rates, data center optical transceiver modules have begun to introduce Digital Signal Processor (DSP) chips, replacing the clock recovery chips of the past, to solve the sensitivity problems caused by insufficient bandwidth in the optical devices. Can DSP become the broad solution for future data center applications that the industry expects? To answer this question, we need to understand what problems DSP can solve, what its architecture is, and how its cost and power consumption will trend in the future.
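The rate-doubling arithmetic behind PAM-4 is easy to verify: a 4-level symbol carries log2(4) = 2 bits, so the bit rate is twice the baud rate. The 50GBd lane below is a nominal figure for illustration; FEC overhead is ignored:

```python
import math

# Bit rate for a multi-level (PAM-N) line code at a given baud rate.
def bit_rate_gbps(baud_gbd: float, levels: int) -> float:
    return baud_gbd * math.log2(levels)

baud = 50.0  # GBd, e.g. one lane of a 400G DR4-style module (nominal)
print(bit_rate_gbps(baud, 2))  # NRZ (PAM-2): 50.0 Gb/s
print(bit_rate_gbps(baud, 4))  # PAM-4:      100.0 Gb/s
```

The same device bandwidth thus carries twice the data, which is exactly why PAM-4 is attractive when the optics cannot be made faster.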

The Problems that DSP Can Solve

In physical-layer transmission, DSP was first applied in wireless communications, for three reasons. First, wireless spectrum is a scarce resource while demand for transmission rates keeps increasing; improving spectral efficiency is a fundamental requirement of wireless communications, so DSP is needed to support a variety of complex, efficient modulation schemes. Second, the transfer function of the wireless channel is very complicated: the multipath effect, and the Doppler effect under high-speed motion, cannot be compensated by traditional analog techniques, whereas DSP can use various mathematical models to compensate the channel's transfer function well. Third, the Signal-to-Noise Ratio (SNR) of the wireless channel is generally low, so Forward Error Correction (FEC) is used to improve receiver sensitivity.

In optical communications, DSP was first commercially used in long-distance coherent transmission systems at 100G and above, for reasons similar to wireless. First, since the cost of laying optical fiber is very high, improving spectral efficiency to achieve higher transmission rates on a single fiber is an inevitable requirement for operators; after WDM technology, DSP-based coherent technology became the inevitable next step. Second, in long-distance coherent systems, a DSP chip can easily compensate the dispersion effects, the nonlinear effects caused by the transmitter (Tx) and receiver (Rx) devices and the fiber itself, and the phase noise introduced by the Tx and Rx devices, without the Dispersion Compensation Fiber (DCF) that used to be placed in the optical link. Finally, in long-distance transmission, due to fiber attenuation, an optical amplifier (EDFA) is generally used to amplify the signal every 80km to reach transmission distances up to 1000km; each amplification adds noise and reduces the signal's SNR, so FEC must be introduced to improve the receiver's performance.

To sum up, DSP solves three problems: it supports high-order modulation formats to improve spectral efficiency; it compensates the impairments caused by components and channel transmission; and it addresses the SNR problem.

Whether similar requirements exist in the data center is therefore the key basis for judging whether DSP should be introduced there.

First, let's look at spectral efficiency. Does the data center need to improve spectral efficiency? The answer is yes. But unlike the scarce wireless spectrum and the limited fiber resources of the transport network, the reason for improving spectral efficiency in the data center is the insufficient bandwidth of the electrical/optical devices and the limited number of WDM/parallel lanes (constrained by the size of optical transceiver modules). Therefore, to meet future 400G applications, we must rely on increasing the single-lane baud rate.

Second, for single-lane 100G and above, current Tx electrical driver chips and optical devices cannot reach bandwidths above 50GHz, which is equivalent to introducing a low-pass filter at the transmitter; on the signal, this appears as inter-symbol interference in the time domain. Taking 100G PAM-4 as an example, the bandwidth-limited modulation device makes the eye width of the optical signal very small, so the analog-PLL-based clock recovery of the past cannot find the best sampling point and the receiver cannot recover the signal (this is also why TDECQ introduces an adaptive equalization filter in the standards). With DSP, the signal can be spectrally compressed directly at the Tx end. For example, the extreme approach is to deliberately introduce inter-symbol interference between two adjacent symbols to reduce the Tx signal bandwidth; the PAM-4 eye diagram on the oscilloscope then takes a 7-level (PAM-7-like) form, and the Rx end recovers the signal through an adaptive FIR filter. In this way, the uncontrollable analog bandwidth effects of the modulating/receiving devices become a known digital spectral compression, reducing the bandwidth requirement on the optical device. Fujitsu's DMT (Discrete Multi-Tone) modulation technology, promoted in conjunction with DSP, can even use a 10G optical device to transmit 100G signals.
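The "extreme approach" above, deliberate ISI between adjacent symbols, can be sketched as a 1+D partial response: averaging each symbol with its neighbor halves the signal bandwidth, and a 4-level PAM-4 signal takes on exactly seven output levels. This is a toy model of the idea, not any vendor's actual transmit chain:

```python
import numpy as np

rng = np.random.default_rng(0)
pam4_levels = np.array([-3, -1, 1, 3])
symbols = rng.choice(pam4_levels, size=10_000)

# 1+D partial response: each output sample is the average of two
# neighbouring symbols, i.e. controlled ISI over one symbol period.
shaped = (symbols[1:] + symbols[:-1]) / 2

print(sorted(set(shaped)))  # [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
```

Because the introduced ISI is known exactly, the receiver's adaptive FIR filter can invert it, which is what makes the trick usable in practice.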

Third, does FEC really need to be implemented at the module end? Inside the data center, the maximum transmission distance is no more than 10km, and the link budget is about 4dB including connector loss; the SNR impact of such a link is basically negligible. FEC in the data center is therefore not meant to solve the link SNR but to compensate for the performance shortfall of the optical devices. At the same time, in the 400G era the electrical interface of the optical module is upgraded from 25G NRZ to 50G PAM-4 (net rate), so electrical FEC often has to be enabled for transmission between the optical transceivers and the switches. In that case, enabling FEC again on the module side is unnecessary and has no additional effect, because for FEC what matters is the error-correction threshold. For example, a 7% FEC has a correction threshold around a 1E-3 Bit Error Rate (BER): FEC can correct essentially all errors below this BER, and above it FEC is essentially useless (leaving aside burst errors, which are usually handled with an interleaver). Therefore, cascading multiple FECs is no better than using only the best one. Considering the power consumption and latency that FEC adds on the module side, it may be better in the future to enable FEC on the switch side.
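The threshold behavior described above can be modeled as a simple cutoff. The 1E-3 figure is the illustrative 7%-overhead threshold from the text; burst errors and interleaving are ignored in this sketch:

```python
# Idealized hard-decision FEC model: everything below the pre-FEC BER
# threshold is corrected to error-free, everything above it is not.
FEC_THRESHOLD_BER = 1e-3  # illustrative threshold for a ~7% overhead code

def link_is_error_free(pre_fec_ber: float) -> bool:
    return pre_fec_ber < FEC_THRESHOLD_BER

print(link_is_error_free(2e-4))  # True: FEC corrects the residual errors
print(link_is_error_free(5e-3))  # False: beyond the correction threshold
```

Under this model, a second FEC stage adds nothing once one stage already sits below its threshold, which is the article's argument for running FEC once, on the switch side.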

The Architecture of DSP

In optical communications, a DSP generally consists of several parts: a front-end mixed-signal section, including the ADC (Analog-to-Digital Converter, required), the DAC (Digital-to-Analog Converter, optional), and the SerDes; a digital signal processing section (including FEC); and the PHY section. The PHY section is similar to a CDR chip with PHY function and will not be described here.


The main function of the ADC and DAC is to convert between analog and digital signals; they are the bridge between the modulation device and the digital signal processing section. The ADC/DAC has four key indicators: sampling rate, effective bit width, analog bandwidth, and power consumption. For 100G PAM-4, the ADC sampling rate at the Rx end needs to reach 100Gs/s; otherwise, aliasing during sampling distorts the signal. The effective bit width is also very important: for PAM-4, 2 effective bits do not satisfy the requirements of digital signal processing; at least 4 are needed. Analog bandwidth is currently the main technical challenge for the ADC/DAC, limited by both effective bit width and power consumption. There are generally two ways to implement a high-bandwidth ADC/DAC: SiGe and CMOS. The former has a high cutoff frequency and easily achieves high bandwidth, but its power consumption is very high, so it is generally used in instrumentation. The cutoff frequency of CMOS is much lower, so to achieve a high sampling rate, multiple sub-ADCs/DACs must be time-interleaved; the advantage is low power consumption. For example, in a coherent 100G communication system, a 65Gs/s ADC with 6 effective bits is composed of 256 sub-ADCs each sampling at 254Ms/s. Note that although this ADC samples at 65Gs/s, its analog bandwidth is only 18GHz; and with a clock jitter of 100fs, the theoretical maximum analog bandwidth at 4 effective bits is only about 30GHz. An important conclusion follows: once DSP is used, the bandwidth bottleneck of the system is generally no longer the optical device but the ADC and DAC.
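The interleaving arithmetic in the coherent-100G example can be checked directly. This is a sketch; the Nyquist helper states only the ideal anti-aliasing limit, while the realized analog bandwidth (18GHz in the example above) is far lower:

```python
# Time-interleaved CMOS ADC: many slow sub-ADCs sampled round-robin
# add up to one fast aggregate converter.
sub_adc_count = 256
sub_adc_rate_gss = 0.254          # 254 Ms/s per sub-ADC

aggregate_gss = sub_adc_count * sub_adc_rate_gss
print(f"aggregate: {aggregate_gss:.1f} Gs/s")   # ~65.0 Gs/s

# Nyquist: a converter sampling at rate F can at best represent F/2
# of analog bandwidth without aliasing.
def nyquist_bw_ghz(sample_rate_gss: float) -> float:
    return sample_rate_gss / 2

print(nyquist_bw_ghz(100))  # 50.0 GHz ideal limit for a 100 Gs/s ADC
```

The gap between the 50GHz Nyquist ceiling and the ~18GHz realized bandwidth is exactly why the article calls the ADC/DAC, not the optics, the new bottleneck.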


In data center applications, the digital signal processing unit is still relatively simple. For 100G PAM-4, it performs spectral compression of the transmitted signal, nonlinear compensation, and (optionally) FEC encoding at the Tx end; at the Rx end, after the ADC, an adaptive filter compensates the signal and a digital-domain CDR recovers the clock (a separate external crystal is required). An FIR filter is generally used for compensation; its tap count and decision-function design directly determine the DSP's compensation performance and power consumption. It should be pointed out that DSP in optical communications faces a massive parallel-computing problem. Because of the huge gap between the ADC sampling rate (tens of Gs/s, even 100Gs/s) and the digital circuit operating frequency (at most several hundred MHz), the serial 100Gs/s sample stream must be converted into hundreds of parallel digital signals for processing. One can imagine that when the FIR filter adds just one tap, in reality hundreds of taps are added. How to balance performance and power consumption in the digital signal processing unit is therefore the key factor determining the quality of a DSP design. In addition, inside the data center, optical transceiver modules must interoperate. In practice, the transmission performance of a link depends on the combined performance of the DSP and the analog optical devices at both the Tx and Rx ends, so designing a reasonable standard that correctly evaluates Tx and Rx performance is also difficult.
When the DSP enables FEC in the physical layer, synchronizing the FEC function between the transmitting and receiving transceivers further increases the difficulty of data center testing. This is why, so far, coherent transmission systems interoperate only within a single manufacturer's equipment and are not required to interoperate across manufacturers. (For PAM-4, IEEE 802.3 proposes the TDECQ method for performance evaluation.)
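The parallelism described above can be quantified with a quick calculation. The 500MHz digital core clock below is an illustrative assumption:

```python
# Why optical DSP is massively parallel: the digital logic runs at a
# few hundred MHz, so a 100 Gs/s sample stream must be demultiplexed
# into hundreds of lanes processed side by side every clock cycle.
sample_rate_ss = 100e9    # 100 Gs/s from the ADC
clock_hz = 500e6          # digital core clock (illustrative assumption)

lanes = int(sample_rate_ss / clock_hz)
print(lanes)  # 200 parallel samples per clock cycle

# Consequence for an FIR equalizer: adding "one" tap really means
# adding one multiply-accumulate in every parallel lane.
taps_added = 1
extra_macs_per_cycle = taps_added * lanes
print(extra_macs_per_cycle)  # 200
```

This is why tap count is a first-order lever on both DSP performance and power consumption.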

Power Consumption and Cost

Because DSP introduces the DAC/ADC and the processing algorithms, its power consumption is necessarily higher than that of a traditional analog CDR chip, and the ways for a DSP to lower power consumption are relatively limited, depending mainly on advances in the fabrication process: for instance, moving from the current 16nm to a 7nm process can achieve roughly a 65% reduction in power consumption. The current design power of 400G OSFP/QSFP-DD modules based on 16nm DSP solutions is around 12W, a huge challenge for the thermal design of the module itself and of the future switch front panel. The 400G DSP problem may therefore have to be solved at the 7nm process node.
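A back-of-the-envelope check of that scaling claim follows. Note the 65% figure is quoted for the DSP; applying it to the full 12W module power, as done here, is a simplification for illustration:

```python
# Process-scaling estimate: a ~65% power reduction from 16 nm to 7 nm
# applied to a ~12 W 400G OSFP/QSFP-DD design.
power_16nm_w = 12.0
reduction = 0.65

power_7nm_w = power_16nm_w * (1 - reduction)
print(f"{power_7nm_w:.1f} W")  # 4.2 W
```

A result in this range is much closer to what pluggable form factors can realistically dissipate, which is the article's point about waiting for 7nm.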

Price is always a concern for data centers. Unlike traditional optical devices, DSP chips are based on mature semiconductor technology, so chip costs can be expected to fall substantially with the support of high-volume applications. Another advantage of DSP in future data centers is flexibility: by adjusting the DSP configuration, the same optical device configuration can meet the requirements of different data rates and scenarios.

Article Source:

Related Gigalight 100G QSFP28 Optical Transceivers:


Gigalight Launches Industrial-Grade 100G QSFP28 Optical Transceivers

Shenzhen, China, April 30, 2018 − Recently, Gigalight successfully developed two industrial-grade 100G optical transceivers: the 100G QSFP28 LR4 (up to 10km) and the 100G QSFP28 4WDM-40 (up to 40km). These new products have also entered small-batch trial production.

The Gigalight industrial-grade 100G QSFP28 LR4 transceiver uses a reliable Japanese industrial-grade TOSA on the transmitting end and Gigalight's high-quality, self-developed thermally optimized ROSA on the receiving end. Across the full industrial temperature range of -40 to 85 degrees, it delivers excellent performance, with an eye-diagram margin of more than 31% (as shown in the figure below). Moreover, it has a power consumption of less than 3.5W and supports error-free 10km fiber transmission at 100GE data rates. It is fully compliant with the IEEE 802.3ba 100GBASE-LR4 standard, making it an ideal choice for relatively harsh environments, meeting customers' particular application demands as well as the optical transport between AAU and DU for future 5G mobile fronthaul applications.

Gigalight Industrial-grade 100G QSFP28 LR4


Eye Diagram of Gigalight Industrial-grade 100G QSFP28 LR4 under -40℃ (left) and 85℃ (right)


The Gigalight industrial-grade 100G QSFP28 4WDM-40 transceiver uses reliable Japanese industrial-grade TOSA and ROSA on the transmitting and receiving ends. It has a low power consumption of less than 3.8W across the full temperature range of -40 to 85 degrees, ideal for building green data centers and reducing energy costs. Its ROSA adopts a highly sensitive APD photodetector with a sensitivity better than -16.5dBm. When the FEC function is enabled on the system side, it can transmit up to 40km and meets the requirements of the 100GE 4WDM-40 MSA specification. This transceiver provides a cost-effective long-distance Data Center Interconnection (DCI) solution for distributed data centers in harsh environments, and also meets the demands of the optical transport between AAU and DU for future 5G mobile fronthaul applications.

Gigalight Industrial-grade 100G QSFP28 4WDM-40


5G Fronthaul Network

There is no doubt that future 5G networks will rely even more on the support of the optical network. The move to 5G fronthaul requires a strong optical network due to the flat network architecture, and the density and growing number of base stations will create tremendous demand for fiber resources and bandwidth. The successful launch of industrial-grade 100G QSFP28 optical transceivers has not only enriched the 100G QSFP28 product line but also created a product line with unique features, which can better serve 5G optical transmission demands and fill a gap in the market.

About Gigalight:

Gigalight is a global optical interconnection design innovator. Its optical interconnect products include optical transceivers, passive optical components, active optical cables, GIGAC MTP/MPO cablings, and cloud programmers & checkers. Three applications are mainly covered: Data Center & Cloud Computing, MAN & Broadcast Video, and Mobile Network & 5G Optical Transmission. Gigalight takes advantage of its exclusive designs to provide clients with one-stop optical network devices and cost-effective products.


Introduction on 25G/50G/100G Ethernet

The rise of cloud computing and the expansion of the data center are pushing the latest Ethernet speeds upward, while big data built on cloud technology has already added to carriers' workloads. To meet this demand, data centers are extending bandwidth capabilities in parallel with their existing infrastructure. The rapid growth expected in 25G and 100G Ethernet deployments is a testament to this trend.

To handle the increasing data load, the industry's largest cloud companies have worked with the data center operators in their core networks to jointly adopt the 100G Ethernet architecture. However, most operators believe that 100G, or even 40G, is somewhat excessive for server connections, whose workloads only need an incremental improvement over 10G networks. This is why, although 40G and 100G Ethernet have been introduced, 25G and 50G Ethernet remain common choices within the data center. Below we will briefly explain why 25G is more suitable than 40G for these applications.

Several recent Ethernet bandwidth technologies are designed not so much to set a new speed record as to push these network protocols into adjacent markets, especially the data center market. Below we explain the specific reasons by introducing 25G, 50G, and 100G in turn.

25G Ethernet

The official IEEE 802.3 standard for 25G Ethernet is expected to be completed in 2016, aimed mainly at servers in cloud data centers. This is a relatively short time frame, because components from 10G and 100G Ethernet can be reused.

40G and 100G already exist, so why use 25G? This has confused some operators. The answer lies in architecture and performance requirements. The existing 100G standard network system consists of four links, each with a bandwidth of 25 Gbps. This four-to-one ratio makes it natural to connect servers to 25G switch ports and then aggregate them into 100G uplinks, which helps network operators expand their data centers more easily.
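The 4:1 lane arithmetic described above can be sketched in a few lines. The 3:1 oversubscription ratio in the example is an illustrative assumption, not a figure from the text.

```python
import math

def uplinks_needed(server_ports, port_rate_gbps=25,
                   uplink_rate_gbps=100, oversubscription=3.0):
    """100G uplinks required for a rack of 25G server ports."""
    downstream_gbps = server_ports * port_rate_gbps
    return math.ceil(downstream_gbps / (uplink_rate_gbps * oversubscription))

# 48 x 25G server ports at an assumed 3:1 oversubscription -> 4 x 100G uplinks,
# and each 100G uplink is itself four 25 Gbps lanes:
print(uplinks_needed(48))  # 4
print(100 // 25)           # 4 lanes per uplink
```

Because the server lane rate divides the uplink rate exactly, the fabric aggregates without wasted lane capacity.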

Similarly, 40G Ethernet is composed of four 10G Ethernet links. However, according to John D'Ambrosia, chairman of the Ethernet Alliance, many data centers have moved beyond 10G servers. This is why a number of chip vendors now provide 25G serializer/deserializer (SerDes) transceivers. This will not only make bandwidth aggregation for 25G, 50G, and 100G Ethernet more convenient, but also reduce costs through volume.

50G Ethernet

Although the IEEE standard for 50G Ethernet is still some time away (approximately 2018 to 2020), many industry alliances expect products to begin appearing in 2016. Similar to 25G technology, 50G Ethernet will be the next solution for high-speed connections between servers and data centers. According to data from analyst firm Dell'Oro, over the next few years, servers and high-performance flash storage systems will need to exceed 25G.

To help deliver these accelerated Ethernet technology products faster, the 25G/50G Ethernet Alliance has eliminated the royalty fees for the 25G and 50G Ethernet specifications and is open to all data center ecosystem vendors.

Reusing the 25G components of the existing 100G network can reduce the implementation cost of 50G. For example, 25G cabling has the same cost structure as 10G but delivers 2.5 times the performance. Similarly, 50G costs half as much as 40G while delivering 25% more performance.
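Those relative-cost claims can be restated as cost per gigabit. The absolute cost units below are arbitrary placeholders; only the ratios from the text (25G at roughly 10G's cost for 2.5x the speed, 50G at half of 40G's cost for 1.25x the speed) are encoded.

```python
def cost_per_gbps(cost, rate_gbps):
    """Normalized cost per gigabit of capacity."""
    return cost / rate_gbps

cost_10g = 1.0   # arbitrary cost unit for a 10G link
cost_40g = 4.0   # arbitrary cost unit for a 40G link

print(cost_per_gbps(cost_10g, 10))       # 0.1
print(cost_per_gbps(cost_10g, 25))       # 0.04 -> same cost, 2.5x the speed
print(cost_per_gbps(cost_40g, 40))       # 0.1
print(cost_per_gbps(cost_40g / 2, 50))   # 0.04 -> half the cost, 1.25x the speed
```

Under these ratios, 25G and 50G land at the same cost per gigabit, which is the economic argument for reusing 25G lanes.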

100G Ethernet

For long-distance carrier networks ranging from hundreds of kilometers to tens of thousands of kilometers, the deployment of 100G Ethernet will continue to grow.

But according to information provided by a new industry alliance, the 100G architecture will also be an excellent alternative inside the data center. The 100G CLR4 alliance, led by Intel and Arista Networks, believes that 100G is ideal for connecting large "ultra-large" data centers spanning 100 meters to 2 kilometers.

Other companies are also seeking alternative 100G implementations for the data center. Sinovo Telecoms has joined the CWDM4 MSA industry consortium, which aims to define a common specification for low-cost 100G optical interfaces for data center applications within 2 kilometers. As network infrastructure transitions to 100G data rates, data centers will require long-reach, high-density, 100G embedded optical connections. The MSA uses Coarse Wavelength Division Multiplexing (CWDM) technology to provide four 25G link channels over single-mode fiber (SMF). Similarly, the OpenOptics MSA, initiated by Ranovus and Mellanox Technologies, also focuses on developing 100G for data centers at 2 kilometers.
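The CWDM approach the MSA uses can be summarized concretely: four 25G lanes, each carried on its own wavelength of the CWDM grid, sharing one duplex single-mode fiber pair.

```python
CWDM4_WAVELENGTHS_NM = (1271, 1291, 1311, 1331)  # CWDM grid, 20 nm spacing
LANE_RATE_GBPS = 25

# Four 25G wavelengths multiplexed together give the 100G aggregate:
print(LANE_RATE_GBPS * len(CWDM4_WAVELENGTHS_NM))  # 100

for i, wl in enumerate(CWDM4_WAVELENGTHS_NM):
    print(f"lane {i}: {LANE_RATE_GBPS}G @ {wl} nm")
```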

In the past, the increase in speed has driven the development of most network components. Today, to handle the massive data flow through the cloud, companies need to seek a balance between speed-up and reuse technology to find a cost-effective solution. Gigalight, as a professional optical transceiver vendor, can provide various kinds of optical transceivers to meet your 25G/50G/100G/200G/400G transmission needs. For more details, please visit its official website.

5 Kinds of 40G QSFP+ Optical Transceivers

40G optical transceivers are a series of optical transceivers with a 40Gbps transmission rate, with CFP and QSFP as the main form factors. Among them, 40G QSFP+ optical transceivers are the most widely used. In this post, Gigalight introduces several of the most popular kinds of 40G QSFP+ optical transceivers to help you make a better choice.

1. 40G LR4 QSFP+ Optical Transceiver

The 40G LR4 QSFP+ optical transceiver is typically used with LC single-mode fiber patch cables for transmission distances up to 10km, and it has four data channels that transmit simultaneously. Its advantages are high density, low cost, high speed, large capacity, and low power consumption.

The working principle of the 40G LR4 QSFP+ optical transceiver: each laser is driven at its assigned wavelength, and the four optical signals are combined by a multiplexer for transmission over a single fiber. At the receiving end, the demultiplexer splits the transmitted signal back into four channels, each with a transmission rate of 10 Gbps. A PIN detector and transimpedance amplifier then convert each optical signal back into an electrical signal and recover the data stream.
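The mux/demux flow just described can be modeled in a few lines. The lane-to-wavelength mapping uses the 40GBASE-LR4 CWDM grid; the payload strings are placeholders.

```python
LR4_WAVELENGTHS_NM = (1271, 1291, 1311, 1331)  # 40GBASE-LR4 CWDM grid

def multiplex(lanes):
    """Combine four 10G lane payloads, keyed by wavelength, onto one fiber."""
    assert len(lanes) == len(LR4_WAVELENGTHS_NM)
    return dict(zip(LR4_WAVELENGTHS_NM, lanes))

def demultiplex(fiber_signal):
    """Split the combined signal back into four ordered 10G lanes."""
    return [fiber_signal[wl] for wl in LR4_WAVELENGTHS_NM]

tx_lanes = ["lane-0", "lane-1", "lane-2", "lane-3"]
assert demultiplex(multiplex(tx_lanes)) == tx_lanes
print("4 x 10G lanes recovered intact")
```

Because each lane rides its own wavelength, the demultiplexer can recover the original lane ordering without any framing in this toy model.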

2. 40G SR4 QSFP+ Optical Transceiver

The 40G SR4 QSFP+ optical transceiver is often used with MPO/MTP connectors in 40G data transmission. It has four independent full-duplex channels and likewise transmits over four channels at the same rate as the LR4. The difference is that the 40G SR4 QSFP+ optical transceiver is used with multimode optical fiber: the transmission distance is 100m over OM3 fiber jumpers and 150m over OM4 fiber jumpers.
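The reach figures above map naturally to a small lookup table:

```python
SR4_REACH_M = {"OM3": 100, "OM4": 150}  # reach figures from the text above

def sr4_max_reach_m(fiber_type):
    """Maximum 40GBASE-SR4 reach in meters for a given multimode fiber type."""
    if fiber_type not in SR4_REACH_M:
        raise ValueError(f"40GBASE-SR4 is not specified over {fiber_type}")
    return SR4_REACH_M[fiber_type]

print(sr4_max_reach_m("OM3"))  # 100
print(sr4_max_reach_m("OM4"))  # 150
```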

The working principle of the 40GBASE-SR4 optical transceiver: at the transmitting end, the electrical signals are converted into parallel optical signals by a laser array; at the receiving end, a photodetector array converts the parallel optical signals back into electrical signals.

3. 40G LR4 PSM QSFP+ Optical Transceiver

As a highly integrated 4-channel optical transceiver, the 40G LR4 PSM optical transceiver has the advantages of high port density and low cost. Its optical port adopts parallel single-mode technology (PSM), using a 4-way parallel MPO/MTP interface, and the transmission distance is 10km.

The working principle of the 40G LR4 PSM optical transceiver is the same as that of the 40G SR4 QSFP+ optical transceiver. The difference is that 40G LR4 PSM optical transceivers connect to single-mode ribbon fiber: the parallel optical signals are carried over eight single-mode optical fibers (four in each direction).

4. 40G QSFP+ DAC

The 40G QSFP+ DAC high-speed cable consists of a copper cable terminated with a 40G QSFP+ connector module at each end.

DAC Advantages:

(1) Low cost, with no exposed optical interfaces affected by dust and other contaminants, which improves transmission efficiency;

(2) The high-speed cable has a copper core, which dissipates heat well and is energy-saving and environmentally friendly;

(3) High-speed cables consume little power.

5. 40G QSFP+ AOC

The 40G QSFP+ AOC active optical cable is a core component of parallel optical interconnection. It is composed of two 40G QSFP+ optical transceiver ends connected by a ribbon optical cable.

The QSFP+ AOC active optical cable is an efficient integrated fiber optic cable assembly designed for short-range, multi-channel data communication and interconnection applications. Each signal direction has four data channels running at 10 Gbps per channel.

AOC Advantages:

(1) Lower transmission power, so power consumption is small;

(2) Much smaller weight and volume than high-speed copper cables;

(3) Longer transmission distance (up to 100-300 meters).

In Conclusion

All five kinds of optical transceivers above are available from Gigalight. Using optical transceivers purchased from Gigalight will greatly improve your device's stability and network speed. If you want to learn more about optical transceiver solutions, please visit the official Gigalight website.

2 Notes about Using Optical Transceivers

An optical transceiver consists of optoelectronic devices, functional circuits, and optical interfaces, where the optoelectronic devices comprise a transmitting part and a receiving part. On the transmitting side, an electrical input signal at a given bit rate is processed by an internal driver chip, which drives a semiconductor laser (LD) or light-emitting diode (LED) to emit a modulated optical signal at the corresponding rate; an internal automatic optical power control circuit keeps the output optical power stable. On the receiving side, an optical input signal at a given bit rate is converted into an electrical signal by a photodetector diode and then amplified by a preamplifier, which outputs an electrical signal at the corresponding rate, generally at PECL levels. When the input optical power falls below a certain threshold, an alarm signal is output.

Today Gigalight will share some tips on using optical transceivers. If you pay attention to maintenance, the following two notes can help you reduce wear on the optical transceiver and improve its performance.

Note One:

1. The module contains CMOS devices. Guard against static electricity during transportation and use.

2. Ensure good device grounding to reduce parasitic inductance.

3. Solder by hand where possible; if reflow soldering is required, keep the reflow temperature below 205℃.

4. Do not lay copper directly below the optical transceiver, to prevent the impedance from changing.

5. Keep the antenna away from other circuits, to prevent reduced radiation efficiency or interference with the normal operation of other circuits.

6. Place the transceiver as far as possible from low-frequency and digital circuits.

7. It is recommended to use magnetic beads to isolate the transceiver's power supply.

Note Two:

1. Never look directly into an optical transceiver that is inserted into equipment (whether long-reach or short-reach) with the naked eye, to avoid eye injury.

2. For a long-distance optical transceiver, the transmit optical power is generally greater than the receiver's overload optical power. Pay attention to the length of the optical fiber and ensure that the actual received optical power is less than the overload optical power. If the fiber run is short but a long-reach optical transceiver is used, add an optical attenuator; otherwise the receiver may be burned out.

3. To keep the optical interface clean, insert the dust plug when the transceiver is not in use. A dirty optical contact can degrade signal quality and may also cause link problems and bit errors.

4. Rx/Tx labels, or arrows indicating the in and out directions, are generally marked on the optical transceiver to help identify the ports. Tx at one end must be connected to Rx at the other end; otherwise the two ends cannot link up.
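The overload check in note 2 can be sketched numerically. The launch power, span loss, and overload values below are illustrative assumptions, not datasheet figures.

```python
def attenuation_needed_db(tx_power_dbm, span_loss_db, overload_dbm):
    """Extra attenuation needed to keep received power below receiver overload."""
    received_dbm = tx_power_dbm - span_loss_db
    return max(0.0, received_dbm - overload_dbm)

# Short patch with a long-reach module: assume +3 dBm launch, 1 dB span
# loss, and a -7 dBm overload point -> an attenuator is required:
print(attenuation_needed_db(3.0, 1.0, -7.0))   # 9.0
# On a long span, the fiber itself provides enough loss:
print(attenuation_needed_db(3.0, 15.0, -7.0))  # 0.0
```

This is why pairing a long-reach module with a short jumper and no attenuator risks damaging the receiver.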

Having read the notes above, do you have a new understanding of how to use optical transceivers? We hope they are helpful, and thank you for your support of and attention to Gigalight. For more product details, please visit our official website.

10G SFP+ DAC vs. 10G SFP+ Transceivers

The development of artificial intelligence and the Internet of Things presents new challenges for data center expansion, and there is often a tension between technology and cost. To achieve high density and high capacity, it is important to control cost and plan cabling sensibly. For cabling we can choose between high-speed direct attach cables and optical transceivers with fiber jumpers, so how do we choose between these two products in practice? What are the differences, and what advantages does each have? Let's examine the differences between 10G SFP+ DACs and 10G SFP+ transceivers.

Both 10G SFP+ DACs and 10G optical transceivers can serve as the transmission medium. What is the difference between the two?

  • A 10G DAC connects two switches directly through a copper cable, while an SFP+ optical transceiver connects two switches through fiber jumpers.
  • A 10G DAC is for short-distance transmission; the longest reach is 15 m, used within the equipment room.
  • An SFP+ transceiver can perform long-distance transmission: up to 80 km over single fiber and up to 100 km over dual fiber.

The Advantages of 10G SFP+ DAC:

The 10G DAC is a copper cable with SFP+ connectors on both ends and is less expensive than a 10G optical transceiver.

10G DAC cabling is flexible, with a transmission distance of up to 15 meters, and is easy to work with during actual construction.

10G DAC cabling also saves on connected devices: it eliminates the need for patch panels, and servers and network equipment can be connected directly to TOR switches, which indirectly reduces costs.

The Advantage of 10G SFP+ Transceivers:

If the vertical cabling distance does not exceed the cabinet, 10G DACs can be used for the connection. When the distance between the TOR switch and the network switch is greater than 15 m, multimode fiber and optical transceivers can be selected; usually OM3/OM4 LC fiber jumpers are used with 10G SFP+ optical transceivers. In other words, 10G SFP+ optical transceivers are widely used for longer-distance transmission.

Gigalight provides high-speed direct connection solutions for data center interconnection, including 10G SFP+ to SFP+ high-speed cable solutions, which not only reduce power consumption but also increase network scalability. Want to learn more about the product details? You can visit our website.
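The distance-based choice described in this post can be wrapped in a small helper. The 300 m multimode cutoff for 10GBASE-SR over OM3/OM4 is an illustrative assumption, not a figure from the text.

```python
def pick_10g_link(distance_m):
    """Suggest a 10G interconnect based on run length (illustrative rules)."""
    if distance_m <= 15:
        return "10G SFP+ DAC"
    if distance_m <= 300:
        return "10G SFP+ SR transceiver + OM3/OM4 multimode fiber"
    return "10G SFP+ LR/ER/ZR transceiver + single-mode fiber"

print(pick_10g_link(3))     # 10G SFP+ DAC
print(pick_10g_link(100))   # 10G SFP+ SR transceiver + OM3/OM4 multimode fiber
print(pick_10g_link(5000))  # 10G SFP+ LR/ER/ZR transceiver + single-mode fiber
```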

Applications of Optical Transceivers in Data Centers

For data centers, fiber-optic technology is no longer optional or reserved for only the most difficult interconnection problems. Today's data centers need high bandwidth, high port density, and low power, and current optical fiber technology is a low-cost, mass-production technology widely used in applications such as switch interconnects and server interfaces. In this post, Gigalight introduces in detail what pluggable optical transceivers can do in the data center.

1. Extend Data Center Distance

From 100Mb/s to 100Gb/s, single-channel 25G Ethernet optical transceivers lead the optical transceiver market for next-generation servers and switches. 40G QSFP+ products can support transmission distances of up to 300m over multimode fiber, which greatly exceeds the standard reach of IEEE 40G Ethernet. Beyond 40G QSFP+ modules for single-mode fiber and 10G SFP+ products that reach 80 km, our OIF CFP2-ACO modules support transmission distances of 500km or more for data center metro or intercity connectivity.

2. Increase Density and Reduce Power Consumption

Our products are at the leading edge of the next generation of low-power optical transceivers. The 100G QSFP28 optical transceivers (SR4, LR4, CWDM4, and SWDM4) have a maximum power consumption of only 3.5W. The power settings of our 40G and 100G four-channel active optical cable products can be flexibly configured by the host system.

3. Deploy with Existing Multimode Fiber

Most data centers today are still based on the 10G Ethernet architecture and use 10GBASE-SR short-reach transmission over OM3/OM4 duplex multimode fiber. As data centers upgrade from 10G to 40G or even 100G, customers still want to retain the existing multimode fiber plant. However, SR4 optical transceivers need ribbon multimode cables (multi-fiber) at the interface, and LR4 optical transceivers need duplex single-mode fiber; neither matches the duplex multimode fiber currently deployed in data centers. QSFP+ LM4 modules allow customers to run 40G links over existing duplex multimode fiber, and SWDM4 modules enable 40G and 100G Ethernet transmission over the existing, affordable duplex multimode fiber plant.

4. From 100G to 200G/400G

Since 2010, 100G Ethernet optical transceivers have held a leading position in the market, with large volumes of CFP optical transceivers supplied for operators' routers and transmission systems. Since then, we have continued to expand our 100G products, developing and supplying CFP2, CFP4, CXP, and 100G QSFP28 modules that are widely used in telecommunications, emerging data centers, and 100G enterprise networks. And we have not stopped there: we are actively helping to lead the development of industry standards and next-generation Ethernet products, including 200G and 400G products that will meet the long-term technical requirements of future high-performance data centers.
