Have you noticed? Somewhere along the way, this world has been getting faster and faster. It feels as if I arrived only yesterday, and the sun is already setting in the blink of an eye.

Yes, time flies. The Institute of Electrical and Electronics Engineers (IEEE) approved IEEE 802.3ae, the formal standard for 10 Gbit/s Ethernet, in July 2002, and it seems like only yesterday. Today, data center networks have already moved from 100G Ethernet to 400G/800G. In just over 20 years, data center link speeds have increased nearly a hundredfold.

Whether it is a hyperscale data center, an enterprise data center, a multi-tenant data center (MTDC), or a service provider data center, one indisputable fact faces everyone: data center link speeds must keep getting faster.


The challenge is coming, and you can't stop it

Once upon a time, few data center managers would have imagined that link speeds would increase this fast.

The challenge is coming, and you can't stop it. In the data center field there is a "law of triangular balance": for a long time, servers, switches, and connections have together determined a data center's capability. The three check and balance one another, and innovation in any one of them drives the others to raise speed and lower cost. This is how the "law of triangular balance" governing data center development and upgrades took shape. The "triangular relationship" among them constantly strives for a new equilibrium, which in turn pushes the capacity of the whole data center forward.

At present, China's computing power has soared: the total number of data center racks in use has reached 5.2 million standard racks, and China's computing power ranks second in the world. Can copper-based media still carry data center network connections? As breakthroughs accelerate in high-end chips, new data centers, supercomputing, and other fields, is a roughly 40G copper-cable connection scheme still viable?

The effective reach of twisted-pair copper in the data center keeps shrinking while its complexity keeps growing, and as switch capacity increases, the copper distance problem becomes acute. For large-scale data centers, what option is there other than "de-copperization"? For small enterprise data centers, low-bandwidth, short-reach copper connections remain available; but if network applications stay at 40G, can those enterprises meet the competitive challenge of moving toward 400G/800G?


As data center construction moves toward 400G and 800G on the server and switch sides, the "law of triangular balance" makes the challenge plain: network connections must catch up to restore the balance of the "triangular relationship".

So what factors are driving this unprecedented development of, and demand for, Ethernet in the data center?

There are the challenges of explosive growth in innovative workloads such as cloud services, distributed cloud architectures, artificial intelligence, video, and mobile applications; the challenges of upgrading enterprise data center Ethernet to high-speed connections; the heavy traffic generated by applications, which demands higher capacity, faster speeds, and lower latency; and the continued cloudification of data centers, which raises the bar for connection bandwidth, latency, and speed. Innovations across industries and fields have broken the old pattern of data center networks, and the challenges and pressures facing network connections are greater than ever before.

Data in every industry and field is growing at high speed; the digital flood is beyond our imagination.

As early as 2020…

A single Internet user can generate 1.5 GB of traffic per day, a smart factory can generate 1 PB of data per day, and a cloud video service provider can generate up to 750 PB of video data per day.
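To put those three figures on the same scale, here is a minimal back-of-the-envelope sketch using only the numbers cited above (assuming decimal SI units, i.e. 1 PB = 1,000,000 GB):

```python
# The three daily-traffic figures from the article, side by side.
# Assumes decimal units: 1 PB = 1,000,000 GB.
GB = 1
PB = 1_000_000 * GB

user_per_day = 1.5 * GB        # one Internet user, per day
factory_per_day = 1 * PB       # one smart factory, per day
provider_per_day = 750 * PB    # one cloud video provider, per day

# One provider's daily video output equals the daily traffic of this
# many average Internet users:
users_equivalent = provider_per_day / user_per_day
print(f"{users_equivalent:,.0f} users")  # prints "500,000,000 users"
```

In other words, a single cloud video provider's daily output matches the combined daily traffic of half a billion ordinary Internet users.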

One star supplier in global video conferencing services was founded in 2011 and was little known before the pandemic. After eight years of development, its daily active users numbered only 10 million in December 2019, yet surged past 200 million in March 2020, a 20-fold increase from the end of 2019.

Even more unexpectedly, just one month later, daily active users in April 2020 had soared to 300 million: an increase of 100 million in a single month.

Every additional daily active user adds another stream of video data. The rapid growth in daily active users triggered explosive growth in the provider's video traffic, putting enormous pressure on its data centers. After opening a new data center in Singapore in 2020, the provider's global cloud data center count reached 18.

To carry massive conferencing traffic from around the world while ensuring information security, the provider's cloud data centers have very high requirements for network bandwidth, speed, and concurrency. With daily active users growing from 10 million to 300 million, how could those data centers have withstood the access pressure of such explosive traffic growth without network redundancy designed in advance?

For this reason, to restore the balance of the "triangular relationship" in the data center, cloud data center managers face increasingly urgent needs: more bandwidth, higher speed, and lower latency. Moving to 400G/800G Ethernet has become an inevitable choice. Challenges and opportunities often coexist; how could the data center network be content to lag behind?


Solving problems requires innovation

Challenges give our development more momentum. Solving the problem depends on a more capable technological leap.

So why is optical fiber suited to building today's data centers, and what lies behind the trend of "fiber advancing, copper retreating"?

Industry analysts point out that in the equipment distribution area (EDA), from the in-cabinet switch to the server, copper cabling still gives off some "residual warmth" for managers chasing the lowest price. But more data center managers need a network solution with higher throughput and design flexibility, and the focus is on bringing fiber in and phasing copper out. By incomplete statistics, fiber's share in large data centers is far higher than copper's, already exceeding 70%; almost the entire data center network runs on fiber.

Demand is changing, and many challenges are plain to see. All of them further demonstrate the value of fiber networks. Higher bandwidth, faster speeds, and lower latency: these ever clearer demands keep driving data center managers toward more and more fiber deployment.

With computing power surging and switch performance improving, how can bandwidth and speed be allowed to fall behind? High-density fiber deployment is not only necessary; it also needs enough redundancy to plan ahead and reduce the extra costs that future "uncertainty risks" might bring.


From this perspective, meeting the challenges of data center networks depends on the technological leap of optical fiber, which offers greater bandwidth, security, and reliability than any other medium. From rollable ribbon fiber to 400G optical transceivers, the leap in fiber technology is already on the road to the future.

Fiber technology innovation No. 1: rollable ribbon fiber lets data center managers respond proactively to growing connectivity challenges.

Data center fiber backbones have grown from 96-fiber counts to 144, 288, and 864 fibers, and some fiber manufacturers now offer cables with 6,912 and even 7,776 fibers.

To address bend-radius challenges and raise density, new fiber packaging and designs have emerged: cable manufacturers have adopted rollable ribbon structures with 250-micron and 200-micron coatings, and some hyperscale data centers have already begun deployment.

Fiber technology innovation No. 2: multimode fiber makes data center managers' eyes light up.

Inside the data center, connections from leaf switches to servers are shorter and denser, so the main considerations are optical module cost and operating cost.

In 2016, OM5, a new multimode fiber supporting multiple wavelengths, was launched. Compared with OM4, OM5 achieves higher per-fiber capacity, which can reduce fiber counts and extend reach in bidirectional applications, although for the same data center network link configuration, OM5's total cost is about 6.2% higher than OM4's.

However, for such a small difference in total cost, data center managers can cut fiber counts with OM5 multimode fiber and make better use of existing fiber ducts. Fewer cables mean less space occupied.

Even more worth mentioning, OM5 brings greater freedom for future technologies: it is backward compatible with the OM3 and OM4 fiber types, and it supports upgrading data center networks to 400G/800G and beyond.
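The fiber-count savings from multi-wavelength multimode (e.g. SWDM over OM5) can be sketched with a little arithmetic. The 50G-per-wavelength lane rate below is an illustrative assumption, not a vendor specification:

```python
import math

# Sketch: fibers needed for a duplex link, assuming 50G per wavelength lane.
# Multiplexing several wavelengths onto one fiber divides the fiber count.
def fibers_needed(link_gbps, lane_gbps=50, wavelengths_per_fiber=1):
    """One fiber per lane in each direction, with multiple wavelengths
    sharing a fiber where the fiber/optics support it."""
    lanes = math.ceil(link_gbps / lane_gbps)
    return 2 * math.ceil(lanes / wavelengths_per_fiber)

print(fibers_needed(400))                           # 16 fibers, parallel lanes
print(fibers_needed(400, wavelengths_per_fiber=2))  # 8 fibers, 2 wavelengths each
```

Halving the fiber count per link is exactly what lets existing ducts and trays absorb a 400G upgrade without new pathways.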

Fiber technology innovation No. 3: transceiver types keep evolving, making data center operation and maintenance easier.

On the development path of optical transceivers, embedded optical modules offer higher bandwidth density and faster channel rates, yet the Ethernet industry still prefers 400G pluggable modules: pluggables are easier to maintain and offer pay-as-you-grow cost-effectiveness, and new pluggable designs such as QSFP-DD and OSFP give network designers more options, accommodating today's 400G modules while supporting next-generation 800G modules.

Interestingly, to achieve better connection economics, many data centers still use low-cost vertical-cavity surface-emitting laser (VCSEL) transceivers running over multimode fiber. Many also take a hybrid approach: single-mode fiber in the core network, with multimode connections between servers and the first-tier leaf switches.

Fiber technology innovation No. 4: IEEE's new 400GBASE standards bring new opportunities.

For 400 Gb/s over multimode fiber, IEEE introduced the 400GBASE-SR8 standard, which supports 24-fiber MPO connectors or single-row 16-fiber MPO connectors. IEEE has also published the 400GBASE-SR4.2 standard, which uses a single-row 8-fiber MPO with two wavelengths per fiber; it is well suited to switch-to-switch connections and also brings OM5 multimode fiber into the standard.
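A small lookup makes the difference between the two PMDs concrete. The fiber counts follow from the lane structures just described; the reach figures are the values commonly cited for IEEE 802.3cm and should be verified against the standard itself:

```python
# The two IEEE 802.3cm multimode 400G PMDs mentioned above.
# Reach figures are commonly cited values; verify against the standard.
PMDS = {
    "400GBASE-SR8":   {"lanes": 8, "wavelengths_per_fiber": 1,
                       "reach_m": {"OM4": 100}},
    "400GBASE-SR4.2": {"lanes": 8, "wavelengths_per_fiber": 2,
                       "reach_m": {"OM4": 100, "OM5": 150}},
}

def fiber_count(pmd):
    p = PMDS[pmd]
    # Duplex link: one fiber per lane per direction; wavelengths share a fiber.
    return 2 * p["lanes"] // p["wavelengths_per_fiber"]

for name, p in PMDS.items():
    print(name, fiber_count(name), "fibers, reach:", p["reach_m"])
```

The table shows why SR4.2 pairs naturally with OM5: halving the fiber count while gaining extended reach on the newer fiber type.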

Beyond the challenge of the Ethernet upgrade lies the choice of fiber technology innovation. For data center managers, which way to go is very obvious.

Looking to the future, there will be more challenges and faster development. Applications of emerging technologies such as 5G, the Internet of Things, cloud computing, and AI change by the day. As data center computing, storage, and overall switch performance accelerate, fiber network technology keeps evolving, and challenges and opportunities continue to coexist.

For data center managers, whether the goal is fewer links, lower energy consumption, or lower investment cost, on the road that greets 400G, faces 800G, and looks toward 1.6T, growing switching capacity improves network efficiency, and continuing fiber innovation can meet the ever higher requirements of hyperscale data centers, enterprise data centers, multi-tenant data centers (MTDC), and service provider data centers.

According to industry insiders, hyperscale data center managers have already begun deploying 2x400G and 8x100G solutions. With 400G arriving like this, can 800G and 1.6T deployment be far behind? It is safe to say that the trend toward next-generation data center networks is irresistible.


There is no fastest, only faster

Since the 10 Gb/s data center network emerged in 2002, rates have iterated continuously. More than 20 years on, at the start of 2023, we stand on the 400 Gb/s stage as the 800 Gb/s performance begins.

The data center network's history of "welcoming the new and seeing off the old" shows that applications have leaped from 100G/200G Ethernet to 400G/800G, with 1.6T foreseeable in 2024 and 3.2T in 2025. This will only accelerate fiber's rapid advance throughout the data center network.
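Using only the rates and dates cited in this article (10G in 2002, 800G in 2023), the pace of that history can be checked with a quick growth-rate calculation:

```python
# Rough growth-rate check on the article's own figures:
# 10 Gb/s in 2002 to 800 Gb/s in 2023.
start_gbps, end_gbps = 10, 800
years = 2023 - 2002

multiple = end_gbps / start_gbps                    # 80x overall
cagr = (end_gbps / start_gbps) ** (1 / years) - 1   # compound annual growth
print(f"{multiple:.0f}x over {years} years, ~{cagr:.1%} per year")
```

That works out to roughly 23% compound growth per year, sustained for two decades, which is why "nearly a hundredfold" is no exaggeration.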

There is no fastest, only faster. With link speeds continually rising, what data center manager is not thinking about what comes next? Rest on one's laurels, or embrace 400G/800G and the 1.6T/3.2T already visible on the horizon? The answer is obvious.

Now and in the future, competition in the data center industry remains fierce. Whoever leads the world in computing power, storage, and faster connections can buy more time and win more opportunities through the innovative application of next-generation data center network technology.

Uncovering the truth about the evolution of the data center network

All of this requires us to see the development trend of the data center network more clearly. How?

The white paper "Development Trends of Data Centers: Prospects for Development Trends in 2023" analyzes all of this in depth across nine chapters. The iteration of any new technology takes time. Emerging applications such as 5G, the Internet of Things, and AI have brought explosive traffic growth, and for forward-looking hyperscale data center managers the leap to 800G is imminent. On the way to that Ethernet leap, low-latency, high-bandwidth, high-reliability fiber connectivity solutions show the better way forward; indeed, the fiber field has already delivered innovations such as rollable ribbon cables, OM5 multimode cables, and 400G optical transceivers. With the growth of edge computing, 5G is also driving the data center's role to keep evolving. At the same time, more factors are pushing connection upgrades in multi-tenant data centers (MTDC), where fiber applications are ever more diverse. Amid all these changes, on the road to 1.6T with higher speed, greater bandwidth, and greater capacity, managers of hyperscale and multi-tenant data centers are already gearing up.

For readers who want to learn more about how data centers use fiber technology to meet the network challenges of high bandwidth, high speed, and low latency, this white paper condenses the rich experience and insight of experts with decades of work in global data center networks. It is well worth a read. (by Aming)