Connecting Networks

Articles in Category: Archives GrenoblIX

on Tuesday, 03 September 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

For the 2019-2020 school year, Rezopole is opening its doors and welcoming new interns in its Marketing / Communication and Administrative / Management departments.

 

If you wish to apply, or to pass the information on to family and friends, you will find all the job ads in the dedicated space by clicking here.

 

A page is turning for the French sovereign cloud

on Wednesday, 07 August 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

A page is turning for the French sovereign cloud

Cloudwatt, Orange's online data hosting service, will be switched off on February 1st. Launched in 2012, it was one of the two pillars of the French-style sovereign cloud, co-financed at a loss with public money. With Bercy once again calling for the creation of secure data centers to host sensitive government and corporate data, this failure could serve as a lesson.

 

At the origin of this project, called Andromeda, France wanted to invest 150 million euros in a shared server service that could cut costs for ministries and companies. But the French technology groups called in to help failed to reach an agreement, and ended up splitting the envelope between them.

On the one hand, Cloudwatt was created by Orange and Thales, which added 150 million euros to the State's contribution. On the other, Numergy was launched by SFR and Bull with the same investment. Neither managed to find customers: two years after launch, Cloudwatt claimed only 2 million dollars in revenue. Numergy fared somewhat better with 6 million billed, but these are crumbs compared with Amazon, Microsoft and IBM.

A few months later, Bercy halted the spending, stating that it had disbursed only half of the promised sums. Orange and SFR then bought out the State's shares, along with those of Thales and Bull respectively. Numergy and Cloudwatt, by then reduced to mere brands, have since been folded into the two telecom operators' offerings for large companies.

 

Today, the dominance of American players in the online IT market continues to raise concerns about the integrity of critical data. A recent report by MP Raphaël Gauvain criticizes the Cloud Act, an American extraterritorial law reminiscent of the Patriot Act and the surveillance programs.

This autumn, the Government is therefore expected to sign a strategic sector contract to develop a "cloud of trust" ecosystem. Being French will not be enough: some American or Chinese technologies used in French data centers will be hard to defend.

"This time, we will not assume the nationality of the actors but their ability to guarantee data integrity with regard to our laws and strategic autonomy over our essential infrastructures and data," notes Jean-Noël de Galzain, Hexatrust's president on the sector's strategic committee. In addition, the State should commit itself to playing its role as a buyer.

 

 Read the article

 

Source : Les Echos

 

Tracking the exhaustion of IPv4 addresses

on Thursday, 01 August 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Tracking the exhaustion of IPv4 addresses

Used since 1983, Internet Protocol version 4 (IPv4) is what makes the Internet work: each terminal on the network (computer, telephone, server, etc.) is reachable via an IPv4 address. The protocol offers an address space of nearly 4.3 billion IPv4 addresses. But the success of the Internet, the diversity of uses and the multiplication of connected objects have led directly to the progressive exhaustion of these addresses. By the end of June 2018, the four major French operators (Bouygues Telecom, Free, Orange, SFR) had already assigned between 88% and 99% of the IPv4 addresses they hold.

 

Only 2.856 million public IPv4 addresses remained available at the RIPE NCC as of July 23rd, 2019.

Two scenarios are now possible:

  • 1: allocation of 1024 IPv4 addresses per LIR until depletion.
  • 2: allocation of 1024 IPv4 addresses per LIR down to the last million available IPv4 addresses, then 256 IPv4 addresses per LIR until depletion.

The most likely date for IPv4 exhaustion is May 6, 2020 (scenario 2).

If RIPE proposal 2019-02, which would limit final allocations to 256 IPv4 addresses per LIR, is rejected, scenario 1 applies and exhaustion would instead fall on December 25, 2019.
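As a rough sketch, the two scenarios can be simulated from the remaining pool. The figures below are illustrative assumptions, not RIPE's published request statistics; in particular, the rate of 18 LIR allocation requests per day is a hypothetical value chosen only to show the mechanics.

```python
from datetime import date, timedelta

def exhaustion_date(pool, daily_requests, scenario, start=date(2019, 7, 23)):
    """Estimate when the RIPE NCC IPv4 pool runs dry.

    Scenario 1: every LIR request gets a /22 (1024 addresses) until depletion.
    Scenario 2: /22 allocations down to the last million addresses,
                then /24 blocks (256 addresses) until depletion.
    """
    day = start
    while pool > 0:
        if scenario == 2 and pool <= 1_000_000:
            block = 256   # switch to /24 once only a million addresses remain
        else:
            block = 1024  # a /22 per request
        pool -= block * daily_requests
        day += timedelta(days=1)
    return day

# 2.856 million addresses left on July 23rd, 2019 (figure from the article);
# 18 allocation requests per day is an assumed, illustrative rate.
print(exhaustion_date(2_856_000, 18, scenario=1))  # 2019-12-25
print(exhaustion_date(2_856_000, 18, scenario=2))
```

Under these assumptions scenario 1 drains the pool in late December 2019 while the /24 tail of scenario 2 stretches exhaustion into 2020, which is the qualitative difference between the two dates quoted above.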

 

On the day the RIPE NCC's IPv4 pool runs dry, the price of IPv4 addresses on the secondary market, where already-allocated addresses are bought and sold, is expected to soar with supply and demand. Players holding surplus IPv4 addresses can sell them to those who have too few, or none at all.

Such a high price could erect a barrier to entry against new market players and increase the risk of an Internet split in two: IPv4 on one side, IPv6 on the other. As Jérémy Martin, Technical Director of Firstheberg.com, explains: "With demand rising for a fixed stock of IPv4, the cost of renting an IPv4 address will double in the next 2 years."

 

To cope with the shortage of IPv4 addresses, ISPs have implemented alternative mechanisms. For example, carrier-grade NAT (CGN) equipment allows one IPv4 address to be shared between several customers. These mechanisms have several negative effects, however, which make IPv4 difficult to maintain and almost impossible to use for a number of purposes (peer-to-peer, remote access to files shared on a NAS or to connected home-control systems, certain online games, etc.).
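To see why address sharing breaks inbound uses, here is a minimal sketch of the port-block style of CGN: each subscriber behind one public IPv4 address is confined to a fixed range of source ports. The block size of 2048 ports is an arbitrary illustrative choice, not a vendor default.

```python
# Minimal sketch of carrier-grade NAT port-block sharing (illustrative only).
FIRST_PORT = 1024            # ports below 1024 are reserved
PORTS_PER_SUBSCRIBER = 2048  # assumed block size, not a real vendor default

def port_block(subscriber_index):
    """(first, last) source-port range a subscriber may use on the shared IPv4."""
    first = FIRST_PORT + subscriber_index * PORTS_PER_SUBSCRIBER
    return first, first + PORTS_PER_SUBSCRIBER - 1

# How many subscribers can share one public address under these assumptions?
max_subscribers = (65536 - FIRST_PORT) // PORTS_PER_SUBSCRIBER

print(max_subscribers)   # 31 subscribers per public IPv4
print(port_block(0))     # (1024, 3071)
print(port_block(1))     # (3072, 5119)
```

Because every port belongs to some subscriber's outbound block, there is no stable, well-known port left to publish for a NAS, a game server or a home-automation box, which is exactly the limitation described above.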

For Grégory Mounier of Europol, the problem goes further: address sharing "violates the privacy of many people, who could be summoned in proceedings even though investigators are only interested in one suspect. In this context, only a near-total transition to IPv6 can be a sustainable response to this problem."

Moreover, an operator buying IPv4 addresses from a foreign player runs the risk that its customers will be geolocated outside France for many months, which can block access to many services.

 

Accelerating the transition to IPv6 is the only sustainable solution: only a near-total migration will allow content providers to do without IPv4.

 

 Read the article

 

Source : Arcep

 

Heat wave: why French DCs are holding up

on Thursday, 01 August 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Heat wave: why French DCs are holding up

Heat episodes are not taken lightly by data center operators. In France, "we have gone from designing for 40 degrees to 46 degrees in a few years. We now meet the same specifications as Spain," says Marie Chabanon, Technical Director of DATA4 Group.

 

To ward off any heat stroke, the datacenters' tolerance to high temperatures has been raised. "The great fear is the domino effect [...] If all or part of the cooling infrastructure has problems, it affects the rest of the equipment. And if the refrigeration unit stops, that is the worst thing that can happen to us, along with a complete power outage," adds Fabrice Coquio, Interxion's Managing Director. A risk also linked to the quality of electricity distribution by RTE or Enedis. "We must anticipate the risk of an electrical loss or incident," explains Marie Chabanon.

 

But data center operators have a secret weapon against this domino effect. "Data center electrical systems are built to run at 100% capacity. However, this is never the case in practice. The consequence is that when an extra load arrives, such as higher cooling demand, we have unallocated power we can draw on," explains Fabien Gautier of Equinix. This is called capacity redundancy.

 

Especially since the densification of computing power per unit of space in recent years, driven by the widespread adoption of virtualization, has meant more consumption and more heat. "With 14 or 15 kVA racks, we create hot spots, which are more sensitive to heat waves," explains Fabien Gautier. Planning the layout of the IT architecture deployed in the rooms is therefore essential. "Our work is therefore the urbanization of the rooms. If they were filled on the fly, that can be a problem," he adds.

This involves, among other things, load balancing. "Our data centers are designed with redundancy and a 50% load rate. The backup machines will be used to provide additional power" in the event of a heat wave, says Marie Chabanon. This must nevertheless be anticipated: "We must ensure that backup systems are ready to be operational, through maintenance and control actions on backup equipment."

 

Protecting data centers against heat also requires curative systems. "We installed spray systems to water the rooftop equipment with water that is not too cold," says Fabrice Coquio.

And to be ready for any eventuality in the early evening, the schedules of the technicians present on site have been adjusted. Customers are also warned so that they take precautions.

 

Recent advances in hardware robustness and data center design have made it possible to raise temperatures in server rooms. "The idea is that the lower the PUE (Power Usage Effectiveness), the better the facility performs. Ten years ago, we built datacenters where it was difficult to achieve a PUE of 1.6. Today we are at 1.2 and getting closer to 1, which represents 20% savings by playing on temperature and the energy performance of new equipment," says Marie Chabanon. As a result, cooling now focuses forced air on the machines themselves; there is no longer any need to refrigerate entire rooms.
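PUE is simply total facility power divided by the power delivered to the IT equipment, so the kind of gain Marie Chabanon describes can be checked with a line of arithmetic. The kilowatt figures below are invented for illustration, not measurements from any of the operators quoted.

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return total_facility_kw / it_kw

def facility_savings(old_pue, new_pue):
    """Fractional drop in total facility power for the same IT load."""
    return 1 - new_pue / old_pue

# A site drawing 1200 kW overall for 1000 kW of IT load has a PUE of 1.2.
print(pue(1200, 1000))  # 1.2

# Moving from PUE 1.6 to 1.2 at constant IT load cuts total facility power
# by a quarter, since only the cooling and distribution overhead shrinks.
print(facility_savings(1.6, 1.2))
```

The roughly 20% figure quoted in the article presumably also folds in equipment-efficiency gains; the pure PUE arithmetic above just shows how the ratio translates into an overall power saving.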

"We are seeing an evolution in indoor temperature design, following the recommendations of ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers). The idea is to work with much wider temperature ranges. We have gone from 20-22 degrees to 18-27 degrees," she adds. Since 2011, these standards have been raised: they now recommend blowing air at 26 degrees at the front of indoor equipment. "The humidity level was also revised [...] In 2008, it was between 40 and 60%. It can now reach 70%," says Fabrice Coquio.

 

This limits cooling costs without affecting the resilience of the installations, a critical point in hot weather.

 

 Read the article

 

Source : ZDNet

 

History and impact of IXP growth

on Friday, 26 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

History and impact of IXP growth

It is 1990: the Internet has a few million users and the first commercial companies have recently adopted this new distributed infrastructure.

 

The routing of network traffic from one region to another generally depended on the major transit providers (Tier 1). These Tier 1 networks sat at the top of the hierarchy of the few thousand ASes that existed at the time, forming what is called the network of networks.

 

A lot has changed since those early days, when small ASes paid the biggest ones for connectivity. This dependence on intermediaries meant transit costs, indirect routes, long round-trip times and a general lack of control over quality of service. Bypassing intermediaries through direct peering interconnections became the obvious answer, and Internet Exchange Points (IXPs) emerged as the default way to establish such connections.

 

Between 2008 and 2016, the number of IXPs and members almost tripled. At the same time, accessibility via these facilities has stagnated at around 80% of the announced address space (IPv4) while resilience has increased due to increasing redundancy.

 

In almost all regions, particularly Europe and North America, IXP ecosystems have grown richer, with more members and greater accessibility. The regional ecosystems remained distinct, however: European IXPs had the largest numbers of members but the smallest ASes (in terms of accessibility), while Asia-Pacific was at the opposite extreme.

 

This growth raises the question of the observable impact of IXPs on the Internet. To answer it, Queen Mary University of London, in collaboration with researchers from Roma Tre University, the GARR Consortium and the University of Tokyo, extracted a large collection of traceroutes covering the same period and identified the IXPs crossed.

 

The IXPs have had a clear impact on reducing the average length of access paths at AS level, particularly for large (hypergiant) global networks. Given that these networks are traffic-intensive, it is likely that a large proportion of Internet traffic has benefited from a substantial reduction in the number of AS crossed.
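The path-shortening effect is easy to illustrate on a toy AS-level graph: adding a single peering edge at an IXP removes the detour through the transit hierarchy. The topology and network names below are invented for the example, not taken from the study's data.

```python
from collections import deque

def as_hops(graph, src, dst):
    """Shortest AS-level path length (in hops) via breadth-first search."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

# Invented topology: an access network reaches a content network via transit.
transit_only = {
    "eyeball":  ["transit1"],
    "transit1": ["tier1"],
    "tier1":    ["transit2"],
    "transit2": ["content"],
}
# The same topology once both networks peer publicly at an IXP.
with_ixp = dict(transit_only, eyeball=["transit1", "content"])

print(as_hops(transit_only, "eyeball", "content"))  # 4 hops through the Tier 1
print(as_hops(with_ixp, "eyeball", "content"))      # 1 hop over the peering
```

One extra edge cuts the AS path from four hops to one, a miniature version of the reduction the traceroute study measured at Internet scale.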

 

They have also clearly helped to bypass Tier 1 transit providers. However, their impact on reducing the number of transit links (not necessarily Tier 1) visible on a route is more moderate.

 

Despite these changes, a clear hierarchy remains, with a small number of networks playing a central role. It is interesting to note that there is a small group of very central networks, regardless of whether the paths cross an IXP or not.

 

In addition, the Internet hierarchy has changed: large central networks have reduced their use of public peering, while IXPs have been adopted by smaller, less central ASes. This is probably due to the growing popularity of private network interconnects (PNIs), which ASes generally favour when exchanging large volumes of traffic.

 

Overall, the increase in the number of IXPs since 2008 has had a clear impact on the evolution of the Internet, shortening paths (mainly) to hypergiants and reducing dependence on Tier 1 transit providers.

 

The results must be interpreted in the light of the constraints of existing data, and there are a number of areas where work is possible. For example, topological data are independent of traffic volumes and total visibility on the Internet is impossible to achieve.

 

In addition, content distribution network (CDN) redirection strategies are not included in the traceroutes; it is assumed that accounting for the increasing traffic volumes delivered by these networks would likely support these conclusions.

 

 Read the article

 

Source : RIPE

 

The "small" operators are attacking Orange

on Friday, 26 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

The "small" operators are attacking Orange

The AOTA (Association of Alternative Telecommunications Operators) has just referred the matter to Arcep, requesting the opening of Orange's fibre network. The association's 47 members complain that they do not have sufficient access to it and accuse the incumbent of anti-competitive practices.

 

Since they cannot themselves build very expensive networks covering the whole country, small operators must first "borrow" the Orange and SFR networks. They rent access from the two dominant players in the corporate telecom market and buy voice or data from them at wholesale prices, which they then resell to their own customers.

 

But alternative operators feel shut out of the corporate market because they cannot "plug into" the Orange network enough. With 12.4 million sockets, the incumbent's fibre network is both very large and very capillary. Hooking into it makes it possible to target SMEs with connectivity needs across several sites or plants spread over the territory. It is precisely these customers who escape the more geographically limited members of the AOTA.

 

This is a long-standing problem linked to the lack of fibre regulation for the business market: Orange is obliged to offer wholesale access to small operators on the copper network (ADSL), but not on fibre. In 2017, Alternative Télécom had already demanded more openness.

 

Orange, however, will not hear of opening up to competition a network built with billions in investment. Small operators believe the operator was favoured by its historical cable footprint, which it was able to convert very quickly to fibre. Today, Orange controls approximately 70% of the corporate fibre market.

 

For its part, the French Competition Authority has chosen to regulate this market by creating a third player, Kosc, to "break" the predominance of Orange-SFR. This "wholesale" operator deploys its own fibre network, which it then rents to small AOTA or Alternative Télécom operators. "Kosc is a good complement, but it's one of many solutions. And anyway, the Kosc network does not have the same capillarity as Orange," explains one of these small operators. The ball is now in the Arcep's court.

 

 Read the article

 

Source : Les Echos

 

Savoy holds its optical fiber

on Monday, 22 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Savoy holds its optical fiber

One month after validation by Arcep, the contract has finally been signed between the department and Savoie Connectée. The deal comes two years after the departmental council terminated its contract with Axione, a Bouygues subsidiary. At the time, an investment of €223 million was planned, including €63 million financed by local authorities and €70 million provided by the region, the State and the European Union.

 

In 2017, elected officials from the Maurienne entrusted Fibréa with the installation of their own fibre optic network. The departmental council then accused them of unbalancing the public service delegation concluded with Axione. The Maurienne's elected officials said they had had enough of waiting for the department to deploy fibre, and had therefore taken charge of it themselves to ensure the development of their territory.

 

A few months after the termination, the government proposed that local authorities use AMELs to accelerate the installation of optical fibre in rural areas, a scheme that lets departmental councils have deployment financed from operators' own funds.

The Savoie department has therefore chosen this framework to deploy its fibre optic network in rural areas. It is Savoie Connectée that finances the works, relying on its shareholders (Covage, with 70% of the capital, and Orange, with 30%). Within four years, 255,000 sockets must be connected in 243 municipalities in Savoie; almost the entire territory of the department will then have very high-speed broadband.

 

The operation is not new for Covage, since the operator already runs the fibre-optic public service delegation (DSP) in 246 municipalities in Haute-Savoie. It was also awarded an AMEL in Saône-et-Loire.

This relieves the department of funding concerns and shortens the deployment timetable: completion is due in 2023 instead of 2026 under the terminated contract with Axione. The departmental council will, however, have paid 6.8 million euros in compensation to Axione to free itself.

A year ago, Covage acquired Fibréa, the company that laid nearly 500 kilometres of optical fibre in the Maurienne, ruling out a repeat of the previous contract's scenario. "This avoids any subsequent conflict," agrees Hervé Gaymard, president of the Savoie departmental council.

 

 Read the article

 

Source : La Tribune

 

Can we map the Internet?

on Monday, 22 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Can we map the Internet?

Since the Internet is thought of as a space, is it possible to give it a complete graphic representation, to draw one or more maps of it? Artists and experts have been trying for a long time.

 

To find an answer, the first tool is Google. Search results in French mainly show maps of the Internet seen from the outside; in English, we find some visualizations weighting sites by their number of connections.

 

Boris Beaude, a geographer and researcher at the University of Lausanne, explains that mapping the Internet "makes it possible to better understand the architecture [of the Internet and its sub-spaces, such as websites], the actors who produce them, the reality of what happens there and the underlying power issues".

This is not a fundamentally recent idea: he himself studied the subject at the end of the 1990s, noting that "the TCP/IP protocol suite, or even packet switching alone, is based on essentially spatial considerations: how to make communication as efficient as possible over heterogeneous and vulnerable networks".

 

Beyond the researchers, some artists have also tackled the issue. An American designer, Chris Harrison, represented the journeys that data makes around the world. He explains his gesture quite simply: "Humans have always tended to represent graphically the spaces in which they evolve. And the Internet is a space we move through; it moves, with millions, trillions of devices connected to each other."

 

Louise Drulhe sought to represent the Internet for her diploma at the École des Arts-Déco, and faced two difficulties. The first is that the space in which we operate online is constantly changing. As she explained to Numerama: "It's terrible to want to represent cyberspace, because the speed at which it changes has nothing to do with geography. When I started working on my first maps in 2013, we were barely talking about the Chinese Internet, for example."

The second is the very low number of representations of the online space. "In 2013, I was working on a thesis on Internet space, but I quickly realized the lack of information on online space as I understood it. There were some old maps from the 90s. But it had nothing to do with the current cyberspace," says the young woman.

 

According to Boris Beaude, the scarcity of representations is explained by "powerful imaginaries, which suggest that all the spatial vocabulary associated with the Internet is metaphorical". He attributes this confusion to "a materialistic conception of space": the territory and "the materiality of the ground on which our feet rest" are too often conflated with the idea of spatiality.

 

So what measures for an online space? For Boris Beaude, "distance is thought in terms of gaps, contact or interaction. This allows us to think about the relationship and how beings (and more and more objects) are connected and interact". And to show the architectures that facilitate these contacts.

 

Louise Drulhe has opted for a multiplicity of hypotheses, each suggesting a different aspect of the Internet. But all of them meet the same need: "representing the Internet helps us to understand the (geo)political issues at stake".
The artist does not stop there, since her last hypothesis is that of an Internet architecture specific to each person. It may be impossible to map the Internet for the simple reason that no route is ever taken exactly the same way twice.

 

Boris Beaude responds to this by opposing personal navigation and the power of the world's largest networks: "Google and Facebook are the two players with the greatest visibility on contemporary digital spatialities". Paradoxically, therefore, "while it is difficult to map the Internet because the relationships that constitute its space are so disproportionate and reticular, it has never been so simple for those who have mastered it to map the spatiality of individuals".

And he concludes that "politicians will have to ask themselves what they are trying to control: things [in this case, data, editor's note] or the movement of things and the architecture that makes this movement possible". A cross-border and outsized space par excellence.

 

 Read the article

 

Source : Numerama

 

Arcep: Open Internet

on Thursday, 11 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Arcep: Open Internet

The Autorité de Régulation des Communications Électroniques et des Postes (Arcep) is publishing the 2019 edition of its report on the health of the Internet in France. Submitted to Parliament, the report highlights the actions taken to ensure the openness of the Internet, examines potential threats, and presents the regulator's work to contain them.

 

The findings in brief!

 

1- Quality of service
Internet quality-of-service comparison tools are so inconsistent today that Arcep wants to improve them by placing an API in subscribers' boxes containing the "access identity card" of each terminal. This will allow much better diagnosis, with reliable information on the parameters of each measurement. The API is complemented by a code of conduct: gradually adopted by the measurement stakeholders, it improves the reliability, transparency and readability of the results.

 

2- Data interconnection
In constant evolution, this ecosystem can be the site of occasional tensions. Arcep monitors the market vigilantly and publishes the data it collects in its annual barometer of interconnection in France. When the situation requires it, Arcep can also act as a "gendarme" and settle disputes between players.

 

3- Transition to IPv6
The end of IPv4 is now expected in June 2020, and the IPv6 deployments operators have planned may not be enough to absorb the shortage of IPv4 addresses. Arcep will therefore organise the first working meeting of its "IPv6 task force" in the second half of 2019. These meetings aim to accelerate the transition to IPv6 in France by sharing the experience of the various players and defining actions to be implemented.

 

4- Net neutrality
The guidelines for implementing the principle of net neutrality by national regulators have generally proved their worth, and the country's record is positive. Arcep nevertheless ensures that access providers continue to adjust their practices in line with the European regulatory framework.

 

5- Opening of terminals
While Arcep can exercise its net-neutrality protection over networks, one weak link remains: terminals. Adopted at the beginning of this year, the European "platform-to-business" regulation brings more transparency to online platforms' practices towards their business customers, but it does not yet guarantee the neutrality of terminals. In a report on the issue published in February 2018, Arcep made 11 concrete proposals to ensure an "end-to-end" open Internet.


 Read the report

 

Source : Arcep

 

3.8 billion people now have access to the Internet

on Thursday, 11 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

3.8 billion people now have access to the Internet

Since 1995, Mary Meeker, an investor in the powerful venture capital firm Kleiner Perkins Caufield & Byers, has been reporting on major Internet trends. An analysis of the global use of the Web and its services: e-commerce, social networks, video games, podcasts, various connected objects, etc.

 

The 2019 edition marks a new milestone! According to the document, more than half (51%) of the world's population now has access to the Internet, i.e. more than 3.8 billion people. Just ten years ago, in 2009, that rate was only 24%. China, India and the United States are the three countries with the most Internet users in the world.

 

Nevertheless, global growth is slowing every year; between 2018 and 2019 it was 6%. For Mary Meeker, it becomes harder to connect new people as the number of Internet users rises.

 

The report details, for example, that Americans spend 6.3 hours online per day. An average increase of 7% compared to last year. They now spend more time in front of their mobile phone (almost 4 hours a day on average) than in front of their television (about 3h30).

 

The document also ranks the world's 30 most highly valued new-technology companies. Of these, 18 are American, 7 Chinese and only one European: Spotify, the Swedish music application, in 30th place.

 

 

 Read the article

 

Source : Le Figaro

 

The Internet network is drowning

on Tuesday, 02 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

The Internet network is drowning

Fibre optic cables, data transfer and storage stations and power plants form a vast network of physical infrastructure that underpins Internet connections.

 

Recent research shows that a large part of this infrastructure will be affected by rising sea levels in the coming years. After mapping the Internet infrastructure in the United States, scientists overlaid it with maps of projected sea-level rise. Their finding: within 15 years, thousands of kilometres of fibre optic cable and hundreds of other critical facilities risk being overwhelmed by the waves. According to the researchers, a few extra centimetres of water could put nearly 20% of the U.S. Internet infrastructure underwater.

 

"Much of the existing infrastructure is located just along the coast, so it doesn't take much more than a few centimetres of water to put it underwater," says Paul Barford, a scientist at the University of Wisconsin-Madison and co-author of the study. The network was deployed 20 years ago, when no one imagined that sea levels could rise.

The Internet's physical infrastructure was laid down somewhat haphazardly, often opportunistically along power lines, roads and other major infrastructure, over recent decades as demand exploded.

 

While scientists, designers and companies have long been aware of the risks posed by rising water levels on roads, subways and power lines, no one has so far been interested in the consequences that this could have on the physical Internet network.

"When you consider how interconnected everything is today, protecting the Internet is crucial," says Mikhail Chester, director of the Resilient Infrastructure Laboratory at the University of Arizona. Even the smallest technical incidents can have disastrous consequences. He continues: "this new study reinforces the idea that we must keep track of the state of these systems, because it will take a long time to upgrade them".

Rich Sorkin, co-founder of Jupiter Intelligence, a company that models climate-induced risks, says: "We live in a world designed for an environment that no longer exists." He concludes that "accepting the reality of our future is essential, and this type of study only underlines the speed at which we will have to adapt".

 

 

 Read the article

 

Source : National Geographic

 

Operators want to avoid overheating

on Tuesday, 02 July 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

Operators want to avoid overheating

Telecommunications operators are concerned about the impact of the heat wave on their network infrastructure. Just as heat can affect smartphones, tablets and laptops, the equipment underlying telecommunications networks can also suffer in hot weather. The electricity grid, under heavy demand and itself exposed to high temperatures, can fail this equipment, which may then stop working in localized, intermittent ways.

 

But the real "vulnerability" of these infrastructures is concentrated at two specific network points: relay antennas and data centres. Depending on the equipment, the temperature threshold not to be exceeded is around 50 degrees Celsius.

Relay antennas are particularly exposed to high temperatures, since they are installed high up, particularly on the roofs of buildings in urban areas. The risk of their electronic components going into standby in the event of overheating is therefore not negligible.

Data centres, for their part, risk overheating if their air-conditioning and exhaust-air systems fail.

 

All the operators' attention is focused on staying below 50 degrees Celsius. "We have set up a weather monitoring system to enable us to take preventive action in the event of a natural disaster and to check that every point of our network is performing well [...] Over the past few months, we have focused our preventive campaigns on servicing the maintenance and air-conditioning equipment in our infrastructures for greater safety," explains Hubert Bricout, Bouygues Telecom's Regional Director for Île-de-France and the North-East.

In the event of a site failure, rapid response teams are dispatched. To back up the systems already in place, mobile air conditioning equipment has also been reserved in case a network point overheats.
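The threshold logic described above can be sketched in a few lines. This is a purely illustrative sketch, not an operator's actual system: the site names, the early-warning margin and the function names are all assumptions; only the ~50 °C critical threshold comes from the article.

```python
# Illustrative sketch of threshold monitoring for network points
# (antennas, data centre rooms). Only the 50 °C limit is from the
# article; the warning margin and site names are invented.

CRITICAL_C = 50.0   # threshold cited in the article
WARNING_C = 45.0    # illustrative early-warning margin

def classify(temp_c: float) -> str:
    """Classify a site temperature reading against the thresholds."""
    if temp_c >= CRITICAL_C:
        return "critical"   # dispatch rapid-response team / mobile cooling
    if temp_c >= WARNING_C:
        return "warning"    # step up monitoring, check air conditioning
    return "ok"

def sites_to_dispatch(readings: dict[str, float]) -> list[str]:
    """Return the sites at or above the critical threshold."""
    return [site for site, t in readings.items() if classify(t) == "critical"]

readings = {"antenna-roof-A": 51.2, "dc-room-3": 44.0, "antenna-B": 46.5}
print(sites_to_dispatch(readings))   # ['antenna-roof-A']
```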

 

 

 Read the article

 

Source : ZDNet

Sign up for yALPA 002!

on Tuesday, 25 June 2019 Posted in Archives Rezopole, Archives GrenoblIX

Sign up for yALPA 002!

After a first meeting on January 29th, a second is planned for July 2nd at Challes-les-Eaux.

 
Theme: HSBB in the Alps

Departments concerned: 38, 73, 74 and 01 (southern Jura), with a focus on tourist areas (ski resorts, hotels).

 
Observation:

Broadband needs are exploding with the advent of "Over The Top" (OTT) services and the strong growth of platforms such as Amazon Prime Video, Netflix, OCS and MyCanal.
Initial observations between the 2017 and 2018 Christmas holidays show a 60% increase in the throughput consumed in hotels and hotel residences. For these establishments, connections measured in Mb/s will soon no longer be enough, and requests for 1, 2, 3 or even 4 Gb/s are beginning to arrive.

 
Work in synergy:

From now on, it is essential for telecom and Internet operators to organise themselves to respond effectively to this need, which was emerging yesterday and is pressing today.
This is the objective of yALPA! We must encourage the actors of Very High Speed deployment to meet and get to know each other better, so as to consider future collaborations rather than each planning, on its own, investments in the same places.
The local public-service delegation networks (DSPs, in departments 38, 73 and 74) do part of the work, but anything that crosses department boundaries is complicated. Let us bet that informal exchanges will foster a collective intelligence that accelerates the arrival of genuinely Very High Speed offers in the Alps (resorts, but also valleys and plains).
These initial yALPA discussions will of course not have immediate effect. In the more or less short term, however, the problems currently encountered with HSBB in tourist areas can be solved.

 
Morning program (9:00 am - 12:00 pm):
  • welcome, coffee, pastries
  • round table discussion: presentation of each participant
  • Transalpinet presentation (study by Rezopole)
  • room for improvement:
    • shared mapping tool: should we start now or wait another 10 years?
    • new backbone offers from external operators
    • territories without POPs
 
Registrations:

from Samuel Triolet (director of Rezopole): striolet (at) rezopole.net

 
Practical information:

See you on July 2nd at 9:00 am at Hub des Alpes (salle Altitude 193) - 37 avenue des Massettes, 73190 Challes-les-Eaux.

The Data Center Continuum

on Tuesday, 25 June 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

The Data Center Continuum

The visionary trend of the 2010s was to concentrate data center floor space massively in Hyperscale DCs, ideally located in areas close to the Arctic Circle. At the time, only the issue of systemic risk seemed capable of slowing this development.

 

But today the reality is no longer the same: a continuum model, which can be summarized in six levels, has replaced this vision of hyper-concentration of floor space.

  • Hyperscale Data Centers are still attractive for mass storage and non-transactional processing. Their objective is to offer the best production cost, by pooling a large floor area where land and energy are cheap.
  • Hub Data Centers in Europe are mainly located in Frankfurt, London, Amsterdam and Paris. These areas concentrate large data centers and benefit from fast interconnection between them; they over-attract operators because interconnection takes precedence over the potential of the local market.
  • Regional Data Centers, located in all other major cities, address the local economic potential, with cloud players for companies or hosting providers acting as first-level access to the Hub DCs.
  • "5G" Data Centers will be located as close as possible to urban areas, to meet the latency requirements of consumer uses.
  • Micro-Data Centers will bring low latency where use is highly concentrated (a stadium, a factory).
  • Pico-Data Centers will address individual use, bringing minimal latency and, above all, management of private data.
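The six-level continuum above can be summarized as a small data structure. This is a minimal sketch of the taxonomy as described in the article; the `Tier` type and field names are my own, and the placement/goal labels are shortened paraphrases of the bullets above.

```python
# Minimal sketch of the six-level data center continuum described above.
# Type and field names are illustrative, not from the article.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    placement: str
    primary_goal: str
    edge: bool  # the last three levels belong to the Edge universe

CONTINUUM = [
    Tier("Hyperscale DC", "where land and energy are cheap", "lowest production cost", False),
    Tier("Hub DC", "Frankfurt, London, Amsterdam, Paris", "fast interconnection", False),
    Tier("Regional DC", "other major cities", "local economic potential", False),
    Tier("5G DC", "as close as possible to urban areas", "latency for consumer uses", True),
    Tier("Micro-DC", "stadium, factory", "low latency at high concentration of use", True),
    Tier("Pico-DC", "individual premises", "minimal latency, private data management", True),
]

edge_tiers = [t.name for t in CONTINUUM if t.edge]
print(edge_tiers)  # ['5G DC', 'Micro-DC', 'Pico-DC']
```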

 

Despite their different sizes, the first three levels of data centers follow the same design principles. Hyperscale Data Centers, however, often have a single user, which allows them to adopt more restrictive design choices than colocation facilities.

The last three levels belong to the Edge universe and aim to position DC space as close as possible to usage. These levels, however, follow different design principles.

Micro- and Pico-Data Centers will be installed in an industrial way; the main issues there relate to physical protection and to the maintenance and operation of these infrastructures.

"5G" Data Centers change the game. They have all the characteristics of a "small" DC but must be deployed in complex environments: located in urban areas, they are subject to numerous safety and standards-compliance constraints. The greatest complexity, however, lies in the lack of space to deploy the technical equipment.

 

 

 Read the article

 

Source : Global Security Mag

5G: clean slate on the 1.5 GHz band

on Tuesday, 25 June 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX

5G: clean slate on the 1.5 GHz band

In the fight expected among operators for the acquisition of frequencies dedicated to 5G, the Regulatory Authority for Electronic Communications and Posts (Arcep) is preparing to open a new front. Last weekend, Arcep announced that it had set 31 December 2022 as the deadline for freeing up frequencies in the 1.5 GHz band, known as the L band.

 

"Today used for point-to-point links for the backhaul of mobile networks open to the public and professionals, and by the Ministries of the Interior and Defence", the band's release by the end of 2022 should give mobile operators more frequencies with which to deploy future 5G and Very High Speed networks.

"The 1.5 GHz band has been subject to European harmonisation since 2015. It has 90 MHz that can be used to meet downlink requirements. The propagation properties of these frequencies are particularly interesting for the coverage of the territory and the coverage inside buildings", said the telecoms regulator.

 

However, there could be many pitfalls. The current tenants of the band have already sent comments to the Authority during the consultation period: a disputed reallocation plan, potentially huge migration costs.

 

The decision is nonetheless widely welcomed by operators, who are pleased to be able to obtain new frequency blocks for the development of their future 5G networks. While they accept that this L band will only be operated "for additional exclusively downlink links (in SDL mode)", it will still improve the throughput and capacity of downlinks below 1 GHz.

The spectrum available for the deployment of future 5G networks is relatively limited, so this release should be of significant interest to operators, particularly in the event of coupling with other frequency bands.

Operators are also unanimous that the entire band cannot be operated effectively, due to unfavourable neighbouring allocations. The adjacent bands host "space exploration satellite services, radio astronomy and space research services", which rule out the use of both ends of the 1.5 GHz band. Orange considers that only 85 MHz of the band is usable, while Free goes further, counting only 40 MHz. For the latter, this block of frequencies constitutes "the only sub-band with a mature ecosystem today" and could even be the subject of an "immediate allocation scenario" via a reallocation of 10 MHz to each operator.
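A quick check of the spectrum arithmetic quoted above: with 10 MHz per operator, the "immediate allocation scenario" consumes exactly the 40 MHz that Free considers usable. The assumption that "each operator" means the four French mobile network operators is mine; the article only gives the per-operator block size and the usable-band estimates.

```python
# Back-of-the-envelope check of the L-band figures quoted in the article.
# The four-operator list is an assumption, not stated in the article.

HARMONISED_MHZ = 90     # band harmonised at European level since 2015
USABLE_ORANGE_MHZ = 85  # Orange's estimate of the usable sub-band
USABLE_FREE_MHZ = 40    # Free's stricter estimate ("mature ecosystem")
BLOCK_MHZ = 10          # reallocation scenario: 10 MHz per operator

operators = ["Orange", "SFR", "Bouygues Telecom", "Free"]  # assumed list
total_allocated = BLOCK_MHZ * len(operators)

print(total_allocated)                      # 40
print(total_allocated <= USABLE_FREE_MHZ)   # True: fits even Free's estimate
print(total_allocated <= USABLE_ORANGE_MHZ) # True, with 45 MHz to spare
```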

 

A scenario that Arcep will not retain, but which illustrates the operators' appetite for this band, to the great displeasure of its current tenants. The latter would have to be rehoused elsewhere, particularly in the 6 GHz band.

Most of these actors are industrial players who express doubts about Arcep's decision and its implications for their own activities and finances. EDF, for example, questions the economic viability of the migration: "the estimated time required to replace 1.4 GHz links, without significantly impacting the company's performance, is around ten years".

Especially since the timetable imposed by the telecoms regulator is already making the actors concerned shudder. For Enedis, the deadlines proposed jointly by Brussels and Arcep "do not take into account this specific framework for the use of the 1.4 GHz band by Enedis, nor the current limits or the constraints imposed by the alternative solutions". One of the alternatives proposed by Arcep would even involve rebuilding a large part of its network.

The public authorities also seem hesitant, such as the Ministry of Transport, for which the proposed timetable cannot be met. Hence the Ministry's request to maintain the current network "at least until 2027, knowing that if studies show that it is possible to have the future network available earlier, the network can be shut down before that date".

The current tenants propose other solutions, such as establishing a "cohabitation context". This would allow L-band frequencies to be allocated to operators in dense urban areas, while other actors "continue to use microwave links in rural areas, which are less likely to be targeted by the need for SDL".

 

 

 Read the article

 

Source : ZDNet
