MARANELLO, Italy — Computing and personal transportation are permanently entwined. Computer-aided design, geopositioning systems, crash sensors, system monitoring and hundreds of other computing applications have been standard in the automotive world for years.
However, few automakers are more deeply tied to computing than Italy’s Ferrari, which makes its living on high-performance auto racing and sales of its world-renowned sports cars.
Ferrari not only employs data centers for its corporate business, but it also has a state-of-the-art, high-performance data center dedicated to its racing division. Racing is, after all, Ferrari’s lifeblood and passion, and this facility — called the F1 Data Center — on the impressive Ferrari design and manufacturing campus here represents a true convergence of science, industry know-how and passion for speed.
Ferrari is reserved about divulging technical details about the center, but it is of medium size, approximately 2,500 square feet in area, and loaded with about 60 racks of IBM, Sun Microsystems and Hewlett-Packard servers and tiered storage arrays. The power protection equipment consists of APC Symmetra PX UPS systems, modular three-phase PDUs and InRow coolers (detailed in the fast facts below). Both air and liquid cooling are used; the room itself is kept at about 23 degrees Celsius (roughly 73 degrees Fahrenheit).
The F1 center, located in an original Ferrari company building constructed in the 1940s but completely modernized by APC, takes an enormous amount of power to operate, so the power equipment had better be top-flight. Piergiorgio Grossi, CIO of Ferrari F1, told eWEEK that most of the racks average about 20 kilowatts of power draw.
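To put those figures in perspective, here is a rough back-of-the-envelope sketch. The rack count and per-rack draw come from the article itself; the cooling-overhead multiplier (PUE) is an assumed value for illustration only, not a number Ferrari has disclosed.

```python
# Back-of-the-envelope estimate of the F1 Data Center's electrical load.
# Figures from the article: ~60 racks averaging ~20 kW each.
# ASSUMED_PUE is a hypothetical power-usage-effectiveness value, not Ferrari's.

RACKS = 60              # approximate rack count reported
KW_PER_RACK = 20.0      # average draw per rack, per Ferrari F1's CIO
ASSUMED_PUE = 1.7       # assumed cooling/overhead multiplier

it_load_kw = RACKS * KW_PER_RACK
facility_kw = it_load_kw * ASSUMED_PUE

print(f"IT load:             {it_load_kw:,.0f} kW")   # ~1,200 kW
print(f"Total facility load: {facility_kw:,.0f} kW")  # ~2,040 kW under the assumed PUE
```

Even under conservative assumptions, that is megawatt-class power in a 2,500-square-foot room, which is why Grossi’s quip below is only half a joke.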
“This is not exactly what you call ‘green computing,’” Grossi said with a smile. “High-performance computing, of course, is a different animal. But we still make sure that we are as efficient as possible in the way we use computing.”
This is an extremely well-organized and well-run data center; during eWEEK’s visit, it literally hummed. Not a cable appeared out of place, or even a millimeter too long or too short, and none of the units appeared to be down or operating at less than full capacity.
“What’s amazing to me about how Ferrari is using this is that in the three years they have been up and running [24/7], they have by now completely swapped out every disk in the place — with only one incident of downtime,” Rob Bunger, a marketing director for APC, said.
Why would an automobile company need this kind of high-performance computing firehose?
The F1 center handles all the aerodynamic designs, all the engine designs and dynamics, power and exhaust monitoring, wind-testing data analysis, and hundreds of other technical duties. The list of computational needs is lengthy, and the computations continue around the clock. Cars are constantly being redesigned, retooled and reprovisioned; Ferrari is clearly obsessive in its pursuit of perfection.
When you hear the cars running the test track just down the road here in Maranello, you begin to see the results of this obsession. The cars are so well-engineered, tuned and driven that they do not sound like cars at all; they are so high-pitched that they sound like hummingbirds or mosquitoes whizzing by your ear.
Something that sets Ferrari and about 10 other competing international companies (including Mercedes, Toyota, BMW and Renault) apart is that several times per year (18 times in 2009, to be exact) the company takes literally to the road for the Formula One Grand Prix road racing season. The season starts March 29 at the Australian Grand Prix in Melbourne.
This requires some challenging work on the part of the company’s IT staff, about which most race fans have no idea.
At those 18 locations, in places as diverse as Malaysia, Brazil, France, Japan and Australia, the Ferrari data center team will set up a temporary data center on site that works in real time during the race, linked directly back to the F1 Data Center here in Maranello.
The traveling IT operation consists of about 150 crew members and seven large trucks’ worth of servers, power supplies and everything else that goes with them. It is no simple chore to move this high-tech entourage from one continent to another.
What does this IT setup do during the race, which can last from two to three hours or more, depending upon weather and other conditions? Four men per car (Ferrari typically fields two cars in each race) hunker down at their workstations for the duration of the race, staying in constant contact with the driver and informing him about wind velocity changes, fuel consumption, tire pressure, engine temperatures, oil pressure and a hundred other measures of the race.
The IT crew cannot actually make any changes to the car once the race has started; only the driver can make adjustments. But this is the level of sophistication a company such as Ferrari must attain to gain a competitive edge and stay among the world’s race leaders.
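To give a flavor of what those race-day workstations are doing, here is a minimal, hypothetical sketch of the kind of rule-based telemetry check an engineer might run against the live data stream. The channel names, limits and structure are invented for illustration; Ferrari has not disclosed how its actual telemetry software works.

```python
# Hypothetical sketch of trackside telemetry monitoring.
# Channel names and limits are invented for illustration; they are not
# Ferrari's actual systems or values.
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    lap: int
    fuel_kg: float           # fuel remaining
    tire_pressure_bar: float
    engine_temp_c: float
    oil_pressure_bar: float

# Assumed alert limits per channel: (minimum, maximum); None means unbounded.
LIMITS = {
    "fuel_kg":           (5.0, None),    # warn when fuel runs low
    "tire_pressure_bar": (1.1, 1.5),
    "engine_temp_c":     (None, 115.0),
    "oil_pressure_bar":  (3.5, None),
}

def check(sample: TelemetrySample) -> list[str]:
    """Return a human-readable warning for every channel out of limits."""
    warnings = []
    for channel, (lo, hi) in LIMITS.items():
        value = getattr(sample, channel)
        if lo is not None and value < lo:
            warnings.append(f"lap {sample.lap}: {channel} low ({value})")
        if hi is not None and value > hi:
            warnings.append(f"lap {sample.lap}: {channel} high ({value})")
    return warnings

# Simulated sample; in reality this would arrive over the car's radio link.
sample = TelemetrySample(lap=42, fuel_kg=4.2, tire_pressure_bar=1.3,
                         engine_temp_c=118.0, oil_pressure_bar=4.0)
for warning in check(sample):
    print(warning)   # engineers relay warnings like these to the driver
```

The real systems ingest hundreds of channels at high frequency; the point of the sketch is simply that the engineers watch limits and relay warnings, since only the driver can act on them.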
It paid off in 2008, when Ferrari won the Formula One constructors’ championship.
By the way, Ferrari has been so satisfied with the racing data center’s performance over the last three years that it now intends to rebuild its entire corporate IT system on the F1 model.
Fast Facts on the F1 Data Center at the Ferrari Campus
Server racks:
–A mix of IBM, Sun Microsystems and Hewlett-Packard servers running ZFS, Windows and AIX; Sun StorageTek arrays, IBM BladeCenter S storage and HP ProLiant Data Protection Storage Servers.
Power management equipment:
–Symmetra PX 80kW N+1 UPS systems configured as 2N (System + System), also called dual feed: two independent UPS systems, each sized to carry the full load, feeding redundant A and B power paths. Several zones of these UPS systems are deployed.
–InfraStruXure power distribution: preconfigured distribution that carries power from the UPS to the rack enclosures, each of which contains metered rack PDUs.
–NetShelter SX rack enclosures & APC cable management accessories
–InRow chilled-water air conditioners, APC’s first generation of row-based cooling. Row-based cooling supports much higher densities than raised-floor cooling and is more efficient because of its shorter airflow path.
–Open-row cooling for medium-density zones.
–Hot-aisle containment for the high-density zone, which measured about 80 degrees F (27 degrees C) inside during the visit.
–Since the original commissioning, the data center has gone through a few IT refreshes and an expansion of its cooling and power capability. During this period, Ferrari shut down a portion of the data center only once.
–The expansion added APC’s newer InRow RP chilled-water cooling unit, which packs much higher capacity into a smaller space than the original InRow FM. Ferrari also added a UPS and distribution system newly available in Europe, the Symmetra PX 160kW, which doubles the power of the original Symmetra PX 80kW in the same footprint. Part of this system is Modular Power Distribution, a touch-safe, hot-pluggable breaker/cable system for distributing power to the rack. It was recently released in the United States but has been available in Europe longer.
–The high-density zone has several racks full of HP blade servers that can draw 20kW per rack. During the visit, the metered rack PDUs indicated a draw of about 16.5kW per rack (see the sketch after this list).
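As a quick illustration of what those metered readings mean, the sketch below computes the utilization and headroom implied by the article’s figures. The 85 percent alert threshold is an assumed policy for illustration, not a value from APC or Ferrari.

```python
# Headroom check for a high-density rack, using figures from the article.
# ALERT_THRESHOLD is an assumed illustrative policy, not APC's or Ferrari's.

RACK_CAPACITY_KW = 20.0   # rated draw of a rack of HP blades (per the article)
MEASURED_KW = 16.5        # reading from the metered rack PDU during the visit
ALERT_THRESHOLD = 0.85    # hypothetical utilization level that triggers review

utilization = MEASURED_KW / RACK_CAPACITY_KW
headroom_kw = RACK_CAPACITY_KW - MEASURED_KW

print(f"Utilization: {utilization:.1%}")    # 82.5%
print(f"Headroom:    {headroom_kw:.1f} kW") # 3.5 kW
if utilization > ALERT_THRESHOLD:
    print("Rack approaching capacity; plan load balancing.")
```

At 82.5 percent utilization, those racks are running close to, but still under, their rated capacity.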
Ferrari was an early adopter of row-based cooling. Not only has the company reliably powered and cooled these high-density loads, but it has also been able to grow through IT refreshes and improvements to the power and cooling apparatus with minimal disruption to its operations. Standardized solutions and excellent operational procedures make this possible.