Roger Kay, Author at eWEEK (https://www.eweek.com/author/roger-kay/)

New President Will Need to Scrutinize U.S.-China Relations for IT
Jan. 20, 2021 | https://www.eweek.com/it-management/new-president-will-need-to-scrutinize-u-s-china-relations-for-it/

The algorithm Donald Trump applied to the policies of his predecessor, Barack Obama, was simple: anything Obama had done, Trump rescinded, canceled or did the opposite. Joe Biden might be tempted to play turnabout with Trump, and in many cases that would make a lot of sense. In matters relating to areas such as the Paris Agreement on climate change, policies with respect to immigrants and adherence to the Foreign Emoluments Clause of the Constitution, Biden would be perfectly correct in reversing course.

But, from an industrial policy perspective, it would be worth examining closely how to deal with China. China was one of the large national rivals that took advantage during the past four years of the Trump administration’s inability to keep its eye on the ball. Russia found our blind spots like an expert squash player dropping a shot where his opponent isn’t, using Trump’s (almost) inexplicable softness toward Vladimir Putin to promote its agenda. While we were watching for election disruption, Russian hackers popped open SolarWinds like a can of Mountain Dew and entered the computer networks of thousands of organizations. Iran and North Korea edged merrily toward greater nuclear capabilities. But China, among them all, boldly and directly, went about the business of displacing the United States in as many domains as possible, maneuvering itself into position to become the next great industrial and military superpower.

China: Long an IT factory for the U.S.

Nowhere was this propensity more in evidence than in the domain of technology, where China has long been a factory for the United States and others, making what we design. That relationship was deeply interwoven when Trump took office, and he and his trade representatives spent a lot of time and energy tearing it apart. Luckily, individuals such as Apple CEO Tim Cook knew how to hit just the right soothing notes when speaking to Trump and managed to talk him down from some of the most potentially damaging moves. But in general, trade policy devolved into an escalating tariff war, whose main consequence was to slow down the velocity of the technology industry, particularly the sectors involved in hardware manufacturing.

It might be tempting to go back to the good old days and just tie back together those frayed trade links. The industries in both countries benefited handsomely from the arrangement and likely would again.

And yet, something was never quite right about the China relationship, even in the supposedly good old days. It was rather one-sided. The Chinese government often required U.S. companies to give their Chinese business partners a majority stake in joint ventures and sometimes also stipulated technology transfer. When IBM created the OpenPower Foundation in 2013, a raft of Chinese companies signed up as members. IBM was no longer able to support its own silicon development and so essentially gave away powerful processor technology to anyone capable of running with the ball. Since then, Power technology has become a core component of the Made in China 2025 plan.

Although recent reports indicate that Chinese firms are experiencing setbacks in their pursuit of technological independence, the United States can take only cold comfort there. China is far more unified in its quest than any Western nation. Its government, industry and financial sector are all working together to learn from these impediments and move on.

Coming 5G deployments make the stakes even higher

So, should we cooperate? Shun? Compete? Some combination? There's quite a lot of devilry in the details, and the stakes are particularly high on the eve of widespread deployment of 5G wireless communications technology. The United States has a leadership position in 5G. But so does China. U.S.-based Qualcomm is clearly the king of 5G handset chips, supplying everyone from Apple to Samsung. But China-based Huawei is the leader in 5G base stations, and not just in China. Germany is investing heavily in Huawei equipment, and other nations want in as well.

Striking the right balance will be a bit of a trick.

How do we make sure we’re protecting our intellectual property without choking off our markets? We don’t want to throw out the grain with the chaff. We don’t want to make it hard to sell chips to China on the eve of the next big telecommunications upgrade. Huawei wants to buy 5G products from Qualcomm, which sold the Chinese company a big pile of 4G products and has a long relationship there. If Huawei becomes a total pariah in the eyes of the U.S. government, it’s U.S. companies that will be left out in the cold. Huawei will simply buy 5G products from Samsung.

But this story is much bigger than Huawei. It’s about a clash between two governments and their differing approaches to industrial policy. From our perspective, the Chinese government has not played by the rules. But viewed from the perspective of international rivalry, Huawei belongs to the same class of “frenemy” as Samsung.

Meanwhile, economically, China is on a roll. Everyone except China is reeling from COVID-19. Having used its centrally controlled society to effect a hard shutdown early, China is nearly back to normal. This is not a market for the U.S. to let get away.

Chip value still resides in the U.S.

So, where should we end up in all this?

I would say that we’re actually in a pretty good position if we play our cards right. In the chip industry, most of the value is still created in the United States. Most U.S. firms have moved to a “fabless” model, wherein they do the design work and leave the manufacturing to someone else, often Taiwan Semiconductor Manufacturing Company (TSMC). Even TSMC recently agreed to build a leading-node factory here in the United States. It’s the tens of thousands of U.S.-based chip designers that set us apart from other nations, with something like 80% of the value in a chip being created right here.

The technology revolution led by 5G will effect a cascade of changes throughout society, from the reinvention of health care as we emerge from the pandemic to bringing back jobs for small and midsize businesses. It's technology that will enable all those new jobs and business opportunities, and 5G will help keep the economy going during the pandemic.

We do have to think carefully about how the supply chain operates. While we don’t want to be over-invested in end-stage production, there is a national security component to letting others make our chips. We don’t want to be beholden to Asia, given the current trade war, pandemic and the unknowable outcome of the conflict between Taiwan and China. From that perspective, it would be great to bring some of our chip manufacturing home. At this time, Intel is the only semiconductor firm still making large quantities of chips on U.S. soil.

Innovative companies need to be protected

From a policy perspective, the new administration needs to think in terms of bringing back the knowledge economy as everything moves to digitization and connection. The government needs to nurture not just 5G, but a wide range of technologies, in which we can be the innovators, building new platforms and ecosystems that allow us to grow despite the competition.

We’re not going to win building cheap chip businesses against low-labor-cost Asian industries that accept lower margins. We need to cultivate our leadership in the innovation economy, paving a path for workers in the millennial generation and beyond.

We have the advantage here, but we need to maintain it by protecting innovative companies while remaining vigilant against foreign competitors that often have explicit backing from their own governments.

Roger Kay is affiliated with PUND-IT Inc. and a longtime independent IT analyst.

Why WiFi 6E is a Much Bigger Deal Than the Name Suggests
Jan. 5, 2021 | https://www.eweek.com/networking/why-wifi-6e-is-a-much-bigger-deal-than-the-name-suggests/

Like many shoemakers, I should be ashamed that my children are running around barefoot. Actually, no–that’s a metaphor. It’s my communications infrastructure that’s running around barefoot. I’m a tech analyst with an embarrassingly ancient network and connection to the outside world.

How did it get this way? Neglect and time, as always, were big factors. Laziness was also in there somewhere. Hey, it was great in 2005 when I got Comcast cable. A few years later, I threw in an Arris cable modem and a LinkSys switch. Although most things in my office are connected by wired Ethernet (what's that?), the wireless networks around the house are a hodgepodge of power-line distribution and 2.4GHz and 5GHz access points of venerable vintage. Guest and homey networks live side by side, and voluminous SSIDs (network names) abound on any client device hapless enough to view "available networks," which include all of my neighbors' as well as my own wide array.

So, what’s with the confessional? And why now? Because now I’m determined to fix all this with a major upgrade, and it’s okay to admit you’re a drunk if you have a program to get sober. Which begs the question, when, exactly, am I going to undertake this life-changing transformation?

How to complete your dream network

Ay, and there’s the rub. There’s a piece of the puzzle, unavailable today, that will complete this dream network. That piece is based on products running WiFi 6E, a standard only recently ratified by the WiFi Alliance, the worldwide group that agrees on international WiFi standards. Products based on WiFi 6E are expected in just one week. Vendors will announce them at the Consumer Electronics Show (CES), a virtual version of which starts Jan. 11. Once those products are in, I’ll have everything I need.

What will the new network look like? First, I'll swap out cable for Verizon FIOS. Verizon ran optical fiber through my neighborhood a full decade ago and has been pelting me ever since with invitations, which have often included one-time incentives of up to $500. These pitches I steadfastly ignored, until now. An internet connection is only as good as its weakest link, including, potentially, the server at the other end, but why not have the fastest "last 100 feet" you can?

And then comes the magic: even though the FIOS connection will go into another part of the house from where the cable now enters, most internal distribution from there will be done by a WiFi 6E wireless mesh network. It doesn't matter where the wide-area connection is. Distribution will be equally good in every part of the house with mesh. Mesh is basically a grid of like access points that talk with each other, share traffic and route it the best way. With, I think, three mesh nodes, my 6E access points will act as a collective network. With one SSID.

We have mesh networks today, you say? Yes, but nothing like these mesh networks. Actually, they’re really similar. The new WiFi 6E protocols are exactly the same as existing WiFi 6 (formally called 802.11ax). That’s why it’s 6E and not 7, which is coming later, like 2024. The big difference is that WiFi 6E will make use of a whole new band of unlicensed spectrum: the 6GHz band. WiFi 6 operates in two bands: 2.4GHz and 5GHz. WiFi 6E does everything WiFi 6 does, but in three bands: 2.4GHz, 5GHz, and 6GHz. When some vendors claim “tri-band” now, what they mean is three radios (e.g., one 2.4GHz and two 5GHz, one of which is used for backhaul).

Three different but cohesive bands

This is not that. This is three different bands.

What’s key about that is that, in older three-radio systems, client traffic (to and from endpoints) shares the same band with backhaul traffic (flowing among access points). With true tri-band systems, all the backhaul traffic will use the 6GHz band. Thus, even if you have no 6GHz endpoints (i.e., phones, computers, voice assistants, home automation devices), your client traffic gets a whole band or bands to itself, since all the housekeeping has been shipped off to 6GHz.

If you’re old enough, you will recall that once upon a time, WiFi operated only in the 2.4GHz band — along with microwave ovens and other appliances. In a sense, that band was already crowded the day it was introduced. When 5GHz came online in 1999, it was a whole new, much bigger band with little traffic. Routers and devices taking advantage of 802.11ac, which used the 5GHz band, had the freeway mostly to themselves.

But by 2015, particularly in dense urban environments, the 5GHz band was chockablock with traffic. It took promoters three years to convince enough national authorities around the world to free up the 6GHz band for unlicensed (WiFi, as opposed to cellular) use. The thing about this band is not only that it’s fresh and free of traffic. It’s also much wider than even 5GHz: 1.2GHz wide, and it can be divided into seven big 160MHz channels or up to 59 20MHz channels. The existing 5GHz band has only 500MHz of spectrum, less than half, yielding potentially only 25 20MHz channels. And 2.4GHz has even less. So, 6GHz: a big fat band with channels wide enough for the whole family. At the same time.
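The channel arithmetic above is easy to reproduce. Here is a minimal Python sketch, using the spectrum widths cited in this article (about 1,200 MHz at 6GHz and 500 MHz at 5GHz) and ignoring the guard space that real channel plans reserve:

```python
# Rough channel arithmetic for the unlicensed bands discussed above.
# Spectrum widths follow the article (6GHz: ~1,200 MHz; 5GHz: ~500 MHz);
# real channel plans vary by country and reserve guard space, which is
# why the article cites 59 rather than a clean 60 20MHz channels at 6GHz.

def channel_count(band_mhz: int, channel_mhz: int) -> int:
    """How many channels of a given width fit into a band, ignoring guard bands."""
    return band_mhz // channel_mhz

SIX_GHZ_MHZ = 1200
FIVE_GHZ_MHZ = 500

print("6GHz:", channel_count(SIX_GHZ_MHZ, 20), "x 20MHz or",
      channel_count(SIX_GHZ_MHZ, 160), "x 160MHz channels")
print("5GHz:", channel_count(FIVE_GHZ_MHZ, 20), "x 20MHz channels")
```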

WiFi 6E is more like an information superhighway

I’m juiced about my incipient new network. Between FIOS and 6GHz WiFi 6E, I’ll be flying. And that’s even before I have any devices (phones, etc.) that use 6GHz because the network itself will use the new superhighway, relieving lower bands of all the busy communication between nodes.

Of course, WiFi 6E will work with all my old WiFi devices. As I bring future WiFi 6E devices onto the network, they will be able to hop onto the new spectrum.

Here’s another cool thing about WiFi 6E. Because the 5GHz and 6GHz bands are near each other in the electromagnetic spectrum, the physical hardware (e.g., antennas) can be the same design for both. How handy for hardware OEMs currently making WiFi 6 products!

Good thing for them, too, because on Jan. 11, when the WiFi Alliance announces the finalization of the 6E standard at CES, manufacturers, who have had the completed spec for more than a month, will be ready to announce simultaneous product availability. In other words, they’ll be shipping in a week.

Image caption: Comparing the blue channel map on the right with the green and grey ones in the middle and on the left, you can see how much more bandwidth 6GHz will introduce to the WiFi experience. (Source: Qualcomm Technologies, Inc.)

Roger Kay, affiliated with PUND-IT Inc., is a longtime independent IT analyst and a regular contributor to eWEEK.

Pro Hearing Tech Reaches the Masses Through Earbuds
Oct. 26, 2020 | https://www.eweek.com/innovation/pro-hearing-tech-reaches-the-masses-through-ear-buds/

Qualcomm is a firehose of product and standards announcements these days. On Oct. 20, I think I received something like a dozen announcements of one sort or another. I’ve written before about how the company manages to maintain its focus, even when under great stress, and now that Qualcomm is out from under a heap of lawsuits, it’s back in full flower as a technological cornucopia.

Of all the possible choices of what to highlight from among various recent developments, I found the joint announcement by the company and a small firm named Jacoti to be the most compelling for the simple reason that it involves the technology of hearing, and, as mine gets progressively worse, the topic has increasingly attracted my attention.

Of European origin, Jacoti is a medical device manufacturer, software developer and web service provider staffed by audiologists and their fellow travelers. The partnership involves Jacoti adapting its medical-grade audio technology to run on Qualcomm silicon, specifically the QCC5100, the firm’s premium-tier, ultra-low-power system on a chip (SoC). 

Uses Bluetooth to do its work

The QCC5100 is primarily about handling Bluetooth, the personal-area communications technology often used for peripherals such as headsets, game controllers and speakers. However, it has a full stack, including system and application processors, system and developer digital signal processors, a memory controller and I/O handlers. In this application, the chip sits in small, comfortable earbuds that talk to a cell phone. The phone is used for setup. After that, the buds can run entirely in local mode, thus saving juice even beyond power-sipping Bluetooth.

When paired with a phone, the Jacoti buds do an admirable job of playing streaming audio and handling hands-free phone calls, but, for us oldsters, it’s the live conversation application we find most intriguing. Simply wearing the earbuds while dining out with friends will keep your mug from parading that glassy look of incomprehension so common among our cohort in busy venues. Yes, not plugged into anything at all, these buds can act as an ace hearing aid.

Describing what the buds do is easy. Actually getting them to do it took the work of audiologists with hundreds of years of collective clinical and research experience. When you first get them, you charge them, put them on, connect them to your phone and let the setup take you through an audiological test, which treats each ear as its own separate subject. 

Jacoti’s cloud component then takes that data and creates your audio profile, per ear, which is downloaded onto the buds. Thereafter, they run with your custom profile. I have a hearing deficit in one ear or the other (maybe both) around 1,000 Hz, which just happens to be the register in which my wife most often speaks. It seems the little cilia on my cochlea, whose job it is to perceive that frequency, have simply died, whether from natural causes or from having been blown out by overuse.

Software uses smarts to adjust to user needs

If I were to undergo the Jacoti experience, the test would find this deficit and account for it in my profile. While performing noise cancellation and making other enhancements from the Qualcomm portfolio, the earbuds would use the Jacoti technology to boost the signal around 1,000 Hz. My experience would be normal—but somehow clearer—sound. Then, I would have no excuse not to take out the garbage.
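For the curious, here is a minimal sketch of that general idea: per ear, a gain curve boosts the frequency region where a test found a deficit. This is an illustration only, not Jacoti's or Qualcomm's implementation, which the article does not detail; the 12 dB boost and 200 Hz width are hypothetical values, and only the 1,000 Hz trouble spot comes from the text above.

```python
# Minimal sketch of profile-based amplification -- an illustration of the
# concept, not Jacoti's or Qualcomm's implementation.
import numpy as np

SAMPLE_RATE = 16_000  # Hz

def apply_profile(signal: np.ndarray, center_hz: float,
                  gain_db: float, width_hz: float) -> np.ndarray:
    """Boost the band around center_hz by gain_db, using a Gaussian-shaped
    gain curve applied in the frequency domain (one ear's profile)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    gain = 10 ** ((gain_db * np.exp(-((freqs - center_hz) ** 2) / (2 * width_hz ** 2))) / 20)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# A 1 kHz test tone (the register the author says he misses) plus a 300 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = 0.2 * np.sin(2 * np.pi * 1000 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)

boosted = apply_profile(audio, center_hz=1000, gain_db=12, width_hz=200)
print("RMS before:", np.sqrt(np.mean(audio ** 2)).round(3),
      "after:", np.sqrt(np.mean(boosted ** 2)).round(3))
```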

Despite some misgivings, I'm sure this improvement would be a net benefit in my life. But beyond that, Jacoti and Qualcomm have managed to bring medical-quality audio to lightweight, low-cost, low-power consumer devices, which will be a boon to a much wider audience. You need decent medical insurance to get a good hospital audio test of the sort Jacoti will run as a cloud service, and hearing aids cost a mint.

Jacoti hasn’t released pricing, but you can be pretty sure it will undercut what the professionals charge. Around the world, literally millions of people could benefit from audio technology heretofore out of reach.

Roger Kay is affiliated with PUND-IT Inc. and a longtime independent IT analyst.

How Intel Optane Helps Unify the Memory-Storage Pool
Sept. 7, 2020 | https://www.eweek.com/pc-hardware/how-intel-optane-helps-unify-the-memory-storage-pool/

Several years ago, a new memory type began to penetrate the market. Intel had been developing the technology later branded Intel Optane since 2012. Optane technology delivered technical advances on a number of fronts, with the result being a kind of non-volatile memory that was almost as fast as volatile working memory (dynamic random-access memory or DRAM) and also had the ability to retain data when power was turned off. DRAM stays “lit” with information only as long as electricity courses through it. In most computers, DRAM is stationed where, for example, a running program or working data sits.

While Optane can’t quite match DRAM in speed, it has the advantage of persistence. That is, it doesn’t need electricity to retain its state. Persistence is often associated with magnetism. Hard disk drives and tapes use a magnetic field to set a location to either a one or a zero. Most solid state drives on the market today use a technology that changes a voltage state to alter a bit’s numerical value. Optane technology can flip bits one at a time by changing their electrical resistance instead, a more efficient process. This bit-addressability—which, like DRAM architecture, allows random access—gives Optane a speed advantage over even today’s SSDs, which require reading and writing in blocks of data.

All these tricks don’t come for free, and so Optane is priced between faster, volatile DRAM and the slower, persistent NAND modules used in SSDs. In a certain sense, Optane is a hybridnot so much technologically, but on a feature basis.

Sits in the system between DRAM and NAND

From a market perspective, this combination of technical characteristics and economics has allowed Optane to insert itself into the memory/storage pool between DRAM and NAND. It fits nicely on a continuum of price and performance, helping smooth data’s pathway to and from increasingly distant portions of the pool. Optane provides a lower-cost alternative to expensive DRAM, allowing larger memory footprints for the same price, and much-faster-performing SSDs, which can act as fast storage caches in front of slower NAND storage. Thus, Optane fills a gap in the storage continuum between high-speed, expensive DRAM and less-expensive, slower NAND.

If the fastest, most expensive memory is right next to the central processor, the slowest, least expensive is far away.

At the very outer ring is good old reliable cheap magnetic tape, designed for huge reams of storage that few people want immediate access to. On the next ring in are found traditional magnetic hard drives with their rotating mechanical spindles. They are slow, but big and inexpensive. Good for long-term storage. Getting data from them is relatively easy if one is not in a hurry. Closer still, traditional NAND-based SSDs, faster and more expensive, can begin to participate in near-real-time analytics. These days, SSDs are freed from the communications constraint of the previous storage interface standard, SATA, which, while fast for its time, had become a bottleneck. Today's SSDs take advantage of the Non-Volatile Memory Express (NVMe) standard, which is faster than any connection in the system except the processor-memory link.

Then comes the Optane layer, which is really two layers, depending on the form factor.

SSDs based on Optane, using the fast NVMe channel, exceed what NAND-based SSDs can do. With this level of performance, Optane SSDs can greatly accelerate data access via a fast cache or high-speed storage tier. This capability is especially important for an online transaction processing (OLTP) system, in which access to data sets larger than the memory footprint is needed.

Also worth mentioning at this point is Optane drives’ endurance. With 20 times the life of a high-end enterprise class NAND SSD, Optane can perform many more read and write operations without wearing out, making it ideal for fast caching, which involves a constant flood of operations. A side benefit of this endurance is the ability to reduce the size of the caching layer because Optane doesn’t require the degree of over-provisioning necessary with NAND storage.
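A back-of-the-envelope way to see what that endurance buys is to multiply a drive's rated drive-writes-per-day (DWPD) by its capacity and warranty life. The baseline DWPD, capacity and warranty period below are hypothetical; only the 20x multiplier comes from the figure cited above.

```python
# Back-of-the-envelope endurance comparison, using the article's "20x" figure.
# The baseline DWPD rating, capacity and warranty period are hypothetical.

CAPACITY_TB = 1.0          # hypothetical cache-tier drive size
NAND_DWPD = 3.0            # hypothetical enterprise NAND endurance rating
OPTANE_MULTIPLIER = 20.0   # the 20x endurance figure cited above

def tb_written_over_life(dwpd: float, capacity_tb: float, years: float = 5.0) -> float:
    """Total terabytes a drive rated at `dwpd` can absorb over its warranty life."""
    return dwpd * capacity_tb * 365 * years

nand_total = tb_written_over_life(NAND_DWPD, CAPACITY_TB)
optane_total = tb_written_over_life(NAND_DWPD * OPTANE_MULTIPLIER, CAPACITY_TB)
print(f"NAND cache drive:   ~{nand_total:,.0f} TB written over 5 years")
print(f"Optane cache drive: ~{optane_total:,.0f} TB written over 5 years")
```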

Closer in still, memory modules made with Optane technology can participate in operations even more tightly coupled to the processor. They can do this by way of the memory bus, a dedicated high-speed connection that memory shares with the processor. With this extra scooch of performance, Optane memory can stretch the capacity of DRAM to encompass some of the more challenging data analysis problems, such as large in-memory databases like SAP HANA or Oracle for real-time analytics or artificial intelligence.

An additional advantage of Optane memory is its persistence. One might ask: Why is persistence interesting if it’s only honored in the breach? That is, as long as the electricity doesn’t fail, why do you need it? After all, DRAM doesn’t have persistence, and many real-time analytical programs run just fine in main memory. The answer is: There’s an additional performance benefit with persistent memory, which is that the system doesn’t have to take the time to offload and save back vital data that must be replicated just in case of power loss. This step can be skipped with non-volatile memory, which will keep the data even if the juice cuts out. Although Optane persistent memory products are relatively new to the market, they have already won an innovation award and set a performance record.
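To illustrate the shape of that "skip the save-back step" pattern, the sketch below memory-maps a file, updates a value in place and flushes only what changed, rather than serializing the whole working set. This is a generic mmap illustration with an assumed file name; real persistent-memory deployments typically go through a DAX-mounted filesystem or Intel's persistent-memory libraries rather than plain mmap.

```python
# Generic illustration of persistence without a separate "save the whole
# data set" step: update in place and flush only what changed. On real
# persistent-memory hardware this is typically done through a DAX
# filesystem or Intel's PMDK libraries; plain mmap is used here only to
# show the shape of the pattern.
import mmap
import os

PATH = "state.bin"   # hypothetical data file standing in for a pmem region
SIZE = 4096

# Create a fixed-size backing file once.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), SIZE)
    region[0:8] = (12345).to_bytes(8, "little")  # update a counter in place
    region.flush()                               # push the change to the backing store
    region.close()
```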

Optane = Low latency

The discussion of Optane would not be complete without a reference to latency. A terrific advantage of Optane is its low latency. If speed measures how fast data moves through a channel, latency refers to how long a request has to wait before receiving data, essentially the startup overhead of a data request. With its bit-addressability, Optane can deliver any size data request with scant delay. This capability is particularly important when many small data requests are made.

In NAND-based SSDs, which can address data only in blocks, this type of pattern quickly overwhelms the system’s responsiveness. By contrast, Optane SSDs deliver fast, consistent read latency, even under a heavy write load, a predictability associated with higher quality-of-service levels.
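A small worked example shows why latency, rather than raw bandwidth, dominates a flood of small requests. The latency and bandwidth figures below are round, illustrative assumptions, not measurements of any product.

```python
# Total time for N small random reads = N * (per-request latency + transfer time).
# The figures below are round, illustrative assumptions, not device measurements.

def total_time_ms(n_requests: int, request_bytes: int,
                  latency_us: float, bandwidth_mb_s: float) -> float:
    transfer_us = request_bytes / (bandwidth_mb_s * 1e6) * 1e6  # microseconds per request
    return n_requests * (latency_us + transfer_us) / 1000.0     # milliseconds total

N, SIZE = 100_000, 4096  # one hundred thousand 4 KB reads

print("higher-latency SSD:", round(total_time_ms(N, SIZE, latency_us=80.0, bandwidth_mb_s=3000), 1), "ms")
print("low-latency SSD:   ", round(total_time_ms(N, SIZE, latency_us=10.0, bandwidth_mb_s=2500), 1), "ms")
```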

Next inward is the DRAM ring. As noted earlier, DRAM is fast, but expensive and volatile. To some degree, its speed is throttled by the memory bus, which, although quite fast, is not the last word, because there are several more layers, all of them right on the processor die. These are the cache levels, up to three of them, which store temporary results from processor calculations. Relatively speaking, caches are small, super fast, very expensive, and fixed (their sizes are set in processor design and finalized in manufacturing).

Seen another way, the rings of the memory/storage pool can be represented as a pyramid, which gives a notion of the size of each tier. At the bottom is the largest, slowest, least-expensive-per-byte storage. At each level, quantity decreases while cost and performance rise.
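Since the pyramid graphic itself isn't reproduced here, a simple ordered structure can stand in for it. The latency figures are rough orders of magnitude for illustration, not vendor specifications.

```python
# The storage "rings"/pyramid described above, as a simple ordered structure.
# Latency figures are rough orders of magnitude for illustration only.

TIERS = [
    # (tier, typical access latency, persistent?)
    ("on-die CPU caches",        "~1-10 ns",    False),
    ("DRAM",                     "~100 ns",     False),
    ("Optane persistent memory", "~100s of ns", True),
    ("Optane SSD (NVMe)",        "~10 us",      True),
    ("NAND SSD (NVMe)",          "~100 us",     True),
    ("Magnetic hard drive",      "~ms",         True),
    ("Magnetic tape",            "seconds+",    True),
]

for name, latency, persistent in TIERS:
    kind = "persistent" if persistent else "volatile"
    print(f"{name:<26} {latency:<12} {kind}")
```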

With today’s storage elements, data can be moved up and down through the pyramid, depending on the degree to which it is needed for immediate computation. Intel has created tools to help application software engineers manage data location optimally.

Optane can deliver a big performance boost when used in front of a large array of magnetic storage. One example of an existing application of this type is SAP HANA. According to Intel executives, customers value Optane’s predictable performance, which consistently delivers a high quality of service per transaction.

In hyperconverged systems, where virtualization, compute, networking and storage subsystems can be configured by software, Optane provides a vital link between faster memory and slower magnetic storage, reducing system bottlenecks and allowing increased virtual machine density.

As mentioned earlier, the applications best able to take advantage of this smooth span of hierarchical memory and storage are in-memory analyses of large databases composed of mixed (structured and unstructured) elements. Today, such implementations are mostly found in the giant cloud service providers and the largest enterprises, which have the scale to obtain the maximum benefit. Some very large enterprise customers can also reap these benefits. At some point, the service providers may be able to provide access to smaller customers as a service.

Most large hardware OEMs are adopting Optane in their converged products. For example, Dell’s highest-end VxRail hyperconverged infrastructure products feature both Optane persistent memory and Optane SSDs.

While it is still early days for Optane in the market, its promise is such that proliferation of Optane-enhanced systems is likely. More enterprises of all sizes will seek to distill instant wisdom from large, disparate sources of real-time data, and those unable to create and manage such hyperconverged systems themselves will likely turn to service providers for the capability.

Roger Kay is affiliated with PUND-IT Inc. and a longtime independent IT analyst.
