The post Google Cloud vs AWS 2022: Compare Features, Pricing, Pros & Cons appeared first on eWEEK.
Public cloud service providers such as Amazon Web Services, Microsoft Azure, Google, IBM, Dell EMC, Salesforce and Oracle keep making it easier for customers to come and go, and to add or subtract computing capacity or applications as needed. These and other providers also keep introducing new and more efficient services, many of which now include artificial intelligence options that make them more usable for technical and non-technical employees alike.
In this article, we take a close look at two of the three largest cloud services providers in the world: Amazon Web Services and Google Cloud Platform. eWEEK uses research from several sources, including individual analysts, TechnologyAdvice, Gartner, IDC, Capterra, IT Central Station, G2 and others.
Here we compare these two global cloud storage and computing services at a high level and in a few different ways, to help you decide which one is the most cost- and feature-efficient fit for your company.
To use an AWS service, users must sign up for an AWS account. After they have completed this process, they can launch any service under their account within Amazon’s stated limits, and these services are billed to their specific account. If needed, users can create billing accounts and then create sub-accounts that roll up to them. In this way, organizations can emulate a standard organizational billing structure.
Similarly, GCP requires users to set up a Google account to use its services. However, GCP organizes service usage by project rather than by account. In this model, users can create multiple, wholly separate projects under the same account. In an organizational setting, this model can be advantageous, allowing users to create project spaces for separate divisions or groups within a company. This model can also be useful for testing purposes: once a user is done with a project, he or she can delete the project, and all of the resources created by that project also will be deleted.
AWS and GCP both have default soft limits on their services for new accounts. These soft limits are not tied to technical limitations for a given service; instead, they are in place to help prevent fraudulent accounts from using excessive resources, and to limit risk for new users, keeping them from spending more than intended as they explore the platform. If you find that your application has outgrown these limits, AWS and GCP provide straightforward ways to get in touch with the appropriate internal teams to raise the limits on their services.
AWS and GCP each provide a command-line interface (CLI) for interacting with their services and resources. AWS provides the AWS CLI, and GCP provides the Cloud SDK. Each is a unified CLI for all services, and each is cross-platform, with binaries available for Windows, Linux and macOS. In addition, in GCP, you can use the Cloud SDK in your web browser through Google Cloud Shell.
AWS and GCP also provide web-based consoles that let users create, manage and monitor their resources. The AWS console is located at https://console.aws.amazon.com/, and the GCP console at https://console.cloud.google.com/.
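Both vendors also publish language SDKs for programmatic access. The following is a minimal sketch, not an official example, assuming the boto3 and google-cloud-storage Python packages are installed and that credentials have already been configured for each platform; it simply lists the storage buckets visible to the account or project.

```python
# Minimal sketch: list storage buckets on AWS and GCP.
# Assumes boto3 and google-cloud-storage are installed and that credentials
# are already configured (e.g., via `aws configure` and Application Default Credentials).
import boto3
from google.cloud import storage

# AWS: the S3 client picks up credentials and region from the standard AWS config.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("AWS bucket:", bucket["Name"])

# GCP: the storage client is scoped to a project rather than an account.
gcs = storage.Client()  # project is inferred from the environment
for bucket in gcs.list_buckets():
    print("GCP bucket:", bucket.name)
```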
One area where there is a notable difference between these two market leaders is pricing. AWS uses a pay-as-you-go model and charges customers per hour—and they pay for a full hour, even if they use only one minute of it. Google Cloud follows to-the-minute pricing.
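To make the difference concrete, here is a small worked example; the hourly rate is hypothetical and used only for the arithmetic, since real instance prices vary by machine type and region.

```python
import math

# Hypothetical rate, purely for illustration; real prices vary by instance type and region.
hourly_rate = 0.10      # dollars per instance-hour (assumed)
minutes_used = 90       # a workload that runs for an hour and a half

# Hourly billing rounds usage up to whole hours (2 hours in this case).
hourly_billed = math.ceil(minutes_used / 60) * hourly_rate

# Per-minute billing charges only for the minutes actually consumed.
per_minute_billed = (minutes_used / 60) * hourly_rate

print(f"Billed by the hour:   ${hourly_billed:.3f}")      # $0.200
print(f"Billed by the minute: ${per_minute_billed:.3f}")  # $0.150
```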
Many experts recommend that enterprises evaluate their public cloud needs on a case-by-case basis and match specific applications and workloads with the vendor that offers the best fit for their needs. Each of the leading vendors has particular strengths and weaknesses that make them a good choice for specific projects.
So, let’s get more specific.
For the past 15 years, Google has been building one of the fastest, most powerful, and highest-quality cloud infrastructures on the planet. Internally, Google itself uses this infrastructure for several high-traffic and global-scale services, including Gmail, Maps, YouTube and Search. Because of the size and scale of these services, Google has put a lot of work into optimizing its infrastructure and creating a suite of tools and services to manage it effectively. GCP puts this infrastructure and these management resources at users’ fingertips.
Google Cloud was developed by Google and launched in 2008. It was written in Java, C++, Python and Ruby, and it provides services across IaaS, PaaS and serverless platforms. Google Cloud is organized into different offerings, such as Google App Engine, Google Compute Engine, Google Cloud Datastore, Google Cloud Storage, Google BigQuery (for analytics) and Google Cloud SQL. Google Cloud Platform offers computing, storage, networking and database services.
It also offers different options for networking, such as Virtual Private Cloud, Cloud CDN, Cloud DNS, load balancing and other optional features, along with management of big data and Internet of things (IoT) workloads. Services such as Cloud Machine Learning Engine, Cloud Video Intelligence, the Cloud Speech API and the Cloud Vision API bring machine learning to Google Cloud. Suffice to say there are numerous options inside Google Cloud, which is most often used by developers, as opposed to line-of-business company employees.
Nearly all AWS products are deployed within regions located around the world. Each region comprises a group of data centers that are in relatively close proximity to each other. Amazon divides each region into two or more availability zones. Similarly, GCP divides its service availability into regions and zones that are located around the world. For a full mapping of GCP’s global regions and zones, see Cloud Locations.
In addition, some GCP services are located at a multi-regional level rather than the more granular regional or zonal levels. These services include Google App Engine and Google Cloud Storage. Currently, the available multi-regional locations are United States, Europe and Asia.
By design, each AWS region is isolated and independent from other AWS regions. This design helps ensure that the availability of one region doesn’t affect the availability of other regions, and that services within regions remain independent of each other. Similarly, GCP’s regions are isolated from each other for availability reasons. However, GCP has built-in functionality that enables regions to synchronize data across regions according to the needs of a given GCP service.
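For readers who want to see this layout programmatically, the short sketch below, assuming boto3 is installed and AWS credentials are configured, enumerates AWS regions and the availability zones in the currently configured region; GCP exposes equivalent information through its Cloud SDK and APIs.

```python
# Minimal sketch: enumerate AWS regions and the availability zones of the
# currently configured region. Assumes boto3 is installed and credentials exist.
import boto3

ec2 = boto3.client("ec2")

regions = ec2.describe_regions()["Regions"]
print("Regions:", sorted(r["RegionName"] for r in regions))

zones = ec2.describe_availability_zones()["AvailabilityZones"]
print("Zones in current region:", [z["ZoneName"] for z in zones])
```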
AWS and GCP both have points of presence (POPs) in many additional locations around the world. These POP locations help cache content closer to end users. However, each platform uses its respective POP locations in different ways:
GCP’s points of presence connect to data centers through Google-owned fiber. This unimpeded connection means that GCP-based applications have fast, reliable access to all of the services on GCP, Google said.
PROS: Users count heavily on Google’s engineering expertise. Google has an exemplary offering in application container deployments, since Google itself developed the Kubernetes app management standard that both AWS and Azure now offer. GCP specializes in high-end computing offerings such as big data, analytics and machine learning. It also provides considerable scale-out options and data load balancing; Google knows what fast data centers require and offers fast response times in all of its solutions.
CONS: Google is a distant third in market share (8 percent; AWS is at 33 percent, Azure at 16 percent), most likely because it doesn’t offer as many different services and features as AWS and Azure. It also doesn’t have as many global data centers as AWS or Azure, although it is quickly expanding. Gartner said that its “clients typically choose GCP as a secondary provider rather than a strategic provider, though GCP is increasingly chosen as a strategic alternative to AWS by customers whose businesses compete with Amazon, and that are more open-source-centric or DevOps-centric, and thus are less well-aligned to Microsoft Azure.”
This is a high-level comparison of two of the top three major cloud service leaders here in 2021. We will be updating this article with new information as it becomes available, and eWEEK will also be examining in closer detail the various services—computing, storage, networking and tools—that each vendor offers.
Amazon Web Services (AWS) is a cloud service platform from Amazon that provides services in domains such as compute, storage, content delivery and other functionality that help businesses scale and grow. These domains are packaged as services that can be used to create and deploy different types of applications in the cloud or to migrate existing apps to the AWS cloud. The services are designed to work with each other and produce scalable, efficient outcomes. AWS services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). AWS launched in 2006 and became the most widely purchased of the currently available cloud platforms. Cloud platforms offer various advantages, such as reduced management overhead, cost minimization and many others.
PROS: Amazon’s single biggest strength turned out to be the fact that it was first to market in 2006 and didn’t have any serious competition for more than two years. It sustains this leadership by continuing to invest heavily in its data centers and solutions, which is why it dominates the public cloud market. Gartner Research reported in its Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, that “AWS has been the market share leader in cloud IaaS for over 10 years.” In fact, AWS has been the world leader for closer to 15 years, ever since it first launched S3 (Simple Storage Service) in 2006.
Part of the reason for its popularity is certainly the massive scope of its global operations. AWS has a huge and growing array of available services, as well as the most comprehensive network of worldwide data centers. Gartner has described AWS as “the most mature, enterprise-ready (cloud services) provider, with the deepest capabilities for governing a large number of users and resources.”
CONS: Cost and data access are Amazon’s Achilles heels. While AWS regularly lowers its prices (it has lowered them more than 80 times in the last several years, which probably means they were too high to begin with), many enterprises find it difficult to understand the company’s cost structure. They also have a hard time managing these costs effectively when running a high volume of workloads on the service. And customers, beware: Be sure you understand the costs of extracting data and files once they are in AWS’s storage control. AWS will explain it all upfront for you, but know that it’s a lot easier to start a process, upload files into the AWS cloud and access apps and services than it is to find the data and files you need and move them to another server or storage array.
In general, however, these cons are outweighed by Amazon’s strengths, as shown by the organizations of all sizes that continue to use AWS for a wide variety of workloads.
Go here to see eWEEK’s listing of the Top Cloud Computing Companies.
Go here to read eWEEK’s Top Cloud Storage Companies list.
This article is an update of a previous eWEEK study by Chris Preimesberger from 2019.
The post IBM Boosts Its Hybrid Cloud with New Power Systems, Red Hat Features appeared first on eWEEK.
IBM is the only systems vendor still developing its own silicon (Oracle might disagree, but its SPARC CPUs haven’t been updated since the arrival of the M8 in 2017) and optimizing the resulting servers for hybrid clouds. Additionally, IBM has sizable portfolios of home-grown enterprise operating systems (AIX, IBM i and z/OS), middleware and business applications it can bring to bear for cloud-based services. Finally, the company’s decades-long support of Linux (the lingua franca of cloud) has resulted in strategic partnerships with major open-source vendors, as well as IBM’s 2019 acquisition of Red Hat, which has its own substantial cloud-enabling technologies and services.
What this all means for enterprise customers was made abundantly clear in the new Power Systems and Red Hat offerings IBM introduced this week. Let’s consider that announcement.
On the hardware side, IBM introduced two Power Systems offerings:
IBM has also extended its Power Private Cloud with a Dynamic Capacity function, which enables customers using Power Systems Private Cloud solutions to unlock additional compute cores as needed and get cloud-like, consumption-based pricing. IBM is extending that ability to hybrid cloud environments with hybrid capacity credits, which can be purchased and used to unlock capacity in on-premises IBM POWER9-based servers and in IBM Power Virtual Servers on IBM Cloud. The company is also working with ecosystem partners to extend dynamic capacity across multiple Linux distributions.
Finally, IBM announced that AIX 7.3 (which is planned for GA in Q4 2021) will feature new continuous computing, scalability, security and automation capabilities, including some designed specifically for hybrid cloud environments.
IBM has also expanded Red Hat capabilities on Power Systems solutions. They include:
Many tech vendors “go where they know” in terms of cloud computing, providing solutions designed to address narrowly focused solutions or highly specific use cases. In contrast, IBM knows where it’s going in relation to virtually any hybrid cloud destination. The company’s deep experience in and broad array of silicon, server, storage, networking, OS, middleware, software, developer and open source technologies means that it can assist cloud-bound customers with whatever goals they aim to achieve or challenges they encounter.
These new and improved Power Systems and Red Hat solutions are merely the latest examples of the company’s clear-eyed focus on hybrid cloud. We expect IBM to continue delivering powerful, useful hybrid cloud solutions for many years to come.
Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. He is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics. © 2020 Pund-IT, Inc. All rights reserved.
The post How CEO Swan Set Up New CEO Gelsinger for Future Success at Intel appeared first on eWEEK.
And Pat isn’t standing pat. He is already recruiting ex-Intel superstars in a very unusual move that should be considered a best practice. I would argue this move supports Gelsinger being named Glassdoor’s CEO of the year.
Turnaround CEOs are those brought in to fix a company so severely broken that it can’t rely on the succession plan to replace a poorly performing CEO. They tend to come in two classes: those who prep a company for sale, pretty much gutting it to make its financials look better; and those who fix the company’s structure so that it again can be successful long term. I worked for Louis Gerstner at IBM, one of the latter, and watched those in the former group destroy companies. Fortunately, Bob Swan was one of the excellent turnaround CEOs. In record time, he got Intel back into shape so that a more traditional strategic CEO like Gelsinger could take over and assure the firm’s long-term future.
The turnaround process is critical because turnaround CEOs are very different from operational CEOs. Think of this in terms of auto racing, when you have a car that just isn’t working. You could get the best driver on the planet, and you’d still lose. Your first job is to fix the car so it works, then you can get a top driver to win the race. Swan, in this analogy, is the mechanic; Gelsinger is the driver.
Even though he doesn’t start until next month, Gelsinger, who put in 30 years at Intel early in his career, has already started making waves by doing brilliant things.
Anyone who has ever raced professionally knows that to win, you need a champion crew. But one of the problems that Swan’s predecessor Brian Krzanich created was stripping the company of many of its most capable people. Now, typically (particularly at Intel), the new CEO wouldn’t look at former employees as potential resources. Common complaints are that they have been away too long, but the real reason is that they may upset others’ advancement, because the folks who were let go were more qualified than those who stayed. Equally common is that the employees who were let go represented a threat to some better-positioned senior manager who doesn’t want that threat to come back.
A new CEO needs to build a team loyal to him or her. However, if he builds it with new employees, the people below those employees may not be loyal, and there is an even greater learning curve for the resulting team. But if you bring back people who were poorly treated and take care of them, they’ll be loyal, the people below them will know and trust them, and they can hit the ground running because they know how to work at the firm. One of the significant impediments to a new employee is understanding how things uniquely work in the company, and an ex-employee doesn’t have that learning curve.
Gelsinger has raised some eyebrows by breaking with tradition and going after ex-employees, but I think this should be a sustaining practice. Intel has one of the strongest alumni associations of any firm, and that asset has been underutilized for virtually the entire history of the company. Pat’s moves here suggest that may change, and that change could further ensure Intel’s future.
Unlike what we just saw in Washington, where President Biden was handed a country badly broken by his predecessor, Bob Swan has done Pat Gelsinger a solid favor and given him a version of Intel that was vastly better than he found it. Gelsinger is already thinking out of the Intel box to ensure a positive outcome for his tenure and showcase that he is indeed the perfect CEO to take Intel into the future.
That is excellent news for Intel’s stockholders, employees and customers, and it showcases how things should be done.
Rob Enderle is a principal at Enderle Group. He is a nationally recognized analyst and a longtime contributor to eWEEK and Pund-IT. Enderle is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics.
—————————————————
Editor’s additional sidebar: President and Principal Analyst Patrick Moorhead of Moor Insights & Strategy adds some perspective into Intel’s quarterly earnings report here:
“Even in the midst of Intel’s 7nm manufacturing challenges, the company pulled off a phenomenal Q4, significantly exceeding expectations by $20B in revenue and $1.52 EPS. Tiger Lake demand looks strong on the PC side and I think, based on its 33% growth, likely gained market share. While the data center business did better than expected, it was weighed down by the cloud ingestion cycle, competition and continued decline in enterprise and government purchases. Mobileye was a huge standout, driving 39% quarterly revenue growth and 93% improvement in profitability. Mobileye is on its way to be an over-billion dollar annualized business, a real accomplishment. The company is forecasting a Q1 revenue decline, but keep in mind, that does not include memory business it is spinning off to Hynix. We’ll have to see if the company says anything about 7nm and outside foundry use, but it wasn’t mentioned in the release.”
The post New President Will Need to Scrutinize U.S.-China Relations for IT appeared first on eWEEK.
But, from an industrial policy perspective, it would be worth examining closely how to deal with China. China was one of the large national rivals that took advantage during the past four years of the Trump administration’s inability to keep its eye on the ball. Russia found our blind spots like an expert squash player dropping a shot where his opponent isn’t, using Trump’s (almost) inexplicable softness toward Vladimir Putin to promote its agenda. While we were watching for election disruption, Russian hackers popped open SolarWinds like a can of Mountain Dew and entered the computer networks of thousands of organizations. Iran and North Korea edged merrily toward greater nuclear capabilities. But China, among them all, boldly and directly, went about the business of displacing the United States in as many domains as possible, maneuvering itself into position to become the next great industrial and military superpower.
Nowhere was this propensity more in evidence than in the domain of technology, where China has long been a factory for the United States and others, making what we design. That relationship was deeply interwoven when Trump took office, and he and his trade representatives spent a lot of time and energy tearing it apart. Luckily, individuals such as Apple CEO Tim Cook knew how to hit just the right soothing notes when speaking to Trump and managed to talk him down from some of the most potentially damaging moves. But in general, trade policy devolved into an escalating tariff war, whose main consequence was to slow down the velocity of the technology industry, particularly the sectors involved in hardware manufacturing.
It might be tempting to go back to the good old days and just tie back together those frayed trade links. The industries in both countries benefited handsomely from the arrangement and likely would again.
And yet, something was never quite right about the China relationship, even in the supposedly good old days. It was rather one-sided. The Chinese government often required U.S. companies to give their Chinese business partners a majority stake in joint ventures and sometimes also stipulated technology transfer. When IBM created the OpenPower Foundation in 2013, a raft of Chinese companies signed up as members. IBM was no longer able to support its own silicon development and so essentially gave away powerful processor technology to anyone capable of running with the ball. Since then, Power technology has become a core component of the Made in China 2025 plan.
Although recent reports indicate that Chinese firms are experiencing setbacks in their pursuit of technological independence, the United States can take only cold comfort there. China is far more unified in its quest than any Western nation. Its government, industry and financial sector are all working together to learn from these impediments and move on.
So, should we cooperate? Shun? Compete? Some combination? There’s quite a lot of devilry in the details, and the stakes are particularly high on the eve of widespread deployment of 5G wireless communications technology. The United States has a leadership position in 5G. But so does China. U.S.-based Qualcomm is clearly the king of 5G handsets, supplying everyone from Apple to Samsung. But China-based Huawei is the leader in 5G base stations and not just in China. Germany is investing heavily in Huawei equipment, and other nations want in as well.
Striking the right balance will be a bit of a trick.
How do we make sure we’re protecting our intellectual property without choking off our markets? We don’t want to throw out the grain with the chaff. We don’t want to make it hard to sell chips to China on the eve of the next big telecommunications upgrade. Huawei wants to buy 5G products from Qualcomm, which sold the Chinese company a big pile of 4G products and has a long relationship there. If Huawei becomes a total pariah in the eyes of the U.S. government, it’s U.S. companies that will be left out in the cold. Huawei will simply buy 5G products from Samsung.
But this story is much bigger than Huawei. It’s about a clash between two governments and their differing approaches to industrial policy. From our perspective, the Chinese government has not played by the rules. But viewed from the perspective of international rivalry, Huawei belongs to the same class of “frenemy” as Samsung.
Meanwhile, economically, China is on a roll. Everyone except China is reeling from COVID-19. Having used its centrally controlled society to effect a hard shutdown early, China is nearly back to normal. This is not a market for the U.S. to let get away.
So, where should we end up in all this?
I would say that we’re actually in a pretty good position if we play our cards right. In the chip industry, most of the value is still created in the United States. Most U.S. firms have moved to a “fabless” model, wherein they do the design work and leave the manufacturing to someone else, often Taiwan Semiconductor Manufacturing Company (TSMC). Even TSMC recently agreed to build a leading-node factory here in the United States. It’s the tens of thousands of U.S.-based chip designers that set us apart from other nations, with something like 80% of the value in a chip being created right here.
The technology revolution led by 5G will affect a cascade of changes throughout society, from the reinvention of health care as we emerge from the pandemic to bringing back jobs for small and midsize businesses. It’s technology that will enable all those new jobs and business opportunities. 5G will enable the economy to keep going during the pandemic.
We do have to think carefully about how the supply chain operates. While we don’t want to be over-invested in end-stage production, there is a national security component to letting others make our chips. We don’t want to be beholden to Asia, given the current trade war, pandemic and the unknowable outcome of the conflict between Taiwan and China. From that perspective, it would be great to bring some of our chip manufacturing home. At this time, Intel is the only semiconductor firm still making large quantities of chips on U.S. soil.
From a policy perspective, the new administration needs to think in terms of bringing back the knowledge economy as everything moves to digitization and connection. The government needs to nurture not just 5G, but a wide range of technologies, in which we can be the innovators, building new platforms and ecosystems that allow us to grow despite the competition.
We’re not going to win building cheap chip businesses against low-labor-cost Asian industries that accept lower margins. We need to cultivate our leadership in the innovation economy, paving a path for workers in the millennial generation and beyond.
We have the advantage here, but we need to maintain it by protecting innovative companies while remaining vigilant against foreign competitors that often have explicit backing from their own governments.
Roger Kay is affiliated with PUND-IT Inc. and is a longtime independent IT analyst.
The post Top Data Center Managed Services Vendors for 2022 appeared first on eWEEK.
Analysts have estimated that about 50 percent of formerly “officed” employees started working from home in 2020 and that after the pandemic subsides, about the same number will continue to work from someplace other than their original offices. This all weighs heavily on both cloud and on-premises-based IT services, and somebody with a data center has to provide them.
The separate sub-categories of cloud managed services include:
A managed service provider of any type is a vendor that provides information technology services on a 24/7 contract basis.
Here are eWEEK’s Top Managed Service Vendors for data centers in 2021.
Armonk, N.Y.
IBM is, as usual, consistent and, well, big. Big Blue was one of the original IT data center services providers and has maintained its position as the worldwide market-share leader in revenue for more than a decade. It continues to provide the widest range of managed services offerings on the market, selling about $7.65 billion worth of them in 2019, according to company documents. IBM also provides a skills and training program through which IT professionals augment their existing skills and learn new ones.
Naturally, IBM recommends using its own hardware and software to implement this, although it does employ an open-standards approach that will take into account existing hardware and software investments by its customers.
Who uses it: Midrange to large enterprises
How it works: subscription cloud services, physical on-prem devices and services
Dublin, Ireland
Accenture is one of the most respected IT integrators and consultants in the world and has earned an excellent reputation for speed and quality. It is a global management consulting firm that offers a range of services and solutions in strategy, consulting, technology and operations. Accenture and IBM are the two largest and best-known management consultancies on this list. Accenture’s goal is to collaborate with clients to help them become high-performance businesses and governments.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical on-prem devices and services
Bangalore, India
While “outsourcing” may be considered a dirty word in some U.S. circles, Infosys doesn’t shy away from it. Infosys is a longtime international leader in consulting, technology outsourcing and next-generation services and is proud of it. Infosys says it provides enterprises with strategic insights on what lies ahead. The company enables clients in more than 50 countries; it claims its mission is to help enterprises renew themselves while also creating new avenues to generate value. Infosys claims to help enterprises transform in a changing world through strategic consulting, operational leadership and the co-creation of breakthrough solutions, including those in mobility, sustainability, big data and cloud computing.
Infosys is excellent at retaining its customers. More than 95 percent of its $45 billion in annual revenue comes from repeat business. Infosys has a growing global presence, with more than 187,000 employees.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services
Tokyo, Japan
Fujitsu has been among the top 10 data center managed services providers for most of the last two decades. It provides capabilities in various IT domains, such as IoT, edge computing, process automation, mobility and others. Fujitsu claims to provide stable service and strives to be a long-term business partner. Fujitsu is one of the leading Japanese information and communication technology (ICT) companies, offering a full range of technology products, solutions and services. Approximately 140,000 Fujitsu employees support customers in more than 100 countries. The company uses its experience and the power of ICT “to shape the future of society” with its customers.
Who uses it: Midrange companies to large enterprises
How it works: subscription cloud services, physical devices and services
Bezons, Ile-de-France, France
Atos is one of the youngest companies on this list and might be the most forward-thinking one as well. This, of course, comes as a benefit to a newer company that looks at the competition and endeavors to find its own solution improvements. Atos has moved into quantum computing with the launch of an Intel-based emulator; the idea is to use the emulator to train its coders in the skills that will be needed when actual quantum computers are used for many tasks. Although it may now be among the less-famous tech companies, it earned total revenues of $20 billion last year, which is reasonably high compared with others.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services
San Antonio, Texas
Founded in 1998, Rackspace has been there since the beginning of data center cloud services—in fact, since the ASP (application service provider) days. The company provides hybrid cloud-based services that enable businesses to run their workloads in a public or private cloud. Rackspace’s engineers deliver specialized expertise on top of leading technologies developed by OpenStack, Microsoft, VMware and others through a service known as Fanatical Support. It has more than 300,000 customers worldwide, including two-thirds of the Fortune 100.
Who uses it: SMBs to large enterprises
How it works: subscription cloud service, physical devices and services
Teaneck, N.J.
Cognizant turned 26 in 2020 and remains one of the world’s leading data center professional services providers—although it’s not as well known as Accenture, IBM and others. Its business is transforming clients’ business, operating and technology models for the digital era. Its industry-based, consultative approach helps clients envision, build and run more innovative and efficient data centers. Cognizant is ranked in the Fortune 200 and is consistently listed among the most admired companies in the world.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services
Mumbai, India
Tata Consultancy Services is a multinational information technology services, business solutions and consulting company that competes on a global level with all the major data center consultancies. TCS offers an integrated, consulting-led portfolio of IT-enabled services comprising application development and maintenance, business intelligence, enterprise solutions, engineering and industrial services, and infrastructure services delivered through its own Global Network Delivery Model. Based in Mumbai, India, it was founded in 1968.
The company has domain expertise in a wide set of industries, comprising banking and financial services, insurance, telecom, manufacturing, retail and distribution, high tech, life sciences, health care, transportation, energy and utilities, media and entertainment and others.
TCS operates on five continents, with North America and Europe constituting its largest markets. It derives more than 20 percent of its revenues from emerging markets such as India, Asia-Pacific, Latin America and the Middle East and Africa.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services
Wipro, the oldest of these data center service providers, was founded in 1945, right after World War II ended. It operates in a more diverse range of markets than many others, because it offers managed services, business automation and home automation—services that competitors do not necessarily offer. The India-based company generated revenue of about $9 billion in 2019. Over the next three years, the company is planning to go all-digital, with the CEO saying that 100 percent of the company’s resources are being allocated to the digital operations goal.
Who uses it: Midrange to large enterprises
How it works: subscription cloud services, physical devices and services
Jersey City, N.J.
Datapipe is a smaller MSP that offers managed hosting services and data centers for cloud computing and IT companies. The company, founded in 1998, offers a single-provider solution for managing and securing mission-critical IT services, including cloud computing, infrastructure as a service, platform as a service, co-location and data centers. Datapipe delivers those services from the world’s most influential technical and financial markets, including New York metro, Silicon Valley, London, Hong Kong and Shanghai.
Who uses it: SMBs to large enterprises
How it works: subscription cloud services, physical devices and services
Redwood City, Calif.
Equinix provides data center services for companies, businesses and organizations. It offers a software application platform designed for digital businesses that helps its users connect to their customers, employees and partners. The company was founded in 1998 and describes itself as “the world’s digital infrastructure company.” Digital leaders deploy its platform to bring together and interconnect the foundational infrastructure that powers their success.
Who uses it: SMBs to large enterprises
How it works: subscription cloud service, physical devices and services
——————————————————————-
Honorable mentions: AT&T, Cisco Systems, HCL, Capgemini
The post Top CASB Solutions 2022: Cloud Access Security Brokers appeared first on eWEEK.
Going into 2021, there are few areas of security that are more important to businesses, government, the military, consumers or the scientific sector than CASB. This is because all of us now do the majority of our work in cloud-based applications.
As enterprises adopt new services, applications and methods to manage data, the need to address changing data models and threat risks is essential. Organizations must address an array of issues that revolve around collaborative web applications, data flow, network designs, cloud infrastructure and other key areas.
Although major cloud providers typically offer robust built-in protections—including strong authentication, encryption and malware detection—there are often gaps in protection that result when organizations rely on multiple cloud service providers, different network topologies and numerous applications. These risks often involve key areas such as web application firewalls (WAFs), secure web gateways (SWGs) and data loss prevention (DLP).
Cloud access security brokers (CASB) take aim at this issue. “They deliver differentiated, cloud-specific capabilities generally not available as features in other security controls,” a recent industry report from Gartner Research said. “CASB vendors understand that for cloud services the protection target is different: it’s still your data but processed and stored in systems that belong to someone else.” Consequently, CASBs store policy management information and governance details across multiple cloud services. This delivers granular visibility and stronger controls. Gartner predicts that by 2022, 60 percent of large enterprises will use a CASB to govern cloud services, up from 20 percent today.
Here’s a look at 10 of the top vendors in the cloud security space. These ratings were curated with data and reviews from Gartner Peer Insights, G2 Crowd and IT Central.
Security Package: Cisco Cloudlock
Value proposition for potential buyers: Since Cisco Systems acquired Cloudlock four years ago, it has worked hard to incorporate the company into its portfolio of cloud-based products. The CASB solution offers a number of powerful capabilities, including the ability to configure policies dynamically and aggregate users into specific groups, based on real-time actions and behavior.
Key values/differentiators:
To Take Under Advisement:
Who uses it: Medium to large enterprises
How it works: subscription cloud service and on-prem options
Security Package: Palo Alto Aperture
Value proposition for potential buyers: Palo Alto Networks acquired CirroSecure in 2015 and has since relaunched the solution to include more focused cloud security tools. The 2020 solution is heavily focused on discovery along with SaaS policy and security management. Aperture includes strong data classification and monitoring tools, DLP, user activity tracking, known and unknown malware protection and detailed risk and usage reporting.
Key values/differentiators:
To Take Under Advisement:
Who uses it: Midrange and large enterprises
How it works: subscription cloud service and on-premises servers
Security Package: CipherCloud CASB+
Value proposition for potential buyers: CipherCloud is one of the more respected young cloud security companies on the scene. Encryption and tokenization are key elements of cloud security. CipherCloud, which has offered a CASB solution since 2011, places a heavy emphasis on data protection through cloud-native security and compliance across SaaS, PaaS and IaaS platforms. The solution offers robust cloud-based visibility and controls—extending to applications running in the cloud—and it can manage both structured and unstructured data.
Key values/differentiators:
To Take Under Advisement:
Who uses it: small to large enterprises
How it works: subscription cloud service
Security Package: Microsoft Cloud App Security (MCAS)
Value proposition for potential buyers: Microsoft’s addition of Adallom in 2015 broadened the company’s security offerings in a big way. MCAS offers a reverse-proxy-plus-API CASB that can operate independently or as part of Microsoft’s Enterprise Mobility + Security (EMS) suite. This includes tools for Azure and other applications and components. The solution also includes threat protections and sophisticated analytics.
Key values/differentiators:
Who uses it: Personal to SMBs to midrange enterprises
How it works: subscription cloud service
Security Package: Forcepoint CASB
Value proposition for potential buyers: Identifying shadow IT, preventing compromised accounts and ensuring secure mobile access to cloud apps covers a broad expanse of enterprise security requirements. Clouds ratchet up the challenges exponentially. Forcepoint CASB focuses on these issues.
Key values/differentiators:
To Take Under Advisement:
Who uses it: SMBs to midrange enterprises
How it works: subscription cloud service and on-prem options
Security Package: McAfee MVISION Cloud
Value proposition for potential buyers: McAfee is one of the best-known and most widely used security solutions in the world, in several categories that include both B2B and consumer markets. The company, owned for a few years by Intel but back to being independent, acquired Skyhigh Networks in January 2018. The solution bolstered the company’s existing portfolio of DLP, SWG and network sandboxing technologies.
Key values/differentiators:
To Take Under Advisement:
Who uses it: SMBs, midrange, large enterprises
How it works: subscription cloud service
Security Package: Bitglass Next-Gen CASB
Value proposition for potential buyers: Bitglass runs natively from the cloud but it also can be deployed as a Docker container that serves as a host on-premises. The vendor has emerged as a leader in the CASB space by introducing a zero-day approach heavily tilted toward trust ratings, trust levels and at-rest encryption that’s tightly integrated with enterprise compliance and governance requirements.
Key values/differentiators:
Who uses it: mid-range to large enterprises
How it works: subscription cloud service with container option
Security Package: Netskope Security Cloud
Value proposition for potential buyers: Netskope remains an independent company in a space where major software and networking companies are scooping up CASB solution providers. It has been shipping products since late 2013 and focuses heavily on application discovery and SaaS security posture assessments.
Key values/differentiators:
To Take Under Advisement:
Who uses it: SMBs, midrange, large enterprises
How it works: subscription cloud service
Security Package: Oracle Cloud Access Security Broker (CASB) Cloud Service
Value proposition for potential buyers: Oracle has moved beyond a one-solution-fits-all approach to CASB. Its solution, originally Palerra, offers discovery and deep visibility into SaaS applications using a log-based approach that revolves around cloud activity. This helps the solution identify risky applications installed through Oracle, Salesforce and other platforms. The result is strong security monitoring, threat protection and incident response. Organizations can also license Inline DLP (for real-time detection) and API DLP (for retroactive scanning).
Key values/differentiators:
Who uses it: large enterprises
How it works: subscription cloud service and on-premises servers
Security Package: Symantec Cloud Data Protection
Value proposition for potential buyers: Strong cloud security requires an array of features. Symantec delivers strong capabilities through its Cloud Data Protection platform, which incorporates products formerly offered by Blue Coat. The focus is on tokenizing or encrypting data stored in SaaS applications. The platform achieves a high level of protection through log analysis and traffic inspection. It provides cloud security assessment ratings by plugging in user behavior analytics, cloud usage patterns, malware analysis and cloud application discovery.
Key values/differentiators:
Who uses it: Small, midrange and large enterprises
How it works: subscription cloud service
The post Lenovo Shows New Data Management Solutions for Hybrid Cloud, AI appeared first on eWEEK.
These points underscore the value of the new storage systems, monitoring tools and management capabilities that Lenovo’s Data Center Group (DCG) recently announced. Working alone and with strategic partners, Lenovo has considerably expanded its business customers’ options for working with hybrid cloud, analytics and artificial intelligence (AI). Let’s look at that more closely.
Business IT solutions have long supported location-specific use cases, including remote or branch offices (ROBOs) and public cloud platform services. However, the continuing growth in both the volume and variety of information that companies work with tends to strain traditional IT offerings.
That is the case for the majority of organizations that choose to implement hybrid IT environments that access multiple cloud platforms. It is also true for newer, still-evolving use cases, such as edge computing, which is expected to grow significantly with the introduction of robust 5G wireless technologies and networks.
Cohesively accessing and managing far-flung data assets is challenging for most enterprises but is especially problematic for small to medium-sized enterprises (SMEs). Those same organizations also face notable challenges when it comes to effectively analyzing ever-expanding information resources.
These are some of the issues that Lenovo has addressed with its new data management offerings. They include:
A continuing truth about business computing is that although companies of every size and kind can benefit from ever more powerful and capacious IT solutions, some organizations lag or are simply left behind while others forge ahead. There are numerous reasons for these discrepancies, including lack of access to capital or experienced IT staff.
However, the best vendors are those that assist all sorts of organizations by developing solutions that can address a wide variety of business workloads and use cases. That approach is clear in Lenovo’s new data management solutions, which are designed to cost-effectively enhance storage performance while optimally supporting the access and analysis of business information wherever it resides, including hybrid cloud environments. In addition, the company’s collaboration with NVIDIA and NetApp is designed to ensure that even small and medium-sized IT teams have access to powerful AI training tools and methodologies.
No business technology or IT vendor can guarantee that customers will succeed. However, Lenovo DCG’s new solutions underscore the company’s intention to provide its customers the computing solutions they and their businesses require.
Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. © 2020 Pund-IT, Inc. All rights reserved.
The post Lenovo DCG and DreamWorks: Tech Innovation Meets Real-World Experience appeared first on eWEEK.
Landing a client like this and keeping the relationship on track qualifies as a big deal, but so does being supplanted by a powerful rival. At Lenovo’s recent Tech World conference, the company announced that animation leader DreamWorks Animation had chosen Lenovo’s Data Center Group (DCG) to update its legacy data center. DreamWorks had a longstanding strategic relationship with data center vendor HPE.
Let’s consider what likely drove DreamWorks’ decision and why Lenovo is the right vendor for the job.
The technology and entertainment industries have been linked at the hip for well over three decades, with computer-generated imagery (CGI) making a giant leap into mainstream films in 1991. That was the year that audiences flocked to James Cameron‘s Terminator 2: Judgment Day featuring the awesomely liquid metal T-1000 killer robot and Disney‘s Beauty and the Beast, the second traditional 2D animated film to be entirely made using CAPS (computer animation production system) technologies. Since then, CGI and CAPS have dominated mainstream films—the 50 highest-grossing movies of the past decade virtually all utilized or depended entirely on CGI or CAPS.
While a tiny handful of IT vendors (notably, Silicon Graphics/SGI) dominated early productions with proprietary solutions and tools, the shift toward industry-standard components and systems sparked fundamental changes. That was due in large part to synergies between graphics rendering processes and high-performance computing (HPC) technologies, the end result being the emergence of Intel-based systems vendors as major players and partners in CGI and CAPS production.
Those included the 2002 strategic alliance between HP (now HPE and HP Inc.) and DreamWorks, which followed the companies’ collaboration on DreamWorks’ Shrek and continued through other DreamWorks franchises, including Kung Fu Panda, How to Train Your Dragon, Trolls and The Croods. The alliance survived HP’s 2015 split into separate client/printing and data center companies. DreamWorks remains partnered with HP Inc., but when it began planning to upgrade its rendering data center, it turned to Lenovo DCG.
What led to the deal? While few details about the project’s size and scope are available, it’s reasonable to assume that DreamWorks was attracted to Lenovo’s deep experience in high-performance computing (HPC), the company’s innovative system designs and technologies and its global supply chain prowess.
As was noted in the story detailing the agreement, HPC is vitally important in digital content creation where “producing a computer-generated animated feature typically takes four years with hundreds of artists and engineers working in tandem to create half a billion digital files that require 200 million compute hours (22,000 compute years) to render.”
Lenovo is deeply experienced in all phases of HPC, both at the highest levels of supercomputing-assisted research and in a broad range of commercial and industrial applications. The company has earned more places on the Top500.org list of the world’s best-performing supercomputers than any other vendor since pushing HPE out of the top spot in June 2018.
In other words, it is hard to think of a better partner to help develop and deploy a world-class HPC cluster. Additionally, the experience Lenovo gained working with high-end supercomputing customers, including the Leibniz Computing Center, Cineca and the Barcelona Supercomputing Center, has informed and inspired Lenovo’s commercial HPC solutions, including the ThinkSystem SR670, ThinkSystem SD530 and ThinkSystem SD650.
The ThinkSystem SD650 also features Lenovo Neptune, a notable liquid-cooling technology that the company says can deliver up to a 40% savings in data center energy expenses or help customers pack significantly more compute power into a smaller space. Those points were especially important to DreamWorks, which runs its data center at a high utilization rate (currently 98%) and wanted to avoid expanding the footprint of its rendering facility.
Finally, the complexities of the DreamWorks project, along with challenges caused by the Covid-19 pandemic, required high levels of design, development and deployment expertise. Lenovo worked with DreamWorks contractors to integrate the plumbing and cooling systems so the systems could quickly go live and start adding value. Lenovo’s logistics team leveraged the company’s global supply chain, pre-ordering components with long lead times, staging them in Europe so they would be available as needed, and working with global suppliers to ship the systems and synchronize their arrival.
According to Skottie Miller, a Technology Fellow at DreamWorks Animation: “It was a beautifully orchestrated logistical masterpiece. I was joking that I couldn’t buy a roll of toilet paper during the pandemic, but I could buy and install a supercomputer.”
IT vendors like to focus on the value of market-leading performance and new technological innovations. However, having the experience to understand a customer’s business needs and the flexibility to deliver and deploy new solutions as they are required are equally important. DreamWorks Animation’s effort to update its rendering data center is an example of how, with a partner such as Lenovo DCG, an organization can address all these issues and be ready to pursue ever-greater achievements.
Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. © 2020 Pund-IT, Inc. All rights reserved.
The post How NVIDIA A100 Station Brings Data Center Heft to Workgroups appeared first on eWEEK.
An emerging part of NVIDIA’s business is the systems group, where it makes full-functioning, turnkey servers and desktop PCs for accelerated computing. An example of this is the NVIDIA DGX server line, a set of engineered systems specifically built for the rigors of AI/ML. This week at the digital Supercomputing show, NVIDIA announced the latest member of its DGX family, the DGX A100 Station.
This “workstation” is a beast of a computer and features four of the recently announced A100 GPUs. These GPUs were designed for data centers and come with either 40 GB or 80 GB of GPU memory each, giving the workstation up to 320 GB of GPU memory for data scientists to infer, learn and analyze with. The DGX A100 Station delivers a whopping 2.5 petaflops of AI performance and uses NVIDIA’s NVLink as the high-performance backbone connecting the GPUs with minimal inter-chip latency, effectively creating one massive GPU.
I put the term “workstation” in quotes because it’s really a workstation in form factor only; even at 2.5 petaflops, compared with the 5 petaflops of the DGX A100 server, it’s still a beast of a machine. The benefit of the DGX Station is that it brings AI/ML out of the data center and allows workgroups to plug it in and run it anywhere. The workstation is the only workgroup server I’m aware of that supports NVIDIA’s Multi-Instance GPU (MIG) technology. With MIG, the GPUs in the A100 Station can be virtualized so a single workstation can provide 28 GPU instances to run parallel jobs and support multiple users without impacting system performance.
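As a rough illustration of how such a machine appears to data-science code, the sketch below (hypothetical, assuming a Python environment with PyTorch and CUDA support) enumerates the GPUs visible to a process; with MIG enabled, each slice can be exposed to a workload as if it were a separate CUDA device.

```python
# Minimal sketch: list the CUDA devices (full GPUs or MIG slices) visible to this process.
# Assumes PyTorch is installed with CUDA support; names and memory sizes depend on the system.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; convert to gigabytes for readability
        print(f"Device {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
else:
    print("No CUDA-capable device is visible to this process")
```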
As mentioned previously, the workstation form factor makes the A100 Station ideal for workgroups and can be procured directly by the lines of business. Juxtapose this with the A100 Server, which is deployed into a data center and typically purchased and managed by the IT organization. Most line-of-business individuals, such as data scientists, don’t have the technical acumen or even the data center access to purchase a server, rack and stack it, connect it to the network and do the IT things that need to be done to keep it running.
The A100 Station looks like a big computer. It sits upright on or under a desk and simply requires the user to plug the power cord and network in. The simple design makes it perfect for agile data science teams who work in a lab, a traditional office or even at home. DGX Station was designed for simplicity and does not require any IT support or advanced technical skills. My first job out of college was working with a group of data scientists as an IT person, and I can attest to the importance of simplicity with that audience.
Without something like the A100 Station that was purpose-built for accelerated computing, workgroups would be forced to purchase CPU-based desktop servers, which are severely underpowered for this kind of use case. Sure, the average Intel-based workgroup server can run Word and Google Docs, but it can take months to run AI-based analytic models. With GPU-powered systems, what took months can typically be done in just a few hours or even minutes.
Although NVIDIA didn’t announce a price for the DGX A100 Station, I’m guessing it’s approaching six figures, which might seem high for a workstation. But considering what data scientists are paid, keeping them working rather than sitting around waiting for models to run on CPU systems makes that cost a bargain. Factor in the lost opportunity cost of not having an AI/ML-optimized system, and the Station becomes a no-brainer for workgroups that need this kind of compute power.
Some companies might turn all AI infrastructure over to the IT organization, and that’s a perfectly fine model. Those companies likely will leverage one of the server form factors.
For those who leave infrastructure decisions and purchasing within the lines of business, the DGX A100 Station is ideally suited. GPU power of this magnitude at the desk might seem a bit sci-fi, but with this week’s announcement it’s now a reality.
Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.
The post How NVIDIA A100 Station Brings Data Center Heft to Workgroups appeared first on eWEEK.
]]>The post Perspective: Why NVIDIA+Arm Shakes Up Chip Industry appeared first on eWEEK.
]]>NVIDIA announced it will acquire Arm from SoftBank for $40 billion. Arm will continue to operate out of its Cambridge, UK, headquarters and will function as a division of the broader NVIDIA corporation.
The enormity of this deal highlights just how massive Santa Clara, Calif.-based NVIDIA has become in a relatively short period of time. When SoftBank purchased Arm in 2016, it paid about $32 billion, and NVIDIA’s market cap was only about $30 billion. That was a mere four years ago, and NVIDIA is now worth $300 billion, or 10X its valuation back then. The company’s growth has been fueled by demand for its graphics processing units, the main computing engine behind accelerated computing systems for artificial intelligence, ray tracing, self-driving cars, supercomputers and a host of other leading-edge use cases.
This deal will have a profound impact on the broader computing industry because it helps pave the way to expose Arm’s massive customer base to the power of GPU computing and can help NVIDIA build better integrated “end-to-end” systems. This becomes increasingly important as the world relies on accelerated computing to solve some of the planet’s biggest problems, such as finding a cure for COVID-19, building autonomous vehicles and doing seismic exploration.
Arm, like Intel and AMD, is in the CPU business, but with one major difference. Intel and AMD design chips, have them manufactured and ship finished processors to systems companies that install them on a motherboard; Arm designs the silicon and licenses the architecture to other companies, which build the chips themselves. This lets the device maker optimize the whole system around the processor, making it more power- and space-efficient than the pluggable model of an Intel or AMD part.
An easy way to think of the difference is that Intel- and AMD-based systems are optimized in software, while Arm systems can be optimized in both hardware and software, creating more efficient systems.
This is why Arm has been the preferred CPU for mobile devices for years and can be found in iPhones as well as Samsung and Qualcomm devices. But the efficiency and improved performance of Arm are just now catching on elsewhere. Microsoft makes an Arm-based Surface laptop and has released Windows on Arm, and Apple recently announced it is planning to switch its future Mac computers to Arm-based processors.
It’s important to understand that, as powerful as GPUs are, they aren’t great at everything. While they handle high-performance computing tasks such as video analytics and AI well, CPUs are still needed to boot systems and run a variety of other processes, so even the most advanced systems use a combination of CPUs and GPUs, and now NVIDIA has both. This lets the company create better end-to-end designs when Arm processors are used, similar to the approach it is taking with data center networks through the acquisition of Mellanox.
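Here is a small sketch of that division of labor, assuming PyTorch and a CUDA-capable GPU: the CPU handles I/O and preprocessing while the GPU runs the model. The model and data are stand-ins rather than a real workload.

# Sketch of the typical CPU/GPU split: CPU prepares data, GPU does the heavy math.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model; a real workload would load a trained network instead.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

def preprocess(batch: torch.Tensor) -> torch.Tensor:
    # CPU-side work: normalization stands in for decoding, parsing and augmentation.
    return (batch - batch.mean()) / (batch.std() + 1e-6)

batch = preprocess(torch.randn(32, 128))   # CPU does the preparation
with torch.no_grad():
    scores = model(batch.to(device))       # GPU (if present) does the inference
print(scores.shape)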
One of the important aspects of this announcement is that, on the acquisition call, NVIDIA CEO Jensen Huang made it crystal clear that Arm will continue to operate under its current open licensing model while maintaining its customer-neutral go-to-market approach.
The open approach has made Arm the company it is today, and NVIDIA won’t disrupt that. However, that doesn’t mean it’s the only approach the company can take. This acquisition opens the door for NVIDIA to take Arm’s designs and build the CPUs itself, similar to the way it does GPUs. Or it could license its GPU designs using Arm’s model. Is either better? Not really; it depends on what the customer wants, and I believe there is a market for both, which opens more doors for both companies, particularly NVIDIA, as we are just at the start of the GPU-in-everything cycle.
On the analyst and press call Sept. 13, Huang did a good job of outlining the size and scope of Arm compared with NVIDIA. He stated that last year alone, Arm licensees shipped 22 billion chips. In the same time frame, NVIDIA shipped about 100 million. The latter is a nice number, but it’s orders of magnitude smaller than Arm’s. The reason is that NVIDIA has historically served very select markets, such as supercomputers, self-driving cars and gaming systems. Now it has exposure to the entire Arm pie.
Similarly, NVIDIA can actively look to use Arm’s performance and power efficiency in many of its GPU systems. One low-hanging-fruit use case is edge AI systems, where both space and power are limited, because many such systems run on batteries, yet they still need to perform complex tasks.
This acquisition also will be another nail in the coffin for rival Intel. For years, NVIDIA was considered a niche gaming company (which it was), while Intel was the king of silicon (which was also true), but over time, through a number of good decisions by NVIDIA and Intel’s inability to build a competitive GPU, NVIDIA continued to grow while Intel flat-lined. In July 2020, NVIDIA caught up with Intel in market cap, with both worth about $250 billion. Today, Intel has slipped to $209 billion, while NVIDIA is at about $300 billion. The acquisition of Arm will help NVIDIA accelerate the replacement cycle from Intel to Arm by building better-optimized systems in which both CPUs and GPUs are needed.
I believe AI is the single biggest disruptive technology to happen since the birth of computing. Speech analytics, video recognition, translation, automation and more will become standard across almost all devices, both consumer and business, and increase the scope of where GPUs are needed.
With the acquisition of Arm, NVIDIA is now able to offer its customers greater flexibility in how systems are designed, along with improved performance. This is a well-timed acquisition, because we are just hitting that inflection point.
Zeus Kerravala is an eWEEK regular contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.
The post Perspective: Why NVIDIA+Arm Shakes Up Chip Industry appeared first on eWEEK.