Bernard Golden, eWEEK (https://www.eweek.com/author/bernard-golden-2/)

Guide to Building DIY Cloud Solutions (https://www.eweek.com/cloud/diy-cloud-solutions/) – Thu, 31 Aug 2023

Discover the risks and potential complications of a DIY cloud solution. Find out how to avoid common mistakes and pitfalls now.

The rise of cloud computing changed the face of information technology forever. Before the cloud, infrastructure access required upfront capital investment and months-long lead times. After the launch of cloud computing, infrastructure was available in mere minutes at a cost measured in pennies per hour.

In turn, this infrastructure revolution spawned Web 2.0 – a range of companies born in the cloud. These innovators launched new offerings that substituted cheap digital technologies for lethargic analog processes, which resulted in wrenching disruption to large companies with storied histories. New verbs sprang up to characterize this disruption: Ubered. Or Airbnbed. Or Netflixed.

Of course, the incumbent players in these industries were unwilling to cede them to the upstarts. So they followed the disrupters into the cloud. But a curious thing happened to most of these efforts: they didn’t pay off.

Enterprise after enterprise launched big cloud initiatives, but found no acceleration in their IT processes and an inability to move at the speed of the disrupters.

It turns out that cloud adoption isn’t enough to move at the speed of Web 2.0. Fast infrastructure needs to be married to fast software practices. Put another way, the software path to production needs to operate at the speed of the underlying infrastructure resources.

Web 2.0 companies have typically constructed highly customized toolchains to squeeze the time required for a software change to go from keyboard to production. Common across Web 2.0 is the heavy use of open source components wired together with company-specific integration components to create a custom DIY solution.

In response, many enterprises have started their own DIY initiatives. But all too many of them never result in a workable solution, leaving the companies hindered in their efforts to move at Web 2.0 speed. The obvious question is why: what prevents so many enterprises from successfully creating their own DIY path to production?

I’ve seen many of these failed DIY initiatives, and would like to share the common issues I’ve observed – and offer suggestions to help tech professionals create a streamlined path to production.

Also see: Top Cloud Service Providers and Companies

Beware the Downsides of Cloud DIY

What are the issues I’ve seen arise in large enterprise shops that attempt to build a DIY cloud computing path to production?

Here are a few of the reasons these DIY efforts fall short:

Component curation

There is such a dizzying array of open source components addressing various parts of the path to production that merely selecting a good set to integrate into the toolchain is a major job.

The eye chart that is the CNCF landscape makes it a challenge just to list the options for the different parts of the toolchain, never mind evaluating and selecting a good one. Of course, there is an alternative – letting an employee choose one based on reputation or on what an acquaintance used – but that seems haphazard at best and high-risk at worst.

Underestimated integration costs

Some components have built-in integrations for complementary components so that the integration task would seem to be simple.

However, not all components that might be selected have integrations to other components an organization might choose. And, in any case, in my experience, assuming built-in integrations work well in all use cases and can operate at scale is misplaced confidence. The truth is, what seems like simple integration “glue” software is typically far more work than expected, requiring more labor and longer schedules than initially budgeted.

Failure to address the entire application lifecycle

DIY toolchain efforts commonly get started in one group that wants to “scratch its itch.” The group sets off to automate its work tasks, reducing operational cost and accelerating throughput. That’s fine as far as it goes.

Unfortunately, there is often no work to connect an automated workflow into the upstream and downstream work teams that also contribute to the path to production. So the transfer of one task to another ends up being a manual handoff. Quite often the individual automation work results in no overall speedup, since the majority of time in the path to production is spent between tasks, not within tasks.
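To see why, consider a hypothetical value stream. All numbers below are illustrative, not measured from any real organization: even generous task times are dwarfed by the queue time between handoffs, so automating a single task barely moves end-to-end lead time.

```python
# Hypothetical value-stream timing, in hours: active task time vs.
# wait time sitting in queues between team handoffs.
task_hours = {"dev": 16, "test": 8, "security review": 4, "deploy": 2}
handoff_wait_hours = {"dev->test": 72, "test->security": 48, "security->deploy": 96}

total = sum(task_hours.values()) + sum(handoff_wait_hours.values())

# Fully automating the deploy task (2h -> ~0h) changes almost nothing,
# because the handoff waits dominate the total.
after_automation = total - task_hours["deploy"]

print(f"total lead time: {total}h")                     # 246h
print(f"after automating deploy: {after_automation}h")  # 244h, <1% faster
```

The handoff waits, not the tasks, are where the time goes – which is why automating within a single silo produces no visible speedup.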

Staffing shortages

The flip side of underestimating the difficulty of work and complexity of end-to-end automation is that insufficient staff is assigned to the project. These projects often start as skunkworks-type efforts, siphoning off part-time work from a few people in a work group; this avoids the overhead of “real” project creation like budgeting, scheduling, and so on.

As the real scope of the project becomes clear when other groups that participate in the path to production put forward their requirements, the skunkworks staffing capacity is rapidly outstripped. Organizations then need to confront unpalatable choices: abandon the project – and thereby remain mired in a long-duration manual process – or devote real dollars to it, which requires documentation and budget competition.

Talent diversion

Every technology organization only has so many talented engineers. If those engineers are assigned to improving an internal process, they’re not contributing to the larger goals of the parent company.

Very few companies are willing to consciously decide to devote precious, scarce talent to an internal project, worthy though it might be. So the internal DIY project is starved for talent, resulting in delays or incomplete functionality.

Value delay

The beginning of this article pointed out the primary motivation for enterprises to adopt cloud-native practices: responding to market disruption caused by born-in-the-cloud competitors.

Taking on an expensive and complex DIY project that will be plagued by talent and budget shortfalls is a recipe for late delivery and delay in value. While a self-developed software toolchain provides pride of ownership, taking 24 to 36 months to get it into production prevents the parent company from responding to dangerous competition in the here-and-now.

Also see: Top Digital Transformation Companies

Guide to DIY Cloud: Doing DIY the Right Way

“Aha!” you think. “I want to avoid all those problems in creating my company’s automated path to production. But how?”

When I talk with large enterprises embarking on a DIY path to a cloud production project, here are the factors I suggest they ensure their plan addresses:

Understand your why

What is the business reason a home-grown system is important? All too many times a DIY project is a way for someone to grow a fiefdom, which serves their interest, but not necessarily the company’s.

So understand what you’re trying to achieve by building your own path to production. And don’t imagine you’re a snowflake, unique in the world, that requires a one-off solution – or at least understand why you’re a snowflake to better gauge the importance of a home-grown system.

Most companies, if they’re honest with themselves, can be perfectly well-served with a generic solution for their path to production.

Incorporate all the silos

As mentioned before, all too many DIY projects start within a single group and never address the needs of other application lifecycle constituencies. The old cliché is measure twice, cut once; in other words, plan before execution.

Before you start building, ensure the solution helps every group that touches code on its way to production automate its part of the process – or that the resulting system can easily incorporate those other groups’ own automated systems.

Otherwise, at best you run the risk of no speed improvement in your software practices; at worst, you run the risk of organizational strife as those other groups put forward roadblocks and refuse to participate.

Plan for ongoing maintenance

Recognize that you’re building a platform, and every platform requires ongoing work to extend functionality, incorporate bug fixes, and improve operations and resilience. This is a downstream budget commitment and it’s important to plan for it. Otherwise, you end up with an obsolete system, unsatisfactory to users, limping along.

Create a budget

I hope this article has underlined that a system supporting your path to production is a long-term commitment, not something that can be slapped together by a few staff working part-time.

As you create your cloud project plan, assign headcount and plan for staff costs. Also calculate the infrastructure you’ll need to operate a platform; this means you’ll need development, staging, and production environments to allow platform improvements and pre-production testing.

Calculate your resource opportunity cost

The staff you assign to building your platform could be working on other things. What other things? What value could you achieve by assigning them to work on other tasks?

Every tech organization has an inexhaustible list of to-dos – are there to-dos with greater benefit to the organization or the company at large?

Perform a buy vs. build analysis

As I noted at the beginning of this piece, most Web 2.0 companies build their own custom path to production systems. It makes perfect financial sense for them as they are digital enterprises at their core and run very large volumes of code changes and releases through their systems.

Amortizing the cost of the technical resources and infrastructure across such volume means a home-grown system makes financial sense for them. The question is, does it make financial sense for you?

There are commercial products available that provide end-to-end path to production functionality. You can take your budget and opportunity cost numbers and compare them against the cost of a commercial solution and figure out which approach makes sense for your situation.
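As a sketch of what that comparison looks like, here is a back-of-the-envelope buy-vs-build model. Every figure in it is a hypothetical placeholder, to be replaced with your own budget, staffing, and opportunity-cost numbers.

```python
# Back-of-the-envelope buy-vs-build comparison over a 3-year horizon.
# Every figure here is a hypothetical placeholder, not a benchmark.
YEARS = 3

# DIY: engineers building and then maintaining the platform, plus
# dev/staging/production infrastructure for the platform itself.
diy_engineers, cost_per_engineer = 6, 200_000
diy_infra_per_year = 150_000
diy_build_cost = YEARS * (diy_engineers * cost_per_engineer + diy_infra_per_year)

# Opportunity cost: value those engineers could have created elsewhere.
opportunity_cost = YEARS * diy_engineers * 250_000

# Buy: commercial end-to-end platform license plus a smaller ops team.
license_per_year, buy_engineers = 400_000, 2
buy_cost = YEARS * (license_per_year + buy_engineers * cost_per_engineer)

print(f"DIY (cash):        ${diy_build_cost:,}")
print(f"DIY (cash + opp.): ${diy_build_cost + opportunity_cost:,}")
print(f"Buy:               ${buy_cost:,}")
```

In this toy scenario the commercial option wins easily; with very large volumes of code changes and releases – the Web 2.0 case – the balance can flip, which is exactly the analysis worth doing.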

Also see: Cloud Native Winners and Losers

Key Takeaways: Building DIY Cloud Solutions

What are the most important points you should take away from this article?

  • Software competence is a key prerequisite in the age of cloud computing. Grafting legacy software development practices onto cloud infrastructure results in no increase in application velocity. Worse, it may result in higher costs than leaving a legacy application undisturbed. To compete in tomorrow’s economy, you need software chops.
  • You need to put a modern path to production system into place ASAP. Industry disruption typified by Web 2.0 companies shows no sign of abating. Failing to begin immediately poses a risk of being left in the dust.
  • Carefully consider your approach. DIY is tempting. It seems easy to get started and low cost via the use of open source components. It’s vital you understand the full implications of a home-grown system in terms of cost, staff commitment, and time to value.
  • Is your plan comprehensive? Ensure your ultimate solution addresses the entire path to production and encompasses all the tasks that a code update passes through on its way into production.

Keep these items foremost in your mind and you increase your odds of success enormously.

Cloud Predictions 2023: The Innovation Question (https://www.eweek.com/cloud/cloud-predictions-2023/) – Sat, 24 Dec 2022

The short days of December and the promise of the new year inevitably present a time for reflection and future expectations. I’ve long done an annual piece on cloud predictions and am happy to share the lessons I’ve drawn from 2022 and what I expect to see in 2023.

Here are the things I think you should be paying attention to going forward, cloud-wise.

Cloud Providers Aren’t Innovating, and That’s a Good Thing

Among analysts and the commentariat there seems to be a feeling that cloud computing has settled into a rut. They seem to feel that, yes, cloud is a powerful trend, lots of companies are moving applications to the cloud, and AMG (Amazon, Microsoft, and Google) are, well, really big businesses, but…it’s gotten kind of boring.

The recent AWS Reinvent conference is a case in point. My Twitter stream was full of reactions to the keynotes that could be summarized as “meh — this is all just incremental improvements to what already exists.” There was a definite sense of nostalgia for the old Reinvents where AWS would announce unexpected innovations like Lambda serverless functions or a satellite ground station service.

Fairly typical of this reaction to Reinvent was my long-time friend Brian Gracely’s CloudCast podcast, in which he shared that he felt that Reinvent wasn’t that exciting. As I understood the discussion, his theme is that the foundation of cloud computing is built out now and unlikely to show real innovation going forward. Again, not to say that there’s not lots of activity in incremental improvements and product extensions, but that cloud’s exciting days are past. The analyst group Forrester echoed that point of view with a blog post describing the extensions to core services that AWS announced at the show.

I don’t disagree with this perspective. There was nothing mind-blowing launched at Reinvent. I interpret this less-innovative time differently, though.

I think it’s good news that AMG aren’t launching fantastic new services left and right anymore. Because that means, as Brian pointed out, that the core capabilities of cloud computing are now in place and stable. In turn, that is great news for users.

Why? Because when the infrastructure into which one will place applications is transforming rapidly, it’s difficult to create reliable plans — after all, one might design an application architecture today and find out in six months’ time that it’s obsolete and needs to be reworked. That’s not fun and, more to the point, tends to bring an attitude of wait-and-see into play.

Why not just live with the way things are until one can be more confident that the architecture will stick and the application can be deployed with a lengthy lifespan expectation?

Now that the fundamentals of cloud computing are in place, users can feel more assurance about the stability of their cloud environment, and that’s a good thing.

Data Centers Empty as Users Climb the S Curve

The S curve is a well-established theory of technology adoption. Essentially, a new offering experiences slow growth over time as the market learns about it and evaluates how best to adopt it. Eventually the market reaches a critical mass of knowledge and the growth curve climbs steeply as more and more users adopt it. Finally, once most everyone who can benefit from the new offering has adopted it, the growth curve once again flattens out. Each of these curve slope changes is referred to as an inflection point — the point at which growth prospects change significantly.
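The textbook mathematical form of the S curve is the logistic function. A minimal sketch, with purely illustrative parameters, shows the slow-fast-slow pattern and the inflection point at the midpoint:

```python
import math

def logistic(t, ceiling=100.0, midpoint=10.0, rate=0.6):
    """Cumulative adoption at time t (illustrative parameters).

    ceiling:  saturation level (everyone who will ever adopt)
    midpoint: time of steepest growth (the inflection point)
    rate:     how sharply the curve climbs
    """
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Year-over-year growth is small early, peaks near the midpoint,
# then flattens again as the market saturates.
adoption = [logistic(t) for t in range(21)]
growth = [b - a for a, b in zip(adoption, adoption[1:])]
peak_year = growth.index(max(growth))
print(f"steepest growth around year {peak_year}")
```

The inflection points the article describes are exactly where the second derivative of this curve changes sign: growth accelerates up to the midpoint, then decelerates toward saturation.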

While AMG experienced rapid growth for the past decade, from my perspective the industry is just entering or is early into the steep growth part of the curve. I say this despite having recently discussed their impressive quarterly financial results, which show plenty of current growth.

I maintain that AMG growth is going to increase because of my sense that many companies — large companies — are rethinking their previous strategy, in which they planned to move some portion of their application portfolio to the cloud while keeping a large proportion on-prem. Now I’m hearing more about companies deciding to wholesale evacuate their data centers in favor of running everything in the cloud.

I think the reason for this is that these companies have concluded that owning their own data centers is a poor use of capital — data centers are not a differentiating capability and devoting expensive capital to owning them harms financial results and reduces future opportunity. After all, a dollar spent on a data center is a dollar that cannot be spent on streamlining a supply chain, improving manufacturing productivity, or launching into new markets.

The upshot of this trend is that data centers will empty to the benefit of AMG.

Containers and Cloud-Native: The New Must-have Accessory

Cloud computing originally offered up more easily provisioned virtual machine-based computing resources — thus the moniker “infrastructure as a service” (aka IaaS). This streamlined a lethargic part of the application value stream while leaving undisturbed other elements of the stream. Users saw an immediate benefit to their IT practices without having to disrupt their existing processes and tooling.

We’ve left those days well behind us. The emerging best practice for application development is centered on containers, which require fewer compute resources to operate, instantiate far more quickly, and are more portable than virtual machine-based compute environments.

That’s great. However, the shift to containers is a forcing function for wholesale change in application lifecycle processes and the tooling used to move code from development to production. I wrote about this shift last year in a piece I called Why Cloud Means Cloud-Native.

In that piece, I said:

“Over time, the cloud-native cohort has developed a set of best practices for lifecycle management, spanning the use of a sophisticated code management platform through to automated monitoring and management of application components to provide scale and resilience. Every process and milestone has been streamlined to provide fast, automated execution and enable touchless production placement once a developer’s fingers leave the keyboard.”

If IT organizations aspire to match the cloud-native cohort, it means they will need to completely transform their existing application lifecycle practices. I see many organizations struggle with this, as they look to incrementally improve piece parts of the life cycle rather than recognizing the need for a comprehensive transformation.

The upshot of all this is that cloud computing has gone far beyond IaaS and if an organization wants to meet the top performer benchmark it will require a focused organizational effort.

Cloud Providers Are Innovating, and That’s a Good Thing

Wait, wut? Bernard, you just got done telling us that cloud providers aren’t innovating, and that’s a good thing? Now you say they are? And that’s a good thing?

What’s going on here?

It’s true that the cloud providers have, by and large, built out their core services and have settled into ongoing incremental improvement in those services.

Nonetheless, there is a lot of innovation going on in the cloud providers. It’s just that innovation is built on and around the foundation of cloud computing.

Let me offer a couple of examples.

At Reinvent, AWS announced a new “omics” service aimed at genetics analysis. Omics is a pre-integrated platform to allow life sciences companies to import massive amounts of data. Omics automatically formats it into useful schemas, runs analytics on managed infrastructure, and stores important variant information relevant to disease analysis and new treatment development.

Omics even enables secure sharing to allow collaboration between disparate participants and organizations, crucial in an era where innovative treatments require cross-domain collaboration. The new service relieves users from the burden of a lot of complex set up, configuration, and software management, and allows them to get on with the job of improving our lives.

This demonstrates an appreciation of the detailed requirements of an entire industry and creation of a solution broadly applicable to many participants.

A more focused platform for innovation came with a Microsoft/London Stock Exchange announcement. The London Stock Exchange Group (LSEG) is making a very large long-term commitment to Azure, and the two organizations will collaborate on creating new solutions for the financial services industry based on a collection of Microsoft tools like Power BI, Excel, and Azure Machine Learning.

The clue that this is something more, and heralds real innovation, is a discreet statement deep down in the blog post: Microsoft has purchased four percent of LSEG and placed the head of Azure on LSEG’s board. That implies a much deeper collaboration and bespeaks a commitment to joint development of specialized tools designed to give LSEG competitive differentiation.

The way to view this innovation is as part of an inevitable upward move as the providers engage with customers to provide innovation at the frontier of what can be achieved based on the foundation of previously-absorbed core infrastructure innovation. We can expect to see plenty more of this type of innovation going forward.

In other words, you think the cloud providers have delivered a lot of innovation? You ain’t seen nothing yet.

The War for Talent Becomes the War for Market Share

Competition for cloud talent is an evergreen topic in the industry, with many observers bewailing the shortage of skilled personnel capable of building, yes, cloud-native systems.

Just how important that talent is can be understood by a paper Amazon (the ecommerce division, not the AWS cloud computing division) just published. The paper’s authors describe how they applied deep reinforcement learning to inventory management.

The net result? They improved inventory management 12% with no sacrifice in product availability. In the world of retail, where margins are pennies on the dollar, reducing inventory working capital requirements by that amount is a huge improvement.

If you’re a competitor to Amazon’s ecommerce division, now you have to figure out how you can respond. Obviously, you need access to massive amounts of compute, necessary for effective machine learning, which means hyperscale cloud computing. You need the technical staff to build the machine learning pipeline and data management. And, of course, you need machine learning specialists who can build and test ML models to optimize outcomes.

So the table stakes are the talent pool to implement the technology stack and operations to meet the new benchmark of inventory management. (If you want to assign your company’s boffins to implement the Amazon systems, here is a link to the paper).

But technical staff isn’t enough. You also need to integrate the output of the inventory management upstream into your ordering processes and downstream into your physical warehouses. Even further upstream you need to have business managers monitoring the overall system to determine product mix and promotion mechanisms.

In other words, technical talent is necessary but not sufficient to win the war for market share that a system combining technology with integrated business processes can create.

Sounds hard, eh? Well, AWS, bless its heart, has decided to help. At Reinvent it announced a preview of AWS Supply Chain, which appears to implement an ML-based inventory management system.

Of course, taking full advantage of it will require treating the entire commerce value chain as an integrated system, and no cloud service can deliver that. That’s a business management problem. And the successful companies in the future will have to apply end-to-end value stream practices to win the war for market share. But its foundation is the right technical talent delivering cloud-native capabilities.

There you have it. My predictions for 2023. Summed up: the early foundation of cloud computing is complete and explosive growth of cloud infrastructure is directly ahead. Looking just beyond that are a set of innovative cloud services and business processes that build upon that foundation to create transformational economic outcomes.

Hyperscale Cloud Providers Q322 Results: The Cliff Approaches? (https://www.eweek.com/cloud/hyperscale-cloud-providers-q322-results-the-cliff-approaches/) – Wed, 02 Nov 2022

AMG (Amazon, Microsoft, and Google, the leading cloud providers) reported their quarterly results last week, with many decrying the results and predicting that cloud computing has downshifted permanently to a lower growth rate.

AWS revenues, for example, came in at $20.5B, lower than analyst estimates of $21.1B. As can be seen from the chart below, AWS’s steady up-and-to-the-right trajectory definitely leveled off.

[Chart: AWS quarterly revenue trend]

As can be seen by looking at aggregated AMG revenues, the three giants totaled just over $40B ($41.4B, to be exact). Aggregated growth, however, dropped to 24%.

[Chart: aggregated AMG quarterly revenue and growth rate]

As a result, many pundits took to the airwaves — particularly outlets like CNBC — to discuss how these results reflect a permanent state of affairs. They opined that the cloud providers will remain mired in mid-20 percent growth rates, consigned by the law of big numbers and, well, an unspecified anti-cloud customer malaise, to a dimmer future.

Some intimated that this quarter is just the first downward step in a future that will show growth rates coming to rest in the mid-teens — good but a far cry from the spectacular numbers of the past.

So is that it? Are AMG financials about to drop off a cliff? Is cloud computing about to become boring?

Maybe so. But drilling down a bit deeper may tell a different story.

Also see: Why Cloud Means Cloud Native

Recession, Exchange Rates, and Fear, Oh My!

You might have noticed that the last couple of years have been a wild ride. We have had a terrible, ongoing pandemic challenging existing business and living arrangements, creating a business environment so unsettled as to make planning difficult. In such an environment, businesses pull back and hunker down, reducing their appetite for investments in new technology.

Associated with COVID have been supply chain woes. Manufacturers had a difficult time obtaining parts. Shipping companies couldn’t offload all their containers. Trucking companies experienced driver shortages. I can attest to this supply chain misery; I was recently told a new dishwasher I wanted to buy had a one year (!) backorder timeframe.

Added to supply chain disruptions is a general labor shortage — it seems every business large and small is unable to hire enough people and as a result pays more for the employees it can land. So wages are up.

The net result of all of this disruption has been inflation. Consequently, the US Fed is applying its traditional medicine — increased borrowing costs via rate hikes. These hikes have brought on recession fears. This is another reason businesses have slowed their cloud adoption plans — after all, if a recession occurs, company revenues will drop and corporate spend needs to be reduced.

A Specific Factor 

Beyond this natural tendency of avoiding risk during unsettled times, there is a specific factor that affected AMG growth rates this quarter. As noted, the US Fed has increased interest rates sharply. Beyond making borrowing money in the US more expensive, this has made the US dollar more attractive to international investors, and the result is that most global currencies have depreciated against the dollar. 

This depreciation has affected AMG quarterly results. The exchange rate changes over the past few months mean that quarterly growth rates look smaller, driven by converting cloud services priced in, say, Euros, to the higher dollar exchange rates.

Microsoft, for example, stated that Azure revenues would have been 7% higher were it not for exchange rate changes. Synergy stated that had exchange rates remained stable and had the Chinese market remained on a more normal path, the growth rate percentage “would have been well into the thirties.”

In other words, taking “well into the thirties” as implying at least 33%, the growth rate, instead of 24%, would have been at least 9 percentage points higher. One guesses that adding 9% to AWS revenues would have comfortably put them above financial analyst predictions, resulting in far less Sturm und Drang in the media commentary.
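The constant-currency arithmetic can be sketched directly from the figures above, taking 33% as the low end of “well into the thirties”:

```python
# Constant-currency back-of-the-envelope, using the figures cited above.
reported_revenue_b = 41.4        # aggregated AMG quarterly revenue, $B
reported_growth = 0.24           # reported year-over-year growth
constant_currency_growth = 0.33  # low end of "well into the thirties"

# Work back to the prior-year base, then re-grow it at the
# constant-currency rate to see what the quarter "should" have shown.
prior_year_b = reported_revenue_b / (1 + reported_growth)
constant_currency_revenue_b = prior_year_b * (1 + constant_currency_growth)

print(f"prior-year base:           ${prior_year_b:.1f}B")
print(f"constant-currency revenue: ${constant_currency_revenue_b:.1f}B")
print(f"lost to exchange rates:    ${constant_currency_revenue_b - reported_revenue_b:.1f}B")
```

Roughly $3B of aggregated quarterly revenue, in other words, evaporated into currency translation rather than customer behavior.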

The Synergy analyst went on to say: “Despite that, all three have increased their share of a rapidly growing market over the last year, which is a strong testament to their strategies and performance.”

I think it’s fair to say that lost in the general financial analyst anxiety is the fact that, despite the disappointing numbers, AMG are in overall terrific condition. To make the point very obvious: this is a $180B market growing at 24%. That is by far the most important trend in the industry.


CFOs Say “Enough!”

An interesting breeze in the cloud zeitgeist the past few months has been a rumor of a cloud backlash from CFOs and corporate finance groups. I’ve heard this from industry friends and observers. The driving force of this backlash: cloud is too expensive. The response: a demand that cloud migration stop.

Some of my contacts say this is taken as far as insisting applications currently deployed in the cloud be deployed back into company data centers.

The rationale for this demand goes something like this:

  • Cloud was supposed to solve a bunch of problems with our on-prem environments — like extended resource deployment timeframes and poor resilience.
  • We issued a mandate that everything move to the cloud really quickly. 
  • Now that we’ve moved to the cloud, it’s costing a lot, which we don’t like.
  • So let’s stop moving to (or move back from) the cloud.

Left unsaid in this rationale is the common expectation that using cloud computing is cheaper than running your own data centers. I don’t know if one should believe this in general, but for sure expecting to move applications in a hurried manner and have them run better and cheaper is naive, to say the least.

The truth is, running an application cost-effectively in the cloud means understanding the characteristics of cloud environments and architecting and operating the app consistently with those characteristics. That is why I wrote an article last year called “Why Cloud Means Cloud-Native,” in which I describe how achieving the best application results means adopting a new set of practices, commonly referred to as “cloud-native.”

The most charitable reading of this recent trend is as a reaction to the rocky economic times we’re experiencing. When finances are tough, it’s appropriate to trim spend as much as possible. It’s typically cheaper to maintain an app as-is on-prem than to make the investment to move it to the cloud, either unchanged or rearchitected to cloud-native.

But thinking that halting cloud adoption is a long-term smart move is wrong. After all, there are powerful reasons to move out of one’s own infrastructure environment — remember the “extended resource deployment timeframes and poor resilience” mentioned above? Those problems stay unsolved if the app never moves. 

I predict that this cloud backlash will prove a short-lived phenomenon. In tough economic times, the CFO has the upper hand and can impose spending discipline; in better times, which will inevitably re-emerge, the business units will be able to demand action as a way to grow revenues.

Also see: Top Cloud Companies

What Does the Future Hold?

So today we have turmoil and newfound enthusiasm for reducing cloud spend. Does this mean AMG’s future is less bright?

A hint comes from IDC. Analyst firms have the widest field of view in the industry and should be able to predict how cloud computing looks over the next decade.

In an interesting blog post discussing the less-than-stellar quarterly cloud results, the author included this chart:

[Chart: AWS quarterly cloud results]

In the lower right hand corner is a box containing “4x Growth in New Cloud-native Apps by CY25” along with a footnote number. In the footnote (in very tiny print, for which I apologize) it says “IDC — 750 Million New Logical Applications,” along with a report identifier. Left undefined is exactly what a logical application is. But let’s take it as representing a complete set of code implementing a business function.

Based on this reading, I interpret the number as an IDC prediction that users will deploy a huge number of new applications into AMG over the next three years. If one applies a very conservative annual cost per application of $1,000, that would come to $750 billion, an astonishingly large number.

However, from an arithmetical perspective, that number is four to five times the current AMG annual revenue of roughly $160B, so it is at least directionally consistent with the chart’s “4x growth” framing.
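The back-of-envelope math above is easy to check. A minimal sketch, using the article's assumed $1,000 annual cost per application and the ~$160B current AMG revenue figure (both assumptions from the text, not hard data):

```python
# Sanity-check the IDC-derived numbers discussed above.
# Assumptions: 750 million new "logical applications" (IDC forecast) and a
# deliberately conservative $1,000 annual run cost per application.
new_apps = 750_000_000
annual_cost_per_app = 1_000
implied_annual_revenue = new_apps * annual_cost_per_app  # $750B

current_amg_annual_revenue = 160_000_000_000  # ~$160B/year, per the article

print(f"Implied revenue: ${implied_annual_revenue / 1e9:.0f}B")
print(f"Multiple of current AMG revenue: "
      f"{implied_annual_revenue / current_amg_annual_revenue:.1f}x")
```

Even at this conservative per-application cost, the forecast implies new cloud revenue several times today's entire AMG business.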

I am not convinced by IDC’s prediction – but not because I see a smaller future opportunity for AMG. As I projected in another article, AMG have a very believable TAM of $4T, with a potential TAM of $8T or more.

So I don’t disagree with IDC on the magnitude of the opportunity it foresees. Where I part ways with the firm is how quickly that opportunity can be realized. There are constraints on how fast applications can be moved into AMG. Writing (or rewriting, or lifting-and-shifting) the applications, migrating data, putting operational support into place – not to mention AMG themselves installing enough infrastructure to host so many applications – make a three-year journey to 750 million applications very suspect.

But I agree with IDC on the overall scale of the opportunity. That potential probably explains AMG’s continued capital investment today despite soft current results. As this article points out, both Amazon and Microsoft said during their quarterly results calls that they plan to increase capital spend on their services this year, despite the “poor” results and ongoing economic turmoil. That’s the mark of companies expecting substantial future growth, not ones that believe a single sub-par quarter portends a longer-term trend.


I don’t know if AMG will re-attain their former growth rates. It can certainly be argued that they’re starting an inevitable decline in revenue growth, if for no other reason than that they generate so much revenue now that a slowdown is likely. But that shouldn’t be interpreted as a bleak future. Cloud adoption is a long-term, irresistible trend. Short-term problems aren’t going to hold back the future for long.

The post Hyperscale Cloud Providers Q322 Results: The Cliff Approaches? appeared first on eWEEK.

Cloud Earnings Results: Best of Times – or Worst? https://www.eweek.com/cloud/cloud-q2-22-results/ Wed, 10 Aug 2022 17:00:43 +0000

Recently, the Big Three of cloud computing – Amazon, Microsoft, and Google, also known as AMG – announced their quarterly results. Their quarter could be summarized using the beginning words of Dickens’ A Tale of Two Cities: “It was the best of times, it was the worst of times.”

Each of the providers showed strong revenue growth (see below), with numbers that most businesses would envy – so, the best of times. On the other hand, those growth numbers are down from the previous quarter – so, the worst of times?

Also see: Top Cloud Companies

One might ask: why did the quarterly growth rate slow, and what – if anything – does it imply about AMG’s future prospects? And what does it mean for ongoing cloud adoption by big enterprises, whose spend represents the preponderance of total IT spend?

One obvious factor that all of the providers noted in their results discussions is that the strengthening dollar caused revenues earned in non-dollar markets to shrink due to exchange rates. Satya Nadella stated that exchange rate issues caused a drop in overall revenue of $595 million. So a small amount of the reduced growth rate can be attributed to macroeconomic factors external to AMG.

[Chart: Cloud revenue and growth rates, Q2 2022]

Quarterly revenues for the top cloud vendors continue their rapid upward pace, even as growth rates dipped.

Cloud Adoption: Pedal to the Metal

What about the rest of the growth rate drop? Are enterprises slowing their cloud adoption, impairing future AMG growth rates?

From my perspective, there is no evidence of enterprises reducing their cloud adoption plans. In fact, I believe that, if anything, enterprises are increasing the scale of their future AMG adoption. Nadella reinforced this by saying, “We are seeing larger and longer-term commitments and a record number of $100 million-plus and $1 billion-plus deals this quarter.”

One would guess that the A and G parts of AMG would also be seeing the same kind of adoption commitments.

This scale of commitment reflects the changing nature of cloud adoption: it has moved beyond individual applications, or even a class of digital transformation applications, to something much larger. At this scale of spend, enterprises are moving large percentages of their total application portfolios to the cloud, often as a result of data center downsizing or evacuation.

Consequently, it’s unlikely that AMG is seeing a drop in enterprise demand. As Nadella’s quote indicates, if anything, enterprise demand is growing.

Also see: Cloud Native Winners and Losers 

Slippery Road Ahead

Notwithstanding the increased enterprise appetite to put larger application portfolios in the cloud, it’s clear that AMG growth rates did drop during the quarter. What could account for this reduction, given that enterprises aspire to move more to AMG?

I believe that enterprises are encountering practical challenges to their desire to accelerate cloud adoption, and these challenges reflect real-world issues that often stand in the way of enterprise cloud aspirations.

First, there are budget issues: investment has to be made in application migration while those same applications are kept up and running as migration proceeds. A company can only afford to ‘double spend’ on part of its application portfolio at any one time, which stretches migration roadmaps.

Second, most companies find the process of application migration more complex than initially assumed. The very term ‘lift and shift’ seems to promise low effort in moving an application to the cloud, but the term elides the totality of tasks required to move an application to the cloud, even one for which no significant code or architecture changes are envisioned. 

To name just a few, there are all the core services that need to be in place for an application to operate in a cloud environment: identity management, security enforcement for items like intrusion detection, and data migration and/or implementation of a different database or storage service.

Imagine this list blown out to an action item list of 15 or 20 separate actions, each of which must be assessed and implemented for each migrated application, and one can easily understand why lift and shift, as alluring as it sounds, ends up being more work than first imagined.

The practical result of this slippery road is that application deployment timelines begin to run longer than first planned, with a knock-on effect slowing down overall portfolio migration roadmaps.

What this implies for AMG growth rates is not a permanent reduction, but a pushing out of planned adoption to later quarters, with the potential of extending or even increasing out-period growth rates.

Also see: Best Machine Learning Platforms 

Still Pretty Good Times

So, to return to the Dickens quote, are these the best of times…or the worst of times?

Pundits in the technology and financial industries are quick to pounce on any shortfall against predicted financial results and proclaim earth-shattering effects. According to them, reduced AMG growth rates mean dark times ahead for the hyperscalers, with additional scary monsters lurking in the form of interest rate increases, enterprises confronting a recession, and, no doubt, the cancelation of favorite streaming video shows.

One has only to look at the chart below, which outlines Amazon’s cloud revenues and growth rates since 2015, to see that AWS has been on a multi-year tear. Its revenue growth rate has bounced around, but the overall revenue levels have continued to exhibit an ‘up and to the right’ movement. 

[Chart: AWS cloud revenues and growth rates, 2015–2022]

For a business of its size, AWS is enjoying a remarkable growth rate. 

Microsoft and Google are achieving the same kind of enviable results, albeit on somewhat smaller revenues. And just as a single swallow does not a summer make, a single quarter does not break a long-term trend.

Amazon Web Services is now an $80B+ business growing at over a third year over year. Frankly, many companies would gratefully accept this kind of growth rate on far smaller revenue streams, and most have resigned themselves to far lower growth rates on vastly lower revenue numbers.

The more important lesson to take away from this quarter is that the direction of cloud adoption is clear, even if the curve slope is a bit indeterminate. The challenges hindering adoption are clear and must be addressed, but they are not permanent nor insurmountable.

The post Cloud Earnings Results: Best of Times – or Worst? appeared first on eWEEK.

Cloud Revenues Q1, 2022: Surfing an Ever Larger Wave https://www.eweek.com/cloud/cloud-revenues-q1-2022/ Mon, 16 May 2022 17:49:51 +0000

Recently, the three major US cloud providers, Amazon, Microsoft, and Google (which I refer to as AMG), announced their revenue numbers. As per usual, they were impressive. The table below shows how each of them did:

[Chart: AMG cloud revenues, Q1 2022]

As of last quarter, the three are running at nearly a $37 billion quarterly rate. Just as impressive is their growth rate of around 40%. Over the past two years, each experienced increased growth as customers responded to the pandemic. Yet even with business conditions trending toward normal as pandemic restrictions taper off, the three have maintained their pandemic-era growth rates.

I do this AMG revenue analysis each quarter, and it’s instructive to look back over previous quarters’ results. For example, here is a corresponding chart from Q318:

[Chart: AMG cloud revenues, Q3 2018]

In just 3.5 years, Amazon, Microsoft, and Google have tripled, quadrupled, and sextupled cloud revenues, respectively. Even allowing for pandemic-accelerated numbers, these growth figures are eye-popping. 

Performing this comparison illustrates how we’ve become numb to how much AMG are growing. They are now on a $120B+ run rate, and at 40% growth, that means next year’s revenue number will be on the order of $170B.

The question remains, of course, whether these growth rates are sustainable. Many people expect these numbers to drop significantly; I have heard industry observers predict that AMG growth rates will fall to 15% going forward. What should we make of these reduced expectations?

First of all, 15% on the current revenue base is nothing to sneeze at. Second, I’m not sure what basis these observers have for their prediction. Certainly, when I’ve asked for evidence or plausible reasons, it’s been crickets. In any case, what one can say with certainty is that AMG have experienced sustained high growth rates for a decade plus and, at the moment, we aren’t seeing any reduction.

Also see: Top Cloud Companies

Cloud Repatriation: More Smoke than Fire

Another ‘trend’ I’ve heard plenty about from industry pundits is the phenomenon of application repatriation — bringing applications back from AMG environments to on-premises private data centers.

The reason usually cited for repatriation is financial – that the customer has found that running applications in the cloud ends up being much more costly than anticipated, so much so that they choose to move them back to save money. I’ve also heard from some customers considering repatriation because of intellectual property concerns – they’re reluctant to place high value IP into cloud-based applications because the provider might obtain proprietary information from those applications and use it to compete with its customers.

Many of these pundits foresee a future in which large swathes of applications move back into corporate data centers. Just this morning I saw a survey of IT executives that claimed that in two years’ time, 12% of all applications currently residing in the cloud would move back home.

So, how real a phenomenon is repatriation? 

Organizations have migrated applications from the cloud back on-prem. The phenomenon clearly exists, and one can find examples of repatriated applications. 

Based on the results announced this quarter, one can say that, however large the repatriation trend is, it’s swamped by the flow in the other direction, from on-prem to the cloud. After all, if repatriation removed the equivalent of, say, 5 percentage points of growth while AMG still grow at 40% net, then gross growth from new cloud deployments must be running around 45% to offset it.
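The arithmetic above can be stated as a one-line calculation. This is an illustrative simplification (it treats repatriation as a straight percentage-point offset against net growth):

```python
# If AMG revenue grows 40% net while repatriation drains the equivalent of
# 5 percentage points of growth, gross new-deployment growth must run
# around 45% to net out at 40%.
net_growth = 0.40          # observed net AMG growth rate
repatriation_drag = 0.05   # hypothetical drag from repatriation
required_gross_growth = net_growth + repatriation_drag

print(f"Required gross cloud-deployment growth: {required_gross_growth:.0%}")
```

Any plausible repatriation percentage leads to the same conclusion: the inbound flow dwarfs the outbound one.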

So, despite the fact that repatriation can definitely be found, it represents a tiny percentage of application deployment decisions, since the overwhelming flow toward the cloud can be easily seen in the net AMG growth rate.

Also see: Tech Predictions for 2022: Cloud, Data, Cybersecurity, AI and More

Cloud Future? More Up and To the Right!

As I noted above, AMG have been growing at 40% or more for over 10 years. And notwithstanding the educated guesses of industry pundits, nothing currently seems to be hindering the AMG growth rate.

But how about the future? What will next year’s growth rate be? And the year after? And the year after that?

In a phrase: more up and to the right.

In addition to the pandemic-driven digital transformation trend I remarked on earlier in the piece, another IT trend has come forward during the past couple of years: the mainstreaming of cloud adoption.

Enterprises of every stripe are developing plans to reduce their data center footprint via consolidation and evacuation. And the place all those applications will go? You guessed it: AMG. I’ve been amazed at how many enterprises are forming plans to go ‘all-in’ on public cloud. As I noted in a recent piece, there are several trillion dollars worth of enterprise IT spend poised to move to AMG. Such a huge number obviously can’t be achieved in one or two or even three years. It’s just too large a task.

But it does mean that there is virtually unlimited demand, and therefore, unlimited growth ahead of AMG. The growth bottleneck for AMG is likely to be capacity to take on so much demand, which should create some interesting dynamics as customers, desperate to complete data center evacuations, confront capacity-constrained cloud providers. Anyone who’s tried to hire a plumber or accountant recently knows what it’s like when nothing is available no matter how urgent the need.

Bottom line? Expect AMG to continue their astonishing growth for the foreseeable future. 

Also see: Top Digital Transformation Companies

The post Cloud Revenues Q1, 2022: Surfing an Ever Larger Wave appeared first on eWEEK.

DevOps and Shifting Left the Right Way: 3 Tips https://www.eweek.com/enterprise-apps/devops-and-shifting-left/ Thu, 21 Apr 2022 20:32:41 +0000

To paraphrase Charles Dickens, “it was the best of ideas, it was the worst of ideas.” What am I referring to? DevOps and how it’s come to be interpreted.

The best idea of DevOps is infrastructure as code, known as IaC. Instead of manually building application environments, a lengthy and error-prone process, IaC defines the “how” of building the environment in a template, and then automatically builds that environment using the template definition.

This occurs at computer speed rather than human speed and, just as important, is done consistently every time, vastly improving application quality. Done right, DevOps can dramatically increase application velocity.
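The declare-then-build mechanic behind IaC can be sketched in a few lines. This is a purely illustrative toy, not any real tool's API; real IaC systems such as Terraform or CloudFormation apply the same principle at far greater depth:

```python
# Illustrative infrastructure-as-code sketch: the environment is declared
# as data (the "template"), and a builder turns it into the same sequence
# of build actions every time. All names here are invented for the example.
env_template = {
    "network": {"cidr": "10.0.0.0/16"},
    "servers": [
        {"name": "web", "size": "small", "count": 2},
        {"name": "db",  "size": "large", "count": 1},
    ],
}

def build_environment(template: dict) -> list[str]:
    """Expand a declarative template into a repeatable list of build actions."""
    actions = [f"create network {template['network']['cidr']}"]
    for spec in template["servers"]:
        for i in range(spec["count"]):
            actions.append(f"create {spec['size']} server {spec['name']}-{i}")
    return actions

for action in build_environment(env_template):
    print(action)
```

Because the template, not a human, drives the build, every environment comes out identical, which is the quality win described above.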

This approach to application development and deployment became known as “shift left,” because it moves post-development tasks earlier in the application lifecycle.

Also see: DevOps, Low-Code and RPA:  Pros and Cons 

Yet DevOps Challenges Abound

However, while infrastructure as code was DevOps’s best idea, it was also – as it has been commonly implemented – one of the worst.

All too often, developers were told that they should take responsibility for creating the IaC templates. There’s some logic to this; after all, an application’s developer should know its infrastructure requirements best, right?

On the other hand, this also makes developers responsible for understanding production networking requirements, large scale storage configurations, and resiliency resource management. Due to this onerous demand, it’s fair to say that, depending on the complexity of an application’s production environment, DevOps is not a panacea.

Nevertheless, inspired by the shift left mantra, many IT organizations decided it makes sense to move other tasks earlier in the application lifecycle. So developers became responsible for testing. And security. And patch management.

Unfortunately, as commonly pursued, these tasks were not treated “as code.” The groups formerly responsible simply passed responsibility to developers, along with the manual checklists they had typically used to perform the tasks. So developers took on lots of manual effort in areas where they had no particular expertise.

And guess what? A developer doing something manually doesn’t get the task done any faster, especially if executed with low subject matter expertise. So the potential speed of a DevOps approach often remains much lower than desired.

Also see: Why Cloud Means Cloud Native

Shift Left the Right Way

So what is the path forward for a shift left, “as code” approach? Are applications destined to remain mired in manual processes conducted by overwhelmed developers?

In a word, no. But organizations need to shift left the right way. Here are three tips on how to do that.

1) Automate All the Tasks

If it makes sense to do infrastructure as code, it makes sense to do testing as code, security as code, and patch management as code. In other words, apply the logic of DevOps to all the steps in the path to production.

Naturally, this means applying development skills to these tasks, which requires, well, developers. Expect the profile of subject matter specialists (e.g., a QA staff member) to change to incorporate programming experience. This also means managing each task automation as its own application, with its own lifecycle management.
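What "testing as code" or "security as code" looks like in practice is a checklist item rewritten as an automated, repeatable gate. The rule and names below are hypothetical, invented for illustration rather than taken from any specific tool:

```python
# Hypothetical "security as code" gate: the manual checklist item
# "no service may listen on an unapproved port" expressed as code that
# can run automatically in the path to production.
ALLOWED_PORTS = {443, 8443}

def check_listening_ports(service_ports: dict) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    return [
        f"{svc}: port {port} not in allowed set {sorted(ALLOWED_PORTS)}"
        for svc, port in service_ports.items()
        if port not in ALLOWED_PORTS
    ]

# The same check a reviewer once ticked off by hand, now repeatable:
violations = check_listening_ports({"web": 443, "admin": 8080})
print(violations or "gate passed")
```

The point is not the rule itself but that, once expressed as code, the check runs at computer speed, consistently, with no specialist on the critical path.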

2) Treat the Path to Production as an Automated Product

The technology organizations with the fastest path to production treat the entire process as an integrated product to be automated across its various sub-steps. This means automated handoffs between intermediate steps and removing manual approvals. I’m looking at you, Change Control Boards.

In the real world, most manual approval steps are formal rituals whose review boxes get ticked as a matter of course. If the handoff from one step to another can be reduced to an automatic nod, it can be reduced to an automated handoff as well, with well-defined exception handling.
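An automated handoff with well-defined exception handling can be sketched as follows. This is a toy pipeline under assumed stage names, not a real CI/CD system:

```python
# Illustrative automated path-to-production: each stage hands off to the
# next automatically; a failing gate raises a well-defined exception
# instead of waiting on a manual approval meeting.
class StageFailed(Exception):
    """Raised when a pipeline stage rejects the artifact."""

def run_pipeline(stages, artifact):
    for name, gate in stages:
        if not gate(artifact):
            raise StageFailed(f"{name} rejected {artifact}")
    return f"{artifact} deployed"

# Hypothetical stages; each gate is a callable returning pass/fail.
stages = [
    ("build",    lambda a: True),
    ("test",     lambda a: True),
    ("security", lambda a: True),
]
print(run_pipeline(stages, "release-1.2.3"))
```

The exception is the "well-defined exception handling": a rejection stops the line immediately and names the failing stage, rather than routing the decision back to a review board.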

This also means that the path to production requires management of the entirety as a product itself, with architecture review to ensure all the automated subsystems play well with one another.

If this sounds like work and investment, you’re right. Without this, however, the path to production will remain slow, with speed at both ends (via Dev and Ops) bookending the same old slow manual steps in the middle.

3) Shift Even Further Left

While automating all the tasks – and automating the overall process – are good steps, it’s still a challenge to ensure good security if vulnerable or obsolete code forms the foundation of an application. As the old saying goes, garbage in, garbage out.

Living with these kinds of security exposures is made even worse when a vulnerability becomes known or someone attacks it. What ensues is a mad scramble to update code bases and roll the updates into production.

This problem is endemic when developers start with a blank slate, downloading libraries and components directly from the Internet. It’s shocking how many container-based applications are built with images downloaded from DockerHub, despite the fact that it’s well-known that many of the most popular ones contain outdated and/or vulnerable code.

A much better approach is to provide developers with prepared code bases that are known to be up-to-date and assessed to be free of vulnerabilities. The mechanisms for this are known as templates, frameworks, or accelerators. Essentially, the developer downloads the template into a preferred IDE and begins with a safe foundation of code into which the functionality of the application is incorporated.

Once the application update is complete, it enters the automated process described above. Application artifacts are created and moved through the different stages in the lifecycle until finally deployed.

This code hygiene approach can be extended into the rest of the lifecycle, by having a process sitting outside of a specific application pipeline to monitor announced vulnerabilities. If a vulnerability is announced and a patch made available, an application build and deploy process kicks off which automatically updates the relevant artifact and puts it into production.

This avoids the manual tracking of what libraries and components each application contains. It also avoids the crisis management associated with trying to ensure every relevant application is updated, which inevitably misses some and results in vulnerable applications never getting fixed.
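The vulnerability watcher described above reduces to a mapping from an announced advisory to the set of applications containing the affected component, each of which then gets a rebuild triggered. The inventory, component names, and versions below are all invented for illustration:

```python
# Hypothetical sketch of a supply-chain watcher: a process outside any
# single application pipeline that maps an advisory to the apps containing
# the vulnerable component version and kicks off automated rebuilds.
app_inventory = {
    "billing": {"openssl": "3.0.1", "logging-lib": "2.17.0"},
    "catalog": {"openssl": "3.0.7"},
}

def apps_needing_rebuild(advisory: dict) -> list[str]:
    """Return the apps whose dependency matches the vulnerable version."""
    component, bad_version = advisory["component"], advisory["version"]
    return [app for app, deps in app_inventory.items()
            if deps.get(component) == bad_version]

advisory = {"component": "openssl", "version": "3.0.1"}
for app in apps_needing_rebuild(advisory):
    print(f"trigger automated build-and-deploy for {app}")
```

Because the inventory is maintained as data rather than in someone's head, no application with the vulnerable component can be silently missed.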

The emerging industry term for this “shift further left” is the secure software supply chain, an approach that will become more common going forward, especially as more and more business processes shift to digital mechanisms. I’ve only touched on it here, and hope to delve further into the topic in a future column.

The truth is, DevOps is both a good idea and a bad idea, depending on how applied. Sprinkled onto a lethargic application lifecycle, it solves little. Applied as an automation concept as part of an application assembly line, it’s a powerful tool to enable digital transformation.

Also see: Digital Transformation Guide: Definition, Types & Strategy

The post DevOps and Shifting Left the Right Way: 3 Tips appeared first on eWEEK.

Cloud Providers Deliver Spectacular Earnings: What’s It Mean for Customers? https://www.eweek.com/cloud/cloud-providers-deliver-spectacular-earnings-whats-it-mean-for-customers/ Thu, 10 Feb 2022 22:37:20 +0000

AMG (Amazon, Microsoft, Google) announced their quarterly financial results over the past couple of weeks. The companies’ cloud financials were, in a word, spectacular.

Looking at AWS, which is a bellwether for the sector, it turned in $17.8 billion in revenue for the quarter, and its growth rate inched up to 40% – one percentage point higher than last quarter, and a full 12 points higher than the year-ago quarter.

Taken overall (see table below), AMG delivered $34.3 billion in Q4 revenue, with an average growth rate of 43%.

Also see: Top Cloud Service Providers & Companies 

We have become so inured to the scale and growth rate of AMG that it’s easy to overlook just how astonishing these results are. 

The fact is, these are very, very large businesses. AWS pulled in more revenue than IBM, which is number 42 on the Fortune 500 list. Even Google’s cloud business, number three of the big three, is a huge business.

Even more astonishing is their growth rate. An average growth rate of 43% means these huge businesses are going to be, well, even huger in short order. AWS should see around $70 billion in revenue for 2022; if it continues to grow at a similar rate as 2021, it would see 2023 revenues of just over $100 billion, good for 27th place on the Fortune 500. Microsoft and Google’s cloud revenues would place them well up the list as well.
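The projection above is straightforward compounding. A quick check, using the ~43% average growth rate cited earlier (AWS's own rate was closer to 40%, so this is the optimistic end of the range):

```python
# Compound the ~$70B 2022 estimate forward one year at the ~43% average
# growth rate cited above; the result lands just over $100B for 2023.
revenue_2022 = 70e9
growth_rate = 0.43  # AMG average from the article; an assumption for AWS alone
revenue_2023 = revenue_2022 * (1 + growth_rate)

print(f"Projected 2023 AWS revenue: ${revenue_2023 / 1e9:.0f}B")
```

At the lower 40% rate the figure comes to roughly $98B, so either way the Fortune 500 comparison holds.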

Also see: Why Cloud Means Cloud Native 

Can AMG Keep It Up?

One might attribute 2021’s very high growth rates to an exogenous boost from the COVID pandemic. After all, AMG growth rates were lower in 2019 and early 2020, before the pandemic rapidly expanded the use of online services, most of which were deployed on one of the big three providers. So with the (fingers crossed) move back to more typical living patterns, perhaps the demand fueling these high growth rates will abate, and they will drop back to pre-COVID levels.

Another reason growth rates might drop is the sheer difficulty of continuing such high growth rates. The larger an organization is, the harder it is to grow. While AMG might not need to grow headcount over 40% to support 43% growth, it’s clear that each of them is adding thousands of staff per year. Recruiting, training, and integrating that many new people might impose a limit to AMG growth.

Beyond that, the sheer difficulty of organizing and coordinating so many people, so much technology, and so many business initiatives might reduce organizational efficiency. That would constrain AMG’s ability to serve customer demand, even if that demand still supported 40%+ growth.

Summed up, will the ‘law of large numbers’ catch up with AMG and drop their growth rates to a more pedestrian pace? That would change the nature of these companies – they would still be large and very impressive businesses, but they would come to resemble other commercial behemoths like Toyota or McKesson: impressive but unremarkable as business entities.

AMG Begs to Differ

What is the perspective on potential growth rates from the providers themselves? After all, they should have a good understanding of demand curves.

With the experience of supporting millions of customers onboarding onto their platforms, they have insight about usage patterns and growth rates. They also are able to forecast new customer growth based on sales interactions. And they all have very sophisticated strategy and economics groups that model a range of exogenous factors that might affect customer adoption and growth.

Given all that, what do AMG believe about their future prospects?

In the words of the old classic, they believe their future’s so bright, they gotta wear shades.

How do we know this? Look at their capital investment. The past few years have seen significant growth in all three providers’ capital spend: all three topped $20 billion for 2021, with Amazon putting nearly $60 billion of capital to work last year. That may be a record for a single company’s one-year capital investment.

Of course, the year’s investment is not exclusively dedicated to any of the three’s cloud computing services. Each operates significant online properties not related to providing computing services. Amazon also invests in a lot of real estate in the form of fulfillment centers, nearly doubling its total number of centers over the past two years.

Nevertheless, a significant portion of each company’s total capital investment last year went to its cloud services. Even if each company invested only a third of that $20 billion-plus in its cloud services, that is still a huge number, demonstrating confidence that sufficient demand will exist to justify the investment.

In a phrase, AMG believe “if you build it, they will come.”

Also see: Best Practices for Multicloud (That Cloud Providers Prefer You Not Know) 

What This Means for Enterprises

The huge revenue numbers, accelerating growth rates, and confidence in future growth implied by enormous capital investment mean one thing: enterprises are increasingly shifting their application deployment strategies to a cloud-first – or, indeed, a cloud-only – future.

My sense is that many enterprises have reevaluated their previous strategies and shifted from an assumed 80% on-prem and 20% cloud deployment assumption to a 20% on-prem and 80% cloud assumption.

Obviously, this aligns with the large capital investment dollars being spent by AMG.

But from the perspective of enterprises, this shift carries knock-on, or second-order effects.

Managing Massive Migrations

First, enterprises will need to develop plans on how to manage such significant migrations. For every application currently deployed on-prem, they will need to decide where it will live in the future. Given that most enterprises pay scant attention to existing applications in favor of focusing on new or more-frequently updated apps, this will require significant employee and leadership mindshare over the next couple of years.

The Fading Data Center

Second, the pace of AMG growth means that corporate data centers are going to be emptied in favor of using provider infrastructure. Enterprises will need to balance depreciation schedules, new contract signing with one of the big three, and real estate disposal opportunities to decide how quickly to make the redeployment decision. And as with any rapidly-changing market, those who leave it late will get the worst terms, with asset write-offs in the offing.

New Architecture Needed 

Third, enterprises will need to develop application architecture plans for what to do with existing apps. Most of them were designed for a world of constrained, static infrastructure, and are ill-suited for transient resources in a world of effectively infinite capacity. Which ones should be dropped with little change into cloud environments, despite suboptimal results, and which ones deserve greater investment to retool them to better integrate with cloud infrastructure?

Needed: Cloud Experts

Finally, enterprise IT organizations must create a cloud-ready workforce. Many staff members have skills attuned to the traditional practices of old-style applications and infrastructure. In the future, 80% of employees will need cloud skills.

I’ve seen how this skill transformation can be implemented, and it requires just as much investment as the applications with which these employees work. Many IT organizations fail to develop the comprehensive plans needed to support a 20% on-prem / 80% cloud future, but failing to do so consigns the organization to years of poor migration execution. 

Also see: Tech Predictions for 2022: Cloud, Data, Cybersecurity, AI and More 

The post Cloud Providers Deliver Spectacular Earnings: What’s It Mean for Customers? appeared first on eWEEK.

]]>
5 Tech Trends for 2022: Digital Transformation, Cloud and Talent Wars https://www.eweek.com/cloud/technology-trends-2022/ Wed, 05 Jan 2022 17:33:37 +0000 https://www.eweek.com/?p=220207 Every December, I spend time thinking about where the software ecosystem will go over the year ahead. For the past decade, that’s meant focusing on cloud computing, as it’s long been the dominant software trend.  The past year, however, (and 2020 as well) has been rather different—driven, of course, by COVID-19. The pandemic and its […]

The post 5 Tech Trends for 2022: Digital Transformation, Cloud and Talent Wars appeared first on eWEEK.

]]>
Every December, I spend time thinking about where the software ecosystem will go over the year ahead. For the past decade, that’s meant focusing on cloud computing, as it’s long been the dominant software trend. 

The past year, however (and 2020 as well), has been rather different, driven, of course, by COVID-19. The pandemic and its effects on health, working practices, and consumption patterns have significantly changed the role of technology and, to my mind, permanently altered how technology will be used in our society and economy going forward.

Here are five trends to look for in the tech industry in 2022.

Also see: Top Cloud Service Providers & Companies 

1) Digital Transformation Drives Chip and Software Production

The cliché that COVID-19 accelerated digital transformation, compressing 10 years of growth into a single year, contains a profound truth—that our economy is shifting toward a software-centric operational model.

This was clearly demonstrated by all the measures companies took to respond to the shift of a large part of the workforce from office to home. The obvious beneficiaries of this were video conferencing and virtual event companies. Other beneficiaries were grocery chains, whose revenues skyrocketed as people stopped going to restaurants; many of these chains rolled out new applications to support huge online ordering volume, delivery, and curbside pickup. All of these examples show how software became the key enabler of business processes.

However, digital transformation is moving well beyond online interaction supporting analog transactions. Software is now being placed into physical products, transforming them into software-centric devices with functionality driven by digital interaction.

Nothing better symbolizes how physical products are becoming digital offerings with atoms attached than the automobile industry. After the shutdowns of 2020, auto buyers came out in force in 2021, and ran directly into low availability caused by shortages of critical computer chips.

So painful is this to Ford that it entered into a strategic partnership with GlobalFoundries to ensure future chip availability. This marks a significant shift for the company, which, like most auto manufacturers, used to treat chip procurement as a low-value activity best pursued in an arms-length buyer/vendor fashion.

The chip shortage highlights that cars now depend on digital processing, from engine management to suspension response right through to user interaction—all of them and a hundred more auto features rely on digital processing, which means chips.

It also means software because, after all, the chips are only useful insofar as they enable applications controlling all those features to operate. In turn, this means that writing software has become a critical prerequisite for an automaker to compete in the next decade.

The auto industry is one vivid example of how software is moving into a central role in products, but this shift is occurring in every industry. 2021 represents the inflection point of the digital transformation S-curve, which heralds dramatic growth and impact of an emerging technology.

Whether software replaces, complements, or infuses its products, every company must come to grips with how it will pursue digital transformation. The importance of this issue cannot be overemphasized: This will soon be an existential question for companies, and failing to get this right means a dim future for those unable to succeed.

2) Applications Get Dynamic 

The new breed of digital-forward applications breaks all the assumptions underlying traditional applications, which were thought to require predictable load, a limited user population, well-understood infrastructure scale requirements, and reliable hardware.

Digital-forward applications can experience widely varying loads driven by unpredictable usage patterns and even more unpredictable user populations. One COVID-driven example cited by McKinsey & Company is a fast-casual restaurant chain that saw its online orders jump from 50,000 to 400,000 per day.

Because of this, it’s difficult to predict just how much infrastructure will be required to maintain application availability and performance, which therefore requires the ability to add and shed computing resources quickly.
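To make the add-and-shed idea concrete, the behavior can be sketched as a simple threshold-based sizing rule. The function, per-instance capacity, and utilization target below are illustrative assumptions, not any cloud provider's actual autoscaling algorithm:

```python
# Illustrative sketch: sizing instance count to load.
# All numbers (requests-per-instance capacity, target utilization,
# safety floor) are hypothetical; real autoscalers are far more involved.
import math

def desired_instances(requests_per_day: int,
                      capacity_per_instance: int = 50_000,
                      target_utilization: float = 0.7,
                      min_instances: int = 2) -> int:
    """Return the instance count needed to serve the load at the
    target utilization, never dropping below a safety floor."""
    needed = requests_per_day / (capacity_per_instance * target_utilization)
    return max(min_instances, math.ceil(needed))

# The McKinsey example: online orders jumping from 50,000 to 400,000/day.
before = desired_instances(50_000)   # modest footprint
after = desired_instances(400_000)   # 8x load forces a rapid scale-out
```

Under these assumed numbers, the 8x jump in orders forces a roughly 6x scale-out overnight, which is exactly the kind of swing that static, pre-provisioned infrastructure cannot absorb.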

In turn, this means most of the legacy application development and operations processes are made obsolete. They were designed for a world of predictability and infrastructure rationing, and imposing heavyweight processes on infrastructure access was seen as a feature, not a bug.

Finally, these digital-first application requirements mean they will be deployed into infrastructure environments that can support them—which means public cloud environments. This destroys any assumption about infrastructure reliability because as one cloud provider’s CTO noted “everything fails, all the time.”

The net-net is that the application characteristics driven by digital transformation force cloud adoption, which in turn implies adoption of cloud-native application practices.

For enterprises, this means they must adopt the application practices of the cloud-native companies, such as frequent application updates, automated processes, and resilience through redundancy and easy failover. The upskilling this will force into enterprise IT shops will be a key issue for them, as it will require changes to talent recruitment and retention well beyond what most companies have in place for what has traditionally been viewed as a low-impact cost center.

3) Hyperscalers’ Continued Revenue March – Up and to the Right

Given the enormous shift toward digital transformation, what is the likely consequence?

One obvious consequence is that this will boost the growth of hyperscale cloud providers because the de facto deployment location for cloud-native applications is a public cloud environment.

It is estimated that hyperscalers’ core growth total addressable market (TAM) is somewhere between three and five trillion dollars, which is well beyond what most people estimate.

However, this TAM is based on the current spend of traditional IT practices, which deploy applications into on-premises data centers. Those practices are full of friction and require substantial up-front capital investment, both of which serve to dampen adoption. Most organizations find infrastructure procurement such a burden they pursue only the most obvious, highest-priority use cases. Every other use case falls by the wayside because it’s too much work to justify them.

The kinds of applications that typify digital transformation are those that, in the past, would not have passed the “most obvious, highest-priority” screen. They would have been those with unproven potential, which meant, in most IT organizations, their advocates would have found the justification process too onerous and given up pushing them.

Today, changes in cloud computing have increased digital transformation priorities and will increase the overall demand for computing resources. This will push cloud infrastructure demand well beyond the TAM of displacing traditional infrastructure. It's estimated that this additional demand could double overall cloud revenue, to on the order of $10 trillion.

4) A Changing Role for IT: Running the Business

As described above, traditional IT has been lumped into the corporate cost center bucket—expenditures that are necessary but not especially connected with marketplace success. In other words, that bucket holds everything not focused on building and selling a company’s products or services. Every company’s approach to cost centers is the same: spend as little as possible.

However, this will change significantly as more of every IT organization’s efforts focus on digital-first applications. This is because these applications directly interact with customers or improve products to make them more attractive to the market. They are directly tied to revenue and, because of that, are subject to very different spend filters.

The question asked of digital-first applications is not “How much will it cost?” but “How much will it make?” For those applications that show a positive contribution to revenue or profit, the issue will be how much can be invested and how quickly.

This changes the role of IT, which can be summarized in the phrase “IT’s job changes from supporting the business to running the business.”

For senior IT leadership, this imposes a range of necessary actions: 

  • Closer collaboration with product teams to ensure the right digital functionality is built into the company’s offerings.
  • Better analysis of how users actually use the product, so it can be modified to increase customer engagement and thereby revenues.
  • Increased emphasis on application resilience to reduce interruptions to revenue flow.

Essentially, IT must change from an order-taking organization to a collaboration partner organization. Some leaders and organizations will make this transition, and their parent companies will thrive; others will find the change too challenging, and their failure will affect not just them but will damage their parent company’s future.

5) The Talent War Goes from Lukewarm to Scalding

The technical talent war has been an evergreen topic for years. IT organizations have had problems in recruiting technical talent, with accompanying challenges to project delivery timescales.

Notwithstanding this constant theme, 2022 will supercharge the issue of hiring and keeping talent.

Obviously, one cause is that IT staffing requirements are growing due to digital transformation. As software replaces, complements, or infuses market offerings, more software needs to be written, deployed, and managed. So, one reason the talent war temperature is moving up is just general demand for technology personnel.

However, there are additional reasons the war for talent is going to get blistering hot.

Digital-first applications require specific skills even scarcer than general IT talent. Writing microservice apps designed for elasticity and failure resilience calls for skills present in only a small percentage of the overall technical talent pool.

As demand for digital-first applications grows against a small percentage of the pool, it will be harder and harder to successfully recruit staff to a given technology organization. Enterprise IT shops are at an additional disadvantage given their historical reluctance to bid up wages for this category of employee. 

It used to be that old guard IT organizations had some geographic protection in this competition. For example, if you were a regional retailer located in Grand Rapids, the competition to hire someone with technical skills was relatively lower than in a large tech hub. That has changed, though.

One of the unanticipated results of COVID-19 has been the growth of remote work; suddenly, one could be employed by a top cloud-native company and happily reside in Grand Rapids. While some large technology firms like Google have put forward policies designed to induce/urge employees back into their offices, remote work appears to be here to stay.

For those formerly geo-protected companies, this means the competitive pool for desirable employees has grown to include a much broader range of companies. And many of those companies treat employees as a competitive advantage and have no hesitation in bidding up their salaries.

Consequently, one can easily predict that talent access will be a critical issue for technology organizations across the land, and companies will have to adjust to the new reality of attracting candidates who have more employment options than ever.

The Future in a Nutshell: Faster, Ever Faster

The coming year will see more change in IT than we saw over the pre-COVID decade. Driven by the torrid growth in technology adoption by businesses seeking to respond to changing customer demographics and preferred interaction modes, software will be a core competence for every company.

The consequences of these changes will be massive. Every company will need to decide how willing it is to restructure its assumptions and plans in light of these changes. There’s no hiding—it’s only a question of whether a company chooses to accept or resist the changes.

The post 5 Tech Trends for 2022: Digital Transformation, Cloud and Talent Wars appeared first on eWEEK.

]]>
The Public Cloud Market: How Big Will It Get? https://www.eweek.com/cloud/public-cloud-computing-market-how-big-will-it-get/ Mon, 08 Nov 2021 18:12:41 +0000 https://www.eweek.com/?p=219767 All three of the leading cloud computing providers are now large – and rapidly growing – businesses. It’s hard to know exactly how large Azure is, since Microsoft lumps it into a cloud services category that includes Office 365, but I arbitrarily assigned half of the category’s Q3 revenue to it; it might be larger […]

The post The Public Cloud Market: How Big Will It Get? appeared first on eWEEK.

]]>
All three of the leading cloud computing providers are now large – and rapidly growing – businesses.

It’s hard to know exactly how large Azure is, since Microsoft lumps it into a cloud services category that includes Office 365, but I arbitrarily assigned half of the category’s Q3 revenue to it; it might be larger or smaller by one or two billion dollars (Microsoft did announce that Azure had grown 50% in the quarter, so its number in the second column is accurate). In total, the three did just under $30 billion in revenue this past quarter, with a clear path to $120 billion in revenue over the next year.

AWS, the largest of the three, is now on a $60 billion annual run rate. Its growth rate – which captures the direction and velocity of this industry shift in a single number – was 39% for the quarter, representing its fourth consecutive quarterly increase. It's easy to acknowledge this achievement, but hard to recognize just how impressive it is. AWS is – by far – the fastest-growing technology company of its size in history. In fact, it might be the biggest, fastest-growing company in human history, with no sign of slowing down on the horizon.

It seems clear that AMG’s growth path will continue for the foreseeable future. The shift to public cloud computing is the dominant trend in the industry and it’s only going to get bigger going forward.

The Role of Cloud Has Changed

In its early days, cloud computing served the most adventurous users – early adopters drawn by its easy access, usage-based pricing, and low cost. There was very clearly a ton of pent-up demand for this type of service.

The startup community has been transformed by cloud computing. In the old days, standing up a new online service required significant capital investment, and huge growth – the hope of all startups – necessitated massive capital requirements to stand up even more infrastructure. The cloud changed all that. Today, a personal credit card can launch a new company, and scaling it requires far less capital. Predictably, this has resulted in an explosion of new companies exploring innovative ideas.

Likewise, as enterprises began to launch digital offerings (and compete with startups invading their space with those new online services!), the groups responsible for these offerings naturally followed the startup community onto the cloud.

Both of these types of users would fall into the category of innovators and early adopters, as shown in the traditional market segmentation diagram associated with “Crossing the Chasm.”

We are now well past that stage of cloud adoption. Mainstream enterprise and government use – represented as pragmatists and conservatives in the above chart – now accept public cloud computing as a viable choice: capable, secure, and cost-effective.

However, many enterprises have moved beyond viewing the cloud as a viable choice, one solution among several options. I’ve heard of many companies changing their strategy to define cloud as their preferred deployment location, with some going so far as to develop a plan to empty their data centers completely and operate everything in the cloud.

This is clear evidence of a changed cloud market, with adoption having moved to pragmatists and even conservatives, as those companies confront the limitations of their traditional on-prem preference.

And, as the chart above illustrates, these two market segments represent the majority of users, which explains the boffo financial results of the cloud providers. The providers are benefiting from cloud finally being accepted by mainstream enterprises and becoming an acceptable deployment choice for every kind of application, not just “digital” ones.

The Cloud Future: Full Speed Ahead

Given the results of the quarter, and the evidence of increasing enterprise appetite for cloud computing, the obvious question is “how tall will this tree grow?” Many people over the past few years have predicted slowing growth for AMG; after all, they are huge businesses and intuition and past industry experience would seem to imply that they would start to see a drop in their growth rates.

Set against that, of course, is the fact that AWS has actually increased its growth rate in each of the last four quarters. Some would, I suppose, attribute that to pandemic-induced growth. Microsoft CEO Satya Nadella said "The COVID-19 pandemic has accelerated the digitisation process by at least a decade," which obviously meant good news for AMG. Consequently, one might think of the past year as a temporary upward blip in a downward growth trend, given that AWS had seen a decrease in its growth rate in the eight quarters prior to Q2 2020.

Part of what affects growth is how large a market an offering aims at. In any new market, growth is easier early on, when the highest-payoff sales are made; later in the adoption cycle, sales get harder as the new technology is applied to lower-payoff uses. Naturally, growth slows when the benefits of purchase are lower.

This means that the overall potential for AMG sales as well as what their growth rate is likely to be going forward rests a lot on just how much opportunity there is in becoming the de facto enterprise infrastructure environment. In Silicon Valley-speak, the question is how large is the TAM — the Total Addressable Market?

One often hears the phrase “the enterprise market is a trillion dollars.” A good, albeit imperfect, way to assess this phrase’s accuracy is to look at the estimates put out by technology analyst firms. They survey large numbers of enterprises, perform some number-crunching, and develop overall market size estimates.

Gartner recently published its estimates for 2021 and 2022 as can be seen in the table below.

Adding its spending categories of data center systems and enterprise software gives a rough idea of how much on-prem spending enterprises do. For 2021, enterprises will spend just shy of $800 billion; for 2022, they will spend $870 billion, which isn't far off the trillion dollars of conventional wisdom.

So is a trillion dollars the TAM that AMG have? If their yearly run rate is around $120 billion, it would seem they have quite a bit of market yet to swallow.

However, this underestimates how much spend AMG can address. Data center equipment is typically on a three-to-five-year depreciation cycle, while enterprise software carries ongoing maintenance and upgrade costs across a similar period. So AMG's TAM is not one trillion dollars but the sum total of what enterprises spend across the depreciation/support/upgrade cycle, which is more like $3 to $5 trillion.
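The back-of-envelope arithmetic is easy to check: multiply the annual on-prem spend figure cited above by the three-to-five-year depreciation/support cycle. This sketch simply encodes that multiplication; the inputs come from the Gartner estimates in the text, rounded as the article rounds them:

```python
# Back-of-envelope TAM arithmetic from the figures in the text.
# Annual enterprise on-prem spend (Gartner's 2022 estimate for
# data center systems + enterprise software), in billions of dollars.
annual_on_prem_spend = 870

# Depreciation/support/upgrade cycle length, in years.
cycle_low, cycle_high = 3, 5

tam_low = annual_on_prem_spend * cycle_low    # ~$2.6 trillion
tam_high = annual_on_prem_spend * cycle_high  # ~$4.4 trillion
```

Those products land at roughly $2.6 trillion to $4.4 trillion, which the article rounds to the $3 to $5 trillion range.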

In other words, there’s plenty more market that AMG can go after, and plenty more room to grow, and to grow at high rates.

These TAM estimates are based on a one-for-one takeout: a dollar not spent on traditional infrastructure can be spent with AMG. That completely overlooks the potential increase in addressable market made possible by the reduced friction and usage-based pricing AMG offer.

The scale of the unaddressed market is typically overlooked and unexamined. But a clue to what happens when reduced friction and lowered costs meets untapped demand can be found in the theories of a 19th century British economist — William Stanley Jevons, whose “Jevons Paradox” I wrote about in 2015.

In his study of British coal consumption, Jevons noted that people actually consumed more coal as coal prices dropped, which is somewhat counterintuitive. After all, if something gets cheaper, wouldn’t people take the savings and buy something else? In fact, Jevons found they bought more coal and used it for purposes they couldn’t previously afford.

Consequently, it’s plausible that AMG face a TAM of something like $10 trillion — $3 trillion to $5 trillion of existing traditional spending, and $5 trillion to $7 trillion of new computing appetite previously unserved by historical IT consumption practices.

I believe that enterprise uptake of cloud computing has lots of room for growth, and I predict that AMG growth rates will increase as the benefits of cloud computing get better understood and more and more use cases (i.e., previously unaddressed demand a la Jevons Paradox) are discovered. The torrid growth we’ve seen over the past few years is nothing more than a prologue to even headier growth in the future.

The post The Public Cloud Market: How Big Will It Get? appeared first on eWEEK.

]]>
Why Cloud Means Cloud-Native https://www.eweek.com/cloud/why-cloud-means-cloud-native/ Mon, 27 Sep 2021 16:06:52 +0000 https://www.eweek.com/?p=219530 “Only when the tide goes out do you discover who’s been swimming naked.” — Warren Buffett The most important change in computing over the past 15 years has been the rise of cloud computing. The large hyperscale providers – I call them AMG, for Amazon, Microsoft, and Google – have been on an explosive growth […]

The post Why Cloud Means Cloud-Native appeared first on eWEEK.

]]>
“Only when the tide goes out do you discover who’s been swimming naked.” — Warren Buffett

The most important change in computing over the past 15 years has been the rise of cloud computing. The large hyperscale providers – I call them AMG, for Amazon, Microsoft, and Google – have been on an explosive growth curve and show no sign of slowing down.

To take one example, look at Amazon Web Services. As the chart below shows, it is now on a $60B annual run rate, which means it could be a $100B revenue business in 2023. Its competitors are smaller, but growing at similar or even faster rates. So cloud computing is perhaps a $150B business – with no end of growth in sight.

The question is: why has there been such a wholesale shift in where applications are deployed? What about the cloud has made it so irresistible?

To my mind, the key reason cloud computing has grown so much and so rapidly can be found in the venerable NIST definition of cloud computing — specifically the very first cloud characteristic NIST states in its definition:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider. 

Cloud computing makes resources available on demand — fill out a web form or submit an API call and infrastructure is available in minutes. It’s hard to overstate just how different that is from the infrastructure timeframes typical of traditional data centers. I recently heard of a shop in which the expedited VM provisioning process makes them available in 24 weeks — imagine how long the regular process takes!
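As one illustration of the "submit an API call" path, the sketch below shows roughly what programmatic provisioning looks like using the shape of AWS's boto3 EC2 `run_instances` operation. The AMI ID is a placeholder, and the actual network call is kept behind an injected client so the sketch stays a sketch; a real request also needs credentials, a region, and usually VPC/subnet parameters:

```python
# Rough sketch of on-demand self-service provisioning via an API call.
# The AMI ID below is a placeholder value, not a real image.

def build_run_instances_request(image_id: str,
                                instance_type: str = "t3.micro",
                                count: int = 1) -> dict:
    """Assemble keyword arguments for EC2's run_instances call."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

def provision(request: dict, ec2_client) -> dict:
    """Submit the request; for AWS, ec2_client = boto3.client('ec2')."""
    return ec2_client.run_instances(**request)

request = build_run_instances_request("ami-0123456789abcdef0")
```

Passing `boto3.client("ec2")` as `ec2_client` would launch real, billed capacity in minutes. The point is how little stands between a keyboard and running infrastructure, compared to a months-long procurement cycle.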

Predictably, removing friction from the process of accessing computing infrastructure causes people to enthusiastically use more of it. Much, much more. That’s why Amazon has a $60 billion business growing at 30% per year.

There is just one problem: most enterprises don’t see any improvement in their software throughput despite infrastructure being available at the snap of the fingers. 

And that’s where Buffett’s pithy aphorism comes into play. His insight is that a change in one condition can expose a shortcoming in another. In his industry, this traditionally means that when the business cycle ebbs, companies with too much debt get caught in a liquidity crunch and have to sell assets to generate cash.

Cloud Native and Accelerated Application Lifecycles

In the IT world, fast infrastructure access reveals lethargic software lifecycle practices. In the old days, that didn't really matter: taking weeks to get a new release into production means little when hidden behind 24-week "expedited" infrastructure provisioning. Once cloud computing came into play, though, everyone could see which IT processes were 'swimming naked.'

The question then became “How can I accelerate my application lifecycle to match the speed of my cloud infrastructure?” And you know who solved that problem? Cloud-native companies — companies built assuming rapid infrastructure availability with processes tuned to roll out software just as fast as a cloud provider could turn up computing resources.

Of course, these cloud-native practices didn’t just show up the day Netflix or Pinterest first deployed applications into a cloud environment. They were incrementally designed and implemented, improved over time to meet the demands of a digital-first business.

Over time, the cloud-native cohort has developed a set of best practices for lifecycle management, spanning the use of a sophisticated code management platform through to automated monitoring and management of application components to provide scale and resilience. Every process and milestone has been streamlined to provide fast, automated execution and enable touchless production placement once a developer’s fingers leave the keyboard.

The net result is that this cohort can deploy thousands of code changes into production each day. 

The key to all of this is the relentless examination of the application lifecycle process for inefficiencies and improvement opportunities, all with the aim of removing any manual steps intruding into rapid code updates.
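A crude but useful way to run that examination is to measure each pipeline stage and attack the slowest manual step first. The stage names and durations below are invented for illustration; real pipelines will have different stages and numbers:

```python
# Illustrative pipeline audit: total commit-to-production lead time,
# and the manual stage that most deserves automation next.
# Stage names and durations are hypothetical.

pipeline = [
    # (stage, hours, manual?)
    ("build & unit tests", 0.5, False),
    ("security review", 72.0, True),
    ("integration tests", 2.0, False),
    ("change-approval board", 120.0, True),
    ("deploy", 0.25, False),
]

lead_time_hours = sum(hours for _, hours, _ in pipeline)

manual_stages = [stage for stage in pipeline if stage[2]]
worst = max(manual_stages, key=lambda stage: stage[1])  # automate this first
```

In this made-up example the automated steps contribute under three hours, while the two manual gates contribute eight days; removing the worst one simply exposes the next, which is why the examination has to be relentless and repeated.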

Now that enterprises are moving wholesale to cloud computing, they too will need to optimize their application pipelines. Many, of course, will resist, claiming that constraints like regulation and risk management preclude removing human participation from the pipeline process.

They’re wrong. Companies like Stripe and Redfin operate in high regulation environments and seem to do ok. And my former company, Capital One, certainly qualifies as an enterprise, and it has adopted cloud native practices quite successfully — and achieved great business results to boot.

The need to adopt cloud-native practices will be all the more important as competitors embrace them. The cost advantages and revenue growth opportunities will make it imperative that enterprises become cloud-native practitioners; otherwise, they risk competitive disadvantage.

Best Practices for Cloud Native 

So, what should enterprises do to adopt cloud-native practices? Here are some tips:

  • Recognize that moving to the cloud is the start of a journey, not an arrival at a destination. One of the biggest mistakes I see in technology organizations is treating cloud computing like a data center at the end of a wire — an easy-to-access infrastructure that requires no additional changes in standard operating procedures. This represents a failure of imagination and not understanding that the logic of cloud computing extends well beyond convenient virtualized computing resources.
  • Iteratively remove bottlenecks. I hope this piece has convinced you that getting the full benefits of cloud computing requires examining and streamlining the complete application lifecycle. It's not a one-and-done process, though. Just as cloud-native companies like Netflix had to incrementally improve their practices, so too will every enterprise. Removing one bottleneck exposes the next roadblock to full automation. Prepare for ongoing work and investment in your pipeline.
  • Don’t forget Day 2. Many organizations believe that getting code into production quickly is nirvana. It’s definitely a huge improvement, but remember that every application inevitably requires additional functionality, bug fixes, and security patches. Plan for ongoing streamlined deployments and application component updates. Also don’t forget your pipeline components and container platform. If you’ve implemented these elements of your environment using open source, the components and platform executables will themselves need to be updated and patched — so be sure to plan for their ongoing management.

Cloud computing represents a profound shift in the way infrastructure is used. What used to take months of preparation is now done by a cloud provider in minutes. It's critical to understand how comprehensive the changes implied by cloud computing are. Develop and execute a cloud-native plan to achieve real cloud success. After all, you don't want the tide to go out and show everyone that you aren't wearing a swimsuit!

The post Why Cloud Means Cloud-Native appeared first on eWEEK.

]]>