Microsoft Announces Windows 365 Cloud Desktop at Its Inspire Partner Event
Last year’s Inspire event took place in the middle of the COVID-19 pandemic, and Microsoft’s focus was on returning to workplaces and schools with inspiring and innovative tools and features. You can read my coverage of last year’s Inspire event here.
If you follow my content, you are most likely familiar with the digital transformation technologies that are reshaping businesses swiftly and without looking back. The COVID-19 pandemic and the shift to work from home (WFH) have both been catalysts for full digital transformations in just about every business area. Microsoft has been refining its 365 suite over the past couple of years, and this week it announced its new Cloud PC offering for business. Let’s take a look at what it is and what it seeks to accomplish.
Remote Windows desktops aren’t a new concept and can be delivered in many ways, so I was most interested in what makes Microsoft’s offering unique, aside from owning the OS and having essentially zero-dollar COGS.
The Cloud PC is one Microsoft solution to the hybrid work paradox: simply put, workers want flexible remote work options and in-person collaboration at the same time. This paradox makes it difficult for organizations to quickly provide a secure work platform while also giving employees the freedom to work remotely or in the office.
Post-pandemic, people have realized that work can be done outside the office without losing productivity. The usual answers have been to bring a work device home or to work from home exclusively, which leaves organizations with employees working over insecure networks.
Windows 365 helps solve this paradox by streaming full Windows 10 or Windows 11 desktops from the cloud. It gives organizations the ability to provide employees with Cloud PCs that connect securely to the work network and are accessible anywhere there is a Wi-Fi connection. It’s also more secure, as no data actually resides on the client device; what the user sees is a video stream of a session running in the Azure cloud.
Users get the same work environment, with the same applications, tools, settings, and Windows personalization, and can take it home and pick up where they left off. Microsoft says each Cloud PC provides an instant-on boot experience. Because the Cloud PC itself lives in Azure, the only connection the user needs to worry about is the link to the streaming session, not the PC’s own connection to the internet, so speeds are fast and reliable. Microsoft says users can stream personalized applications, tools, data, and settings from the cloud to PC, Linux, Mac, iOS, or Android devices, many of which have native client applications.
User experience has always been the challenge, whether with VDI, RDP, or any other form of remote desktop or app execution. Connectivity has typically been the culprit, as your experience is only as good as the connection. This will be important for enterprises to consider before committing. Microsoft did tell me it is creating a container-based disconnected mode that would enable users to continue working even without a connection. That is differentiated and new.
Windows 365 is a simpler iteration of Azure Virtual Desktop (AVD). Where AVD is a more flexible virtual desktop infrastructure that gives IT maximum control, Windows 365 keeps things simple with a personalized Windows 10 experience. And while AVD is priced based on consumption, Windows 365 has per-user pricing.
It is also easier for IT to manage and assign Cloud PCs by categorizing them and assigning need-specific Cloud PCs to employees. Admins can also scale a user’s Cloud PC up or down based on need at any point. I like having the two options, as smaller businesses want a simpler experience with fewer knobs and gauges.
Windows 365 scales processing power to a user’s needs with different compute, storage, and business app tiers. Microsoft says it will offer a Windows 365 Enterprise edition and a Windows 365 Business edition, with the primary difference being how simply IT can choose and configure the right Cloud PC tier.
Microsoft says enterprise IT can use Microsoft Endpoint Manager to manage and deploy Cloud PCs across the organization. Small businesses can use a simple, self-service model, so no virtualization experience is needed. Cloud PCs and regular PCs show up alongside each other in Microsoft Endpoint Manager, and Microsoft says you don’t need to learn new IT tools to manage Cloud PCs.
Windows 365 also offers data analytics to monitor the health and performance of users’ Cloud PCs. Microsoft says its Watchdog service continually runs diagnostic checks to ensure connections are up at all times. This is new and differentiated.
When I asked Microsoft about the lack of GPU configurability, I was told it is coming. That’s good, because without GPUs you wouldn’t want to offer the service to anyone with a workstation doing complex development, 3D modeling, or even programming.
Finally, IT can change configurations on the fly, up or down, based on how a user actually uses the service. I thought that was valuable, as it saves money and delivers an optimal experience. I’d like to see this feature automagically upgrade or downgrade users, much like the “auto balancing” of cloud workloads.
Because Cloud PCs have a continuous, secure connection to the work network, there is no need to worry about a personal device compromising the network while streaming. Windows 365 also follows a Zero Trust architecture by storing information only in the cloud rather than on the streaming device, and it uses multi-factor authentication (MFA), integrated with Microsoft Azure Active Directory, to verify login attempts.
Microsoft says you can pair MFA with dedicated Windows 365 conditional access policies to assess login risk instantly in Microsoft Endpoint Manager. All Windows 365 data is encrypted end to end, both at rest and in transit.
Microsoft has addressed many, though not all, of the problems and concerns organizations might have with deploying Cloud PCs to their users. I think Windows 365 does an excellent job of matching its scalability with its simplicity: Cloud PCs can be managed and deployed with complex configurations, or managed simply with data analytics and Zero Trust security. Windows 365 is competitively priced, and priced differently from other virtualization platforms: per user per month rather than by computing resources consumed.
Windows 365 is a direct competitor to Amazon WorkSpaces, Amazon’s Desktop as a Service (DaaS) offering. You can read more here on Amazon WorkSpaces, but one of the differences I noticed right off the bat was the pricing: Amazon WorkSpaces is priced based on usage, rather than Windows 365’s flat monthly rate. I have tested Amazon WorkSpaces in real-world cases before, but I will have to get my hands on Windows 365 to compare the two properly.
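To make the difference between the two pricing models concrete, here is a minimal back-of-the-envelope sketch in Python. The rates below are purely hypothetical placeholders for illustration, not Microsoft’s or Amazon’s actual prices.

```python
# Hypothetical rates for illustration only; real Windows 365 and
# Amazon WorkSpaces pricing varies by configuration and region.
FLAT_MONTHLY = 40.00   # assumed flat per-user monthly rate (Windows 365 style)
PER_HOUR = 0.30        # assumed metered hourly rate (WorkSpaces style)

# Below this usage level, metered pricing wins; above it, flat pricing wins.
break_even = FLAT_MONTHLY / PER_HOUR
print(f"Break-even at ~{break_even:.0f} hours/month")

for hours in (20, 80, 160):
    metered = hours * PER_HOUR
    winner = "metered" if metered < FLAT_MONTHLY else "flat"
    print(f"{hours:3d} h/month: metered ${metered:6.2f} vs flat ${FLAT_MONTHLY:.2f} -> {winner}")
```

The takeaway: flat per-user pricing favors full-time daily use, while metered pricing favors occasional or part-time users.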
Windows 365 looks like a powerful answer to the hybrid paradox of wanting to work remotely while also collaborating in person. Microsoft has emphasized the simplicity of the service so that small and medium businesses can adopt it, while still offering the depth of integration an enterprise requires.
I am interested to see the adoption rate of Windows 365 in the coming years. I think Windows 365 will solve many of the problems some businesses have with digital transformation. Virtual desktops have many benefits over the traditional PC setup, and I will be interested to see how Microsoft partners shift and adjust to this new form of hybrid work. The balance between connectivity and quality of service has always been critical for remote desktops and apps, and I’m intrigued to see how Microsoft helps solve it.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.
VMware’s VP Craig Connors Discusses SASE and its Secure Web Gateway
To better understand the trends in SASE, I interviewed Craig Connors, VP and CTO of VMware’s Service Provider and Edge business, in a recent ZKast interview.
Interview with HPE Greenlake SVP & GM Keith White | eSPEAKS
The general manager of HPE GreenLake discusses the rapid changes in the cloud market, including how cloud users are focusing on post-pandemic growth.
James Maguire on Twitter: https://twitter.com/JamesMaguire
Keith White on Twitter: https://twitter.com/KeithWhite_HPE
eWEEK on Twitter: https://twitter.com/eWEEKNews
eWEEK on Facebook: https://www.facebook.com/eWeekNews/
eWEEK on LinkedIn: https://www.linkedin.com/company/eweek-washington-bureau
Interview with MinIO CEO Anand Babu Periasamy | eSPEAKS
Chris interviews Anand Babu Periasamy, CEO and co-founder of MinIO. Periasamy gives an overview of MinIO, its deployment options, and SUBNET Health.
Chris Preimesberger on Twitter: https://twitter.com/editingwhiz
Anand Babu Periasamy on Twitter: https://twitter.com/abperiasamy
eWEEK on Twitter: https://twitter.com/eWEEKNews
eWEEK on Facebook: https://www.facebook.com/eWeekNews/
eWEEK on LinkedIn: https://www.linkedin.com/company/ewee…
How The Third Generation Amazon Echo Show 10 Will Become The Future Desktop PC
What makes this generation of Echo different is its 10-inch screen, which attempts to follow you around the room, making it ideal for use while you are doing projects. It reminds me a lot of the Second-Generation Apple iMac, which I maintain was the best desktop PC design of all time.
Let’s talk about whether the Third-Generation Amazon Echo Show could become the perfect future desktop PC.
What made the Second-Generation iMac uniquely beneficial was that it placed all the system weight in the base and had a swivel mount for the attached flat-panel display.
This unique design made it incredibly stable, far more stable than the iMacs that came after it (and most other all-in-one configurations), and the swiveling display made it far easier to put the screen where you needed it, even if that was on the other side of the desk. It was far safer and far more helpful if you tend to move around the desk or the working space rather than sitting in the same place all day, every day.
As noted, the Third-Generation Echo Show 10 has structural advantages similar to the Second-Generation iMac’s: its weight is in the base, with the screen supported above it.
However, unlike the old iMac, the Echo Show 10’s screen will automatically follow you around the room, freeing you up to be mobile while listening to music, watching videos, or searching on the web. You interact with it hands-free, and while its AI isn’t yet that smart, it is generally adequate for things like entertainment or looking at directions while building or cooking something. Its functionality is limited by its relatively small (compared to PCs) screen and evident entertainment focus. Still, nothing says it has to stay focused on entertainment, and two weeks ago, I wrote about how Amazon is already exploring the laptop PC space with a Fire Tablet.
Speech and voice input have historically had significant problems: it took a lot of time to train a system for your voice, and the results tended to lack punctuation, so you had to spend a ton of time first in training and then in editing after dictation.
But as artificial intelligence advances, the ability to quickly adapt to an individual speaker, automate editing, and add punctuation will reach a point where, in a few years, many of us may prefer speech input to keyboard input. In short, we can begin dictating to our PCs the things we’ve used a keyboard and mouse for in the past.
Now I grew up when secretaries and dictation weren’t unusual. For most, it was beneficial to be able to pace around the office while doing dictation. But you’d still want to look at the screen from time to time to see if the computer was accurately capturing what you were saying and successfully executing the commands you were giving it.
So having a screen that followed you around the room would be helpful, though adding automatic vertical tilt and locomotion would also help keep the computer close to you while you are moving. Initially, just allowing the screen to follow you would provide most of the flexibility you’d likely need to pace while doing dictation.
Connecting it back to AWS for its intelligence and capabilities would further provide for system longevity and open additional possibilities for subsidies in what is already, at just under $250, an aggressively priced digital assistant offering.
Eventually, we’ll move to head-mounted displays and wearable technology for this function. Still, the computational power needed to blend your environment with what you are working on is very resource-intensive. We don’t yet have a single example of the headset that would be needed, let alone the cloud suite of applications allowing for complete hands-free work. But this implementation could address most of the requirements and become an ideal platform for creating the hands-free software we’ll need when head-mounted displays become viable.
The Third Generation Echo Show 10 is arguably the first proper personal robotic solution that has hit the market. Yes, it is limited to swiveling its screen, but there is no doubt it will be followed by products that are more mobile, more capable, and tied to an AWS back end that can provide a level of Artificial Intelligence we’ve never seen before.
Add the ability to translate speech into text with punctuation, coupled with the natural-language capabilities NVIDIA was talking about at its GPU Technology Conference earlier this year, and you have the opportunity for something revolutionary: a personal desktop PC with a voice interface as the default and a screen that allows you to walk and work simultaneously.
At the very least, I expect this would get us off our collective butts and allow us to work healthier over time. Amazon seems to be dancing all around the next-generation PC without crossing the line, much like Apple danced around Smartphones with the iPod Touch and then caught the Smartphone market sleeping when they announced the iPhone.
We have another iPhone-like revolution coming, and it looks like Amazon is setting up to take a page out of Apple’s book to get there first with a cloud-connected hands-free solution. Oh, and if you think this is impossible, remember we thought Apple taking the Smartphone market away from Nokia, Microsoft, Research in Motion (BlackBerry), and Palm was impossible too at one time. We don’t think it is impossible anymore.
Unlike Apple, Amazon subsidizes its hardware, suggesting it will enter with the as-a-service model that traditional PC OEMs are just now wrapping their arms around.
Five Hidden Costs in Code Migration: How to Avoid Surprise Expenses
The fundamental challenge is to create code that performs the same business process, or returns the same result, on the new platform, rather than simply making the old code run there. This traditionally involves a long, manual process of copying the data, converting the code, testing the code, and verifying that the migrated code behaves the same as the original.
Philosophically, there are three different migration approaches, listed in increasing order of risk:
1. Lift and shift
2. Lift, adjust and shift
3. Total redesign
Whatever approach you take, keep in mind the Five Hidden Costs in code migration:
Migration projects are large, and timelines are unpredictable. The decision to migrate must take the cost into account, but without an accurate estimate of the challenge, timelines slip and costs balloon.
Questions to ask include:
- Where is the data processing code, and how much code is there? (A quick inventory sketch follows this list.)
- Does everything need to be migrated? Is some code being run for no good purpose?
- How do you structure and plan a large migration?
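As a starting point for the first question, here is a minimal, illustrative Python sketch that walks a repository and counts SQL-like files and lines. The file extensions and root path are assumptions; adjust them for your codebase.

```python
import os

def inventory(root=".", exts=(".sql", ".hql", ".ddl")):
    """Count files and lines matching the given extensions under root."""
    files, lines = 0, 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.lower().endswith(exts):
                files += 1
                # Count lines, ignoring undecodable bytes in legacy files.
                with open(os.path.join(dirpath, name), errors="ignore") as f:
                    lines += sum(1 for _ in f)
    return files, lines

files, lines = inventory()
print(f"{files} data processing files, {lines} lines to assess")
```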
Can the target platform actually perform everything in a similar way to the source platform? If not, major code rewrites are required, project timelines slip, and costs overrun.
Typically, the target platform has been tested and qualified against a selection of data processing pipelines. Inevitably, when the entire codebase is explored, features not available on the target platform are found. Working around these problems requires the support of subject matter experts (SMEs) on the source platform, and access to the wide variety of SMEs needed becomes a blocking issue that delays the code migration.
With accurate, automated code conversion, an entire codebase can be qualified against the target platform without relying on the memory of the SMEs. Discovering all potential issues early in the process helps scope and cost the project and attack its serial bottlenecks (Amdahl’s law in action).
“Standard SQL” is a myth. It’s tempting to imagine that because a given syntax is legal in two SQL dialects, it means the same thing and returns the same value in both. A manual code translation, particularly by an engineer who is not an SME in both dialects, will tend to focus on successful execution first and rely on a long tail of testing and validation to confirm that the correct calculation is performed on the new platform. This takes time and is prone to error.
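A small, runnable sketch of this point, using Python’s built-in SQLite for convenience; the contrasting MySQL and SQL Server behaviors noted in the comments are well-known dialect differences.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Integer division: SQLite (like PostgreSQL and SQL Server) truncates,
# while MySQL evaluates the identical expression 1/2 to 0.5000.
print(conn.execute("SELECT 1/2").fetchone()[0])        # -> 0

# The '+' operator on strings: SQL Server concatenates 'a' + 'b' to 'ab',
# but SQLite coerces both operands to numbers and returns 0.
print(conn.execute("SELECT 'a' + 'b'").fetchone()[0])  # -> 0
```

Both statements “execute fine” everywhere; only the results differ, which is exactly the class of bug that surfaces late in a manual migration.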
The hidden cost is not only the delay in migrating code, but also the issues and errors discovered long after the migration is complete. A typical manual migration sees errors surface for around a year after switching to the new platform, and it’s common for an automated migration to discover bugs in SQL code left over from a previous migration, or otherwise latent and undiscovered in core infrastructure code.
The cost of ownership of code lies not only in authoring and testing it, but also in maintaining it. If quality and consistency are not maintained, there will be ongoing, unexpected maintenance costs on the target platform.
Migration teams must learn accurate, effective, and consistent coding patterns on an unfamiliar target platform. The learning curve is steep, and the result is often inconsistent code quality, particularly in the first pipelines migrated. The problem is worst at the start of the project, creating technical debt on what should be a fresh, clean platform. Issues most frequently arise from misunderstanding the behavior of date-time operations, and from coding practices that change as experience develops.
The result is a long tail of issues discovered in testing toward the end of the migration, which in turn leads to timeline slips and cost overruns. The worst cases arise when timeline pressure reduces testing, leaving bugs in production code that “executes fine” but does not behave the same as the code on the original platform.
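As a concrete instance of the date-time pitfalls mentioned above, here is a runnable sketch, again using SQLite as a stand-in: adding a month to January 31 yields different answers on different platforms, because SQLite normalizes the overflow while functions like Oracle’s or Teradata’s ADD_MONTHS clamp to the end of the month.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# SQLite normalizes the nonexistent date 2021-02-31 forward into March.
result = conn.execute("SELECT date('2021-01-31', '+1 month')").fetchone()[0]
print(result)  # -> '2021-03-03'; ADD_MONTHS-style clamping yields '2021-02-28'
```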
Switching over to the new platform is a major milestone that migration project managers strive to reach. The best outcome that can be hoped for is that “things just work.” Of course, there is a major risk that problems will arise that stop the core of an enterprise’s data processing.
To avoid stopping the world, the legacy and target platforms can be run in parallel and the results confirmed to match. This simplifies testing and increases confidence in the correctness of execution on the new platform. Typically this is done in stages, on segments of the data processing infrastructure. The challenge is to size the segments correctly so the project timeline doesn’t explode: if the segments are too small, testing cycles take too long; if they are too large, identifying issues takes too long.
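Here is a minimal sketch of what parallel-run validation can look like, assuming DB-API-style connections to both platforms. SQLite stands in for both sides here; a real migration would use each platform’s own driver and dialect.

```python
import sqlite3
from collections import Counter

def reconcile(legacy_conn, target_conn, legacy_sql, target_sql):
    """Run equivalent queries on both platforms and diff the result multisets."""
    legacy = Counter(legacy_conn.execute(legacy_sql).fetchall())
    target = Counter(target_conn.execute(target_sql).fetchall())
    # Rows missing on the target, and rows the target produced unexpectedly.
    return legacy - target, target - legacy

# Stand-in databases for illustration.
legacy_db, target_db = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (legacy_db, target_db):
    db.execute("CREATE TABLE t (x INTEGER)")
    db.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

missing, extra = reconcile(legacy_db, target_db, "SELECT x FROM t", "SELECT x FROM t")
print("missing on target:", dict(missing), "| extra on target:", dict(extra))
```

Running this per segment gives a concrete pass/fail signal for each stage of the cutover, rather than waiting for a big-bang switch.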
If you can address these five hidden costs in code migration, then you are on track for a successful migration project.
About the Author:
Shevek is the CTO of CompilerWorks.
Lenovo’s New/Updated Server Portfolio Defines Edge-to-Cloud Innovation
For those reasons and others, the new and updated servers recently announced by Lenovo’s Infrastructure Solutions Group (ISG) are particularly impressive. The announcements underscore that Lenovo ISG is more than the legacy System x assets; it now provides edge-to-cloud solutions for Lenovo customers of all sizes.
Let’s take a look at those offerings, where they stand in marketplace terms and what the company has accomplished.
Lenovo ISG has made three major announcements over the past month.
What do these new solutions mean for Lenovo in terms of the market? Overall, they reflect how the company has successfully leveraged the products and IP derived from the System x acquisition while delivering new innovations designed to benefit customers. In addition, the new offerings reflect Lenovo ISG’s continuing, successful collaborations with strategic partners.
In the first case, while the new solutions build on the established ThinkSystem brand, they also reflect key shifts that have occurred in enterprise data center requirements during the past half decade. Those include tapping into substantial increases in compute performance enabled by new generation CPUs and GPUs, as well as gaining advantage from muscular improvements in memory, networking and interconnect technologies.
In addition, enterprises of every size and kind are rapidly adopting and adapting to working with a variety of cloud technologies and public cloud service providers, with hybrid environments leading the way. These same customers are also seeking to enhance the performance of vital business applications and workloads, scenarios that HCI appliances are designed to address, as well as specially optimized solutions for workloads, such as the SAP HANA in-memory database.
Finally, organizations are increasingly examining how artificial intelligence (AI) and related machine learning technologies can benefit their businesses. Some large-scale enterprises are exploring these areas broadly, partnering with vendors with significant research efforts. However, far more companies are looking for practical solutions that can deliver measurable benefits quickly, effectively and dependably. That point goes to the heart of Lenovo’s longstanding strategic efforts with innovative software, hardware and silicon partners.
Lenovo ISG’s latest servers, HCI solutions and appliances touch all these areas. They illustrate the company’s continual efforts to improve its server and system portfolios. At the same time, the new ThinkSystem and ThinkAgile offerings reflect Lenovo’s successful collaborations with strategic partners, including Intel, NVIDIA, AMD, VMware, Nutanix, NetApp and SAS.
In essence, these new solutions offer the company’s myriad global enterprise clients practical support for their immediate business requirements, from Edge to Cloud, from traditional servers to hyperconverged offerings – optimizing for needed applications. Just as importantly, they demonstrate Lenovo ISG’s dedication to evolving and adapting to meet its customers’ future needs.
About Pund-IT®
Pund-IT® (www.pund-it.com) emphasizes understanding technology and product evolution and interpreting the effects these changes will have on business customers and the greater IT marketplace. Though Pund-IT® provides consulting and other services to technology vendors, the opinions expressed in this commentary are those of the author alone.