eWEEK eSPEAKS Video Podcast with James Maguire
Todd Blaschka, COO and CRO at TigerGraph, explains the advantages of graph analytics and graph databases.
Mohit Aron, CEO of Cohesity, looks at the current state of ransomware, and provides advice about managing data for optimum security and productivity.
Four industry experts discuss the growing challenges in data governance, and highlight the issues that need to be resolved for better data management.
Ram Venkatesh, Chief Technology Officer at Cloudera, discusses the company’s shifts, and explains how the Cloudera Data Platform serves a hybrid environment.
Mickey Bresman, Co-founder and CEO of Semperis, provides three tips for securing your Active Directory against cyberthreats.
Simon Jelley, GM & VP of Product at Veritas Technologies, explains why ransomware is so difficult to defend against – and outlines critical best practices to lessen the threat.
Radhika Krishnan, Chief Product Officer for Hitachi Vantara, explains the role of data fabrics and discusses how data storage and data analytics are merging.
Nabil Bukhari, CTO and CPO of Extreme Networks, discusses how today’s networks need AI for proper monitoring, and also dives into how the democratization of technology is changing the tech sector in profound ways.
Jonathan Martin, the president of Weka, discusses how today’s exponential data growth is reshaping data storage, particularly in the analytics sector.
Margaret Lee, GM & SVP, Digital Service and Operations Management at BMC Software, discusses how AISM pairs with technologies like AIOps, and forecasts the future of AISM.
Daniel Hernandez, General Manager, Data and AI at IBM, discusses how data fabric enables a cohesive data strategy to better enable artificial intelligence.
Dave Frampton, VP of Security Solutions at Sumo Logic, discusses the new threat surfaces that companies need to focus on protecting.
Four top industry thought leaders discuss the key issues in the CIO-CMO relationship, and whether the challenges are a tech or a human problem.
Vincent Berk, CTO and Chief Security Architect at Riverbed, discusses network visibility and how to transform network and application data into actionable security intelligence.
Bipul Sinha, Co-Founder and CEO of Rubrik, explains Zero Trust’s role in blocking ransomware attacks, and discusses security in a multicloud world.
Irfan Khan, president, HANA Database & Analytics at SAP, discusses the value of an end-to-end technology platform that incorporates analytics throughout.
Krishna Subramanian, co-founder and president, Komprise, talks about using Data Management-as-a-Service to manage data in a multi-cloud environment.
Philip Cooper, VP of Product, Tableau, discusses the trend toward making data analytics tools available to employees throughout the organization, not merely the C-suite.
Ross Brown, VP of Product Marketing for Oracle Cloud, discusses trends in multicloud, and points out key differentiators for Oracle Cloud.
Armon Dadgar, Co-Founder and CTO of HashiCorp, discusses why “castle and moat” is outdated. Plus: what’s the future of cybersecurity in a multicloud world?
Three leading experts provide a deep dive into NetDevOps, including its history, best practices and common challenges, along with a look to the technology’s future.
Jaspreet Singh, CEO of Druva, speaks about key trends and challenges facing companies in cloud-based data backup.
Bernard Golden, Executive Technical Advisor at VMware, talks about cloud’s evolution, the rise in complexity in IT, and the advice he gives to his VMware cloud clients.
Karl Strohmeyer, Chief Customer and Revenue Officer for Equinix, provides a portrait of the rapidly evolving IT infrastructure market.
Four major thought leaders in the CIO community discuss the remarkable changes created by the pandemic. Which shifts are temporary, and which will be ongoing?
High levels of data integrity and data quality enable a data analytics process to offer truly accurate, actionable insight.
Kaustubh Das, VP and GM of Cloud and Compute at Cisco, discusses why a coherent overall approach is so essential in an enterprise cloud deployment.
Paul Roehrig, Head of Strategy, Cognizant Digital Business & Technology, discusses the complexities of digital transformation – and also defines this oft-used word.
Kevin Gosschalk, CEO and Founder, Arkose Labs, discusses the state of online fraud, including the pandemic’s effect.
Alation CEO Satyen Sangani talks about all things Data Catalog. We’ll look at what exactly a data catalog does, and talk about some tips and best practices for optimizing a data catalog.
Manuvir Das, Head of Enterprise Computing at NVIDIA, discusses the current state and future trends in enterprise artificial intelligence.
Deepak Patil, Senior Vice President, Cloud Platforms & Solutions, at Dell, discusses the importance of hybrid cloud, and also looks at future directions in the enterprise cloud market.
An industry leader discusses how the explosive growth in data and AI is driving equally fast growth in memory and compute capacity.
Ciaran Byrne, VP of Product Management at OpsRamp, discusses the AIOps market, including the key challenges facing this fast-growing sector.
Bob Friday, CTO of Juniper’s AI-Driven Enterprise Business, talks about taking steps to help your business deploy AI, focusing on the need for quality data.
The general manager of HPE GreenLake discusses the rapid changes in the cloud market, including how cloud users are focusing on growth post-pandemic.
Debunking Nagging Cloud Adoption Myths
Cloud computing has evolved in many ways from its origins among the application service providers of the late 1990s, becoming the new norm for existing and net-new applications. As they gain cloud maturity, more companies are shifting from cloud-first toward an all-cloud, cloud-only model. Cloud evolution is happening everywhere: software plus hardware shifted to appliances, which quickly shifted to as-a-service delivery. We see this in cloud computing, data warehousing, CRM and IT service management.
Some companies, however, are still playing “cloud catch-up.” There are persistent cloud adoption myths that may be to blame, acting as barriers and preventing companies from leveraging the superpowers of the cloud to boost efficiency, security and innovation.
This eWEEK Data Points article uses industry information from Chadd Kenney, a former executive at Pure Storage and EMC, who is now vice president and chief technologist at startup Clumio, a data backup and recovery software-as-a-service (SaaS) provider.
Many people assume the cloud is more expensive than it really is, and that buying something as a service costs more than an on-premises solution. In actuality, cost is only an issue when companies use the cloud inefficiently or fail to fully leverage it to their benefit. It’s impossible for any enterprise to compete with what the public cloud offers in terms of innovation, infrastructure efficiency, flexibility and scalability by building its own solution.
Most companies don’t have the manpower, talent or time to continually tinker with and optimize cloud solutions. This is where leveraging the innovation of the public cloud can help, instead of using it like another co-location facility and merely replacing the on-prem infrastructure. SaaS and PaaS solutions offload those inefficiencies and optimize for the cloud, shifting the focus toward strategic business concerns rather than infrastructure.
Many pros think that once they move apps to the cloud, that is the end game. But true innovation means constant evolution: You must continue to integrate, iterate and innovate. One of the biggest mistakes companies make is failing to adapt quickly as new technologies emerge. If you treat “lift and shift” as the final destination of your cloud journey, you aren’t deriving the full benefits of the cloud.
Consider all the cloud-related innovation to date: The local data center has shifted to co-location and then to the cloud (AWS, Azure, GCP, Oracle). Computing has evolved from bare metal servers to virtualization and now to serverless. The database has made the journey from software and storage to database appliances to the cloud (AWS RDS). Backup has evolved from legacy on-prem hardware and software to hyper-converged appliances to a backup-as-a-service model.
Many IT leaders still believe the cloud is inherently insecure. That’s the perception, but it’s not reality. It’s been reported that 66% of IT professionals say security is their most significant concern when adopting an enterprise cloud computing strategy (source: Forbes). Yet the cloud in general is very secure; security is a core part of the cloud services business. Amazon’s responsibility is to provide security of the cloud. Your job, as an enterprise, is to provide security in the cloud.
When the cloud makes headlines for being “hacked,” the cause is usually a customer-side lapse in security in the cloud. The large cloud providers already implement compliance programs for HIPAA, PCI DSS, FedRAMP, SOX and many others. Every time a provider adds a new service or feature, those compliance certifications must be re-upped to ensure they meet the requirements of clients.
This re-upping process is difficult and expensive for enterprises to take on as a DIY task, and the result would still not be as effective as what they could get “out of the box” from the cloud provider.
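To make the “security in the cloud” half of that shared-responsibility model concrete, here is a minimal sketch, not drawn from the article, that uses the AWS boto3 SDK to enable S3 Block Public Access, the kind of customer-side control whose absence sits behind many “cloud hacked” headlines (the bucket name is a placeholder):

```python
# Minimal sketch: the customer-side "security in the cloud" duty.
# Requires boto3 and AWS credentials; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

# Block every form of public access on a bucket -- leaving this off is a
# classic customer-side misconfiguration, not a failure of the provider.
s3.put_public_access_block(
    Bucket="example-enterprise-data",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```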
An all-cloud model eliminates extra hardware and software to buy, size, configure and manage, and it is the way forward. By debunking the aforementioned cloud adoption myths, more enterprises can focus on larger business initiatives rather than on the heavy lifting of racking, stacking and powering servers.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.
BigPanda Provides Free 90-Day Access to IT Ops Platform
The free 90-day accelerator program is called Ops from Home. The purpose is to give IT operations, network operating center, DevOps and site reliability engineering teams access to BigPanda’s event-correlation and incident-automation platform with no obligation for participants after the three-month time period.
Back on March 16, the White House issued its Coronavirus Guidelines for America, requiring that the essential critical workforce, including health care providers, pharmaceutical companies and food supply organizations, maintain normal work schedules. Many of the IT Ops teams responsible for keeping the lights on are working from home, struggling to support and ensure high availability of their most critical services.
The BigPanda IT Ops from Home accelerator program aims to help these organizations as well as select large enterprises that are struggling to maintain high service availability.
IT Ops from Home is designed for teams struggling with multiple monitoring tools, duplicate alerts and incidents about the same IT problem, hard-to-identify root causes, a lack of automation, minimal situational awareness across teams, outages driven by high traffic volumes, and the difficulty of collaborating across distributed teams.
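To see why duplicate alerts are such a drain, consider a minimal, hypothetical sketch of event correlation. This is not BigPanda’s algorithm, just the general pattern: alerts that share a fingerprint within a time window collapse into a single incident.

```python
# Illustrative event correlation: collapse alerts sharing a fingerprint
# within a time window into one incident. Not BigPanda's actual algorithm.
from collections import defaultdict

WINDOW_SECONDS = 300  # correlation window (assumed for this sketch)

def correlate(alerts):
    """alerts: dicts with 'host', 'check' and 'ts' (epoch seconds)."""
    incidents = defaultdict(list)  # fingerprint -> list of incident groups
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        fingerprint = (alert["host"], alert["check"])
        groups = incidents[fingerprint]
        if groups and alert["ts"] - groups[-1][-1]["ts"] <= WINDOW_SECONDS:
            groups[-1].append(alert)   # duplicate -> same incident
        else:
            groups.append([alert])     # new incident for this fingerprint
    return incidents

alerts = [
    {"host": "db01", "check": "disk", "ts": 0},
    {"host": "db01", "check": "disk", "ts": 60},    # duplicate alert
    {"host": "db01", "check": "disk", "ts": 900},   # outside the window
]
print({fp: len(g) for fp, g in correlate(alerts).items()})
# {('db01', 'disk'): 2} -- three alerts became two incidents
```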
For more information on the program and its benefits, go here.
A CEO’s Life Lessons Learned After Beating COVID-19
I’m a lucky guy. I’ve survived an ordeal that has killed tens of thousands of others, and I’ve learned a few things about myself along the way … things I think will make me both a better CEO and a better person.
It started on St. Patrick’s Day, March 17. I had recently traveled to San Francisco and Washington, and on this day I felt ill with a fever, cough and shallow breathing. While I wasn’t sure what was wrong, I’d seen all the stories about the novel coronavirus and decided to take no chances. I immediately self-isolated at my home in South Carolina, even though a great deal of misinformation was circulating at the time. I moved quickly, but still doubted I would test positive. I was wrong.
I was tested four days later by my primary care provider. Six days after that, I got the official word: Yes, I’d tested positive for the COVID-19 virus. Nobody else in my family showed symptoms, but we couldn’t be sure: No one had been tested because the tests weren’t available. My primary concern was the safety of my family and making sure that no one else had gotten sick from me. Thankfully, that was the case.
But as the CEO of my company, Edge Technologies, I also had my concerns about the company; if I was not able to help direct the company, who would? Given that I was sleeping more than 12 hours a night, from 6 p.m. to 8 a.m. as my body tried to fight off the virus, that wasn’t an idle concern. That was especially true as many of our existing clients came at us, all asking for an increase in our ability to serve their needs, as their own workers (and millions of others around the world) began working from home.
Thankfully, I’ve recovered and am back to work. But I was struck along the way by several points that I learned, or re-learned, as a CEO. I also learned a few things about myself, things I never would have taken the time to consider before this enforced timeout.
As I said at the beginning of this article, I’ve been through an ordeal. I know it’s nowhere as serious as many of the others who’ve been infected, or who have paid the ultimate price. I hurt for them and for their families. I know I’ve been fortunate to have recovered as easily and as quickly as I did, and I’m determined to use the lessons I had the time to think about and learn during that time to move forward and be a better CEO. And a better person.
Jim Barrett is Chief Executive Officer at Edge Technologies based in Northern Virginia.
Why When It Comes to Recovery, Backup Useless, DR Priceless
While simple backups may be sufficient for an individual, traditional backups alone are not enough for businesses. A primary purpose of having a secondary set of data is getting a business up and running after data loss, data corruption or ransomware, and backup systems are simply not up for the task of disaster recovery (DR).
It’s time to rethink data protection. Archiving data and sending it off to some faraway place with the hope that it will never be needed again is antiquated. Businesses can no longer afford to wait weeks, days or even hours to restore their data. Recovery must be instant. Thanks to the public cloud, DR and backup have been radically transformed in ways that make this possible.
In this eWEEK Data Points article, Sazzala Reddy, co-founder and chief technology officer at Datrium, explains why backup is useless for DR and how to do DR right to recover quickly from disasters big and small.
One: It’s a Schrödinger’s backup situation: The state of a backup is unknown until you have to restore from it.
Two: Backup systems are built for backing up data, not for recovery. It will take you days or weeks to recover your data center from backup systems. Mass recovery was never the design goal of backups.
Times have changed. In today’s on-demand economy, we expect our IT systems to always be up and running. Any amount of downtime impacts customers, employees and the bottom line.
Downtime can be caused by floods, tornados, fires, accidental human error and other unexpected events. However, ransomware, a new and rapidly growing phenomenon, is emerging as a leading cause of downtime. According to The State of Enterprise Data Resiliency and Disaster Recovery 2019, disasters ranging from natural events to power outages to ransomware affected more than 50% of enterprises in the last 24 months. Among these disasters, ransomware was identified as the leading cause with 36% of respondents reporting having been the victim of an attack.
The sharp increase in ransomware attacks and other data threats has made backup useless and DR more important than ever before. While there are newer backup systems on the market today, they still aren’t capable of rapid and reliable recovery. Today, speed of recovery and how quickly you can get back online after an event are the name of the game—and winning requires a comprehensive DR-focused strategy.
While backups are a great first step, they are not an effective DR strategy due to the sheer amount of time and manual labor required to recover from traditional backups after a disaster. Imagine 100 terabytes of data stored in a backup system, which can restore at 500MB/sec (which is generous). In a disaster scenario, it will take two-plus days to copy the data from the backup system into a primary storage system. Effective and fast DR requires automation in the form of disaster recovery orchestration.
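The arithmetic behind that two-plus-day figure is easy to check:

```python
# Back-of-the-envelope restore time for the scenario above.
data_bytes = 100 * 10**12            # 100 TB in a backup system
throughput = 500 * 10**6             # a generous 500 MB/sec restore rate
seconds = data_bytes / throughput    # 200,000 seconds
print(seconds / 3600, "hours")       # ~55.6 hours, i.e. two-plus days
```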
Step 1: Get access to the right data in a different infrastructure (backup and storage vendors sometimes forget that DR is not just about this step).
Step 2: Bring up the workloads, in the right order, on the right systems, dealing with differences in networking, etc. This is vastly more practical and automatable for virtualized workloads than it is for physical.
Step 3: Fail everything back to the originating site, with the same concerns for workload sequencing, mapping, etc. (These last two steps require runbook orchestration, a key component of comprehensive DR; see the sketch after this list.)
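In miniature, the orchestration behind Steps 2 and 3 is an ordered bring-up. The sketch below is purely illustrative; the workload names, dependency map and start_vm helper are invented, but it shows why the sequencing belongs in a runbook rather than in someone’s head:

```python
# Illustrative runbook sketch: bring workloads up in dependency order.
# Workload names, the dependency map and start_vm() are all invented.
from graphlib import TopologicalSorter  # Python 3.9+

dependencies = {
    "database": [],                    # no prerequisites
    "app-server": ["database"],        # needs the database up first
    "web-frontend": ["app-server"],    # needs the app tier up first
}

def start_vm(name):
    # Placeholder for the failover call a real orchestrator would make.
    print(f"starting {name}")

# static_order() yields each workload only after its dependencies.
for workload in TopologicalSorter(dependencies).static_order():
    start_vm(workload)  # database, then app-server, then web-frontend
```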
Because no business can afford to lose access to their systems for hours, days or even weeks, effective disaster recovery needs instant RTO (recovery time objectives), and the bottom line is that legacy backup systems were not designed for that. Effective DR solutions need to deliver instant RTO restarts.
The public cloud offers on-demand compute and elastic storage. You can keep your data in a geographical region of your choice on low-cost media and spin up compute only when disaster strikes, so you can work with that data. Additionally, you pay for resources only when you use them, in a disaster or a test. That’s how the cloud is supposed to be used: elastic and pay as you go. It’s like paying for insurance only after you’ve had a car accident.
A key part of DR is getting the data to a second site that’s unaffected by the disaster and has compute resources available for post-recovery operation. To do this, your backups need to be deduplicated and stored in a steady state on low-cost public cloud storage such as AWS S3. Then, in the event of a disaster, runbook automation instantly turns these backups into live VMs, delivering instant RTO for thousands of VMs.
By leveraging the public cloud and new technologies, it is now possible to converge backup (low-cost media, granular recovery) and DR (orchestration software, random I/O performance). This truly simplifies DR with an approach that enables instant failover of an entire data center with the push of a button, eliminating the need to cobble together all of the backup and DR software pieces manually.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.
eWEEK Moves to New Publisher, TechnologyAdvice.com
Foster City, Calif.-based digital marketing service provider QuinStreet had been eWEEK’s home for the last eight years after it acquired the technology news, trends and product information site from Ziff Davis Enterprise in February 2012.
Terms of the transaction were not announced.
In fact, QuinStreet—which already has highly profitable businesses in insurance, financial services and education—sold all 39 of its B2B tech publications—eWEEK, Datamation, Webopedia, IT Business Edge and eSecurityPlanet among them—to TechnologyAdvice, a Nashville, Tenn.-based B2B marketing company that creates opportunities for technology buyers to find the best business technology and technology vendors to connect with their ideal customers. Editorial operations at the publications will remain unchanged in the short term.
TechnologyAdvice is a privately held company, founded by CEO Rob Bellenfant in 2006, that has built its own demand-generation business steadily and continues to grow. Unlike QuinStreet, TechnologyAdvice concentrates only on B2B marketing services, with deep experience in working with enterprise IT businesses—which is eWEEK’s sole focus.
TechnologyAdvice was named to the Inc. 5000 list of “America’s Fastest-Growing Private Companies” in 2014, 2015, 2016 and 2017.
At the same time on May 7, TechnologyAdvice announced it acquired Quebec-based Project-Management.com. Project-Management.com serves practitioners and technology companies in the project management industry with technology reviews, training and thought leadership content.
The acquisitions of QuinStreet’s B2B business and Project-Management.com round out TechnologyAdvice’s existing marketing capabilities.
“These acquisitions help us further our purpose, which is to create opportunities for technology buyers, technology vendors, our team members, and our communities,” TechnologyAdvice’s Bellenfant said. “Our ability to serve new and existing B2B technology clients in innovative and meaningful ways has just exploded. These deals solidify our expansion from specialized lead-generation services to a full-service media company that can offer clients a range of media products across the funnel and to technology companies of any size.”
eWEEK, whose predecessor, PC Week, was a weekly newspaper (and later a very popular weekly magazine until 2011) founded in Boston in 1983, updated its name in 2000 after its coverage extended into the enterprise sector, far beyond the PC segment. It is among the longest-running IT trade publications in the world and continues to attract a loyal readership of IT managers, C-level executives, software developers and IT segment investors.
Editor Chris Preimesberger, who’s been with the publication since 2004, will remain in charge of the publication’s editorial operations.
“It’s wonderful and reassuring that TechnologyAdvice sought us out and wants to invest in our mission to bring the latest, most relevant IT product/services/company information and trends to our readership,” Preimesberger said. “As enterprise IT continues to branch out and become more and more complicated to buy, use and explain, the need for competent news and analysis from a third-party publication with a respected history becomes increasingly important to technology buyers.”
eWEEK will continue to showcase its well-known and respected writers and analysts, including Wayne Rash, Charles King, Rob Enderle, Zeus Kerravala, Peter Burris, Brian Solis, Frank Ohlhorst, Eric Kavanagh and others on a daily basis. Preimesberger, who created #eWEEKchat and eWEEK’s Innovation section in 2013 and features such as IT Science case studies and eWEEK Data Points articles in 2016, said he expects to add some new names to the writing/analysis lineup, in addition to new types of content to the publication.
In fact, eWEEK has already started its new eSPEAKS video-interview series, which will be publishing on the eWEEK YouTube channel soon. A new podcast series is also in the works.
The new email address for Preimesberger is chris.preimesberger@technologyadvice.com. An alternate address, cpreimesberger@eweek.com, is also operational.
Why White-Box Models in Enterprise Data Science Work More Efficiently
However, not all data science platforms and methodologies are created equal. The ability to use data science to make predictions and decisions that optimize business outcomes requires transparency and accountability. Several underlying factors come into play, such as trust, confidence in the prediction and an understanding of how the technology works, but fundamentally it comes down to whether the platform uses a black-box or white-box model approach.
Black-box testing or processing is a method in which the internal structure/design/implementation of the item being tested is not known to the tester. White-box testing or processing is a method in which the internal structure/design/implementation of the item being tested is known to the tester.
Once the industry standard, black-box-type machine-learning projects tended to offer high degrees of accuracy, but they also generated minimal actionable insights and resulted in a lack of accountability in the data-driven decision-making process.
On the other hand, white-box models offer accuracy while also clearly explaining how they behave, how they produce predictions and what the influencing variables are. White-box models are preferred in many enterprise use cases because of their transparent “inner-working” modeling process and easily interpretable behavior.
Today, with the advent of autoML 2.0 platforms, a white-box model approach is becoming a trend for data science projects. In this eWEEK Data Points article, Ryohei Fujimaki, Ph.D., founder and CEO of dotData, discusses five key reasons why white-box data science models are superior to black-box models for deriving business value from data science. DotData is a provider of full-cycle data science automation.
It is important for both analytics and business teams to understand the varying levels of transparency and their relevance to the machine learning process. Linear and decision/regression tree models are fairly transparent in how they generate predictions. However, deep learning (deep neural networks), boosting and random forest models are highly non-linear black-box models that are difficult to explain. While black-box models can have a slight edge in accuracy scores, white-box models offer far more business insights, which are critical for enterprise data science projects. White-box transparency means that the exact logic and behavior needed to arrive at a final outcome is easily determined and understandable.
Data scientists obviously are math-oriented and tend to create complex features that might be highly correlated with the prediction target. For example, consider the following feature vector for customer analytics: “log(age) * square-root(family income) / exp(height).” One will not be able to easily explain its logical meaning from the viewpoint of customer behaviors. In addition, deep learning (neural networks) computationally generates features. It is not possible to understand such deep non-linear transformations; thus, incorporating this type of feature will make the model a black box. In today’s regulatory environment, the need to explain the key variables driving business decisions is important. White-box models can fulfill this need and thus are gaining in popularity.
Model consumers are using ML models on a daily basis and need to understand how and why a model made a particular prediction, to better plan how to respond to each prediction. Understanding how a score has been derived and what features contributed allows consumers to optimize their operations. For example, a black-box model may indicate that “Customer A is likely to churn within 30 days with a probability of 73.5%.” Without a stated reason for the churn, a business user will not have enough information to determine if the prediction is reasonable. In contrast, white-box models typically give a different type of answer, such as, “Customer A is likely to churn next month because Customer A contacted the customer service center five times last month and usage decreased by 25% in the past four months.” Having the specific reasoning behind the prediction makes it much easier to determine the validity of the prediction, as well as what action should be taken in response.
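As a toy illustration of where such reason codes come from (the feature names and weights below are invented, not dotData’s), a linear white-box model’s score decomposes into per-feature contributions that read directly as reasons:

```python
# Toy white-box reason codes: a linear churn score decomposes into
# per-feature contributions. Features and weights are invented.
features = {"support_calls_last_month": 5, "usage_change_pct": -25, "tenure_years": 3}
weights  = {"support_calls_last_month": 0.30, "usage_change_pct": -0.04, "tenure_years": -0.20}

contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())

print(f"churn score: {score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # each line doubles as a human-readable reason
```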
In enterprise data science projects, data scientists and model developers have to explain how their models behave, how stable they are and which key variables drive the prediction model. Explainability is therefore absolutely critical for model acceptance. White-box models produce prediction results alongside influencing variables, making predictions fully explainable. This matters most when a model is used to support a high-profile business decision or to replace an existing model, since model developers have to defend their models and justify model-based decisions to other business stakeholders.
As more organizations adopt data science into their business process, there are increasing concerns about accountability and decisions made based on information that is personal and can sometimes be interpreted as discriminatory. As they provide increased transparency and explainability, white-box models help organizations stay accountable for their data-driven decisions and maintain compliance with the law and any potential legal audits. In contrast, black-box models exacerbate this issue, where less is known about the influencing variables that are actually driving final decisions.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.
How to Make Sure Your VPN Access Remains Seamless
In the past, VPNs were known to cause various levels of grief in many organizations because they can be tricky to implement and maintain. But they’re also very important components in enterprise security, and implementations have improved markedly in recent years when it comes to user-friendliness.
Many organizations have used VPNs for years to provide seamless connectivity without compromising security for employees who travel or work remotely. These VPN endpoints are typically set up to support 5% to 10% of a company’s workforce at any given time. Ongoing VPN support for 100% of the workforce at companies around the world is unprecedented, and this “new normal” is putting unforeseen stress on both corporate and public networks.
There are important steps companies can take to address these challenges so that connecting to enterprise networks doesn’t leave employees frustrated during a time when stress levels are already high. These same best practices can support an enduring strategy for managing an increasingly mobile and remote workforce as the nature of work shifts.
This eWEEK Data Points article is based on industry information supplied by Karthik Krishnaswamy, director of product marketing at NS1.
VPNs are intrinsically designed to be encrypted tunnels that protect traffic, making them a secure choice for enabling remote work. Even with the increased number of people connecting to VPNs, this remains true. However, cyber-criminals do take advantage of times of chaos to attack corporate infrastructure like VPNs.
The strategy cyber-criminals typically employ is to obtain a person’s network credentials to access the VPN and, by extension, the employer’s networks and systems.
With so many more VPN users, the pool of potential victims who lose their credentials is higher than ever before. Knowing this, companies can ensure they properly secure their VPNs by enabling and requiring two-factor authentication as a second layer of protection.
With two-factor authentication, even if a cyber-criminal obtains an employee’s login credentials, they won’t be able to access the VPN or network without additional information, such as a one-time-use security code sent to a preselected mobile number or, ideally, to a token application. While no security measure can 100% guarantee complete security, setting up two-factor authentication can make it much more difficult for a cyber-criminal to take advantage of increased VPN usage.
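Those token applications typically implement the TOTP algorithm standardized in RFC 6238. Here is a minimal sketch using only the Python standard library; the shared secret is a well-known test value, not a real credential:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # test secret; matches authenticator apps
```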
Once a company has secured its VPN endpoints, it may find that the current infrastructure does not adequately support its entire workforce. A report from Atlas VPN estimates that VPN usage could increase by 150% as the coronavirus continues to spread. Companies can manage the increased demand by adding endpoints in multiple regions. Depending on the company’s VPN architecture, this can be done through a cloud provider by increasing seats, by adding licenses to the existing VPN hardware solution, or by purchasing and deploying new VPN servers. It may also be possible to enable VPN capabilities on existing edge network devices, a great short-term solution for some because it adds capacity without incurring additional capital expenses.
While increasing the number of VPN servers will help ensure a company has the capacity to accommodate more employees working remotely, there may still be issues with performance or availability if all the users log in to the same VPN server.
To accommodate this increased demand, organizations can optimize VPN server use. In many cases, it is up to the employee to randomly choose an endpoint from a list. Employees continue connecting to a “default” endpoint for days or weeks, regardless of usage or capacity.
Worse yet, if users cannot connect to their normal endpoint due to high traffic volume, the client will often select a backup without regard to location or load, resulting in slowness or outright disconnections.
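Smarter steering, at the DNS layer or in the client, replaces that blind pick with a health-, load- and location-aware choice. The sketch below is hypothetical; the endpoints and scoring weights are invented, but it shows the basic decision:

```python
# Hypothetical sketch of the endpoint choice DNS-layer steering automates.
# Endpoint data and scoring weights are invented for illustration.
endpoints = [
    {"name": "vpn-us-east", "region": "us-east", "load_pct": 92, "healthy": True},
    {"name": "vpn-us-west", "region": "us-west", "load_pct": 40, "healthy": True},
    {"name": "vpn-eu",      "region": "eu",      "load_pct": 35, "healthy": False},
]

def pick_endpoint(user_region):
    candidates = [e for e in endpoints if e["healthy"]]  # skip failed endpoints
    def score(e):
        # Prefer lightly loaded endpoints; give local ones a bonus.
        return e["load_pct"] - (30 if e["region"] == user_region else 0)
    return min(candidates, key=score)["name"]

print(pick_endpoint("us-east"))  # vpn-us-west: the local endpoint is overloaded
```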
Lastly, continuous monitoring is a crucial step to making sure your VPN connections remain accessible and performant for employees. Many tools provide valuable insight that can help companies evaluate and adjust capacity as needs change. Consistent monitoring can also demonstrate trends about when employees are connecting the most often, and from which geographies. This allows companies to better plan for times of high volume, create strategies for when to add more VPNs based on employee growth plans and set up informed traffic routing rules, optimizing VPN usage long term.
By adding VPN endpoints, steering traffic at the DNS layer, securing those endpoints and consistently monitoring performance, employers can deliver the same seamless network and technology experiences that employees expect when they are in the office. In a time of uncertainty and worry, this can help reduce the stress of working remotely while also creating a resilient network.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.
IT Science Case Study: Using AI to Quickly Resolve Enterprise IT Issues
Unless they’re brand new and right off the assembly line, the servers, storage and networking inside every IT system can be considered “legacy.” This is because the iteration of both hardware and software products is speeding up all the time. It’s not unusual for an app maker, for example, to update and/or patch an application for security purposes a few times a month, or even a week. Some apps are updated daily! Hardware moves a little slower, but manufacturing cycles are also speeding up.
These articles describe new-gen industry solutions. The idea is to look at real-world examples of how new-gen IT products and services are making a difference in production each day. Most of them are success stories, but there will also be others about projects that blew up. We’ll have IT integrators, system consultants, analysts and other experts helping us with these as needed.
Known for its people-first approach to financial services, Freedom Financial Network recognized the need to prioritize its own employees’ experience by resolving their IT issues within seconds—not days. Today, Moveworks enables those employees to get the IT support they need, straight on Slack, in real time.
Name the problem to be solved: Conventional IT support channels—such as email, portals and forms—require help desks to read through thousands of requests before routing each one to the right subject-matter experts, who must then converse back and forth with employees to resolve their issues. This process not only generates constant routine work for IT teams, it also causes the average IT issue to take three days to resolve, hindering productivity across the entire company. Known for its innovative approach to banking, Freedom Financial sought an equally innovative solution to improve its employees’ support experience.
“Employees are frustrated when they have to navigate a bunch of portals,” said Mark Tonnesen, CIO at Freedom Financial Network. “If we can give them an AI assistant that finds things for them, rather than making them learn to navigate more menus, we’re unlocking productivity.”
Describe the strategy that went into finding the solution: Freedom Financial runs a large customer service operation, which means lots of hiring and training new entrants to the workforce. This can be a challenge because these workers—the majority of whom are digital natives fresh out of college—have high expectations for the tools they use and little experience with traditional enterprise software.
The answer Tonnesen’s team envisioned was a simple, intuitive chat interface for IT and business processes—powered by natural language understanding (NLU)—that translates employees’ requests to specific actions in enterprise tools. The Freedom Financial Network IT team chose Moveworks to implement this vision: giving employees the ability to instantly resolve their IT issues from a conversational interface in Slack.
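In the abstract, the pattern is simple: classify the message into an intent, then map the intent to an action in an enterprise system. The sketch below is a deliberately naive, keyword-based stand-in, not Moveworks’ NLU, and the intents, keywords and actions are invented:

```python
# Naive intent-to-action routing sketch -- a stand-in for real NLU,
# not Moveworks' implementation. Intents and actions are invented.
INTENTS = {
    "reset_password": ["password", "locked out", "reset"],
    "join_dl":        ["distribution list", "email group", "add me to"],
}

ACTIONS = {
    "reset_password": lambda user: f"Sent a reset link to {user}",
    "join_dl":        lambda user: f"Filed a distribution-list request for {user}",
}

def route(message, user):
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return ACTIONS[intent](user)   # act directly in the target system
    return "Escalated to the help desk"    # fallback when no intent matches

print(route("I'm locked out of my laptop", "jsmith"))  # hypothetical user
```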
“Moveworks elevates messaging platforms from being just communication tools to being a place where employees go to take actions in all kinds of enterprise systems,” Tonnesen said. “When you have a solution that can diagnose and resolve employees’ issues in just a few seconds, that really changes the game for IT support.”
List the key components in the solution: With Moveworks, Freedom Financial Network employees simply describe their IT request in a Slack message to the Moveworks bot, which uses advanced NLU and conversational AI to understand the issue and deliver a resolution.
Describe how the deployment went, perhaps how long it took and if it came off as planned: As a financial institution, Freedom Financial Network is risk-averse when deploying new technologies, so it was important to Tonnesen’s team to roll out Moveworks in an incremental way. Tonnesen says, “We set big goals, but my approach is always to start small, show success and expand from there, and that’s what Moveworks let me do.”
In close collaboration with the Moveworks Customer Success (CS) team, Freedom Financial Network first deployed Moveworks’ ability to manage email distribution lists, followed by its password-reset and question-answering functionalities.
“Moveworks takes the complexity out of IT support for our employees,” said Tonnesen. “Now they simply chat with a conversational bot in Slack to get their issues resolved and get work done.”
Describe the result, new efficiencies gained and what was learned from the project: Freedom Financial Network has a focus on unlocking people’s potential, both for their customers and for their employees. With Moveworks, the company has delivered on this vision: unlocking the potential of employees by freeing up their time. Now, resolution of their issues and questions is fast and automatic and they no longer have to navigate an IT portal to get help. Meanwhile, for the IT team, the benefits have been equally liberating. IT agents have won back valuable time that they’re devoting to more important work, such as building out additional automated solutions.
Indeed, the demonstrated value of AI led Tonnesen to conclude that IT support is just the beginning; he hopes to expand his utilization of Moveworks to other business units in future. Asked what he’d like to see next in the Moveworks solution, he says, “Everything! I wish Moveworks would hire hundreds of engineers tomorrow and automate my HR and finance operations as they’ve enabled me to automate my IT operations.”
Describe ROI, carbon footprint savings and staff time savings, if any: By using Moveworks, Freedom Financial Network was able to resolve employees’ IT issues in seconds rather than days and win back IT agents’ time for higher-value work.
Go here to read the full case study (PDF).
If you have a suggestion for an eWEEK IT Science article, email cpreimesberger@eweek.com.
IT Science Case Study: Using Robotics to Triple Sales Volume
This eWEEK IT Science article is based on industry information from Chris Huff, chief strategy officer of Kofax.
Name the problem to be solved: Founded in 2009 and based in Atlanta, Redwood Logistics is an industry-leading supply chain solutions provider. It offers a wide range of transport and freight management services to a variety of customers in the manufacturing, retail and distribution sectors specializing in less-than-truckload (LTL) transport.
Prior to its acquisition by Redwood Logistics, LTX Solutions was fulfilling 3,000 orders per month with a 12-person delivery team. During this time, tracking and auditing were handled manually, and employees would then individually relay updates to customers. To keep pace with business growth, LTX Solutions determined that it would need to hire at least five more people to complete this follow-up work.
“We were eager to streamline routine processes, while also maintaining current staffing levels. It became clear that robotic process automation [RPA] could help us with our scalability challenges, and we began exploring the leading technical solutions on the market,” said Redwood Logistics Vice President of Business Development Andrew Gleason.
Describe the strategy that went into finding the solution: To drive greater efficiency into the business, LTX Solutions implemented Kofax RPA smart software robots that were able to automate key business processes. The organization was impressed with the flexibility and speed at which individual robots could be deployed.
The first Kofax RPA smart software robot was developed and deployed within two hours, based on previously developed templates. Today, there are 60 Kofax RPA smart software robots inherited from LTX Solutions and in use within Redwood Logistics, with plans to expand into other business operations.
“Before we knew it, we had automated our entire tracking process, in less time than it would normally take to process orders for the day,” Gleason said.
List the key components in the solution: To drive greater efficiencies, Redwood Logistics implemented Kofax RPA smart software robots to automate key aspects of its business processes. The company was particularly impressed with the flexibility of the Kofax solution, as well as the speed at which individual robots could be deployed.
“Kofax RPA transformed our business. It eliminated a lot of time-consuming manual work in a very short space of time, and our internal users are grateful they don’t have to do so much paper-chasing,” Gleason said.
Describe how the deployment went, perhaps how long it took, and if it came off as planned: “We sat down with the Kofax team, and within two hours we had a working robot,” Gleason said. “They walked us through the whole process of building one from the Kofax RPA templates, and we were amazed by just how simple it was. The Kofax team was very diligent with their support, and explained to us the ins and outs of how it worked. Before we knew it, we had automated our entire tracking process in less time than it would take to process orders for one day.”
Following this initial success, Redwood Logistics quickly expanded its use of RPA. Today, there are 60 Kofax RPA smart software robots in use at Redwood Logistics, and the company plans to extend the solution across its broader business in the near future.
Describe the result, new efficiencies gained and what was learned from the project:
By the numbers: Since adopting Kofax RPA, Redwood Logistics has tripled order volumes while keeping staffing costs steady.
“We used to process 3,000 orders a month, and now it’s over 10,000,” Gleason said. “That’s 300% growth in business volume since implementing Kofax RPA, and there’s still plenty of room for us to expand our operations further while maintaining our core team.”
Redwood Logistics has automated the majority of its tracking and auditing processes, allowing employees to focus attention elsewhere. “The tracking process runs near-completely automatic now,” Gleason explained. “We still have people check on things once in a while to ensure the RPA estate is functioning well, but otherwise the system just runs, and our customers get live updates on the ETAs of their orders much faster than when we were performing those tasks manually.”
The auditing process is much faster as well. Instead of having to manually check each invoice for anomalies or surcharges, Kofax RPA robots automatically flag extra charges or discrepancies and alert the company’s accounts department so they can handle them directly. That means Redwood Logistics can resolve any ambiguities or extra charges faster, and customers receive more accurate bills than was previously possible, helping them keep their accounts in proper order.
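The audit rule itself can be pictured as a simple check. This sketch is illustrative only, with invented field names and tolerance; Kofax RPA robots apply rules like this without hand-written code:

```python
# Illustrative invoice audit rule: flag surcharges and quote deviations.
# Field names and the 2% tolerance are invented for this sketch.
TOLERANCE = 0.02  # allow 2% deviation from the quoted amount

def audit(invoice):
    flags = []
    if invoice["billed"] > invoice["quoted"] * (1 + TOLERANCE):
        flags.append("billed amount exceeds quote")
    if invoice.get("surcharges"):
        flags.append(f"unexpected surcharges: {invoice['surcharges']}")
    return flags  # a non-empty list routes the invoice to accounts

print(audit({"billed": 1080.0, "quoted": 1000.0, "surcharges": ["liftgate fee"]}))
# ['billed amount exceeds quote', "unexpected surcharges: ['liftgate fee']"]
```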
Describe ROI, carbon footprint savings and staff time savings, if any: Using Kofax RPA, Redwood Logistics was able to triple order volume while keeping costs steady. Prior to streamlining operations, Redwood Logistics processed 3,000 orders a month; it now processes more than 10,000, a 300% growth in business volume. “We achieved the full ROI for our investment in just nine months,” Gleason said.
The company’s entire order-tracking process is now almost completely automatic. This saves employees time while also improving the overall customer experience. Customers are now able to receive live updates on the estimated time of arrival (ETA) for their orders much faster than when the tasks were being performed manually.
If you have a suggestion for an IT Science Case Study, email cpreimesberger@eweek.com.