KW Glossary
Ontology Design
Uncloud (de-cloud)
Posted by Margaret Rouse
Uncloud is the removal of applications and data from a cloud computing platform.
In recent years, organizations ranging from small and medium-sized businesses to large enterprises have turned to the cloud to run applications, store data and accomplish other IT tasks. Over time, however, an organization may elect to uncloud one, a few or, possibly, all of its cloud-based assets. Examples could include shutting down a server instance in a public cloud and moving the associated software and data to an in-house data center or colocation facility. De-cloud is another term used to describe this reverse cloud migration.
In the process of unclouding, the cloud customer or, potentially, a channel partner acting on its behalf, will work with the cloud vendor to extract the customer's applications and data. The task involves locating the data and mapping the application's dependencies within the cloud vendor's infrastructure. The unclouding customer -- and its channel partner -- may encounter higher levels of complexity in a public multi-tenant cloud setting. A customer may have to wait for the cloud vendor's scheduled downtime to migrate its applications and data, or the cloud provider may limit the customer's use of migration tools so as not to interfere with the application performance of other customers.
Customers may cite a number of reasons for wanting to uncloud. Factors include security issues, liability concerns and difficulty in integrating cloud-based applications with on-premises enterprise applications and data. Frustrated expectations with respect to the cloud's cost efficiency may also influence de-clouding decisions. Anecdotal evidence suggests that customers citing cost as a factor may elect to move applications to an in-house, hyper-converged infrastructure as the better economic choice.
Reverse migration on the rise: Channel partners see customers uncloud
by John Moore
Channel partners report that a small but increasing number of customers are moving some or all of their applications off the cloud.
Channel partners say some of their customers have begun to uncloud and are asking for help migrating back to in-house data centers or colocation facilities.
While cloud computing, in general, remains a high growth area, a counter trend of reverse migration has started to surface. Organizations, industry executives said, cite a number of reasons for moving some or all of their applications off the cloud: security and compliance concerns, frustration over elusive cost savings, and the changing data center economics of hyper-converged architecture.
At Trace3 Inc., cost and hyper-convergence played key roles in one customer's off-the-cloud migration. Trace3, based in Irvine, Calif., focuses on data center, big data and cloud technologies. Mark Campbell, research principal and director of Innovation Research at Trace3, said a retail client recently completed a "back-sourcing" exercise in which it migrated its entire cloud footprint into a colocation data center, where the retailer could control the infrastructure.
"Cost was the primary driver," Campbell said of the retail customer. "They were estimating they could save 40% over their cloud IaaS [infrastructure as a service] and PaaS [platform as a service] provider by building their own private cloud built on hyper-converged and commodity infrastructure," he said.
Campbell noted that he hasn't had the opportunity to follow up with the company to see whether it actually realized the projected savings.
Getting off the cloud
Nevertheless, other Trace3 customers have taken steps to uncloud, a pattern Campbell began noticing last year. He said a few customers -- numbered in the dozens, out of a client base of some 2,000 companies -- have encountered issues in the cloud.
"The vast majority of our customers have moved at least some of their enterprise applications to the cloud, and the vast majority of those are continuing in the cloud," Campbell said. "There is a small minority, however, that are moving some or all of their applications back into their own data centers or colocation sites more under their control."
"We are exiting the honeymoon stage, and that is always a rude awakening -- and expectation readjustment -- for both parties."
Paul Dippell, CEO, Service Leadership
Irwin Teodoro, senior director of data center transformation at Datalink Corp., a data center services provider based in Eden Prairie, Minn., has also observed declouding among his company's customers. He said for every 10 companies pursuing some form of cloud computing, he has seen two or three looking to get out of the cloud.
"This is definitely a trend we are going to see more of," Teodoro said.
The counter-cloud migration may signal a resetting of expectations among channel partner customers. Paul Dippell, CEO of Service Leadership Inc., a company based in Plano, Texas, that provides a financial and operational benchmark for channel companies, said cloud vendors tell customers that their offerings are "wonderful, weightless, agile, low cost, mobile [and] fantastically free of the impediments of past computing models."
But that vision doesn't always line up with reality.
"What the customers are experiencing is different enough that a material number of customers are declouding or significantly changing -- reducing -- their cloud strategies to regain a more solid computing foundation and rational cost," Dippell said.
"I don't expect cloud to fail, by any means, and I do expect it to grow," Dippell said. "But we are exiting the honeymoon stage, and that is always a rude awakening -- and expectation readjustment -- for both parties."
Dippell added that he's heard anecdotal accounts of solution providers winning new customers by agreeing to decloud them.
When customers uncloud: Top factors
A number of factors influence migration decisions. Unforeseen security issues, for example, may drive some applications back in-house. In general, risk and liability concerns are tempering enthusiasm for the cloud, said Dan Liutikas, managing attorney at InfoTech Law Advocates P.C., and chief legal officer and secretary at CompTIA.
Channel partners, as well as customers, are questioning whether cloud is the correct answer for every customer. While the cloud adoption wave continues, more and more service providers are weighing whether cloud is the right approach for a particular customer or a subset of customers, Liutikas said. The latter includes companies in highly regulated industries such as healthcare and financial services.
"Sometimes … on-premises is the better answer based on their customers' needs," he said.
Organizations may also struggle to achieve deep integration between their cloud applications and their on-premises legacy applications and data, according to industry executives. But beyond legal and technical hurdles, cost has become a sticking point for some cloud users.
Unexpected cloud costs may stem from a customer's failure to quantify all the necessary services in its initial calculations. Campbell said most customers tend to be accurate in estimating traditional infrastructure and capacity costs for servers, storage capacity and bandwidth, among other components. They tend, however, to underestimate the cost of items beyond their data centers. Those items include the cost of creating multiple storage snapshots to back up data, the cost of data replication and the cost of restoring data.
"This leads to budgetary surprises," Campbell said.
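The gap Campbell describes can be illustrated with a back-of-the-envelope calculation. The line items and dollar figures below are entirely hypothetical, not drawn from any real provider's pricing:

```python
# Hypothetical monthly cloud cost estimate, illustrating how "beyond the
# data center" items can inflate a naive compute/storage/bandwidth estimate.
# All prices are invented for illustration.

naive_items = {
    "compute_instances": 4200.00,   # servers
    "block_storage": 800.00,        # storage capacity
    "bandwidth": 350.00,
}

overlooked_items = {
    "storage_snapshots": 600.00,    # multiple snapshots to back up data
    "data_replication": 900.00,     # replication to a second region
    "restore_operations": 250.00,   # data-restore requests and egress
}

def monthly_total(*item_groups):
    """Sum every line item across the given groups."""
    return sum(cost for group in item_groups for cost in group.values())

naive = monthly_total(naive_items)
full = monthly_total(naive_items, overlooked_items)
print(f"naive estimate: ${naive:,.2f}")
print(f"full estimate:  ${full:,.2f}")
print(f"underestimated by {100 * (full - naive) / full:.0f}%")
```

Even with made-up numbers, the pattern matches Campbell's point: the overlooked items are individually small but add up to a material share of the bill.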
Cloud sprawl can also stress budgets.
"Much like [virtual machine] sprawl, it is not uncommon for the initial targets of a cloud installation to grow as both the IT and business discover new applications, features and snap-of-the-fingers capacity bursts," Campbell explained. "These all add line items to the monthly bill."
In addition, cloud offerings may run afoul of conventional budget controls.
Campbell said traditional IT organizations built their financial processes and controls to monitor big-ticket items, such as projects and large Capex purchases, and smaller items, such as consumables and onetime Opex expenditures handled on an approval basis.
"This works great in a data-center-centric operation, but imagine the befuddled expression on the comptroller's face when he gets his first 23,000-line-item bill from Amazon," Campbell said. "It is very hard to even decipher what these expenditures are for, let alone garner business justification."
Customers disappointed with cloud cost savings may end up migrating applications to hyper-converged infrastructures.
"Some [companies] are pulling in applications from the cloud to their data centers," Campbell said. "If they do that, we are seeing hyper-convergence as being one of those enabling mechanisms."
In addition to cost, corporate culture can play a role in a cloud reversal.
"Executives who are not fully aware of the concepts of the cloud feel somewhat apprehensive that data is somewhere else and feel lack of control," Teodoro said.
Managing the declouding challenge
Assisting customers as they back out of the cloud can prove difficult. Teodoro said public clouds in which multiple customers share a common infrastructure represent the greatest challenge. Dealing with maintenance windows is one issue. A customer can't just extract an application based on its own ad hoc maintenance timetable; it has to wait for the cloud provider's scheduled downtime.
"You can't move when you want to move," Teodoro said. "You've got to move at somebody else's pace and schedule."
Determining a cloud-based application's dependencies with respect to the cloud provider's infrastructure is another consideration. A channel partner working on an off-the-cloud migration project needs to figure out what virtual machines the application resides on and identify the virtual LANs and subnets in the compute infrastructure to which the application can be traced, Teodoro explained. The goal: extract the application without breaking something in the environment.
"The keys for us are really to understand the dependencies in the environment -- down to the infrastructure -- and find ways to carve out the environment into smaller chunks or workgroups," Teodoro said.
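The dependency-mapping step Teodoro describes can be sketched as a graph traversal: starting from the application, walk every resource it touches (VMs, VLANs, subnets) so nothing is left behind or broken when the workload is carved out. The resource names below are hypothetical, and real inputs would come from discovery tooling rather than a hand-written dictionary:

```python
from collections import deque

# Hypothetical dependency graph: each resource maps to the resources it
# depends on. In practice this data would come from discovery/forensics tools.
dependencies = {
    "app-retail": ["vm-web-01", "vm-db-01"],
    "vm-web-01": ["vlan-110"],
    "vm-db-01": ["vlan-110", "subnet-10.0.2.0"],
    "vlan-110": [],
    "subnet-10.0.2.0": [],
    "vm-unrelated": ["vlan-200"],   # part of a different workload
    "vlan-200": [],
}

def migration_set(app, graph):
    """Breadth-first walk from the application to every resource it
    transitively depends on -- the 'chunk' that must move together."""
    seen = {app}
    queue = deque([app])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(migration_set("app-retail", dependencies)))
```

Resources outside the returned set (here, `vm-unrelated` and `vlan-200`) stay put, which is exactly the "carve out smaller chunks" goal Teodoro describes.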
Another complication: Migration tools can help channel companies uncloud customers, automating the tasks of data gathering, analysis and forensics. But in a shared, multi-tenant cloud, service providers can't use their own tools, since they could impact a cloud provider's other clients, Teodoro said.
Seeking a happy balance
Customers juggling multiple IT environments provide yet another degree of difficulty. Jim Piazza, vice president of service management at CenturyLink Inc., which offers colocation, public cloud and IT services, said customers such as software-as-a-service (SaaS) providers may offer multiple versions of their software to support different clients. And those different versions may be hosted on different computing platforms: in-house private clouds, colocation centers and public clouds, for example.
"It's an interesting mix … that is really quite a challenge to manage," Piazza said.
Piazza said CenturyLink, based in Monroe, La., provides customers a service catalog to help them keep track of what version of their software is deployed where. In addition, the company has built interconnects between customers' colocation footprints in CenturyLink facilities and CenturyLink's public cloud. The service catalog and interconnects enable the company's clients to move their end customers from one platform to another, Piazza said.
Piazza likened migrating customers and their workloads among the various platforms to supporting a 3D jigsaw puzzle.
For Campbell, the cloud conundrum boils down to harmonizing the computing platforms now available to customers.
"It's finding that happy balance -- what lives best in the cloud and what lives best in-house."
Unified Endpoint Management (UEM)
Unified endpoint management (UEM) is an approach to securing and controlling desktop computers, laptops, smartphones and tablets in a connected, cohesive manner from a single console. Unified endpoint management typically relies on the mobile device management (MDM) application program interfaces (APIs) in desktop and mobile operating systems.
Microsoft's inclusion of MDM application program interfaces in Windows 10 made unified endpoint management a possibility on a large scale. Prior to the release of Windows 8.1, there was no way for MDM software to access, secure or control the operating system and its applications.
In Windows 10, IT can perform a range of configuration and security tasks through MDM software. Mobile device management is significantly less robust than traditional Windows management tools, however, and many tasks information technology (IT) administrators rely on cannot be performed through the Windows 10 MDM APIs.
Many vendors market UEM as a feature of their broader enterprise mobility management (EMM) software suites, and some EMM vendors have made strides to close the gap between MDM and traditional Windows management tools. For example, MobileIron Bridge allows IT administrators to use MDM to deploy scripts that modify the Windows 10 file system and registry and perform other advanced tasks, including deploying legacy .exe applications.
Other vendors that support UEM include VMware, Citrix, BlackBerry and Apple. Apple's Mac OS X operating system has included MDM APIs since at least 2012, when AirWatch and MobileIron announced support. Today, all of the major vendors that offer UEM also support OS X.
Pulling Insights from Unstructured Data – Nine Key Steps
Data, data everywhere, but not a drop to use. Companies are increasingly confronted with floods of data, including “unstructured data”: information from within email messages, social posts, phone calls, and other sources that isn't easily put into a traditional column. Making sense of structured data and turning it into actionable recommendations is difficult, and doing so with unstructured data is even harder.
Despite the challenge, the benefits can be substantial. Companies that commit to examining unstructured data that comes from devices and other sources should be able to find hidden correlations and surprising insights. It promotes trend discovery and opens opportunities in ways that traditionally-structured data cannot.
Analyzing unstructured data can be best accomplished by following these nine steps:
1. Gather the data
Unstructured data means there are multiple unrelated sources. You need to find the information that needs to be analyzed and pull it together. Make sure the data is relevant so that you can ultimately build correlations.
2. Find a method
You need a method in place to analyze the data and have at least a broad idea of what should be the end result. Are you looking for a sales trend, a more traditional metric, or overall customer sentiment? Create a plan for finding a result and what will be done with the information going forward.
3. Get the right stack
The raw data you pull will likely come from many sources, but the results have to be put into a tech stack or cloud storage in order for them to be operationally useful. Consider the final requirements that you want to achieve and then judge the best stack. Some basic requirements are real-time access and high availability. If you’re running an ecommerce firm, then you want real-time capabilities and also want to be sure you can manage social media on the fly based on trend data.
4. Put the data in a lake
Organizations that want to keep information will typically scrub it and then store it in a data warehouse. This is a clean way to manage data, but in the age of Big Data it removes the chance to find surprising results. The newer technique is to let the data swim in a “data lake” in its native form. If a department wants to perform some analysis, they simply dip into the lake and pull the data. But the original content remains in the lake so future investigations can find correlations and new results.
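The lake-then-dip pattern in step 4 can be sketched with plain files: raw records land in the lake untouched, and each analysis reads from it without rewriting anything. The file layout and record fields here are invented for illustration:

```python
import json
import tempfile
from pathlib import Path

# A toy "data lake": raw records are stored once, in native form, and
# analyses only read from it, leaving the originals intact for future work.
lake = Path(tempfile.mkdtemp()) / "lake"
lake.mkdir()

raw_records = [
    {"source": "email", "text": "Order #991 delayed"},
    {"source": "social", "text": "Love the new release!"},
    {"source": "call", "text": "Caller asked about refunds"},
]

# Ingest: store each record as-is, with no scrubbing or schema.
for i, record in enumerate(raw_records):
    (lake / f"record-{i}.json").write_text(json.dumps(record))

def dip(lake_dir, source):
    """One department's analysis: pull only the records it needs,
    without modifying the lake."""
    records = (json.loads(p.read_text()) for p in sorted(lake_dir.glob("*.json")))
    return [r["text"] for r in records if r["source"] == source]

print(dip(lake, "social"))             # one department's pull...
print(len(list(lake.glob("*.json"))))  # ...and the lake is still whole
```

The contrast with a warehouse is that no scrubbing happens at ingest time; a later investigation can still dip into the same raw records with entirely different questions.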
5. Prep for storage
To make the data useful (while keeping the original in the lake), it is wise to clean it up. For example, text files can contain a lot of noise, symbols, or whitespace that should be removed. Dupes and missing values should also be detected so analysis will be more efficient.
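A minimal version of this prep step, stripping symbols and extra whitespace and then dropping empties and duplicates, might look like the following. The cleaning rules are illustrative, not exhaustive:

```python
import re

def clean(texts):
    """Strip symbols and collapse whitespace, then drop missing values and
    duplicates while preserving first-seen order."""
    seen = set()
    cleaned = []
    for text in texts:
        text = re.sub(r"[^\w\s]", " ", text)      # remove symbol noise
        text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
        if text and text.lower() not in seen:     # skip empties and dupes
            seen.add(text.lower())
            cleaned.append(text)
    return cleaned

raw = ["  Great   product!!! ", "great product", "", "#@!", "Slow shipping..."]
print(clean(raw))
```

Note that this runs on a *copy* pulled from the lake; per step 4, the raw originals stay untouched.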
6. Find the useful information amongst the clutter
Semantic analysis and natural language processing techniques can be used to pull out key phrases, along with the relationships around them. For example, “location” can be searched for and categorized from speech in order to establish a caller's location.
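As a toy stand-in for that kind of extraction, the sketch below pulls a location phrase out of call transcripts with a simple pattern. A real pipeline would use trained NLP models rather than a regex, and the transcripts here are invented:

```python
import re

# Rough stand-in for NLP phrase extraction: treat the capitalized phrase
# following a cue like "calling from" as the caller's location.
LOCATION_PATTERN = re.compile(
    r"(?:calling from|I'm in|located in)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)"
)

def extract_location(transcript):
    """Return the detected location phrase, or None if no cue matches."""
    match = LOCATION_PATTERN.search(transcript)
    return match.group(1) if match else None

calls = [
    "Hi, I'm calling from Denver, my order never arrived.",
    "I'm in New York and the app keeps crashing.",
    "Just wanted to say thanks!",
]
print([extract_location(c) for c in calls])
```

Even this crude rule shows the shape of step 6: the unstructured transcript yields a categorized field ("location") that later steps can correlate against.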
7. Build relationships
This step takes time, but it's where the actionable insights lie. By establishing relationships between the various sources, you can build a more structured database which will have more layers and complexity (in a good way) than a traditional single-source database.
8. Employing statistical modeling
Segmenting and classifying the data comes next. Use tools such as K-means, Naïve Bayes, and Support Vector Machine algorithms to do the heavy lifting to find correlations. You can use sentiment analysis to gauge customers' moods over time and how they are influenced by product offerings, new customer service channels, and other business changes. Temporal modeling can be applied to social media and forums to find the most relevant topics that are being discussed by your customers. This is valuable information for social media managers who want the brand to stay relevant.
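To keep the sketch dependency-free, here is a tiny one-dimensional K-means of the kind step 8 alludes to, clustering invented per-message sentiment scores into a negative and a positive group; libraries such as scikit-learn provide production implementations of K-means, Naïve Bayes, and SVMs:

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: alternate assigning points to the nearest
    centroid and recomputing each centroid as its cluster's mean.
    Assumes len(values) >= k."""
    # Seed centroids by sampling the sorted values at even intervals.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical per-message sentiment scores (-1 negative .. +1 positive).
scores = [-0.9, -0.7, -0.8, 0.6, 0.8, 0.7]
centroids, clusters = kmeans_1d(scores, k=2)
print([round(c, 2) for c in sorted(centroids)])
```

The two centroids that emerge summarize the unhappy and happy segments, the sort of grouping a sentiment-over-time analysis would then track.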
9. End results matter
The end result of all this work has to be condensed down to a simplified presentation. Ideally, the information can be viewed on a tablet or phone and helps the recipient make smart real-time decisions. They won’t see the prior eight steps of work, but the payoff should be in the accuracy and depth of the data recommendations.
Every company’s management is pushing the importance of social media and customer service as the main drivers of company success. However, these services can provide another layer of assistance to firms after diagnostic tools are applied to their underlying data. IT staff need to develop certain skills in order to properly collect, store, and analyze unstructured data in order to compare it with structured data to see the company and its users in a whole new way.
About the author: Salil Godika is Co-Founder, Chief Strategy & Marketing Officer and Industry Group Head at Happiest Minds Technologies. Salil has 18 years of experience in the IT industry across global product and services companies. Prior to Happiest Minds, Salil was with MindTree for 4 years as the Chief Strategy Officer. Before MindTree, Salil spent 12 years in the United States working for start-ups and large technology product companies like Dassault Systems, EMC and i2 Technologies. His accomplishments include incubating a new product to $30 million in revenue, successful market positioning of multiple products, global marketing for a $300 million business and multiple M&As.
User Interface (UI)
Also see human-computer interaction.
In information technology, the user interface (UI) is everything designed into an information device with which a human being may interact -- including display screen, keyboard, mouse, light pen, the appearance of a desktop, illuminated characters, help messages, and how an application program or a Web site invites interaction and responds to it. In early computers, there was very little user interface except for a few buttons at an operator's console. The user interface was largely in the form of punched card input and report output.
Later, a user was provided the ability to interact with a computer online and the user interface was a nearly blank display screen with a command line, a keyboard, and a set of commands and computer responses that were exchanged. This command line interface led to one in which menus (list of choices written in text) predominated. And, finally, the graphical user interface (GUI) arrived, originating mainly in Xerox's Palo Alto Research Center, adopted and enhanced by Apple Computer, and finally effectively standardized by Microsoft in its Windows operating systems.
The user interface can arguably include the total "user experience," which may include the aesthetic appearance of the device, response time, and the content that is presented to the user within the context of the user interface.
Contributor(s): Mike Dang
This was last updated in April 2005
Posted by: Margaret Rouse
Usuario Final / End User
The end user is the operator of the system. There are different user profiles, ranging from the supervisor down to users who only work with certain limited, specific functionalities.
UTM vs. NGFW: Unique products or advertising semantics?
by: Michael Heller
In comparing UTM vs. NGFW, organizations find it difficult to see if there are differences between the two products or if it is just marketing semantics.
It can often be difficult to discern the difference between unified threat management (UTM) and next-generation firewalls (NGFW). Experts agree that the lines appear to be blurring between the two product sets, but enterprises that focus on defining each product type during the purchasing process may be making a mistake.
"Service providers for ISPs have different needs than enterprises," said Young. "So, UTM vendors will only offer basic firewall features as a price-play for that market."
Young said those differences in ease of use and support demands still exist today, though they have become more nuanced; there is overlap in the underlying technology of NGFW and UTM, and spec sheets tend to look similar. Young said that the key differences now are more around quality of features, and the level of support from channel partners to meet customer needs.
Young also noted that vendors tend to excel in one market or the other, like Fortinet Inc. with UTM for SMBs, or Palo Alto Networks Inc. with NGFW for enterprises. Few vendors can succeed in both, he said, like Check Point Software Technologies Ltd. has done.
"The confusion came from SMB vendors trying to move into the enterprise market without making channel and quality changes," said Young. "It was an intentional campaign to confuse, but very few end users are confused about what they need. It is either a racecar [NGFW] or a family van [UTM]."
Brazil admitted that the differences between NGFW and UTM can be confusing, even for experienced practitioners, but described UTM as a collection of unrelated security features, one of which is the firewall.
"UTM generally refers to a firewall with a mix of other 'bolted-on' security functions like antivirus and even email spam protection," said Brazil. "These are not access control features that typically define a firewall."
What traditionally has defined next-gen firewalls, Brazil said, is robust Layer 7 application access control, though an increasing number of NGFWs are being augmented with integrated threat intelligence, enabling them to deny known threats based on a broad variety of automatically updated policy definitions.
However, Brazil caveated his distinctions by saying that a UTM could be considered an NGFW if it met the Layer 7 parameters, and an NGFW that included malware functions could be considered a UTM. He was clear, though, that despite these potential overlap points, he would keep the classifications separate because of a lack of similarities in other respects, like access control.
Brazil said that NGFW will eventually become the standard, and the terms NGFW and firewall will become synonymous. He said UTM will remain an important product for SMBs, especially when a company prioritizes simplicity of deployment over the depth of security and performance, but NGFW and UTM will not converge because of performance and management concerns.
"The idea of a 'converged' network security gateway will continue to have appeal, so vendors will continue to add functionality to reduce cost of firewall ownership to the customer and increase revenue to the vendor," said Brazil. "However, issues with performance and manageability will continue to force separate, purpose-built systems that will be deployed in enterprise networks. As such, there will continue to be enterprise firewalls that should not be considered UTMs."
Mike Rothman, analyst and president for Phoenix-based security firm Securosis LLC., said he believes that UTM and NGFW are essentially the same, and the differences are little more than marketing semantics. Rothman agreed that marketing from vendors caused confusion, but also blamed analysts for adopting the term NGFW and driving it into the vernacular.
He said that early UTMs did have problems scaling performance from SMBs to larger enterprises, especially when trying to enforce both positive rules (firewall access) and negative rules (IPS), but that early NGFW had the same issues keeping up with wire speed when implementing threat prevention. He said that the perceived disparities were used to enforce market differentiation, and persist today, despite these scaling issues not being relevant anymore.
According to Rothman, the confusion lies not only in comparing the two device types, but also in the term "next-generation firewall" itself, which he thinks minimizes what the device does.
"What an NGFW does is bigger than just a firewall," said Rothman. "A firewall is about access control, basically enforcing what applications, ports, protocols, users, etc., are allowed to pass through the firewall. The NGFW also can look for and deny access to threats, like an IPS. So it's irritating that the device is called an NGFW, as it does more than just a firewall. We call it the Network Security Gateway, as that is a more descriptive term."
Rothman said that today's UTMs can do everything an NGFW can do, as long as they are configured properly and have the right policy integration. He said he believes that arguments about feature sets or target markets are examples of artificial distinctions that only serve to confuse the issue.
"From a customer perspective, the devices do the same thing," Rothman said. "The NGFW does both access control and threat prevention, as does the UTM, just a little differently in some devices. Ultimately, the industry needs to focus on what's important: Will the device scale to the traffic volumes they need to handle with all of the services turned on? That's the only question that matters."
Moving forward, despite differences in opinions, the experts agree that enterprises shouldn't go into a purchasing process by trying to decide whether they need an NGFW or a UTM. Rather, the ultimate goal should always be to focus on the best product to solve their problems.
Rothman said that the distinctions will go away as low-end UTM vendors add more application-inspection capabilities and more traditional NGFW vendors go downmarket by offering versions suitable for SMBs. He also said he doesn't expect an end to confusing vendor marketing anytime soon, so enterprises need to be careful to ignore these semantics and focus on finding the right product to address security needs.
Young said that in the short term, UTM and NGFW will remain separate and will both continue to be mainstays for SMBs and larger enterprises respectively, and the decision around what device to use will be a question of need.
The question of UTM vs. NGFW is still divisive, and experts have different ideas regarding if and where the two technologies diverge when looking at the issue from a vendor perspective. However, when looking at the issue from a customer perspective, the experts agree that focusing on an enterprise's security needs will help to mitigate the confusion and lead to the right product.
"It isn't just about technology, it is about how a small company's security is different than a big company's security," said Young. "It's all about the use case, not a 'versus.'"