Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías



Mobile Security

by System Administrator - Tuesday, 14 July 2015, 6:57 PM

Mobile Security

For today's enterprises, mobile security is becoming a top priority. As mobile devices proliferate in the workplace, companies need to be careful that these devices—as well as the networks and data that they access—are adequately protected. With the growing number of threats aimed at exploiting mobile devices, safeguarding them is becoming complicated, but crucial.

In this eGuide, Computerworld UK, CSO, and IDG News Service examine some of the recent trends in mobile threats as well as ways to protect against them. Read on to learn how mobile security measures can help protect your organization.

Please read the attached eGuide.

AgWorks hires Mobilize.Net to retire old Visual Basic language apps

by System Administrator - Tuesday, 18 April 2017, 3:49 PM

AgWorks hires Mobilize.Net to retire old Visual Basic language apps

by Joel Shore

Legacy Visual Basic apps built for Windows desktops are in dire need of a reworking for the cloud and mobile age. Laborious manual code conversion is giving way to automation.

At AgWorks, a developer of applications for the farming and agricultural industries, the season had come to plant the seeds that would allow it to grow in the age of cloud computing.

After attempting to take its suite of applications, written in early 1990s Microsoft Visual Basic, and do an in-house rewrite to leverage modern cloud technology, the Davenport, Iowa, company reconsidered that strategy, opting instead for the services of Mobilize.Net, a Bellevue, Wash., cloud computing consultancy that specializes in that very discipline.

"These were four applications written in the Visual Basic language that date from shortly after the time VB first came out [in 1991]," said Matt Gropel, director of technology at AgWorks. The applications, which had to be installed manually on each computer, lacked the connectivity necessary for farms to keep data organized and in sync with the retailers that kept them supplied with seed, fertilizers, pesticides and other materials. "We recognized that while Visual Basic had served us well for many years, it could no longer meet our needs in the cloud," Gropel said.

A suite of applications for Windows, AgWorks handles seven key tasks for farmers and the retailers that serve them. Among those tasks are crop planning, including seed application and fertilizer recommendations from soil test data; scouting, for logging weed, insect, and disease observations; mapping, which generates visual representations of agronomic data; and compliance, which provides historical logging to minimize liability when dealing with federal Environmental Protection Agency or Department of Transportation inquiries.

He giveth then, and taketh away now

Who better to transition businesses away from Visual Basic than the very person who put them there in the first place?


"We got you into this mess, so who better to get you out of it," said Tom Button, current CEO of Mobilize.Net and former longtime corporate vice president of Microsoft's developer division. Yes, it's true; Button, now helping businesses move beyond the venerable Visual Basic language, is the same guy who brought its first six releases to market.

As cloud and mobile technology shifted where computation takes place, it became necessary to build new, interconnected experiences with data that could be accessed, shared and changed in real time by any authorized user. That meant a move away from Visual Basic technology along with Windows itself.

"Windows and .NET are legacy platforms; it's not what people are targeting today," Button said. [Editor's note: Microsoft would disagree.] "We had a great run at Microsoft building up the client-server programming model, but the opportunity now for developers is much broader than that." The cloud, Button said, changes everything. "The vast majority of new application development now is HTML5 front ends that are accessible by any device at any time, accessing a cloud-based back end for business logic and data access."

To ease the pain of transitioning away from legacy Visual Basic technology, Mobilize.Net applies artificial intelligence-based pattern matching to large blocks of program code, then maps those patterns onto newer technology in a way that stays much truer to the original programmer's intent, Button said.
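Mobilize.Net's tooling is proprietary, but the core idea, recognizing source-language patterns and mapping them onto target-language equivalents, can be illustrated with a toy rule-based translator. The rules below are invented for illustration only; they are not the product's actual mappings.

```python
import re

# Toy pattern table: each rule maps a classic VB6 idiom to a rough C# equivalent.
# These mappings are hypothetical examples, not Mobilize.Net's real rule set.
RULES = [
    (re.compile(r'MsgBox\s+"(.*)"'), r'MessageBox.Show("\1");'),
    (re.compile(r"Dim (\w+) As Integer"), r"int \1;"),
    (re.compile(r"'(.*)"), r"//\1"),  # VB comment -> C# comment
]

def translate_line(vb_line: str) -> str:
    """Apply the first matching rule; pass unrecognized lines through untouched."""
    for pattern, replacement in RULES:
        if pattern.search(vb_line):
            return pattern.sub(replacement, vb_line)
    return vb_line

def translate(vb_source: str) -> str:
    """Translate a block of VB-style source line by line."""
    return "\n".join(translate_line(line) for line in vb_source.splitlines())
```

A production converter works on parsed syntax trees rather than regexes, which is what lets it preserve intent across whole program structures instead of single lines.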

One common technique in the process is to perform the transition, yet maintain the classic Windows look and feel, largely to avoid retraining users. "It's especially important when you have a network of 10,000 insurance sales offices that have all been running the same Windows app for 20 years but now want the operational benefits of HTML5," Button said. "They don't want the phones lighting up with user support questions, so they use the Kendo UI package on HTML5 to perfectly replicate the old Windows."

The dreaded 'Frankenapp'

At AgWorks, an early strategy was to load the Visual Basic application portfolio onto virtual machines. "That allowed us to update just that VM image and then users would download and install onto their own machines," Gropel said. The next phase was to rewrite sections of the applications as a web app that could be run through a browser. "We couldn't do that quickly enough and ended up with a Frankenapp, some of the old with some of the new." When a user clicked a button in the browser, it would launch some of the legacy Visual Basic code. This partial rewrite was done in Visual Studio, with C# as the main server-side language and the ExtJS application framework on the client side. "We developed a RESTful API in C# and ExtJS on the front for communicating and retrieving data," Gropel said.

The problem that remained was that users working their acreages of farmland still lacked internet connectivity, resulting in database synchronization problems. Though Gropel said AgWorks was on the right track, his development team, deeply experienced in the company's Visual Basic code, had not done this type of work before. Combined with a need to complete the project quickly, Gropel sought an outside solution. "Mobilize.Net quickly converted our Visual Basic code in an automated fashion that turned into very nicely patterned C# code," Gropel said. "It was much faster than we could have done rewriting the applications manually."

Not all apps are conversion candidates

AgWorks was lucky: all of its Visual Basic apps were well-suited for automated conversion. But not all apps are, Button said.

"We have seen applications in manufacturing automation that have serial cards plugged into the PCI bus to gather data off the factory floor and turn that into a programmatically accessible data set. Or if you're programming to the bare metal in Windows, these are not going to make a lot of sense to move to HTML5 on an iPhone or an Android," Button said.

Good candidates are supply chain apps that use a forms-over-data model, such as those used by salespeople who need to get a price quote and have visibility through the supply chain. "There are thousands and thousands of apps like that written in the 1990s," Button said. "They are all great candidates for putting in HTML5 so users can access them from their iPhone or Android and have the same user experience and access to the same data, whereas before they were tethered to their Windows PCs sitting behind their desk on the LAN."

Advice for others

Having lived through the transition, Gropel said other businesses in the same situation, regardless of size, should consider going outside for code conversion.

"We would have been stuck for months just doing conversion and not creating any new functionality if we had to do this ourselves," Gropel said. "As a business you want to innovate and add new features. You don't want to take a year just to convert from an older language."

Joel Shore is a news writer for TechTarget's Business Applications and Architecture Media Group. Follow him on Twitter @JshoreTT.


Security threat modeling for the cloud

by System Administrator - Monday, 22 December 2014, 2:42 PM

The security threat modeling process for the cloud

by Ravila Helen White

Some enterprises and consumers remain reluctant to accept and adopt cloud computing. Acceptance, however, comes in part from understanding risk, which is largely about understanding the threat landscape. Enterprises therefore need to properly define threats and classify information assets through a security threat modeling process.

Defining threats

Before they can perform cloud threat modeling, enterprises must understand information security threats at a more fundamental level.

Threats are non-malicious and malicious events that damage information assets. Non-malicious events occur without malicious intent. Examples include natural disasters, faulty technology, human error, power surges, adverse environmental factors (such as inadequate HVAC), economic factors, technological innovation that outstrips staff expertise, innovation that outpaces regulatory oversight and innovation that outpaces protective measures.

Malicious events are those that occur out of malice. Examples include hacking, hacktivism, theft, abuse of privileges, abuse of access and recovery of discarded assets, such as dumpster diving. Damage results from any of these events when information assets are breached, exposed or made unavailable. Shellshock is a good example of a vulnerability that could lead to widespread outages across an entire cloud infrastructure. In cloud infrastructures, many of the edge technologies (such as firewalls, load balancers and routers) are appliances running a Linux kernel. An attacker who successfully gains control of edge technologies can disrupt the cloud services they support. When the goal is information gathering, access to edge technology is a stepping stone to the internal systems that store personal or financial information. Likewise, a variety of the technologies used in cloud infrastructures also run Linux or Unix hosts, whether they support data stores or an enterprise service bus.
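The two-way split between non-malicious and malicious events lends itself to a simple catalog structure. The sketch below is illustrative only; the event names are drawn from the examples in the text, and the fields are hypothetical, not part of any standard threat-modeling schema.

```python
from dataclasses import dataclass
from enum import Enum

class ThreatClass(Enum):
    NON_MALICIOUS = "non-malicious"  # no intent to harm: disasters, faulty tech, human error
    MALICIOUS = "malicious"          # intent to harm: hacking, theft, abuse of access

@dataclass
class ThreatEvent:
    name: str
    threat_class: ThreatClass
    affects_cloud_edge: bool = False  # e.g., appliances running a Linux kernel

# Hypothetical catalog entries based on the examples above.
CATALOG = [
    ThreatEvent("Xen hypervisor patch reboot", ThreatClass.NON_MALICIOUS),
    ThreatEvent("Shellshock exploitation of edge appliances",
                ThreatClass.MALICIOUS, affects_cloud_edge=True),
    ThreatEvent("HVAC failure in data center", ThreatClass.NON_MALICIOUS),
]

def malicious_events(catalog):
    """Filter the catalog down to events that occur out of malice."""
    return [e.name for e in catalog if e.threat_class is ThreatClass.MALICIOUS]
```

A real threat model would extend each entry with affected assets and likelihood, but even this minimal structure forces the definitional discipline the article calls for.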

Non-malicious events occur regularly and, in some cases, are unavoidable. Consider the recent incidents in which several service providers rebooted instances to apply Xen hypervisor patches. For the patches to take effect, the patched systems had to be restarted. Those reboots introduced the possibility of unavailable cloud services.

Malicious or not, cloud providers must be prepared to prevent disruptions noticeable to their customers.

Classifying information assets

An organization must understand what information assets are. An information asset is any asset whose breach, exposure or unavailability would result in business or personal loss. Information assets can include data, technology and relationships. Because of its cost, technology is often considered more valuable than data. Yet without structured data, the technology that stores and transmits it would be unlikely to be purchased and sustained. Data is a commodity for its owners. Examples of data include customer contact databases, personally identifiable information, credit card information, company financials, consumer financials, infrastructure drawings, confidential documents, system configuration information, health information and strategic initiatives.

Data is most valuable when it can be marketed or used to win consumers' trust so that they invest in a service or product. This is where technology enters the picture. Given the dynamic nature of the business market and the disruptive nature of technology, enterprises and consumers must be able to retrieve, transmit and store data quickly, yet accurately, both in the cloud and on premises.

Enterprises and their customers are often affected in similar ways when information assets are breached, exposed or unavailable. Many organizations, for example, have outsourced payroll or recruiting to the cloud. An outage of cloud payroll services could cause a problem for employees expecting their paychecks. Enterprises that suffer a breach typically suffer a tarnished reputation. Individuals also experience reputational damage when their information is accessed and used by someone else, resulting in a poor credit rating or personal financial loss.

The last information asset is the set of business relationships that enable greater competitive advantage. Most business relationships involve exchanging or sharing information. Typically, both parties extend a level of trust between the segments and hosts in their respective infrastructures. That level of trust is ideally achieved through contractual agreements documenting attestation not only of a healthy financial posture but also of healthy internal operations. At the core, assurance of best practices in security and risk management is expected.

Relationships become strained when a breach resulting from one partner's failure to meet contractual obligations affects the security of information assets. If a partner exits the relationship, that asset is lost and must be recovered elsewhere. The business model of many healthcare entities is based on affiliations (as defined by HIPAA). The covered entity seeks out a business associate to provide a specialty, thereby improving its competitive advantage or reducing operating costs. Business associates are expected to meet the same security requirements as the covered entity. When a business associate experiences a breach exposing protected health information (PHI), the covered entity is affected as well, and patients expect it to manage every aspect of keeping that PHI private and secure.


Despite the security challenges posed by rapidly evolving cloud computing technology and business relationships, quantifying threats and assets is necessary to understand cloud computing risk. It provides a security model of the environment. The same information assets, and many of the same threats, exist in infrastructures not hosted in the cloud. The differentiator is usually the scale of the data and the expansive landscape available to attackers.

About the author: Ravila Helen White is the director of IT architecture for a healthcare entity. She is a CISSP, CISM, CISA, CIPP and GCIH, and a Pacific Northwest native.



Molecular Visualizer

by System Administrator - Tuesday, 16 September 2014, 1:33 PM

Ruben Gonzalez Jr.: Molecular Visualizer

Associate Professor and Director of Graduate Studies, Department of Chemistry, Columbia University. Age 42

Ruben Gonzalez Jr. worked his way through a chemistry degree at Florida International University (FIU) in Miami, first at a fast-food joint, then at a video store, where he eventually became assistant manager. Somehow, he also found time for science, spending his last three years in Stephen Winkle’s lab researching changes in the shape of DNA when it switches from a normal, right-handed helix to the opposite, left-handed form.

“I got lucky—hit the jackpot—[when] Ruben decided he wanted to work in my lab,” says Winkle, who knew Gonzalez as the student in his biochemistry class acing all of the tests.

Gonzalez had planned on becoming a high school chemistry teacher after college, but Winkle saw a different path for the young scientist and encouraged him to apply to graduate programs. In 1995, the pair were in San Francisco for a meeting of the Biophysical Society when Gonzalez got the news he’d been accepted into the University of California, Berkeley, where Winkle had done his own graduate work years earlier. They headed across the bay to meet Winkle’s former advisor, Ignacio “Nacho” Tinoco Jr., who immediately sold Gonzalez on RNA. “The idea that this molecule could carry the genetic information like DNA does, but could also fold into really complicated three-dimensional structures that could do chemistry like proteins . . . I fell in love with that idea,” Gonzalez recalls.

In Tinoco’s lab at Berkeley, Gonzalez studied RNA pseudoknots, the simplest known tertiary structure of RNA, consisting of two intertwined hairpin loops. Gonzalez solved the structure of a magnesium ion binding site in a pseudoknot from the mouse mammary tumor virus. He accomplished this by swapping out the magnesium ions, which help stabilize the structure but are not visible using nuclear magnetic resonance (NMR), replacing them with cobalt hexammine ions, which are NMR active [1]. “I could actually see where the cobalt hexammine bound to the RNA and detect how it stabilized that particular structure,” says Gonzalez.

“He was not only smart, but he was ambitious and really cared about the science,” Tinoco says of Gonzalez.

Gonzalez went to Stanford University for a postdoc, working under RNA expert Joseph (Jody) Puglisi and physicist Steve Chu to develop single-molecule fluorescence tools that could aid in imaging ribosomes interacting with tRNA during protein translation [2]. “[This was] the very first demonstration ever that one could study ribosomes and translation using single-molecule fluorescent approaches,” Gonzalez says.

In 2006, Gonzalez arrived at Columbia University, where he now oversees four postdocs, 11 graduate students, and one undergrad. Much of his group’s current work involves extending discoveries about translation in E. coli to the process in eukaryotes, with an eye toward human health and disease. Gonzalez also continues to innovate on the technological front, most recently by applying single-molecule field-effect transistors (smFET)—carbon nanotubes covalently bonded to the nucleic acids or proteins of interest that can help illuminate molecular structure—to the study of RNA, ribosomes, and translation [3].

Gonzalez says he’s excited about how this new tool is going to allow him to dissect the process of translation at an ever-finer scale, in particular at much faster timescales, opening a window on how “ribosomes or other enzymes make decisions about correct or incorrect substrates.”  


  1. R.L. Gonzalez Jr., I. Tinoco Jr., “Solution structure and thermodynamics of a divalent metal ion binding site in an RNA pseudoknot,” J Mol Biol, 289:1267-82, 1999. (Cited 97 times)
  2. S.C. Blanchard et al., “tRNA dynamics on the ribosome during translation,” PNAS, 101:12893-98, 2004. (Cited 311 times)
  3. S. Sorgenfrei et al., “Label-free single-molecule detection of DNA-hybridization kinetics with a carbon nanotube field-effect transistor,” Nature Nanotechnology, 6:126-32, 2011. (Cited 111 times)


by System Administrator - Wednesday, 7 January 2015, 4:09 PM


The transformation of mobile communications networks into full-fledged IP networks has enabled operators to deliver a plethora of new services, content and applications to their subscribers. However, this IP metamorphosis has also enabled OTT players to gain a foothold at the expense of the mobile network operators.

OTT brands have become synonymous with attractive features and competitive pricing, and mobile consumers regularly turn to well-known OTT providers for myriad services. Leading OTT communications services include Microsoft’s Skype, Viber, Facebook’s WhatsApp, Line and Kik Messenger. Streaming video is routinely delivered to mobile devices from Google’s YouTube, Hulu and Netflix, while streaming audio is supplied by services such as iHeartRadio, Pandora, Rhapsody, Samsung Milk, Slacker and Spotify.

The massive and growing proliferation of OTT services has resulted in flattening or declining voice and messaging revenues for mobile operators as OTT providers take more and more revenue share for those services. And while the use of bandwidth-intensive OTT services over mobile networks has driven up overall mobile data usage, it has also put considerable strain on those networks. As a result, operators continually need to dedicate massive amounts of capex and opex to expand capacity and maintain quality of service (QoS) for all of their subscribers, not just those consuming OTT services.

Please read the attached whitepaper.



by System Administrator - Friday, 23 January 2015, 7:21 PM


Do What You Could Never Do Before

MongoDB can help you make a difference to the business. Tens of thousands of organizations, from startups to the largest companies and government agencies, choose MongoDB because it lets them build applications that weren’t possible before. With MongoDB, these organizations move faster than they could with relational databases at one tenth of the cost. With MongoDB, you can do things you could never do before. Find out how.


There are hundreds of thousands of MongoDB deployments. Here are a few of the popular use cases.

  • Single View. Real-time views of your business that integrate all of your siloed data.
  • Internet of Things. 40 billion sensors. $19 trillion in revenue. You’re gonna need a bigger database.
  • Mobile. Ship killer mobile apps in weeks, not months. Scale to millions of users. Easy with MongoDB.
  • Real-Time Analytics. Lightweight, low-latency analytics. Integrated into your operational database. In real time.
  • Personalization. Greet your customers like old friends – so they’ll treat you like one, too.
  • Catalog. Catalogs change constantly. That’s an RDBMS nightmare. But it’s easy with MongoDB.
  • Content Management. Store and serve any type of content, build any feature, serve it any way you like. From a single database.


MongoDB stores data using a flexible document data model that is similar to JSON. Documents contain one or more fields, including arrays, binary data and sub-documents. Fields can vary from document to document. This flexibility allows development teams to evolve the data model rapidly as their application requirements change.

Developers access documents through rich, idiomatic drivers available in all popular programming languages. Documents map naturally to the objects in modern languages, which allows developers to be extremely productive. Typically, there’s no need for an ORM layer.
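The flexible document model is easiest to see with two records in the same collection that carry different fields. In Python, the dictionaries below could be handed to the pymongo driver unchanged; the collection and field names are invented for illustration.

```python
import json

# Two documents destined for the same hypothetical "customers" collection.
# Fields can differ per document -- no schema migration is required.
doc_a = {
    "name": "Acme Farms",
    "contacts": [{"email": "ops@acme.example"}],      # array of sub-documents
}
doc_b = {
    "name": "Bolt Retail",
    "loyalty_tier": "gold",                           # field absent from doc_a
    "address": {"city": "Davenport", "state": "IA"},  # nested sub-document
}

# Documents are JSON-like, so they serialize directly without an ORM layer.
payload = json.dumps([doc_a, doc_b])

# With a live server and the real driver, inserting both would be roughly:
#   from pymongo import MongoClient
#   MongoClient()["shop"]["customers"].insert_many([doc_a, doc_b])
```

Because the driver maps documents straight onto native dictionaries and objects, adding `loyalty_tier` to new records never requires altering existing ones.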

MongoDB provides auto-sharding for horizontal scale out. Native replication and automatic leader election supports high availability across racks and data centers. And MongoDB makes extensive use of RAM, providing in-memory speed and on-disk capacity.

Unlike most NoSQL databases, MongoDB provides comprehensive secondary indexes, including geospatial and text search, as well as extensive security and aggregation capabilities. MongoDB provides the features you need to develop the majority of the new applications your organization develops today.

Watch a Recent MongoDB Presentation

We thought you might be interested in some of our videos from past events and webinars. You can browse our complete library of free, full-length presentations, or check out some of our most popular videos.



Multi-cloud deployment model

by System Administrator - Monday, 30 November 2015, 5:36 PM


Multi-cloud deployment model acceptance soars

by Joel Shore

A multi-cloud deployment model can help keep workloads available and secure, provided services are deployed in consistent, repeatable ways.

Deploy your company's compute load across multiple clouds, providers and services and you'll be better protected against complete disaster if a server fails.

That's an increasingly popular and practical notion. As a result, adoption of a multi-cloud approach, sometimes called a cloud portfolio, is growing quickly. In its 2015 State of the Cloud Report, RightScale, a provider of cloud portfolio management services, noted that as of January 2015, 82% of surveyed enterprises are now employing a multi-cloud deployment model, up from 74% just one year earlier. Within that group, a mix of public and private clouds is favored by 55%, while those opting solely for multiple private or multiple public clouds are split almost equally (14% and 13%, respectively).

As companies simultaneously move some applications and data to the public cloud, keep others on premises, and integrate with software-as-a-service providers, it's important for them to deploy services in a consistent and repeatable way. "[Fail] to work this way and IT operations will not be able to maintain control," said Bailey Caldwell, RightScale's vice president of customer success.

Consistency through automation

In an August 2015 report, a team of nine Forrester Research analysts states that automation is the answer to the fundamental issues of scale, speed, cost and accuracy.


Commenting on the report in relation to cloud deployment, analyst Dave Bartoletti said, "You may have built a workload for Amazon [Web Services] that you now want to run in [Microsoft] Azure, or replace with a database in Salesforce, or use an ERP system like SAP in the cloud. You need a consistent way to deploy this."

The problem, Bartoletti explained, is that businesses find deployment across these varied platforms difficult largely due to a lack of tools with cross-platform intelligence. "Traditionally, you'd use the tool that comes with the platform, perhaps vCenter Server for VMware vSphere environments or AWS OpsWorks to deploy on Amazon."
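The cross-platform gap Bartoletti describes is what portfolio tools paper over: a single deployment interface with per-provider backends. A minimal sketch of that pattern follows; the provider names are real, but the API is entirely hypothetical.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Common deployment contract; each backend hides its native tooling
    (e.g., vCenter Server for vSphere, AWS OpsWorks for Amazon)."""

    @abstractmethod
    def deploy(self, workload: str) -> str: ...

class AWSBackend(CloudProvider):
    def deploy(self, workload: str) -> str:
        # A real backend would call AWS-native deployment tooling here.
        return f"aws: deployed {workload}"

class AzureBackend(CloudProvider):
    def deploy(self, workload: str) -> str:
        # A real backend would call Azure-native deployment tooling here.
        return f"azure: deployed {workload}"

def deploy_everywhere(workload: str, backends: list) -> list:
    """One consistent call site, regardless of where the workload lands."""
    return [b.deploy(workload) for b in backends]
```

The point of the abstraction is that adding a third provider means adding one backend class, not rewriting every deployment script.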

The tools landscape is still adapting to the reality of the multi-cloud deployment model. In his October 2015 survey of hybrid cloud management offerings, Bartoletti analyzed 36 vendors, several of which offer tools that manage multi-provider cloud platforms along with application development and delivery.

Switching between cloud environments

Consistency appears to be the keyword for existing in a multi-cloud universe. It matters because nothing stays still in the cloud for very long, including the apps and data you provide and the actual infrastructures, services and pricing of each provider.

"If you want to move applications, data and services among different providers -- and you will as part of a continuous deployment strategy -- it's important to have consistency and a level of efficiency for managing those disparate environments," said Mark Bowker, senior analyst at the Enterprise Strategy Group.

Technical reasons for periodically fine-tuning a deployment strategy include:

  • Availability of new services from one provider that constitutes a competitive or operational advantage
  • Difficulties with a provider
  • A need to mirror deployments across multiple geographies to bolster performance
  • A requirement to ensure that network communications paths avoid certain locales in order to protect data assets
  • A desire to bring analytics services to where the data resides

Non-technical reasons might include changes to a favorable pricing model and the ability of one cloud provider to more fully follow an enterprise's compliance and governance requirements.

Similarly, the degree to which a cloud provider can meet regulatory requirements can lead to redeployment of applications or data from one vendor to another, said Lief Morin, president of Key Information Systems.

"When a business reaches a certain size, it has more leverage to dictate security terms to the provider; otherwise, the provider will dictate them down to the organization. It's a matter of economics and scale," he said. "In a multi-cloud environment, it gets more complicated. More providers means more risk, so it's crucial to work with them to ensure a consistent, standardized policy."

A multi-cloud deployment model should be a quasi-permanent arrangement, because nearly everything changes eventually.

"What you're seeing today is movement toward an application-configured infrastructure environment," noted Roy Ritthaller, vice president of marketing for IT operations management at Hewlett Packard Enterprise (HPE). "At the end of the day, it's not how well your cloud is organized or how shiny and new it is; it's about how well do the application and workload perform together."

While matching the application and load makes sense, the elastic nature of the hybrid cloud environment offers opportunities for continual refinement of where they are deployed, according to David Langlais, HPE's senior director of cloud and automation.

Like a swinging pendulum, a certain amount of back and forth between private and public clouds is natural, he said. "What's important is to design applications in a way that can handle changing deployment models, all the way down to managing the data and connecting to it," he explained. "Decisions that are made initially on the development side have to be handled in production for the long term. It also means understanding the cost profile and recalculating on a regular basis."



Multi-Tenant Data Centers

by System Administrator - Monday, 20 October 2014, 1:52 PM

Four Advantages of Multi-Tenant Data Centers

Increasing demands on IT are forcing organizations to rethink their data center options. These demands can be difficult for IT to juggle for a variety of reasons. They represent tactical issues relating to IT staffing and budget, but they also represent real inflection points in the way enterprises strategically conduct business in the 21st century.

Please read the attached whitepaper.


Multicloud Strategy

by System Administrator - Tuesday, 7 February 2017, 6:19 PM

For enterprises, multicloud strategy remains a siloed approach

by Trevor Jones

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but "multicloud" remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while the notion of connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).

The most common definition of a multicloud strategy, though, is the use of multiple public cloud IaaS providers. By this strictest definition, multicloud is already standard operations for most enterprises. Among AWS customers, 56% said they already use IaaS services from at least one other vendor, according to IDC.

"If you go into a large enterprise you're going to have different teams across the organization using different cloud platforms," said Jeff Cotten, president of Rackspace, based in Windcrest, Texas, which offers managed services for AWS and Azure. "It's not typically the same product teams leveraging both platforms. It's often different business units, with a different set of apps, likely different people and organizational constructs."

The use of multiple clouds is often foisted upon enterprises. Large corporations may opt for a second vendor when their preferred vendor has no presence in a particular market. Typically, however, platform proliferation is driven by lines of business that either procured services on their own or were brought under an IT umbrella through mergers and acquisitions.

"By the time these two get to know each other it's too late and they've gone too far down the path to make the change," said Deepak Mohan, research director at IDC.

An apples-to-apples comparison of market share among the three biggest hyperscale IaaS providers -- AWS, Azure and Google Cloud Platform (GCP) -- is difficult to make because each company breaks out its revenues differently. Microsoft is closing the gap, while GCP saw a significant bump in 2016 as IT shops began testing the platform, according to 451 Research. But by virtually any metric, AWS continues to lead the market by a sizable margin that is unlikely to close any time soon.

Nevertheless, the competition between the big three is not always a fight for the same IT dollars, as each takes a slightly different tack to wooing customers. Amazon, though softening to hybrid cloud, continues its stand-alone, all-encompassing approach, while Microsoft has a greater percentage of enterprise accounts as it positions itself to accommodate existing customers' journey from on premises to the cloud. Google, meanwhile, is banking on its heritage around big data algorithms, containers and machine learning to get ahead of the next wave of cloud applications.

"[IT shops] are not evaluating the three hyperscale guys purely on if AWS is cheaper, or which has the better portal interface or the coolest features because there's parity there," said Melanie Posey, research vice president at 451. "It's not a typical horse race story."

The move away from commoditization has also shifted how enterprises prioritize portability. In the past, companies emphasized abstracting workloads to pit vendors against each other and get better deals, but over the past year they have come to prize speed, agility and flexibility over cost, said Kip Compton, vice president of Cisco's cloud platform and services organization.

"We're actually seeing CIOs and customers starting to view these clouds through the lens of, 'I'm going to put the workloads in the environment that's best for that workload' and 'I'm going to worry a lot less about portability and focus on velocity and speed and taking more advantage of a higher-level service that each of these clouds offer.'"

Silos within a multicloud strategy

Even as the hyperscale vendors attempt to differentiate, picking and choosing providers for specific needs typically creates complications and leads to a siloed approach, rather than integration across clouds.

"It's more trouble than it's worth if you're going to do it that way," Posey said. "What ends up happening is company XYZ is running some kind of database function on AWS, but they're running customer-facing websites on Azure and never the two shall meet."

The idea of multicloud grew conceptually out of the traditional server model, where a company would choose either Hewlett Packard Enterprise (HPE) or IBM and build its applications on top; but as the cloud evolved, it didn't follow that same path, Mohan said.

"The way clouds were evolving fundamentally differs and there wasn't consistency, so integrating was hard unless you did a substantial amount of investment to do integration," he said.

It is also important to understand what is meant by a "multicloud" strategy, whether an architecture supports a multicloud strategy or that workloads actually run on multiple clouds.

"There's a difference between being built for the cloud or built to run in the cloud, and it's difficult from a software development perspective to have an architecture that's cloud agnostic and can run in either one," said Dave Colesante, COO of Alert Logic, a cloud security provider in Houston.

Alert Logic is migrating from a mix of managed colocation and AWS to being fully in the cloud as it shifts to a microservices model. The company offers support for AWS and Azure, but all of the data management ends up back in AWS.

The company plans to design components of its SaaS application to provide flexibility and to assuage Microsoft customers that want the back end in Azure, but that creates limitations of what can be done on AWS.

"It's a Catch-22," Colesante said. "If you want to leverage the features and functions that Amazon makes available for you, you probably end up in a mode where you're hooked into some of the things."

The two key issues around multicloud center on the control plane and the data plane, IDC's Mohan said. A consistent way to manage, provision and monitor resources across all operational aspects of infrastructure is a challenge that's only exacerbated when enterprises go deeper on one platform than another.

On the data side, the concept of data gravity often discourages moving workloads between clouds: it's free to move data in, but expensive to move it out, and there are also limits on the speed and ease with which data can be migrated.
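To make that gravity concrete, a back-of-the-envelope calculation shows how quickly outbound transfer costs accumulate. The $0.09/GB egress rate below is a hypothetical placeholder for illustration, not any provider's published price:

```python
def egress_cost(gb: float, egress_per_gb_usd: float, ingress_per_gb_usd: float = 0.0) -> float:
    """Cost in USD to move data out of one cloud and into another.

    Ingress is typically free, so it defaults to zero; the asymmetry is
    exactly what makes large datasets 'heavy'.
    """
    return gb * egress_per_gb_usd + gb * ingress_per_gb_usd


# Moving a 50 TB dataset out at a hypothetical $0.09/GB:
print(f"${egress_cost(50_000, 0.09):,.0f}")  # $4,500 per migration, every time
```

The one-way fee means every experiment in moving a workload carries a real price tag, which is why the article notes that data gravity discourages cross-cloud mobility.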

Getting the best of both worlds

Companies with fewer than 1,000 employees typically adopt a multicloud strategy to save money and to take advantage of new services as they become available, but the rationale changes with larger enterprises, Mohan said.

"As you move up the spectrum, the big reason is to avoid lock-in," he said. "We attribute that to the nature of the apps being run -- they're probably more business-critical IT apps run by organizations internally."

The largest organizations, though, seem to get the best of both worlds.

"Especially if it's for experimentation with new initiatives, they have much higher tolerance for going deep onto one platform," Mohan said. "For bread-and-butter workloads, volatility and jumping around services is not as important."

At the same time, large organizations that prioritize reliability, predictability, uptime and resiliency tend to favor the lowest common denominators of cost savings and commodity products, he said.

Motorola Mobility takes an agnostic view of cloud and does in fact look to move workloads among platforms when appropriate. It has a mix of AWS, GCP and Azure, along with its own OpenStack environment, and the company has put the onus on standardized tooling across platforms.

"If I can build an application at the simplest level of control, I should be able to port that to any cloud environment," said Richard Rushing, chief information security officer at Motorola Mobility. "This is kind of where we see cloud going."
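Building an application "at the simplest level of control," as Rushing describes, is often done with a ports-and-adapters pattern: the application codes against a small interface, and each cloud gets its own adapter. A minimal sketch under that assumption; the `ObjectStore` interface and `InMemoryStore` stand-in are illustrative, not any vendor's API:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Minimal storage abstraction the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in for a per-provider adapter (S3, GCS, Azure Blob) in tests."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application logic sees only the interface, so swapping the adapter
    # swaps the cloud without touching this function.
    store.put(f"reports/{name}", body)


store = InMemoryStore()
archive_report(store, "q1.csv", b"revenue,region\n")
print(store.get("reports/q1.csv"))
```

The tradeoff the article describes applies here too: the narrower the interface, the more portable the application, but the less it can exploit each provider's higher-level services.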

Ultimately, a multicloud strategy comes down to IT shops' philosophical view, whether it's another form of a hosted environment, or a place to use APIs and put databases in order to take advantage of higher-level services, but can lead to lock-in, he added.

"I don't think there's a right way or a wrong way," Rushing said. "It depends on what you feel comfortable with."

Despite that agnostic view, Motorola hasn't completely shied away from services that tether it to a certain provider.

"Sometimes the benefit of the service is greater than [the concern] about what you want to be tied down to," Rushing said. "It's one of those things where you have to look at it and say, is this going to wrap me around something that could benefit me, but what else is it going to do?"

Experimentation and internal conversations about those tradeoffs can be healthy because it opens an organization to a different way of doing things, but it also forces developers to justify a move that could potentially restrict the company going forward, he added.


Cross-cloud not yet reality

A wide spectrum of companies has flooded the market to fill these gaps created by multicloud, despite some high-profile failures including Dell Cloud Manager. Smaller companies, such as RightScale and Datapipe, compete with legacy vendors, such as HPE, IBM and Cisco, and even AWS loyalists like 2nd Watch look to expand their capabilities to other providers. Other companies, such as NetApp and Informatica, focus on data management across environments.

Of course, the ultimate dream for many IT shops is true portability across clouds, or even workloads that span multiple clouds. It's why organizations abstract their workloads to avoid lock-in. It's also what gave OpenStack so much hype at its inception in 2010, and helped generate excitement about containers when Docker first emerged in 2013. Some observers see that potential coming to fruition in the next year or two, but for now those examples remain the exception to the rule.


The hardest path to span workloads across clouds is through the infrastructure back end, Colesante said. For example, if an AWS customer using DynamoDB, Kinesis or Lambda wants to move to Azure, there are equivalents in Microsoft's cloud. However, the software doesn't transparently allow users to know the key-value store equivalent between the two, which means someone has to rewrite the application for every environment it sits on.

Another obstacle is latency and performance, particularly the need for certain pieces of applications to be adjacent. Cisco has seen a growing interest in this, Compton said, with some banks putting their database in a colocation facility near a major public cloud to resolve the problem.

Alert Logic's data science teams are exploring what Google has to offer, but Colesante pumped the brakes on the cross-cloud utopia, noting that most companies are still in the earliest stages of cloud adoption.

"What you'd eventually like to get to is data science analytics on platform A, your infrastructure and processing and storage on platform B and something else on platform C," he said, "but that's a number of years before that becomes a reality."

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS.



Multifactor Authentication (MFA)

by System Administrator - Tuesday, 16 June 2015, 9:11 PM

Multifactor Authentication (MFA)

Posted by Margaret Rouse

Multifactor authentication is one of the most cost-effective mechanisms a business can deploy to protect digital assets and customer data.

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction. 

Multifactor authentication combines two or more independent credentials: what the user knows (password), what the user has (security token) and what the user is (biometric verification). The goal of MFA is to create a layered defense and make it more difficult for an unauthorized person to access a target such as a physical location, computing device, network or database. If one factor is compromised or broken, the attacker still has at least one more barrier to breach before successfully breaking into the target.

Typical MFA scenarios include:
  • Swiping a card and entering a PIN.
  • Logging into a website and being requested to enter an additional one-time password (OTP) that the website's authentication server sends to the requester's phone or email address.
  • Downloading a VPN client with a valid digital certificate and logging into the VPN before being granted access to a network.
  • Swiping a card, scanning a fingerprint and answering a security question.
  • Attaching a USB hardware token to a desktop that generates a one-time passcode and using the one-time passcode to log into a VPN client.
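The one-time passwords in several of these scenarios are typically time-based OTPs as standardized in RFC 6238. A minimal sketch of the derivation, using only the Python standard library; the secret shown is the RFC's published SHA-1 test key, not a real credential:

```python
import base64
import hmac
import struct
import time
from typing import Optional


def totp(secret_b32: str, timestep: int = 30, digits: int = 6,
         now: Optional[float] = None) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The server and the user's authenticator app share the secret; both derive
# the same code for the current 30-second window, so the app becomes the
# possession factor on top of the password. Frozen at T=59 s for a
# reproducible demo (RFC 6238 SHA-1 test vector):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, capturing a single OTP is of little lasting value.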


One of the largest problems with traditional user ID and password login is the need to maintain a password database. Whether encrypted or not, if the database is captured it provides an attacker with a source against which to verify his guesses at speeds limited only by his hardware resources. Given enough time, a captured password database will fall.

As CPU processing speeds have increased, brute-force attacks have become a real threat. Further developments such as GPGPU password cracking and rainbow tables have provided similar advantages for attackers. GPGPU cracking, for example, can produce more than 500,000,000 passwords per second, even on lower-end gaming hardware. Depending on the particular software, rainbow tables can be used to crack 14-character alphanumeric passwords in about 160 seconds. Purpose-built FPGA cards, like those used by security agencies, now offer ten times that performance at a minuscule fraction of the GPU's power draw. A password database alone doesn't stand a chance against such methods when it is a real target of interest.

In the past, MFA systems typically relied upon two-factor authentication. Increasingly, vendors are using the label "multifactor" to describe any authentication scheme that requires more than one identity credential.
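The arithmetic behind those figures is straightforward: worst-case cracking time is the keyspace size divided by the guess rate. A quick sketch using the 500,000,000 guesses-per-second figure quoted above (rainbow tables work differently, by precomputation, so this models only exhaustive search):

```python
def brute_force_time(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case seconds to exhaust a password keyspace by brute force."""
    return charset_size ** length / guesses_per_second


GPU_RATE = 5e8  # the article's figure: ~500,000,000 guesses per second

# An 8-character lowercase password exhausts in minutes at that rate...
lowercase_8 = brute_force_time(26, 8, GPU_RATE)
# ...while each added character multiplies the work by the charset size.
mixed_12 = brute_force_time(62, 12, GPU_RATE)

print(f"26^8 keyspace:  {lowercase_8:,.0f} seconds (~{lowercase_8 / 60:.0f} minutes)")
print(f"62^12 keyspace: {mixed_12:.2e} seconds (millennia)")
```

The exponential growth cuts both ways: longer passwords help, but a captured database lets the attacker work offline indefinitely, which is why MFA's extra factor matters regardless of password strength.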

Authentication factors

An authentication factor is a category of credential used for identity verification. For MFA, each additional factor is intended to increase the assurance that an entity involved in some kind of communication or requesting access to some system is who, or what, they are declared to be. The three most common categories are often described as something you know (the knowledge factor), something you have (the possession factor) and something you are (the inherence factor).

Knowledge factors – information that a user must be able to provide in order to log in. User names or IDs, passwords, PINs and the answers to secret questions all fall under this category. See also: knowledge-based authentication (KBA)

Possession factors - anything a user must have in their possession in order to log in, such as a security token, a one-time password (OTP) token, a key fob, an employee ID card or a phone’s SIM card. For mobile authentication, a smartphone often provides the possession factor, in conjunction with an OTP app.

Inherence factors - any biological traits the user has that are confirmed for login. This category includes the scope of biometric authentication methods, such as retina scans, iris scans, fingerprint scans, finger vein scans, facial recognition, voice recognition, hand geometry and even earlobe geometry.

Location factors – the user’s current location is often suggested as a fourth factor for authentication. Again, the ubiquity of smartphones can help ease the authentication burden here: users typically carry their phones, and most smartphones have a GPS device, enabling reasonably reliable confirmation of the login location.

Time factors – the current time is also sometimes considered a fourth factor for authentication, or alternatively a fifth. Verification of employee IDs against work schedules could prevent some kinds of user account hijacking attacks. A bank customer can't physically use their ATM card in America, for example, and then in Russia 15 minutes later. These kinds of logical locks could prevent many cases of online bank fraud.
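The ATM example above is what fraud systems call an "impossible travel" check: compare the great-circle distance between two logins against the speed needed to cover it in the elapsed time. A minimal sketch, where the 900 km/h threshold is an illustrative airliner-speed assumption, not a standard value:

```python
import math

EARTH_RADIUS_KM = 6371.0


def great_circle_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine distance between two (lat, lon) points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))


def is_impossible_travel(prev_fix, next_fix, max_speed_kmh: float = 900.0) -> bool:
    """Flag a login pair whose implied travel speed exceeds a plausible maximum.

    Each fix is a (latitude, longitude, unix_time) tuple.
    """
    lat1, lon1, t1 = prev_fix
    lat2, lon2, t2 = next_fix
    hours = max((t2 - t1) / 3600.0, 1e-9)  # guard against zero elapsed time
    implied_speed = great_circle_km(lat1, lon1, lat2, lon2) / hours
    return implied_speed > max_speed_kmh


# ATM use in New York, then a login attempt from Moscow 15 minutes later:
new_york = (40.71, -74.01, 0)
moscow = (55.76, 37.62, 15 * 60)
print(is_impossible_travel(new_york, moscow))  # True: implied speed is far beyond 900 km/h
```

Combined with the time factor, this kind of logical lock rejects the second login even if the attacker presents valid credentials.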

Multifactor authentication technologies:

Security tokens: Small hardware devices that the owner carries to authorize access to a network service. The device may be in the form of a smart card or may be embedded in an easily-carried object such as a key fob or USB drive. Hardware tokens provide the possession factor for multifactor authentication. Software-based tokens are becoming more common than hardware devices.

Soft tokens: Software-based security token applications that generate a single-use login PIN. Soft tokens are often used for multifactor mobile authentication, in which the device itself – such as a smartphone – provides the possession factor.

Mobile authentication: Variations include: SMS messages and phone calls sent to a user as an out-of-band method, smartphone OTP apps, SIM cards and smartcards with stored authentication data.

Biometric authentication methods such as retina scans, iris scans, fingerprint scans, finger vein scans, facial recognition, voice recognition, hand geometry and even earlobe geometry.

Smartphones with GPS can also provide location as an authentication factor using this onboard hardware.

Employee ID and customer cards, including magnetic strip and smartcards.

The past, present and future of multifactor authentication

In the United States, interest in multifactor authentication has been driven by regulations such as the Federal Financial Institutions Examination Council (FFIEC) directive calling for multifactor authentication for Internet banking transactions.

MFA products include EMC RSA Authentication Manager and RSA SecurID, Symantec Validation and ID Protection Service, CA Strong Authentication, Vasco IDENTIKEY Server and DIGIPASS, SecureAuth IdP, Dell Defender, SafeNet Authentication Service and Okta Verify.

Next Steps

Learn more about the benefits of multifactor authentication in the enterprise and read this comparison of the latest multifactor authentication methods. When it comes to MFA technology, it's important to determine which deployment methods and second factors will best suit your organization. This Photo Story outlines your options.

