Glosario KW | KW Glossary


Ontology Design | Diseño de Ontologías


BUSINESS


Methods to improve enterprise search in content management apps

by System Administrator - Saturday, 14 March 2015, 1:59 PM
 

Methods to improve enterprise search in content management apps

It's no longer a struggle for organizations to document and store files -- even in the cloud. More difficult to do is storing that information in a way that makes sense and is easy to retrieve. It's a big problem facing organizations, with many looking to improve their enterprise content management software and search technologies. Luckily, there is hope on the horizon in the form of improved classification techniques.

In this three-part guide, SearchContentManagement contributors outline some of those techniques and what companies can do with them. First, Laurence Hart discusses how text analytics and improved content classification can improve enterprise search. It's a growing trend in content management, and Hart expects "to hear vendors and analysts talking about how analytics engines on the front end can drive success" in the near future. Next, Jonathan Bordoli details two key information tools to better manage enterprise content: search-based applications and taxonomies. Bordoli says you can’t be successful at one without the other. Finally, consultant Steve Weissman offers up several important considerations when first implementing ECM software.

Table Of Contents

  • With analytics, enterprise search falls in line
  • Search-based apps need the engine of taxonomy
  • Building more powerful search atop an ECM system

Please read the attached whitepaper.


Logical Framework Methodology

by System Administrator - Monday, 22 December 2014, 10:54 PM
 

Formulating programs with the Logical Framework methodology

This manual seeks to help the reader understand and apply the basic principles of the Logical Framework approach and how it is used to design and later evaluate projects and programs. The reasons for writing the manual, and for giving it the content it has, stem from renewed interest in having an instrument that is both versatile and extremely powerful for supporting so-called Managing for Results in public sector programs and projects. That does not mean the manual is not useful for private sector programs and projects, but rather that it was written for a setting in which its expected users are officials and consultants who work in or for the public sector. Managing for Results is a concern shared by the governments of Latin America and the Caribbean in the face of the urgent need for sustainable economic, political and social development that the region's inhabitants require to improve their quality of life. It is an approach with characteristics that set it apart from more traditional forms of government management, such as management by pre-established functions.

The Logical Framework method, like any other method, must be applied within a given context, which is its space of validity. Unfortunately, this basic precept is often forgotten, and a four-row, four-column matrix bearing the name Logical Framework Matrix is built directly, without going through any of the preceding phases. The result will certainly be a four-by-four matrix, but not necessarily a Logical Framework Matrix, even if it is given that name. What defines the Logical Framework Methodology is not the final product but the process that must be followed to arrive at the Logical Framework Matrix. For that reason, this manual is not a recipe book for filling in four-row, four-column matrices; instead, it lays out methodological paths for arriving at a Logical Framework and its corresponding matrix.

Please read the attached PDF file.


Microgrid Start Up

by System Administrator - Thursday, 9 July 2015, 4:47 PM
 

eBook | Microgrid Start Up: A Guide to Navigating the Financial, Regulatory, and Technical Challenges of Microgrid Implementation

The development of a microgrid comes with a wide array of issues -- regulatory, technical and financial. This eBook takes a look at all of these topics: regulatory issues to consider prior to pursuing a microgrid project, critical technical issues involved in pursuing microgrid projects, and the two-part evaluation process for investing in a microgrid.

Read this eBook to discover:

  • How to identify key regulatory risks that may affect development of a microgrid project
  • Different ownership models for a microgrid project
  • Basics of evaluating the financial viability of a microgrid project
  • Potential impact of utility rate issues on economics of a microgrid project

Please read the attached eBook.


Micromanager

by System Administrator - Monday, 9 March 2015, 3:18 AM
 

IT Leadership: Signs You're a Micromanager (And How to Stop)

By Sharon Florentine

Micromanagement may seem harmless, but it's sabotaging your teams, your productivity and morale from within, and stifling your business's ability to grow. Here's how to tell if you're a micromanager, and some steps you can take to overcome this fatal flaw.

Are you never quite satisfied with your team's results? Do you avoid delegating at all costs, often taking on work that's far below your experience and talent level just because you're certain no one else can do it as well as you can? Are you constantly demanding status updates, progress reports and check-ins? It's time to face the facts: You're a micromanager.

Micromanagement might not appear to be a big deal -- you're just trying to make sure tasks and projects are on time, done right and in ways that will benefit the business, right? -- but the truth is, it's incredibly damaging to every aspect of a business, says Stu Coleman, partner and senior managing director at WinterWyman Financial Contracting and a self-proclaimed recovering micromanager.

Micromanagement Stunts Growth, Erodes Morale and Slows Productivity

"At its core, micromanagement chokes off growth. You can't micromanage a team, or an entire business and expect to grow; you have to support, groom and grow leadership within your organization that can take on roles and responsibilities that you used to perform so you can focus on strategic issues," says Coleman.

Micromanagement also negatively impacts employee morale, engagement and productivity, says Bob Hewes, senior partner with oversight for leadership development, coaching and knowledge management at Camden Consulting, and can lead to high turnover rates and even recruiting and talent retention problems.

"Micromanagement is almost never benign. It's a destructive force that goes far beyond a 'management style;' it really gets at morale, at engagement and you'll find that micromanagers often see high turnover of their direct reports," Hewes says.

Productivity suffers, too, as teams and individuals must constantly interrupt their progress to deliver status reports and updates, as well as explain their methods and processes in meetings and phone calls. The likely message of these constant status reports is that the micromanager's way is better and faster and the employee's is wrong, even though the employee may have excellent skills and a wealth of knowledge, says Hewes.

"It also creates a risk-averse culture. Workers aren't going to even want to try new, innovative things if they feel their managers is going to be breathing down their neck, waiting for them to screw up," Hewes says.

It can be difficult for micromanagers to identify their behavior, according to Coleman, because it can often be justified and excused. "I'm doing what's best for the business," "If this project fails, we'll all be in trouble" and "Too much is at stake for me to let the project be anything less than perfect" are all justifications for micromanaging behavior, but while those excuses may ring true in the short term, they're sabotaging your teams from within, says Coleman.

"When these behaviors are happening, it can be difficult to admit to yourself. But you don't have to relax your quality standards and your results-focused approach in order to be successful. Just like in professional football, you have to watch game tapes to see what you did right, what you did wrong and where to improve," says Hewes.

How to Fix It

No one in a leadership position actively wants to be a micromanager, according to Hewes, but if you find yourself faced with overwhelming evidence, there are steps you can take to change your behavior. The first step is always being aware of the problem and having the willingness to take steps to change.

"You need to be open to soliciting feedback on your behavior, and be receptive to changing it. You must recognize that the way to grow, the way to get work done is by gaining more time -- and you get more time by getting work done with and through others," says Hewes.

Once you've recognized the problem, you must enlist the help of your team and direct reports to help you address it and recognize that change isn't going to happen overnight. "You have to tell people what's happening. It can be very helpful to schedule a sit-down meeting to admit you have a problem and ask for help to make it better. You can say, 'I recognize that I've been all over you, and that it's hindering you. I also recognize that you have skills and talents, and those haven't been empowered effectively. So, let's try some new methods,'" Coleman says.

These new methods could mean meeting with teams or individuals less frequently, or allowing them to work through problems, issues and obstacles without consulting you, reporting back only once problems have been identified, addressed and resolved.

Of course, if you've been micromanaging for a long time, there's a chance your teams aren't going to be able to function independently right off the bat, so you must be prepared for some growing pains. "When you finally decide that you're going to empower your people, understand that you've become an enabler. You haven't ever given them a chance to think or work independently, so just when you want them to start making decisions they may be unable to function without the intense supervision and direction you've given them in the past," says Coleman. That's completely understandable, and you must be patient and willing to work through the inevitable learning curve.

You also should consider doing a cost-benefit analysis of your time and where you're spending it, says Coleman. Every time you get the urge to ask for a status report, or to step in and "help" a colleague with an assigned task or project, ask yourself whether what you're about to do falls below your pay grade.

"It sounds really terrible on its surface, but sometimes, as a person in a leadership role or in management, you have to think, 'I get paid too much to do this work; I've hired someone at a lower responsibility level who can handle this. Let them earn their paycheck by taking care of this task,'" Coleman says.

While it sounds harsh, what you're really doing is changing your own patterns and those of your direct reports, and establishing greater trust. By backing off and allowing them to do what it is you hired them to do, they gain skills, knowledge and experience and perform their job as specified -- and they're empowered to do so. Or, put another way, "You have to decouple the 'what' from the 'how.' In every project, every job, every task there is 'what you want accomplished' and 'how you want that task accomplished.' A micromanager will overemphasize the 'how' over the 'what' and will be laser focused on minutia to the detriment of the overall outcome," Hewes says.

"You have to let go and understand that there are a lot of different ways to 'how,' a lot of different ways to get things done. Especially in knowledge work, there's a lot of flexibility in the 'how.' It's not like surgery where you have to do certain steps in certain order or else someone dies," says Hewes.

Give It Time

And while it may seem painful and slow going, each step toward asking people to take responsibility for the 'how' and relinquishing some control is a step toward improving your relationships, growing your business, encouraging trust, engagement and productivity.

Link: http://www.cio.com


Minimizing disruptions in IT areas

by System Administrator - Thursday, 8 January 2015, 3:11 PM
 

How companies minimize disruptions in their IT areas

Source: Networking

Nearly three quarters of the IT professionals surveyed said their organization considers achieving zero downtime for its enterprise computing systems an important goal. So says a study commissioned by SUSE, which also shows that most organizations are betting on upgrades to hardware, applications and operating system functionality. According to the survey, technology failures were by far the predominant source of unplanned outages. What factors do IT leaders weigh in the quest for zero downtime?

Nearly three quarters of the IT professionals surveyed said their organization considers achieving zero downtime for its enterprise computing systems an important goal, while a total of 89% today still expect to experience outages for their most important workload. The gap between the need for zero downtime and what companies are actually experiencing was revealed in a recent study commissioned by the enterprise Linux vendor SUSE.

The good news is that more than half (54%) of respondents said they are executing a strategy to significantly reduce system outages in the coming year, and another 17% have a strategy but have not yet begun to implement it. These strategies include upgrading or replacing hardware (55%), applications (42%) and operating system functionality (34%).

"System outages -- particularly unplanned outages -- negatively affect organizations of all types and sizes, limit growth, reduce revenue and hurt productivity," said Ralf Flaxa, vice president of engineering at SUSE. "CIOs and IT professionals recognize the need to reduce outages, and they should work with software and hardware vendors that share their commitment to making near-zero downtime a reality."

To reduce unplanned outages, respondents cited "leveraging redundancy, such as that offered by high-availability clustering" (51%), "snapshot and rollback capabilities" (35%) and "updating the OS" (32%) as the most likely steps. To reduce planned outages, projects include snapshot/rollback (51%), better update tools (40%) and live updating (36%).

Other findings included:

  • The most important workloads to safeguard against outages are email, virtualization hosts and web servers, followed closely by workloads specific to the respondent's market or industry.
  • In contrast, the workloads most vulnerable to outage threats are industry-specific workloads, virtualization hosts, web servers and ERP, most likely because of their impact across the entire enterprise.
  • Nearly a quarter of respondents said their most vulnerable workload was web servers, making it the workload at greatest risk.
  • Most respondents schedule planned outages for their most important workloads either monthly or quarterly.
  • Unplanned outages, however, were experienced by 80% of respondents. Those who suffered them had the problem, on average, more than twice a year on their most important workload.
  • Technology failures were by far the predominant source of unplanned outages.

WORKLOADS

The SUSE study asked which workloads are most important to protect against a service outage. The results were:

• Industry-specific: 22%
• Virtualization: 19%
• Web servers: 17%
• ERP: 16%
• Email: 12%
• CRM: 5%
• File/print: 4%
• Analytics/BI: 3%
• Compliance: 2%

The survey also asked: what level of downtime do you expect for your most critical workloads? (A short availability calculation follows the list below.)

• No downtime (zero downtime): 11%
• 5 minutes/year: 11%
• >1 hour/year: 19%
• >9 hours/year: 40%
• >90 hours/year: 19%
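For context, those downtime bands map onto availability percentages. Below is a minimal Python sketch of the arithmetic (8,760 hours per year); the bands are taken from the survey answers above and the "nines" labels in the comment are the usual industry shorthand, not figures from the study.

    # Convert annual downtime into an availability percentage (8,760 hours per year).
    HOURS_PER_YEAR = 24 * 365

    for label, downtime_hours in [("5 minutes/year", 5 / 60),
                                  ("1 hour/year", 1),
                                  ("9 hours/year", 9),
                                  ("90 hours/year", 90)]:
        availability = 100 * (1 - downtime_hours / HOURS_PER_YEAR)
        print(f"{label:>15}: {availability:.3f}% availability")

    # 5 minutes/year is roughly 99.999% ("five nines"), 9 hours/year roughly 99.9%,
    # and 90 hours/year roughly 99% availability.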

The study surveyed 105 IT professionals. The full study can be found at www.suse.com/attainingzerodowntime. For more information about the benefits of zero-downtime enterprise Linux, visit www.suse.com/zerodowntime.

Link: http://uruguay.itsitio.com


Mobile App Security Through Containerization: Top 10 Considerations

by System Administrator - Tuesday, 16 June 2015, 10:24 PM
 

Mobile App Security Through Containerization: Top 10 Considerations

Mobile devices present a unique dilemma to the enterprise. On the one hand, workers empowered with tablets and smartphones can transform the way they do business; they're more agile, closer to customers, and more productive. Bring Your Own Device (BYOD) programs give users the freedom to work on the devices of their own choosing, while still allowing an enterprise to reap the productivity benefits of these always connected, readily accessible mobile devices.

Please read the attached whitepaper.


Mobile Security

by System Administrator - Tuesday, 14 July 2015, 6:57 PM
 

Mobile Security

For today's enterprises, mobile security is becoming a top priority. As mobile devices proliferate in the workplace, companies need to be careful that these devices -- as well as the networks and data that they access -- are adequately protected. With the growing number of threats aimed at exploiting mobile devices, safeguarding them is becoming complicated, but crucial.

In this eGuide, Computerworld UK, CSO, and IDG News Service examine some of the recent trends in mobile threats as well as ways to protect against them. Read on to learn how mobile security measures can help protect your organization.

Please read the attached eGuide.


Security threat modeling for the cloud

by System Administrator - Monday, 22 December 2014, 2:42 PM
 

The security threat modeling process for the cloud

by Ravila Helen White

Some businesses and consumers are reluctant to accept and adopt cloud computing. Acceptance, however, comes in part from understanding risk, which is largely a matter of understanding the threat landscape. Enterprises therefore need to properly define threats and classify their information assets using a security threat modeling process.

Defining threats

Before cloud threat modeling can be performed, enterprises must understand information security threats at a more intrinsic level.

Threats are non-malicious and malicious events that damage information assets. Non-malicious events occur without malicious intent. Examples include natural disasters, faulty technology, human error, power surges, undesirable environmental factors (such as inadequate HVAC), economic factors, technological innovation that outpaces staff expertise, innovation that outpaces regulatory oversight and innovation that outpaces protective measures.

Malicious events are those that occur out of malice. Examples include hacking, hacktivism, theft, abuse of rights, abuse of access and recovery of discarded assets, such as dumpster diving. Damage results from any of these events when information assets are breached, exposed or made unavailable. Shellshock is a good example of a vulnerability that could lead to widespread outages across an entire cloud infrastructure. In cloud infrastructures, many of the edge technologies -- such as firewalls, load balancers and routers -- are appliances running a Linux kernel. An attacker who successfully gains control of edge technologies can cause outages to the cloud services they support. When the goal is information gathering, access to edge technology is a stepping stone to the internal systems that store personal or financial information. Likewise, a variety of the technologies used in cloud infrastructures also run Linux or Unix hosts, whether they support data stores or an enterprise service bus.

Non-malicious events occur regularly and, in some cases, are unavoidable. Consider the recent incidents in which several service providers rebooted instances to apply Xen hypervisor patches. For the patches to take effect, the patched systems had to be restarted. Those reboots introduced the possibility of unavailable cloud services.

Whether the events are malicious or non-malicious, cloud providers must be prepared to avoid outages that are noticeable to their customers.
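The threat and asset categories described above lend themselves to a simple, structured record per threat. The Python sketch below is only an illustration of that idea; the field names and example entries are assumptions made for this article, not part of the author's process.

    from dataclasses import dataclass
    from enum import Enum

    class ThreatClass(Enum):
        NON_MALICIOUS = "non-malicious"   # e.g., natural disaster, patch-driven reboot
        MALICIOUS = "malicious"           # e.g., hacking, rights abuse, dumpster diving

    @dataclass
    class ThreatEntry:
        asset: str                 # information asset at risk
        threat: str                # event being modeled
        threat_class: ThreatClass  # malicious or non-malicious
        impact: str                # breach, exposure or unavailability

    entries = [
        ThreatEntry("cloud edge appliances", "Shellshock exploitation", ThreatClass.MALICIOUS, "breach"),
        ThreatEntry("hosted workloads", "Xen hypervisor patch reboot", ThreatClass.NON_MALICIOUS, "unavailability"),
    ]
    for e in entries:
        print(f"{e.asset}: {e.threat} ({e.threat_class.value}) -> {e.impact}")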

Classifying information assets

An organization must understand what information assets are. An information asset is any asset whose breach, exposure or unavailability would result in business or personal loss. Information assets can include data, technology and relationships. Because of its cost, technology is considered more valuable than data. Yet without structured data, it is unlikely that the technology that stores and transmits it would be purchased and sustained. Data is a commodity for its owners. Examples of data are customer contact databases, personally identifiable information, credit card information, company financials, consumer financials, infrastructure drawings, confidential documents, system configuration information, health information and strategic initiatives.

Data is most valuable when it can be monetized or used to win consumers' confidence so that they invest in a service or product. This is where technology enters the picture. Given the dynamic nature of the business market and the disruptive nature of technology, businesses and consumers must be able to retrieve, transmit and store data quickly, yet accurately, both in the cloud and on premises.

Businesses and their customers are often affected in similar ways when information assets are breached, exposed or unavailable. Many organizations, for example, have outsourced payroll or recruiting to the cloud. An outage of cloud payroll services could cause a problem for employees expecting their paychecks. Companies that suffer a breach typically suffer a tarnished reputation. Individuals also experience reputational damage if their information is accessed and used by someone else, resulting in a poor credit rating or personal financial loss.

The last information asset is the set of business relationships that enable greater competitive advantage. Most business relationships involve the exchange and/or sharing of information. Typically, both parties extend a level of trust between the segments and hosts in their respective infrastructures. That level of trust is ideally achieved through contractual agreements that document certification not only of a healthy financial posture, but also of healthy internal operations. At the core, assurance of best practices in security and risk management is expected.

Relationships become strained when a breach resulting from one partner's inability to meet contractual obligations affects the security of information assets. If a partner exits the relationship, that asset is lost and must be recovered elsewhere. The business model of many healthcare entities is based on affiliations (as defined by HIPAA). The covered entity will seek a business associate to provide a specialty, thereby improving its competitive advantage or reducing operating costs. Business associates are expected to meet the same security requirements as the covered entity. When a business associate experiences a breach that exposes protected health information (PHI), the covered entity is also affected, and patients expect it to manage every aspect of keeping that PHI private and secure.

Conclusion

Despite the security challenges posed by the rapid evolution of cloud computing technology and business relationships, quantifying threats and assets is necessary to understand the risk of cloud computing. It provides a security model of the environment. The same information assets, and many of the same threats, exist in infrastructures not hosted in the cloud. The differentiator is usually the scale of the data and the broader landscape available to attackers.

About the author: Ravila Helen White is the director of IT architecture for a healthcare entity. She is a CISSP, CISM, CISA, CIPP and GCIH, and a Pacific Northwest native.

 


Multi-cloud deployment model

by System Administrator - Monday, 30 November 2015, 5:36 PM
 

 

Multi-cloud deployment model acceptance soars

by Joel Shore

A multi-cloud deployment model can help keep workloads protected and available when services are deployed across providers in consistent, repeatable ways.

Deploy your company's compute load across multiple clouds, providers and services and you'll be better-protected against complete disaster if a server fails.

That's an increasingly popular and practical notion. As a result, adoption of a multi-cloud approach, sometimes called a cloud portfolio, is growing quickly. In its 2015 State of the Cloud Report, RightScale, a provider of cloud portfolio management services, noted that as of January 2015, 82% of surveyed enterprises are now employing a multi-cloud deployment model, up from 74% just one year earlier. Within that group, a mix of public and private clouds is favored by 55%, while those opting solely for multiple private or multiple public clouds are split almost equally (14% and 13%, respectively).

As companies simultaneously move applications and data to the public cloud, keep others on premises, and integrate with software as a service providers, it's important for them to deploy services in a consistent and repeatable way. "[Fail] to work this way and IT operations will not be able to maintain control," said Bailey Caldwell, RightScale's vice president of customer success.

Consistency through automation

In its August 2015 report, a cadre of nine Forrester Research analysts states that automating is the answer to the fundamental issues of scale, speed, costs and accuracy.

"It's not how well your cloud is organized or how shiny and new it is; it's about how well does that the application and workload perform together."

Roy Ritthaller

Vice president of marketing for IT operations management, Hewlett-Packard Enterprise 

Commenting on the report in relation to cloud deployment, analyst Dave Bartoletti said, "You may have built a workload for Amazon [Web Services] that you now want to run in [Microsoft] Azure, or replace with a database in Salesforce, or use an ERP system like SAP in the cloud. You need a consistent way to deploy this."

The problem, Bartoletti explained, is that businesses find deployment across these varied platforms difficult largely due to a lack of tools with cross-platform intelligence. "Traditionally, you'd use the tool that comes with the platform, perhaps vCenter Server for VMware vSphere environments or AWS OpsWorks to deploy on Amazon."
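As a concrete illustration of "a consistent way to deploy," the sketch below defines one provider-neutral interface with a thin adapter per cloud, which is roughly what cross-platform portfolio tools do under the hood. This is a minimal Python sketch; the class names and returned identifiers are assumptions, and a real adapter would call each provider's own SDK or template service (for example boto3/CloudFormation on AWS, or the Azure SDK/ARM templates).

    from abc import ABC, abstractmethod

    class CloudDeployer(ABC):
        """Provider-agnostic deployment contract used by the portfolio tooling."""

        @abstractmethod
        def deploy(self, workload: str, region: str) -> str:
            """Deploy a named workload and return a provider-specific identifier."""

    class AwsDeployer(CloudDeployer):
        def deploy(self, workload: str, region: str) -> str:
            # A real adapter would call the AWS SDK (e.g., boto3) or CloudFormation here.
            return f"aws:{region}:{workload}"

    class AzureDeployer(CloudDeployer):
        def deploy(self, workload: str, region: str) -> str:
            # A real adapter would call the Azure SDK or an ARM template deployment here.
            return f"azure:{region}:{workload}"

    def deploy_everywhere(workload, targets):
        """Run the same deployment step against every configured cloud, consistently."""
        return [deployer.deploy(workload, region) for deployer, region in targets]

    print(deploy_everywhere("billing-api", [(AwsDeployer(), "us-east-1"), (AzureDeployer(), "eastus")]))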

The tools landscape is still adapting to the reality of the multi-cloud deployment model. In his October 2015 survey of hybrid cloud management offerings, Bartoletti analyzed 36 vendors, several of which offer tools that manage multi-provider cloud platforms along with application development and delivery.

Switching between cloud environments

Consistency appears to be the keyword for existing in a multi-cloud universe. It matters because nothing stays still in the cloud for very long, including the apps and data you provide and the actual infrastructures, services and pricing of each provider.

"If you want to move applications, data and services among different providers -- and you will as part of a continuous deployment strategy -- it's important to have consistency and a level of efficiency for managing those disparate environments," said Mark Bowker, senior analyst at the Enterprise Strategy Group.

Technical reasons for periodically fine-tuning a deployment strategy include:

  • Availability of new services from one provider that constitutes a competitive or operational advantage
  • Difficulties with a provider
  • A need to mirror deployments across multiple geographies to bolster performance
  • A requirement to ensure that network communications paths avoid certain locales in order to protect data assets
  • A desire to bring analytics services to where the data resides

Non-technical reasons might include changes to a favorable pricing model and the ability of one cloud provider to more fully follow an enterprise's compliance and governance requirements.

Similarly, the degree to which a cloud provider can meet regulatory requirements can lead to redeployment of applications or data from one vendor to another, said Lief Morin, president of Key Information Systems.

"When a business reaches a certain size, it has more leverage to dictate security terms to the provider; otherwise, the provider will dictate them down to the organization. It's a matter of economics and scale," he said. "In a multi-cloud environment, it gets more complicated. More providers means more risk, so it's crucial to work with them to ensure a consistent, standardized policy."

A multi-cloud deployment model should be a quasi-permanent arrangement, because nearly everything changes eventually.

"What you're seeing today is movement toward an application-configured infrastructure environment," noted Roy Ritthaller, vice president of marketing for IT operations management at Hewlett Packard Enterprise (HPE). "At the end of the day, it's not how well your cloud is organized or how shiny and new it is; it's about how well do the application and workload perform together."

While matching the application and load makes sense, the elastic nature of the hybrid cloud environment offers opportunities for continual refinement of where they are deployed, according to David Langlais, HPE's senior director of cloud and automation.

Like a swinging pendulum, a certain amount of back and forth between private and public clouds is natural, he said. "What's important is to design applications in a way that can handle changing deployment models, all the way down to managing the data and connecting to it," he explained. "Decisions that are made initially on the development side have to be handled in production for the long term. It also means understanding the cost profile and recalculating on a regular basis."



Multi-Tenant Data Centers

by System Administrator - Monday, 20 October 2014, 1:52 PM
 

Four Advantages of Multi-Tenant Data Centers

Increasing demands on IT are forcing organizations to rethink their data center options. These demands can be difficult for IT to juggle for a variety of reasons. They represent tactical issues relating to IT staffing and budget, but they also represent real inflection points in the way enterprises strategically conduct business in the 21st century.

Please read the attached whitepaper.


Multicloud Strategy

by System Administrator - Tuesday, 7 February 2017, 6:19 PM
 

For enterprises, multicloud strategy remains a siloed approach

by Trevor Jones

Enterprises need a multicloud strategy to juggle AWS, Azure and Google Cloud Platform, but the long-held promise of portability remains more dream than reality.

Most enterprises utilize more than one of the hyperscale cloud providers, but "multicloud" remains a partitioned approach for corporate IT.

Amazon Web Services (AWS) continues to dominate the public cloud infrastructure market it essentially created a decade ago, but other platforms, especially Microsoft Azure, gained a foothold inside enterprises, too. As a result, companies must balance management of the disparate environments with questions of how deep to go on a single platform, all while the notion of connectivity of resources across clouds remains more theoretical than practical.

Similar to hybrid cloud before it, multicloud has an amorphous definition among IT pros as various stakeholders glom on to the latest buzzword to position themselves as relevant players. It has come to encompass everything from the use of multiple infrastructure as a service (IaaS) clouds, both public and private, to public IaaS alongside platform as a service (PaaS) and software as a service (SaaS).

The most common definition of a multicloud strategy, though, is the use of multiple public cloud IaaS providers. By this strictest definition, multicloud is already standard operations for most enterprises. Among AWS customers, 56% said they already use IaaS services from at least one other vendor, according to IDC.

"If you go into a large enterprise you're going to have different teams across the organization using different cloud platforms," said Jeff Cotten, president of Rackspace, based in Windcrest, Texas, which offers managed services for AWS and Azure. "It's not typically the same product teams leveraging both platforms. It's often different business units, with a different set of apps, likely different people and organizational constructs."

The use of multiple clouds is often foisted upon enterprises. Large corporations may opt for a second vendor when their preferred vendor has no presence in a particular market. Typically, however, platform proliferation is driven by lines-of-business that either procured services on their own or were brought under an IT umbrella through mergers and acquisitions.

"By the time these two get to know each other it's too late and they've gone too far down the path to make the change," said Deepak Mohan, research director at IDC.

An apples-to-apples comparison of market share among the three biggest hyperscale IaaS providers -- AWS, Azure and Google Cloud Platform (GCP) -- is difficult to make because each company breaks out its revenues differently. Microsoft is closing the gap, while GCP saw a significant bump in 2016 as IT shops begin testing the platform, according to 451 Research. But by virtually any metric, AWS continues to lead the market by a sizable margin that is unlikely to close any time soon.

Nevertheless, the competition between the big three is not always a fight for the same IT dollars, as each takes a slightly different tack to wooing customers. Amazon, though softening to hybrid cloud, continues its stand-alone, all-encompassing approach, while Microsoft has a greater percentage of enterprise accounts as it positions itself to accommodate existing customers' journey from on premises to the cloud. Google, meanwhile, is banking on its heritage around big data algorithms, containers and machine learning to get ahead of the next wave of cloud applications.

"[IT shops] are not evaluating the three hyperscale guys purely on if AWS is cheaper, or which has the better portal interface or the coolest features because there's parity there," said Melanie Posey, research vice president at 451. "It's not a typical horse race story."

The move away from commoditization has also shifted how enterprises prioritize portability. In the past, companies emphasized abstracting workloads to pit vendors against each other and get better deals, but over the past year they have come to prize speed, agility and flexibility over cost, said Kip Compton, vice president of Cisco's cloud platform and services organization.

"We're actually seeing CIOs and customers starting to view these clouds through the lens of, 'I'm going to put the workloads in the environment that's best for that workload' and 'I'm going to worry a lot less about portability and focus on velocity and speed and taking more advantage of a higher- level service that each of these clouds offer.'"

Silos within a multicloud strategy

Even as the hyperscale vendors attempt to differentiate, picking and choosing providers for specific needs typically creates complications and leads to a siloed approach, rather than integration across clouds.

"It's more trouble than it's worth if you're going to do it that way," Posey said. "What ends up happening is company XYZ is running some kind of database function on AWS, but they're running customer-facing websites on Azure and never the two shall meet."

The idea of multicloud grew conceptually out of the traditional server model where a company would pick between Hewlett Packard Enterprise (HPE) or IBM and build its applications on top, but as the cloud evolved it didn't follow that same path, Mohan said.

"The way clouds were evolving fundamentally differs and there wasn't consistency, so integrating was hard unless you did a substantial amount of investment to do integration," he said.

It is also important to understand what is meant by a "multicloud" strategy -- whether an architecture merely supports multiple clouds, or workloads actually run across multiple clouds.

"There's a difference between being built for the cloud or built to run in the cloud, and it's difficult from a software development perspective to have an architecture that's cloud agnostic and can run in either one," said Dave Colesante, COO of Alert Logic, a cloud security provider in Houston.

Alert Logic is migrating from a mix of managed colocation and AWS to being fully in the cloud as it shifts to a microservices model. The company offers support for AWS and Azure, but all of the data management ends up back in AWS.

The company plans to design components of its SaaS application to provide flexibility and to assuage Microsoft customers that want the back end in Azure, but that creates limitations of what can be done on AWS.

"It's a Catch-22," Colesante said. "If you want to leverage the features and functions that Amazon makes available for you, you probably end up in a mode where you're hooked into some of the things."

The two key issues around multicloud center on the control plane and the data plane, IDC's Mohan said. A consistent way to manage, provision and monitor resources across all operational aspects of infrastructure is a challenge that's only exacerbated when enterprises go deeper on one platform than another.

On the data side, the concept of data gravity often discourages moving workloads between clouds because it's free to move data in, but expensive to move data out. There are also limitations on the speed and ease by which they can be migrated.

Getting the best of both worlds

Companies with fewer than 1,000 employees typically adopt a multicloud strategy to save money and to take advantage of new services as they become available, but the rationale changes with larger enterprises, Mohan said.

"As you move up the spectrum, the big reason is to avoid lock-in," he said. "We attribute that to the nature of apps that are being run, and that they're probably more business critical IT app run by organizations internally."

The largest organizations, though, seem to get the best of both worlds.

"Especially if it's for experimentation with new initiatives, they have much higher tolerance for going deep onto one platform," Mohan said. "For bread-and-butter workloads, volatility and jumping around services is not as important."

At the same time, large organizations that prioritize reliability, predictability, uptime and resiliency tend to favor the lowest common denominators of cost savings and commodity products, he said.

Motorola Mobility takes an agnostic view of cloud and does in fact look to move workloads among platforms when appropriate. It has a mix of AWS, GCP and Azure, along with its own OpenStack environment, and the company has put the onus on standardized tooling across platforms.

"If I can build an application at the simplest level of control, I should be able to port that to any cloud environment," said Richard Rushing, chief information security officer at Motorola Mobility. "This is kind of where we see cloud going."

Ultimately, a multicloud strategy comes down to IT shops' philosophical view, whether it's another form of a hosted environment, or a place to use APIs and put databases in order to take advantage of higher-level services, but can lead to lock-in, he added.

"I don't think there's a right way or a wrong way," Rushing said. "It depends on what you feel comfortable with."

Despite that agnostic view, Motorola hasn't completely shied away from services that tether it to a certain provider.

"Sometimes the benefit of the service is greater than [the concern] about what you want to be tied down to," Rushing said. "It's one of those things where you have to look at it and say, is this going to wrap me around something that could benefit me, but what else is it going to do?"

Experimentation and internal conversations about those tradeoffs can be healthy because it opens an organization to a different way of doing things, but it also forces developers to justify a move that could potentially restrict the company going forward, he added.

 

Cross-cloud not yet reality

A wide spectrum of companies has flooded the market to fill these gaps created by multicloud, despite some high-profile failures including Dell Cloud Manager. Smaller companies, such as RightScale and Datapipe, compete with legacy vendors, such as HPE, IBM and Cisco, and even AWS loyalists like 2nd Watch look to expand their capabilities to other providers. Other companies, such as NetApp and Informatica, focus on data management across environments.

Of course, the ultimate dream for many IT shops is true portability across clouds, or even workloads that span multiple clouds. It's why organizations abstract their workloads to avoid lock-in. It's also what gave OpenStack so much hype at its inception in 2010, and helped generate excitement about containers when Docker first emerged in 2013. Some observers see that potential coming to fruition in the next year or two, but for now those examples remain the exception to the rule.


The hardest path to span workloads across clouds is through the infrastructure back end, Colesante said. For example, if an AWS customer using DynamoDB, Kinesis or Lambda wants to move to Azure, there are equivalents in Microsoft's cloud. However, the software doesn't transparently allow users to know the key-value store equivalent between the two, which means someone has to rewrite the application for every environment it sits on.
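One way teams work around that rewrite problem is to code against a neutral key-value interface and isolate provider-specific calls in adapters, as in the minimal Python sketch below. The DynamoDB adapter uses standard boto3 calls (put_item/get_item); the table name and the "pk" key attribute are assumptions, and the Azure adapter is deliberately left as a stub because the equivalent Azure service and SDK calls would have to be chosen and wrapped separately.

    import boto3
    from typing import Optional

    class KeyValueStore:
        """Neutral key-value contract the application codes against."""
        def put(self, key: str, value: dict) -> None:
            raise NotImplementedError
        def get(self, key: str) -> Optional[dict]:
            raise NotImplementedError

    class DynamoDBStore(KeyValueStore):
        def __init__(self, table_name: str):
            # Assumes a table whose partition key attribute is named "pk".
            self._table = boto3.resource("dynamodb").Table(table_name)
        def put(self, key: str, value: dict) -> None:
            self._table.put_item(Item={"pk": key, **value})
        def get(self, key: str) -> Optional[dict]:
            return self._table.get_item(Key={"pk": key}).get("Item")

    class AzureTableStore(KeyValueStore):
        """Stub: would wrap whichever Azure equivalent is chosen (e.g., Cosmos DB)."""
        def put(self, key: str, value: dict) -> None:
            raise NotImplementedError("wire up the chosen Azure SDK here")
        def get(self, key: str) -> Optional[dict]:
            raise NotImplementedError("wire up the chosen Azure SDK here")

    # The application only ever sees KeyValueStore, so switching providers means
    # swapping the adapter, not rewriting every call site.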

Another obstacle is latency and performance, particularly the need for certain pieces of applications to be adjacent. Cisco has seen a growing interest in this, Compton said, with some banks putting their database in a colocation facility near a major public cloud to resolve the problem.

Alert Logic's data science teams are exploring what Google has to offer, but Colesante pumped the brakes on the cross-cloud utopia, noting that most companies are still in the earliest stages of cloud adoption.

"What you'd eventually like to get to is data science analytics on platform A, your infrastructure and processing and storage on platform B and something else on platform C," he said, "but that's a number of years before that becomes a reality."

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.



Multifactor Authentication (MFA)

by System Administrator - Tuesday, 16 June 2015, 9:11 PM
 

Multifactor Authentication (MFA)

Posted by Margaret Rouse

Multifactor authentication is one of the most cost-effective mechanisms a business can deploy to protect digital assets and customer data.

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction. 

Multifactor authentication combines two or more independent credentials: what the user knows (password), what the user has (security token) and what the user is (biometric verification). The goal of MFA is to create a layered defense and make it more difficult for an unauthorized person to access a target such as a physical location, computing device, network or database. If one factor is compromised or broken, the attacker still has at least one more barrier to breach before successfully breaking into the target.

Typical MFA scenarios include:
  • Swiping a card and entering a PIN.
  • Logging into a website and being requested to enter an additional one-time password (OTP) that the website's authentication server sends to the requester's phone or email address.
  • Downloading a VPN client with a valid digital certificate and logging into the VPN before being granted access to a network.
  • Swiping a card, scanning a fingerprint and answering a security question.
  • Attaching a USB hardware token to a desktop that generates a one-time passcode and using the one-time passcode to log into a VPN client.

Background

One of the largest problems with traditional user ID and password login is the need to maintain a password database. Whether encrypted or not, if the database is captured it provides an attacker with a source to verify his guesses at speeds limited only by his hardware resources. Given enough time, a captured password database will fall.

As processing speeds of CPUs have increased, brute force attacks have become a real threat. Further developments like GPGPU password cracking and rainbow tables have provided similar advantages for attackers. GPGPU cracking, for example, can produce more than 500,000,000 passwords per second, even on lower-end gaming hardware. Depending on the particular software, rainbow tables can be used to crack 14-character alphanumeric passwords in about 160 seconds. Now purpose-built FPGA cards, like those used by security agencies, offer ten times that performance at a minuscule fraction of GPU power draw. A password database alone doesn't stand a chance against such methods when it is a real target of interest.

In the past, MFA systems typically relied upon two-factor authentication. Increasingly, vendors are using the label "multifactor" to describe any authentication scheme that requires more than one identity credential.

Authentication factors

An authentication factor is a category of credential used for identity verification. For MFA, each additional factor is intended to increase the assurance that an entity involved in some kind of communication or requesting access to some system is who, or what, they are declared to be. The three most common categories are often described as something you know (the knowledge factor), something you have (the possession factor) and something you are (the inherence factor).

Knowledge factors – information that a user must be able to provide in order to log in. User names or IDs, passwords, PINs and the answers to secret questions all fall under this category. See also: knowledge-based authentication (KBA)

Possession factors - anything a user must have in their possession in order to log in, such as a security token, a one-time password (OTP) token, a key fob, an employee ID card or a phone’s SIM card. For mobile authentication, a smartphone often provides the possession factor, in conjunction with an OTP app.

Inherence factors - any biological traits the user has that are confirmed for login. This category includes the scope of biometric authentication methods such as retina scans, iris scans, fingerprint scans, finger vein scans, facial recognition, voice recognition, hand geometry, even earlobe geometry.

Location factors – the user’s current location is often suggested as a fourth factor for authentication. Again, the ubiquity of smartphones can help ease the authentication burden here: users typically carry their phones, and most smartphones have a GPS device, enabling reasonably sure confirmation of the login location.

Time factors – Current time is also sometimes considered a fourth factor for authentication or alternatively a fifth factor. Verification of employee IDs against work schedules could prevent some kinds of user account hijacking attacks. A bank customer can't physically use their ATM card in America, for example, and then in Russia 15 minutes later. These kinds of logical locks could prevent many cases of online bank fraud.
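The "logical lock" described above boils down to an impossible-travel check: compare the time and location of two authentication events and flag a pair whose implied travel speed is implausible. The Python sketch below is only an illustration of that idea; the haversine distance, the 900 km/h threshold and the sample coordinates are assumptions, not a prescribed fraud rule.

    import math
    from datetime import datetime

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def impossible_travel(prev, curr, max_kmh=900.0):
        """Flag a login pair whose implied travel speed exceeds a plausible maximum."""
        hours = (curr["time"] - prev["time"]).total_seconds() / 3600.0
        km = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
        return hours > 0 and km / hours > max_kmh

    atm_us = {"time": datetime(2015, 6, 16, 12, 0), "lat": 40.71, "lon": -74.01}   # New York
    atm_ru = {"time": datetime(2015, 6, 16, 12, 15), "lat": 55.75, "lon": 37.62}   # Moscow
    print(impossible_travel(atm_us, atm_ru))  # True: ~7,500 km in 15 minutes is not plausible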

Multifactor authentication technologies:

Security tokens: Small hardware devices that the owner carries to authorize access to a network service. The device may be in the form of a smart card or may be embedded in an easily-carried object such as a key fob or USB drive. Hardware tokens provide the possession factor for multifactor authentication. Software-based tokens are becoming more common than hardware devices.

Soft tokens: Software-based security token applications that generate a single-use login PIN. Soft tokens are often used for multifactor mobile authentication, in which the device itself – such as a smartphone – provides the possession factor.
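To make the soft-token idea concrete, here is a minimal sketch of time-based one-time password (TOTP) generation along the lines of RFC 6238, using only the Python standard library. The base32 secret shown is a made-up demo value, and the 30-second step and 6 digits are common defaults; all of this is an illustrative assumption rather than any vendor's implementation.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
        """Generate a time-based one-time password (RFC 6238) from a base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // time_step           # number of elapsed time steps
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example (the secret below is a made-up demo value, not a real credential):
    print(totp("JBSWY3DPEHPK3PXP"))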

Mobile authentication: Variations include: SMS messages and phone calls sent to a user as an out-of-band method, smartphone OTP apps, SIM cards and smartcards with stored authentication data.

Biometric authentication methods such as retina scans, iris scans, fingerprint scans, finger vein scans, facial recognition, voice recognition, hand geometry and even earlobe geometry.

GPS-equipped smartphones can also provide location as an authentication factor using this onboard hardware.

Employee ID and customer cards, including magnetic stripe cards and smartcards.

The past, present and future of multifactor authentication

In the United States, interest in multifactor authentication has been driven by regulations such as the Federal Financial Institutions Examination Council (FFIEC) directive calling for multifactor authentication for Internet banking transactions.

MFA products include EMC RSA Authentication Manager and RSA SecurID, Symantec Validation and ID Protection Service, CA Strong Authentication, Vasco IDENTIKEY Server and DIGIPASS, SecureAuth IdP, Dell Defender, SafeNet Authentication Service and Okta Verify.

Next Steps

Learn more about the benefits of multifactor authentication in the enterprise and read this comparison of the latest multifactor authentication methods. When it comes to MFA technology, it's important to determine which deployment methods and second factors will best suit your organization. This photo story outlines your options.



N (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:47 PM
 

Navia, the first autonomous car on the market

by System Administrator - Saturday, 11 October 2014, 2:20 PM
 

Navia, the first autonomous car on the market

by Camila Alicia Ortega Hermida

The French company Induct presented the Navia at CES in Las Vegas, a car that can move around without the need for a driver.
The company, which specializes in developing intelligent technologies for the future of the automotive industry and in proposing innovative mobility solutions, unveiled the Navia, a 100% electric car that can carry eight passengers, reach a speed of 20 kilometers per hour and drive itself.

Read also: Ford presents a prototype vehicle powered by solar energy

The Navia does not require specialized infrastructure; that is, it does not need a dedicated lane or rails to operate. The vehicle is designed for large spaces such as big pedestrian urban centers, airports, theme parks, university campuses or hospital complexes, among others, that need a safe and friendly solution for transporting people.

 

Navia demonstration at CES 2014 in Las Vegas. Image: news.yahoo.com

The innovative vehicle features sensors and laser technology that map the area to avoid any obstacle in its path, such as pedestrians, sidewalks or bicycles.

The Navia works like an elevator: once inside, the passenger selects the desired stop on the touchscreen located inside the vehicle.

The futuristic car is already being tested on several university campuses in countries such as Switzerland, England and Singapore.

"Public transport is not enough on its own. We have to think about ways of getting around that are accessible and within reach. The goal is to improve mobility by thinking beyond conventional designs and the private vehicle," says Pierre Lefèvre, CEO of Induct and designer of the Navia, on the company's website.

Induct's Navia offers the opportunity to improve the dynamics of the city and people's quality of life, and to reduce the smog and pollution generated by fuel-powered vehicles.

Although the Navia promises to become a trend on university campuses and at airports, it is also a preview of what is coming for mobility and transport systems in the city of the future: driverless vehicles that will help us reduce accidents caused by human error, free us from the stress of being stuck in traffic, let us work while we travel from one place to another, or even stop worrying about a designated driver after deciding to have a glass of wine. After all, Isaac Asimov was not out of his mind when he wrote 'I, Robot' (3D).

Other autonomous cars

Although the Navia is the first vehicle of its kind to reach the market, it is not the only proposal out there, as several companies are already working on their own autonomous models. At CES 2013, Audi showed its Audi A7 model, which is equipped with sensors that allow it to drive and park itself. The company also announced it was the first to obtain a license to test its driverless vehicles on public roads in Nevada, United States, and promises to develop a complete autopilot system before 2020.

For that same year, Nissan says it will include a wide variety of fully autonomous cars in its portfolio for the market. It has already demonstrated its Nissan Leaf model, capable of driving itself and parking autonomously.

Mercedes-Benz revealed that its intelligent model, the S 500, completed a 97-kilometer journey driving itself from Mannheim to Pforzheim in Germany.

It is also forecast that in 2015 some cities in England will have driverless cars transporting people inside shopping centers, parks and other areas.

 

Google and Audi join forces to create an Android smart car

Google and Audi are working together to develop a vehicle with an infotainment system based on Android software.

The German automaker and Google will announce their plans at the Consumer Electronics Show (CES), to be held in Las Vegas from January 7 to 10, 2014.

Google, which has also announced similar alliances with other vehicle manufacturers such as Toyota and with technology companies such as chip maker Nvidia, is currently working on an Android system that can be integrated into vehicles, allowing drivers and passengers to access music and apps, browse the internet, use Google Maps, make video calls and use other features of the Android platform without needing a smartphone, using only the car.

See also: The electric car race: the 14 fastest models

However, the road to this goal is not an easy one, and it pays to move fast and show results quickly, because Google is not the only company working on an operating system that integrates smartphone software with cars: Intel has been working on connected vehicles in partnership with companies such as Nissan and Hyundai, and Apple, according to The Wall Street Journal, recently announced that it has been working on bringing iOS to vehicles from its partners BMW and Honda.

Without a doubt, the car will be one of the fastest-growing "connected devices" of 2014. The Audi and Google proposal seeks to eliminate the need to use smartphones or tablets while driving, a bet that will not only make it easier to access traffic reports, GPS routes, searches, calls, voice-commanded text messages and music selection, but will also help reduce the hundreds of thousands of traffic accidents that happen every day around the world because people drive while using mobile devices.

Read more: http://www.youngmarketing.co

Picture of System Administrator

Navigating DevOps

by System Administrator - Thursday, 7 May 2015, 4:40 PM
 

Navigating DevOps: Learn What It Is and Why It Matters To Your Business

 

In these eBooks you'll learn what DevOps is, where it came from, why it was created and who's adopting it, plus a whole lot more.

Please read the attached eBooks.

Picture of System Administrator

NFV

by System Administrator - Monday, 13 July 2015, 5:11 PM
 

 

Getting Started with NFV

Transform Your Business with the New IP

Introduction: The New IP, an Overview of Evolving Service Provider Requirements

This eBook will discuss network function virtualization (NFV) and its role in the New IP, a network vision that is innovation-centric, software-enabled, ecosystem-driven, open with a purpose, and can be achieved on an operator's own terms. Cloud-based options for network transformation are becoming mainstream, with NFV crossing the hype-cycle chasm into early commercial deployments in the next 12 to 18 months. NFV is being embraced by both large telcos and smaller hosting/cloud service providers alike, as a path to better optimize their networks and uncover fresh revenue paths.

NFV, true to its name, virtualizes various network functionalities like session border controllers, voice and messaging policy control, routing, security and firewalls, load balancing and VPN capability—among many others. By implementing network functions in software that can run on a range of industry-standard servers, operators can be freed from the need to install new proprietary equipment within the carrier network itself. A management layer then orchestrates the resources to support services as needed, in a fluid, dynamic, real-time environment. When fully implemented alongside software-defined networking (SDN) for programmability, this translates into a lower operational cost and the ability to provision services with the flexibility and elasticity that we've come to expect from the on-demand world of the cloud.
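As a rough illustration of the orchestration idea described above, and not of any vendor's management-and-orchestration implementation, the following minimal Python sketch models a management layer placing virtualized network functions onto a pool of industry-standard servers by available capacity. The Server, VNF and Orchestrator names and the placement rule are invented for this example.

    # Toy sketch of NFV orchestration: network functions run as software
    # instances that a management layer places onto commodity servers.
    # All names and the placement rule are hypothetical illustrations.

    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        cpu_total: int
        cpu_used: int = 0
        vnfs: list = field(default_factory=list)

        def free_cpu(self):
            return self.cpu_total - self.cpu_used

    @dataclass
    class VNF:
        kind: str        # e.g. "firewall", "load-balancer", "vpn-gateway"
        cpu_needed: int

    class Orchestrator:
        """Toy management layer: place each VNF on the server with most free CPU."""
        def __init__(self, servers):
            self.servers = servers

        def deploy(self, vnf):
            target = max(self.servers, key=lambda s: s.free_cpu())
            if target.free_cpu() < vnf.cpu_needed:
                raise RuntimeError(f"no capacity for {vnf.kind}")
            target.cpu_used += vnf.cpu_needed
            target.vnfs.append(vnf)
            return target.name

    if __name__ == "__main__":
        pool = [Server("srv-1", 16), Server("srv-2", 8)]
        orch = Orchestrator(pool)
        for fn in [VNF("firewall", 4), VNF("load-balancer", 2), VNF("vpn-gateway", 4)]:
            print(fn.kind, "->", orch.deploy(fn))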

Please read the attached eBook.

Picture of System Administrator

NoSQL Database: Benchmarking on Virtualized Public Cloud

by System Administrator - Sunday, 7 December 2014, 11:32 PM
 

Benchmarking a NoSQL Database on Virtualized Public Cloud

NoSQL databases are now commonly used to provide a scalable system to store, retrieve and analyze large amounts of data. Most NoSQL databases are designed to automatically partition data and workloads across multiple servers to enable easier, more cost-effective expansion of data stores than the single server/scale up approach of traditional relational databases. Public cloud infrastructure should provide an effective host platform for NoSQL databases given its horizontal scalability, on-demand capacity, configuration flexibility and metered billing; however, the performance of virtualized public cloud services can suffer relative to bare-metal offerings in I/O intensive use cases. Benchmark tests comparing latency and throughput of operating a high-performance in-memory (flash-optimized), key value store NoSQL database on popular virtualized public cloud services and an automated bare-metal platform show performance advantages of bare-metal over virtualized public cloud, further quantifying conclusions drawn in prior studies. Normalized comparisons that relate price to performance also suggest bare metal with SSDs is a more efficient choice for data-intensive applications.
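The whitepaper's own methodology is in the attachment; purely as a sketch of the kind of latency and throughput measurement such a benchmark performs, the following Python snippet times a mixed read/write workload against a plain dictionary standing in for a key-value client. The run_benchmark function, the 95% read ratio and the operation counts are invented for illustration; a real test would call the NoSQL client's get/put operations instead.

    # Minimal sketch of measuring latency and throughput for key-value operations.
    # A Python dict stands in for the NoSQL client; swap in a real client's
    # get/put calls to benchmark an actual store. Illustrative only.

    import time, random, statistics

    def run_benchmark(store_put, store_get, n_ops=100_000, read_ratio=0.95):
        latencies = []
        keys = [f"user{i}" for i in range(10_000)]
        start = time.perf_counter()
        for _ in range(n_ops):
            key = random.choice(keys)
            t0 = time.perf_counter()
            if random.random() < read_ratio:
                store_get(key)
            else:
                store_put(key, "x" * 100)
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        return {
            "throughput_ops_s": n_ops / elapsed,
            "avg_latency_ms": statistics.mean(latencies) * 1000,
            "p99_latency_ms": sorted(latencies)[int(0.99 * len(latencies))] * 1000,
        }

    if __name__ == "__main__":
        data = {}
        print(run_benchmark(data.__setitem__, lambda k: data.get(k)))

Comparing numbers like these across a virtualized instance and a bare-metal server, normalized by hourly price, yields the kind of price-performance comparison the paper describes.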

Please read the attached whitepaper.

Picture of System Administrator

O (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:48 PM
 
Picture of System Administrator

Organización (CALIDAD)

by System Administrator - Thursday, 9 May 2013, 12:53 AM
 

CONCEPTS RELATING TO THE ORGANIZATION

Organization: A group of people and facilities with an arrangement of responsibilities, authorities and relationships.

Organizational structure: The arrangement of responsibilities, authorities and relationships among people.

Interested party: A person or group having an interest in the performance or success of an organization.

Supplier: An organization or person that provides a product.

Customer: An organization or person that receives a product.

Infrastructure: (of an organization) The system of facilities, equipment and services needed for the operation of an organization.

Work environment: The set of conditions under which work is performed.

Picture of System Administrator

Outsourcing

by System Administrator - Monday, 13 February 2017, 11:29 PM
 

Outsourcing (tercerización)

Posted by: Margaret Rouse

In Spanish, outsourcing may be rendered as tercerización, externalización or subcontratación.

Outsourcing is a practice in which an individual or company performs tasks, provides services or manufactures products for another company, functions that could be done, or normally are done, in house. Companies use outsourcing to save costs.

Outsourcing is a common trend in the information technology (IT) industry and in other industries. Companies outsource services that are considered intrinsic to running a business and serving internal and external customers. Products, such as computer parts, and services, such as payroll and accounting, can be outsourced. In some cases, a company's entire information management is outsourced, including business planning and analysis as well as the installation, management and maintenance of the network and workstations.

Reasons for outsourcing

In addition to savings on overhead and labor, the reasons companies turn to outsourcing include improved efficiency, higher productivity and the opportunity to focus on core products and business functions. More companies are also looking to outsourcing providers as centers of innovation. According to Deloitte's 2016 outsourcing survey, 35% of respondents said they are focused on measuring the value of innovation in their outsourcing partnerships.

Insourcing vs. outsourcing

The term outsourcing often refers to offshore outsourcing, the practice of exporting work to companies in less developed countries where labor costs tend to be lower. However, offshore outsourcing is not always the most effective way for companies to save costs. Insourcing, or assigning tasks to people or departments in house, is sometimes more cost-effective than hiring outside staff or companies. The term insourcing is also sometimes used to refer to employing the in-country subsidiaries of foreign global corporations, a practice also known as onshore or domestic outsourcing.

Outsourcing can range from a large contract, in which a company such as IBM manages IT services for a company such as Xerox, to the practice of hiring freelancers and temporary office workers on an individual basis.

Link: http://searchdatacenter.techtarget.com

 

 

Picture of System Administrator

P (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:49 PM
 
Picture of System Administrator

PICK chart (Possible, Implement, Challenge and Kill chart)

by System Administrator - Wednesday, 14 January 2015, 10:46 PM
 

PICK chart (Possible, Implement, Challenge and Kill chart)

Posted by Margaret Rouse

A PICK chart (Possible, Implement, Challenge and Kill chart) is a visual tool for organizing ideas. PICK charts are often used after brainstorming sessions to help an individual or group identify which ideas can be implemented easily and have a high payoff.

A PICK chart is set up as a large grid, two squares high and two squares across. The PICK acronym comes from the labels for each quadrant of the grid: 

  • Possible - ideas that are easy to implement but have a low payoff.
  • Implement - ideas that are easy to implement and have a high payoff.
  • Challenge - ideas that are hard to implement and whose payoff is difficult to determine.
  • Kill - ideas that are hard to implement and have a low payoff.

Once each idea from the brainstorming session has been placed on the most appropriate square, it becomes easier to identify which ideas should be acted on first. In a group setting, PICK charts are useful for focusing discussion and achieving consensus. 

Although PICK charts are often associated with the Six Sigma management philosophy, they were originally developed by Lockheed Martin for lean production. Today, PICK charts can be found in many disciplines outside manufacturing, including education, marketing and agile software development.
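As a small illustration of the quadrant logic (not part of the original definition), the Python sketch below sorts brainstormed ideas into the four PICK quadrants from two ratings, ease of implementation and payoff. The 1-10 scales, the threshold and the example ideas are invented, and hard-to-implement, high-payoff ideas are mapped to Challenge for the purposes of the sketch.

    # Toy PICK-chart sketch: classify brainstormed ideas into the four quadrants
    # from two ratings, ease of implementation and payoff (1-10 scales here).
    # Thresholds and example ideas are invented for illustration.

    def pick_quadrant(ease, payoff, threshold=5):
        easy, high = ease >= threshold, payoff >= threshold
        if easy and high:
            return "Implement"
        if easy and not high:
            return "Possible"
        if not easy and high:
            return "Challenge"
        return "Kill"

    if __name__ == "__main__":
        ideas = {
            "automate weekly report": (8, 9),
            "rewrite billing system": (2, 8),
            "new coffee machine":     (9, 3),
            "migrate legacy fax app": (2, 2),
        }
        for idea, (ease, payoff) in ideas.items():
            print(f"{pick_quadrant(ease, payoff):9s} <- {idea}")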


Link: http://searchcio.techtarget.com

Picture of System Administrator

Plataforma de colaboración

by System Administrator - Tuesday, 1 August 2017, 3:37 PM
 

 

Collaboration platform

Vendors are taking different approaches to building collaboration platforms. Some are adding a "social layer" to legacy enterprise applications, while others build collaboration tools into new products. All successful enterprise collaboration platforms share certain attributes: they need to be easily accessible and easy to use, they need to be built for integration, and they must include a common set of functions that support team collaboration, issue tracking and messaging. Many collaboration platforms are designed to look and feel like Facebook or other sites that employees are already used to using in their personal lives.

Link: http://searchdatacenter.techtarget.com


 

Picture of System Administrator

Pomodoro Technique

by System Administrator - Friday, 26 June 2015, 6:35 PM
 

Pomodoro Technique

 

Part of the Project management glossary | Posted by: Margaret Rouse

The pomodoro technique is a time management method based on 25-minute stretches of focused work separated by 3-to-5 minute breaks, with a longer 15-to-30 minute break after every four work periods.

Developer and entrepreneur Francesco Cirillo created the pomodoro technique in the late 1980s, when he began to use his tomato-shaped kitchen timer to organize his work schedule. Each working interval is called a pomodoro, the Italian word for tomato (plural: pomodori). 

The pomodoro technique essentially trains people to focus on tasks better by limiting the length of time they attempt to maintain that focus and by ensuring restorative breaks from the effort. The method is designed to overcome the tendencies to procrastinate and to multitask -- both of which have been found to impair productivity -- and to help users develop more efficient work habits. Effective time management allows people to get more done in less time, while also fostering a sense of accomplishment and reducing the potential for burnout.

 

Steps for the pomodoro technique:

  1. Decide on the task for the work segment.
  2. Eliminate the potential for distraction. Close email and chat programs and shut down social media and other sites that are not related to the task.
  3. Set the timer to 25 minutes.
  4. Work on the task until the timer rings; record completion of the pomodoro.
  5. Take a three-to-five minute break.
  6. When four pomodori have been completed, take a 15-to-30 minute break.

Various implementations of the pomodoro technique use different time intervals for task and break segments. For the breaks, it is strongly advised that the worker select an activity that contrasts with the task. Someone working at a computer, for example, should step away from the desk and do some kind of physical activity.
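The technique itself needs nothing more than a kitchen timer, but as a rough sketch of the cycle described in the steps above, the following Python snippet automates the work/break loop. The pomodoro function and its default interval lengths are illustrative, and the parameters can be changed to match the variations mentioned.

    # Minimal pomodoro timer sketch: work intervals, short breaks, and a long
    # break after every four pomodori. Interval lengths are parameters,
    # reflecting the variations mentioned above. Illustrative only.

    import time

    def pomodoro(work_min=25, short_break_min=5, long_break_min=30,
                 pomodori_per_cycle=4, cycles=1):
        completed = 0
        for _ in range(cycles * pomodori_per_cycle):
            print(f"Work for {work_min} minutes...")
            time.sleep(work_min * 60)          # focused work period
            completed += 1
            print(f"Pomodoro {completed} complete.")
            if completed % pomodori_per_cycle == 0:
                print(f"Long break: {long_break_min} minutes.")
                time.sleep(long_break_min * 60)
            else:
                print(f"Short break: {short_break_min} minutes.")
                time.sleep(short_break_min * 60)

    if __name__ == "__main__":
        # Use tiny intervals to try the flow without waiting 25 real minutes.
        pomodoro(work_min=0.05, short_break_min=0.02, long_break_min=0.05)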

Greg Head explains how he uses the pomodoro technique to improve his productivity:

Link: http://whatis.techtarget.com

 
Picture of System Administrator

Prescriptive Analytics

by System Administrator - Wednesday, 8 April 2015, 8:23 PM
 

Prescriptive Analytics

 

Posted by Margaret Rouse

Prescriptive analytics is the area of business analytics (BA) dedicated to finding the best course of action for a given situation. It belongs to a portfolio of analytic capabilities that also includes descriptive and predictive analytics.

Prescriptive analytics is related to both descriptive and predictive analytics. While descriptive analytics aims to provide insight into what has happened and predictive analytics helps model and forecast what might happen, prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters.

Prescriptive analytics can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.
 
 

A process-intensive task, the prescriptive approach analyzes potential decisions, the interactions between decisions, the influences that bear upon these decisions and the bearing all of the above has on an outcome, to ultimately prescribe an optimal course of action in real time. Prescriptive analytics is not foolproof, however; it is subject to the same distortions that can upend descriptive and predictive analytics, including data limitations and unaccounted-for external forces. The effectiveness of prescriptive analytics also depends on how well the decision model captures the impact of the decisions being analyzed.

Advancements in the speed of computing and the development of complex mathematical algorithms applied to the data sets have made prescriptive analysis possible. Specific techniques used in prescriptive analytics include optimization, simulation, game theory and decision-analysis methods.
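As a toy example of the decision-analysis flavor of these techniques (not any vendor's implementation; the scenario probabilities and payoffs are invented), the sketch below combines predicted scenarios with the payoff of each candidate action and prescribes the action with the highest expected payoff.

    # Toy decision-analysis sketch: given predicted scenarios with probabilities
    # (the "predictive" input) and a payoff for each action under each scenario,
    # prescribe the action with the highest expected payoff. Numbers are invented.

    def prescribe(actions, scenarios, payoff):
        """actions: list of names; scenarios: {name: probability};
        payoff: dict mapping (action, scenario) -> value."""
        expected = {
            a: sum(p * payoff[(a, s)] for s, p in scenarios.items())
            for a in actions
        }
        best = max(expected, key=expected.get)
        return best, expected

    if __name__ == "__main__":
        scenarios = {"demand_up": 0.6, "demand_flat": 0.3, "demand_down": 0.1}
        actions = ["increase_inventory", "hold_inventory", "discount_stock"]
        payoff = {
            ("increase_inventory", "demand_up"): 120,
            ("increase_inventory", "demand_flat"): 40,
            ("increase_inventory", "demand_down"): -60,
            ("hold_inventory", "demand_up"): 70,
            ("hold_inventory", "demand_flat"): 50,
            ("hold_inventory", "demand_down"): 10,
            ("discount_stock", "demand_up"): 30,
            ("discount_stock", "demand_flat"): 35,
            ("discount_stock", "demand_down"): 40,
        }
        best, expected = prescribe(actions, scenarios, payoff)
        print("Recommended action:", best, expected)

With these assumed numbers the sketch recommends increase_inventory, because its expected payoff (78) is the highest of the three options.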

A company called Ayata holds the trademark for the (capitalized) term Prescriptive Analytics. Ayata is the Sanskrit word for future.

This brief IBM video describes the progression from descriptive analytics, through predictive analytics to prescriptive analytics:

See Ayata's brief video presentation on the potential of prescriptive analytics:


Link: http://searchcio.techtarget.com/definition/Prescriptive-analytics

 

Picture of System Administrator

Preventing Data Loss Through Privileged Access Channels

by System Administrator - Wednesday, 7 October 2015, 4:14 PM
 

Preventing Data Loss Through Privileged Access Channels

A basic tenet of security is to apply the strongest safeguards to the highest value assets. Systems and IT administration provides privileged users with access to very high value assets and hosts across the enterprise. Their access rights include, to name a few, the ability to create new virtual machines, change operating system configurations, modify applications and databases, install new devices and, above all, directly access the organization's protected data (financial or health records, personnel files and intellectual property, for example). If misused, the privileges they are granted can have devastating consequences.

Simultaneously the extent of privileged access is expanding to entities outside the enterprise through outsourcing arrangements, business partnerships, supply chain integration, and cloud services. The growing importance and prevalence of third-party access is bringing matters of trust, auditability and data loss prevention to the forefront of security compliance and risk management.

Because compliance standards and mandates require privileged access to be secured by encryption, that access is by default opaque to many standard layered defenses, such as next-generation firewalls and data loss prevention systems. The resulting loss of visibility and control creates a heightened risk of undetected data loss and system damage, as well as an attractive attack vector for malicious activity such as stealing information and disrupting operations while hiding all traces from system audit logs. Auditors test privileged access controls thoroughly because they are key controls for organizations such as those in the financial and health industries. Lack of visibility into administrators' activities will lead to audit exceptions.

This white paper focuses on how organizations facing these issues of privileged access can effectively balance the challenges of cost, risk and compliance. It describes how privileged access governance can be made minimally invasive, scale to enterprise requirements, and most importantly, prevent costly losses and potential audit exceptions.

Please read the attached whitepaper.

Picture of System Administrator

Principiantes de Linux

by System Administrator - Wednesday, 12 November 2014, 5:18 PM
 

Five urban legends that scare Linux beginners

by Sander van Vugt

As more Unix systems are replaced by Linux, Unix administrators have to adjust their skills. While the two systems may look similar, there are some important differences.

Unlike a typical Unix server operating system (OS), Linux is fundamentally open. In some cases its developer-friendly features make Linux look like a disorganized operating system, but this is offset by the new opportunities that open source offers.

For the Linux beginner, here are five myths you will hear about this server operating system, along with the reasons they simply are not true.

Myth 1: It is just like any other Unix server

True, Linux is related to Unix, but in the same way you are related to your grandparents: same blood, but a lot of different traits. Linux administrators coming from Unix environments tend to feel overwhelmed by the richness and pace of development of tools and features, and wonder: "Why don't we just have one decent tool to do this?"

The answer is simple: because it is open source.

On your favorite Unix server platform, the OS vendor decides what goes in and selects the best tool. Linux is an operating system of compromise, so it offers many tools to perform the same task. As a new Linux administrator, you have to pick your tools from a large pool of available resources.

Myth 2: All tools are structured the same way

Because of the different origins of Linux tools and utilities, many of them are not organized the same way. For example, the ssh command takes a different approach than the scp command for the same step. The ssh command lets a Linux administrator establish a secure remote connection, while scp copies a file over a secure channel. To specify which server port to use, ssh relies on -p while scp goes with -P.

This tends to annoy Linux beginners: why can't they use the same option for the same functionality? The answer is simple: because of the different origins of the two commands. The longer you work with Linux, the more examples you will find.

Myth 3: A feature provided by the kernel always works

The Linux kernel is continuously under development. New features come out quickly, and it is up to the community to develop a tool or framework to work with them. In many cases that happens; in some cases it does not. In Linux you can find a long list of attributes (see man chattr, a command for changing file attributes), many of which were never implemented.

Attributes are just one example of features that never made it into the system administrator's daily reality. It is one of the issues that come with an open source operating system, and a Linux newcomer has to deal with it and learn to live with it.

Myth 4: It is not as powerful as Unix

I have never met a Unix administrator who did not think Linux was as powerful as Unix, and yet the myth persists. In fact, Linux is more powerful than most Unix platforms these days. Eighty percent of the world's supercomputers run Linux as their default operating system, which shows just how capable the Linux kernel is.

The Linux kernel is so powerful because it is so accessible. For example, the /proc/sys file system gives direct access to hundreds of tuning parameters. If that is not good enough, it is always possible to recompile the kernel to include new features.

Myth 5: It is a one-to-one replacement for Unix

A common mistake Unix administrators make when migrating to Linux is to treat the open source server as a drop-in replacement for their Unix server, and stop there.

Many Unix servers host mission-critical applications, from homegrown software to SAP and Oracle. Those applications run on Linux too, but the operating system is suited to much more.

Most internet servers, and most of the servers hosting the cloud, run Linux. A whole new world of possibilities opens up once you embrace it.

About the author: Sander van Vugt is an independent trainer and consultant based in the Netherlands. He is an expert in Linux high availability, virtualization and performance, and has written numerous books on Linux topics, including Beginning the Linux Command Line, Beginning Ubuntu LTS Server Administration and Pro Ubuntu Server Administration.

Picture of System Administrator

Procesos y Productos (CALIDAD)

by System Administrator - Thursday, 9 May 2013, 12:56 AM
 

CONCEPTS RELATING TO PROCESSES AND PRODUCTS

Procedure: A specified way to carry out an activity or a process.

Process: A set of interrelated or interacting activities that transform inputs into outputs.

Product: The result of a process.

Design and development: A set of processes that transform requirements into specified characteristics or into the specification of a product, process or system.

Project: A unique process consisting of a set of coordinated and controlled activities with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including constraints of time, cost and resources.

Picture of System Administrator

Process Intelligence (business process intelligence)

by System Administrator - Wednesday, 3 February 2016, 5:51 PM
 

Process Intelligence (business process intelligence) 

Posted by Margaret Rouse

Process intelligence is data that has been systematically collected to analyze the individual steps within a business process or operational workflow.

Process intelligence can help an organization to identify bottlenecks and improve operational efficiency. The goal of process intelligence is to provide an organization with accurate information about what work items exist, who does the work, how long it takes work to be completed, what the average wait time is and where the bottlenecks are. 

Process intelligence software can help an organization improve process management by monitoring and analyzing processes on a historic or real-time basis. Process intelligence software is especially useful for analyzing and managing nonlinear processes that have a lot of dependencies.
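To make the bottleneck idea concrete, here is a minimal Python sketch, with an invented event log and step names rather than any product's data model, that computes the average duration of each step in a workflow and flags the slowest one as the likely bottleneck.

    # Minimal process-intelligence sketch: from a simple event log of
    # (case_id, step, start, end) records, compute average duration per step
    # and flag the slowest step as the likely bottleneck. Data is invented.

    from collections import defaultdict
    from datetime import datetime

    EVENTS = [
        ("order-1", "receive", "2016-02-01 09:00", "2016-02-01 09:10"),
        ("order-1", "approve", "2016-02-01 09:10", "2016-02-01 13:40"),
        ("order-1", "ship",    "2016-02-01 13:40", "2016-02-01 14:05"),
        ("order-2", "receive", "2016-02-01 10:00", "2016-02-01 10:05"),
        ("order-2", "approve", "2016-02-01 10:05", "2016-02-01 16:20"),
        ("order-2", "ship",    "2016-02-01 16:20", "2016-02-01 16:50"),
    ]

    def average_step_durations(events):
        totals, counts = defaultdict(float), defaultdict(int)
        fmt = "%Y-%m-%d %H:%M"
        for _case, step, start, end in events:
            minutes = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
            totals[step] += minutes
            counts[step] += 1
        return {step: totals[step] / counts[step] for step in totals}

    if __name__ == "__main__":
        averages = average_step_durations(EVENTS)
        for step, minutes in sorted(averages.items(), key=lambda kv: -kv[1]):
            print(f"{step}: {minutes:.1f} min on average")
        print("Likely bottleneck:", max(averages, key=averages.get))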

 


Link: http://searchbusinessanalytics.techtarget.com

Picture of System Administrator

Project Planning Templates for the CIO

by System Administrator - Saturday, 27 June 2015, 5:15 PM
 

Project Planning Templates for the CIO

Sample project scope and technology roadmap documents

Successful IT project planning requires today's technology leaders to document project scope and define a technology roadmap. This e-guide compiles a complimentary collection of downloadable templates, hand-picked by the editors of SearchCIO.com, that serve as a model for success and help you fine-tune your short- and long-term IT objectives and stay on track throughout all stages of your initiative.

Make the most of project scope statements.

Please read the attached e-guide.


Page: (Previous)   1  2  3  4  5  6  7  8  9  10  ...  63  (Next)
  ALL