Glosario KW | KW Glossary


Ontology Design | Diseño de Ontologías



3 Reasons Why Marketing Emails Don't Work (and How to Fix Them)

by System Administrator - Wednesday, 23 August 2017, 6:30 PM
 

3 Reasons Why Marketing Emails Don't Work (and How to Fix Them)

by Anupam Rajey | Automatically translated with Google

In 2015, around 205 billion marketing emails were sent per day. And you will be surprised to learn that around 246 billion emails are expected to be sent by the end of 2019. So what? Email marketing is not dead, my friend. Despite the constant buzz around social media, most marketers still consider email marketing their preferred marketing tool.

 

"De todos los canales que he probado como un vendedor, el correo electrónico siempre supera a la mayoría de ellos. No sólo tiene una alta tasa de conversión, sino que a medida que construye su lista puede continuamente monetizar lanzando múltiples productos, "

Dice Neil Patel , un vendedor de Internet de renombre.Sin embargo, no todas las empresas son capaces de aprovechar el verdadero poder de marketing por correo electrónico. En el post de hoy, voy a hablar de las 3 razones principales de correo electrónico de marketing chupar. Después de leer este post, sabrá cómo evitarlo. Sin más preámbulos, vayamos directamente a los puntos:

 

1- Not optimizing for mobile phones

53% of emails are opened on mobile devices, and 74% of smartphone users check their email on their phones. If your emails are not optimized for mobile, it will seriously hurt your open rate and email ROI, because many people will simply not interact with them. But how can you optimize emails for mobile? Here is a quick cheat sheet to make sure the emails you send look great on mobile devices:
  • Go easy on the images - use simple, lightweight images 
  • Use a responsive template 
  • Break the text into smaller paragraphs 
  • Write a short subject line
When you write copy for your marketing emails, make sure it is simple, clear and concise. Nobody is going to read long, wordy copy on a mobile phone.
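To make the cheat sheet concrete, here is a minimal sketch in Python (the addresses and SMTP host are hypothetical, not from the article) of assembling a message that follows these rules: a short subject line, small paragraphs, and a plain-text alternative alongside lightweight HTML.

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart message: short subject, small paragraphs, no heavy images.
msg = MIMEMultipart("alternative")
msg["Subject"] = "Your 20% discount ends Friday"   # short, concrete subject
msg["From"] = "news@example.com"                   # hypothetical sender
msg["To"] = "subscriber@example.com"               # hypothetical recipient

text = ("Hi,\n\n"
        "Your subscriber discount expires this Friday.\n\n"
        "Claim it here: https://example.com/offer\n")
html = ("<html><body>"
        "<p>Hi,</p>"
        "<p>Your subscriber discount expires this Friday.</p>"
        "<p><a href='https://example.com/offer'>Claim it here</a></p>"
        "</body></html>")
msg.attach(MIMEText(text, "plain"))
msg.attach(MIMEText(html, "html"))

# Sending is left commented out; the SMTP host is a placeholder.
# with smtplib.SMTP("smtp.example.com") as server:
#     server.send_message(msg)
```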

 

2- Not writing an irresistible subject line

The subject line is the most important part of a marketing email. No matter how great your email copy is, nobody will open your emails if the subject line is not irresistible. In fact, 35% of email recipients consider the subject line the deciding factor in opening an email. That is why it is imperative that you write compelling subject lines for your marketing emails. But how can you do it? Here is a list of proven tactics for creating compelling subject lines:
  • Keep the subject line short and simple 
  • Reveal what is inside the email 
  • Start your subject line with action-oriented verbs 
  • Use numbers in the subject line 
  • Create a sense of urgency in the subject line
Remember that it is the emotional element of the subject line that makes people click on it. So never forget to make an emotional appeal in your subject lines.

 

3- Not giving subscribers any offer

Whether you call it an ethical bribe or a bonus, an offer in your emails is a surefire way to increase their ROI. Everyone is busy these days. If people are reading your emails, they are spending time on them and expect to be paid for that time, as simple as that. However, that does not mean you should always offer big discount coupons in your marketing emails. The point is, there must be some exchange of value. Even a small eBook can bring you great results. 
As the legendary Harvard Business School marketing professor Theodore Levitt says,
"People don't want to buy a quarter-inch drill. They want a quarter-inch hole!"
Your subscribers are always wondering what you will offer them when they open your marketing emails. 
The next time you run an email marketing campaign, follow these practical email marketing tips and you will undoubtedly increase your ROI.

 

Conclusion:

With inboxes flooded with marketing emails, people simply do not open them very often anymore. Worse, they are regularly moving batches of emails to their spam folders. If you want to make sure your emails actually get read, you have to act smart. Never forget to optimize your emails for mobile, write an irresistible subject line and make a valuable offer. This will increase your open rate and boost your ROI.

Link: http://customerthink.com

4-D printing (four-dimensional printing)

by System Administrator - Monday, 16 February 2015, 12:32 AM
 

4-D printing (four-dimensional printing)

Posted by Margaret Rouse

4-D printing is additive manufacturing that prints objects capable of transformation and, in some cases, self-assembly.

When a complex item is created using 3-D printing, the item is printed in parts that must be assembled. The purpose of 4-D printing is to reduce the total time needed to create a finished product by printing with materials that are capable of changing form or self-assembling with minimal human interaction. The "D" in 4-D printing stands for time -- more specifically, time saved.

The materials in a 4-D printed item are chosen to respond to a certain stimulus such as the transfer of kinetic energy from one medium to another. In such an example, the particles in printed material would start to bond together and change form when heat is introduced. Another approach to 4-D printing involves programming physical and biological materials to change shape and change properties. 4-D printing is closely associated with nanotechnology, a branch of engineering that is also called molecular manufacturing.

While 4-D printing is still very much in the experimental phase, it has the potential to eventually save a lot more than just time by opening the door for new kinds of assemble-at-home products. Because unassembled items created with 4-D printing would be flatter and easier to ship in large quantities, they would also save on transportation costs. The recipient would simply introduce the needed stimulus and assemble the end product without requiring directions.

This video from FW: Thinking explains more about the possibilities of 4-D printing.

Link: http://searchmanufacturingerp.techtarget.com


45 Database Performance Tips for Developers

by System Administrator - Thursday, 15 January 2015, 10:33 PM
 

45 Database Performance Tips for Developers

  • Speed up your development with ORMs and discover the 'gotchas' involved
  • Learn the best practices for efficiently indexing tables
  • Get expert advice on writing well-formed SQL queries

Please read the attached eBook.
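As a small illustration of the indexing tip above (this uses Python's built-in sqlite3 module and made-up data, not anything from the attached eBook), the following sketch times the same lookup before and after adding an index:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 0.5) for i in range(100_000)])

def timed_lookup():
    # Aggregate all orders for one customer and report how long it took.
    start = time.perf_counter()
    conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?", (42,)).fetchone()
    return time.perf_counter() - start

before = timed_lookup()                                            # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = timed_lookup()                                             # index lookup
print(f"without index: {before:.6f}s, with index: {after:.6f}s")
```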


5 Keys to Enterprise Knowledge Management (KM)

by System Administrator - Friday, 5 February 2016, 12:29 AM
 

5 Keys to Enterprise Knowledge Management

KM

Here are five keys to successfully delivering on an agile enterprise knowledge management strategy for customer service and support.

By Duke Chung | Expert Opinion 

Three little words have turned enterprise knowledge management, and specifically information and knowledge related to customer service, on its head in recent years: "Just Google It." A recent study from Fleishman-Hillard found that 89 percent of consumers go directly to business websites or turn to Google, Bing, or another search engine to find information on products, services or businesses before any human to human interaction takes place, if it ever does. 

With the rise of search, brand transparency and agile channeling, corporate knowledge can no longer be contained within an organization's pre-determined boundaries; it must become an infinite, available and ever-evolving resource. Here are five keys to successfully delivering on an enterprise knowledge management strategy for customer service and support. 

1. Internal Collaboration

While many organizations have doubled their content marketing efforts and budgets over the past few years, customer service content has not always received equal attention. Perhaps it's simply a difference in terminology. While "content marketing" turns heads, "enterprise knowledge management" is typically met with an indifferent stare, so a brand's FAQ pages and knowledgebase sit idle for months without being updated while blogs and social media pages continuously produce and promote new content.

It's easy to make the case, however, that customer service content deserves an organization's equal attention, because improving the quality and frequency of service and support content produces many of the same results as mainstream content marketing including better SEO positioning, positive brand reputation, and increased consumer engagement. 

The best customer service content starts not only with a quality KB solution that includes workflow, versioning features, and the ability to see what content's most searched for and used, but also with an organized system of internal collaboration. Optimally, while many SMEs and CSRs within an organization should collaborate to suggest and produce raw customer service and retention content, there should be a focused KB team that works together to refine this content, and then one or a few knowledge managers (depending on the size of the organization) to ensure all content published for consumption is in a consistent voice and format, is well organized, and contains relevant keywords for search purposes.

2. External Collaboration

Often, your best content editors are your best customers. Make features available in your public-facing knowledgebase so that customers can not only rate content, but also add comments about what they found helpful or not, and what may be missing. 

Communities and a brand's social media properties can also be used to improve customer support content. With social media and community monitoring tools, knowledge managers can bring in informative customer-to-customer Q & A and add frequently asked questions with their correct answers to the knowledgebase.

3. Agile Distribution

Even the highest quality content has no real value unless your customers are able to find it. According to a recent Ovum study of more than 8,000 consumers, 74 percent now use at least three channels when interacting with an enterprise for customer related issues, which is why your customer-facing content must not only be accessible via your corporate website, but must also consistently convey across your social media properties, mobile apps, and other customer service channels of convenience.

The other must is search indexing. Within the past two years, the number of Google searches on mobile devices has grown by 500 percent, and by 2016 mobile searches will overtake PC searches for local search, according to Google's Our Mobile Planet smartphone research. To remain competitive, an organization's content must be everywhere, and ideally, in multiple languages.

On the flipside of the content coin, making information and answers available to your CSRs is equally important. Having customers come into a support conversation armed with more product or service knowledge than the service rep is unfortunate, but through the power of search, it's happening more and more. A recent Aberdeen multichannel customer service trends report notes that 57 percent of best-in-class customer service providers give their CSRs access to the company's resolution knowledgebase versus 41 percent of all others. For both self-service and full-service customer care, agile channel distribution of content to both customers and CSRs is key.

4. Channel-specific Formatting

If your organization's knowledgebase content is made up of PDF documents or multiple pages of text devoted to one subject, you've just lost your connection with the growing number of mobile customers. Simplify or repurpose content to make it mobile-friendly, chat-friendly, email-friendly, and even social-friendly, and if you must present a great deal of content, use bolding to highlight the text that will be most useful to the customer trying to find quick and correct answers.

5. Context Development

While today's challenge is mastering the creation, organization, and distribution of knowledgebase content, tomorrow's is to incorporate context to help customers find the information and answers that are most relevant. Mastering context in addition to content ups the ante on self-service success and customer satisfaction.

So call it what you want—"knowledge" or "content" management—whatever you need to get your customer service content the attention it deserves. It's time to brush the dust off that corporate knowledgebase and realize its true potential. Content is king, every facet of it. 


5G

by System Administrator - Wednesday, 22 April 2015, 3:06 PM
 

Understanding 5G

Making up new definitions in the telecoms market is generally frowned upon, and in many cases the technical definitions are overtaken by marketing and publicity definitions: ITU defined 4G to be IMT-Advanced (100 Mbps when the user is moving, 1 Gbps when stationary), but the market has decided otherwise. LTE, and even LTE-Advanced, does not yet meet these requirements; on the other hand, some operators have marketed HSPA+ as a "4G" technology, or Long Term HSPA Evolution as an LTE technology, both for marketing and competitive reasons.

A new mobile network generation usually refers to a completely new architecture, which has traditionally been identified by the radio access: Analog to TDMA (GSM) to CDMA/W-CDMA and finally to OFDMA (LTE). So the industry has started now to refer to the next fundamental step beyond fourth generation OFDMA (LTE) networks as being "5G". It is clear that 5G will require a new radio access technology, and a new standard to address current subscriber demands that previous technologies cannot answer. However, 5G research is driven by current traffic trends and requires a complete network overhaul that cannot be achieved organically through gradual evolution. Software-driven architectures, fluid networks that are extremely dense, higher frequency and wider spectrum, billions of devices and Gbps of capacity are a few of the requirements that cannot be achieved by LTE and LTE-Advanced.

This paper will review the technology and society trends that are driving the future of mobile broadband networks, and derive from them a set of future requirements. We will then look at the key technical challenges and requirements, and some of the research subjects that are addressing these. Examples of this include Cloud-RAN, massive MIMO, mmW access, and new air interface waveforms optimized for HetNet and super-dense networks.

The paper will then review the impact of these 5G developments to the test and measurement industry. We will look at both how the 5G technology will change the requirements and parameters we will need to test, and also at how the 5G technology will be used by Test and Measurement to align the test methods to network evolutions.

The final section of the paper will take a more in-depth review of some specific waveforms being evaluated for air interface access. We will study the theory and objectives for the waveforms, and then see how the waveforms can be simulated and analyzed using test equipment. Such an exercise is important as these tests need to be made early in R&D to evaluate the impact and inter-action of the waveforms onto real device technology, to evaluate the real performance. This will also inform closely the level of device technology development needed to support the widespread deployment of the different types of waveforms.
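As a toy illustration of the kind of waveform study described above, the sketch below (assumed parameters, not taken from the whitepaper) generates one cyclic-prefix OFDM symbol with NumPy and computes its peak-to-average power ratio, one of the baseline metrics against which candidate 5G waveforms are compared:

```python
import numpy as np

# One OFDM symbol: QPSK data on each subcarrier plus a cyclic prefix.
n_subcarriers, cp_len = 64, 16                        # assumed toy dimensions
bits = np.random.randint(0, 2, (n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # map bit pairs to QPSK
time_domain = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)                # OFDM modulation
symbol = np.concatenate([time_domain[-cp_len:], time_domain])           # prepend cyclic prefix

# Peak-to-average power ratio, a known weakness of plain OFDM that new
# waveform candidates and device (PA) technology both have to contend with.
papr_db = 10 * np.log10(np.max(np.abs(symbol)) ** 2 / np.mean(np.abs(symbol) ** 2))
print(f"Peak-to-average power ratio: {papr_db:.1f} dB")
```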

Please read the attached whitepaper.


7 Templates for Making Infographics Without Photoshop

by System Administrator - Tuesday, 30 June 2015, 11:54 PM
 

7 Templates for Making Infographics Without Photoshop

By Carolina Samsing

When you want to present information to a colleague, how do you do it? Do you write a report? Do you use a PowerPoint template you are already tired of seeing so many times? This is a question many people really have. A good solution to this problem is to use infographics.

Why an infographic? An infographic is a very effective tool for communicating and capturing the reader's attention: it lets you present, in a simple way, information that would otherwise be difficult to communicate.

A recent survey showed us that this year, marketing professionals have made learning about original and visual content a priority. And as we also learned in our report on the state of Inbound Marketing in Latin America, 17% of companies consider visual content a priority.

But here is the problem: how are those who have no design experience, or the budget to pay for an agency, a designer or a design program, going to create professional, attractive infographics?

Good thing you asked. Here we reveal a little secret: you can be a professional designer using a program you have probably had on your computer for years: PowerPoint. PowerPoint can be your best friend when you want to create visual content.

And to help you get started, we have created 7 amazing infographic templates that you can use for free.

>>Download your 7 free infographic templates here<<

In the following video we will show you how to edit one of these templates and make your own infographic. Don't forget to download the templates so you can customize them.

Basic tools to use in any infographic

When you think about creating an infographic, there are four essential PowerPoint tools to consider that will help you throughout the creation process: 

  • Fill: sets the main color of the object or text 
  • Lines: sets the color of the outline 
  • Effects: adds design elements to the infographic 
  • Shapes: lets you choose from a series of ready-made shapes

 como-hacer-infografias-1.jpg

An infographic with different colors and images 

Once you understand how the basic tools work, start by choosing the colors you would like to use. The best way to do this is to pick two main colors and two secondary ones. Try to make these colors match your corporate image. 

If you want to use different shapes, icons and fonts, a good place to find them is PowerPoint itself, which has more than 400 icons available for download.

 como-hacer-infograficos-2.jpg

Show statistics using different fonts 

It is very common to want to share statistics in an infographic. Charts can be monotonous and unattractive, so try using different colors. Another thing that helps this information stand out is using different fonts and sizes. You can also add icons that are relevant to each statistic, or to the ones you want to highlight most. Here is an example of this:

 como-hacer-infografias-3.jpg

Compare alternatives

An infographic is a very good way to compare two different things, because you can place them side by side and it is easy to visualize the differences. Divide each slide into two parts and choose a different color scheme for each side; that way the contrast will be greater. Incorporate all the points we covered in this post: use different fonts, sizes, charts and images to make the information clearer.


 como-hacer-infografias-4.jpg

Look for inspiration on Pinterest

Another good idea is to get inspiration from Pinterest; for example, use large boxes to display important information and use different sizes, always following the idea of using images.

 como-hacer-infografias-5.jpg

Something a little different

If you want to show information and statistics in a format that does not have to be so formal, you can use this template: it is fun but at the same time helps you present your information in a clear, engaging way. 

 como-hacer-infografias-7.jpg

To finish

When you finish your infographic, save it in PNG format; this will give it better image quality if you want to use it on the web.

 como-hacer-infografias-6.jpg

Link: http://blog.hubspot.es


7 Tips on Becoming an IT Service Broker

by System Administrator - Thursday, 16 April 2015, 1:33 PM
 

7 Tips on Becoming an IT Service Broker

In the not-so-distant past, CIOs had only two choices: build or buy. Today, all that has changed. In addition to build vs buy, IT leaders can choose from an alphabet soup of “as-a-service” options offered in the cloud: SaaS, DaaS, IaaS, and PaaS. You name it; somebody is offering it as a service. Plus, there’s the added dimension of three different types of clouds: public, private, and hybrid.

As the pressure on IT continues to grow, budgets do not. The demand to “do more with less” is the new normal. What’s a CIO to do?

Many believe the next logical step is for IT leaders to evolve from making build vs buy decisions to matching up line-of-business requirements with third-party service providers — many of them in the cloud.

In essence, an effective CIO must become an IT service broker.

This white paper provides seven tips on how to make that transition with maximum effectiveness and minimal disruption to services.

Please read the attached whitepaper.


802.11ac Access Points

by System Administrator - Wednesday, 22 October 2014, 1:36 PM
 

Why mobile businesses need 802.11ac access points

by: Craig Mathias

802.11ac access points have finally arrived, bringing significant improvements over 802.11n -- especially now that everyone's on Wi-Fi.

With Wi-Fi now the primary network access method for many business workers, the improved capacity and performance of the 802.11ac wireless standard is just what's needed today.

The 802.11ac wireless standard is really more evolutionary than revolutionary, but it represents a direction with quantifiable benefits that are valuable enough for 802.11ac to constitute the core strategic WLAN direction for just about every enterprise going forward. 802.11ac is that important -- and now it's here. Any thoughts of putting off deployment should be dismissed by the end of this article.

802.11ac or 802.11n: What's the difference?

Consider that enterprise-class 802.11ac access points essentially cost the same as their 802.11n predecessors, meaning that 802.11ac provides an immediate boost to the price/performance ratio. 802.11ac also gets better performance than 802.11n through a combination of improved modulation, which offers more bits on the air per unit of frequency and time; beamforming, or the ability to focus transmitted energy in a particular direction, improving throughput, reliability and, where required, range; and wider radio channels that are defined only in the relatively underutilized 5 GHz unlicensed bands. Current 802.11ac access point models support up to 1.3 Gbps, with lower-cost 866 Mbps access points now becoming more widely available as well, and the standard itself extends all the way to 6.93 Gbps -- although products with that level of performance are unlikely anytime soon due to the underlying complexity of such implementations.
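The 1.3 Gbps figure can be sanity-checked from standard 802.11ac (VHT) parameters. The numbers below come from the 802.11ac specification in general, not from this article, and the arithmetic is a back-of-the-envelope sketch:

```python
# 80 MHz channel -> 234 data subcarriers; 256-QAM -> 8 bits per subcarrier;
# rate-5/6 coding keeps 5 of every 6 bits; 3 spatial streams; a short-GI
# OFDM symbol lasts 3.6 microseconds.
data_subcarriers = 234
bits_per_subcarrier = 8          # 256-QAM
coding_rate = 5 / 6
spatial_streams = 3
symbol_duration = 3.6e-6         # seconds, short guard interval

phy_rate = data_subcarriers * bits_per_subcarrier * coding_rate * spatial_streams / symbol_duration
print(f"{phy_rate / 1e6:.0f} Mbps")  # -> 1300 Mbps, the quoted 1.3 Gbps
```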

Even more important than this performance boost is that 802.11ac access points operating in backwards-compatible 802.11n mode, with current 802.11n clients, yield 15% to 20% better throughput than current 802.11n access points, based on our own testing to date. This improved throughput means that enterprise WLANs currently using 802.11n clients can realize a big boost in capacity, simply by substituting 802.11ac access points -- which, by the way, cost about the same as their 802.11n counterparts. It's also worth noting that while mobile devices equipped with 802.11ac are in short supply at the moment, the number is expected to rapidly increase during 2014.

Deploying 802.11ac

It's easy to recommend that any deployments of previously unprovisioned space (the greenfield case) should use 802.11ac. 802.11ac should also be substituted in any pending orders of 802.11n access points, with the only proviso being to make sure that the existing management console can support such mixed configurations.

It's difficult to understand how waiting to deploy 802.11ac access points makes any sense today. Assuming that demands for capacity are continuing to increase everywhere (driven by BYOD mobile devices often being unable to connect to a wired network), the alternatives are either to do nothing and wait, which exacerbates the capacity problem, or deploy more 802.11n access points, which means investing in a technology that is not going to see further enhancements.

Some have suggested waiting for the so-called wave 2 versions of 802.11ac, which feature higher throughput of 1.8 and even 3.5 Gbps, as well as a capability known as multi-user MIMO, which enables multiple clients to be addressed with distinct data streams during a single access point transmit cycle. Multi-user MIMO will require new clients to make this work (sorry, no firmware upgrades in this case), and while some products here may appear in late 2014, it will be several years before multi-user MIMO and the other advanced features of wave 2 dominate the market.

The need for assurance functionality

Even if a given IT shop chooses not to deploy 802.11ac today despite the obvious benefits, there is one irrefutable and even urgent justification for installation of at least some 802.11ac access points right now -- assurance functionality. Assurance in this case refers primarily to rogue access point detection and intrusion detection and prevention. Note that an 802.11n access point or WLAN sensor cannot detect 802.11ac, so 802.11ac access points configured as sensors -- or dedicated 802.11ac sensors -- are required no matter what. The security and integrity of the WLAN, and the network overall, demand at least some investment in what is clearly going to become the mainstream wireless-LAN technology going forward. Note that 802.11ac access points deployed as sensors can later be converted to access if desired, although assurance functionality is always required regardless.

There is one other gigabit-class WLAN technology that will see increasing utilization over the next few years -- 802.11ad, which was approved over a year ago and which operates in the 60 GHz bands. It's unlikely the two standards will directly compete, however -- 802.11ac will likely replace 802.11n as the mainstream enterprise standard, with 802.11ad filling in in critical power-user, video and specialized high-throughput applications.


Link: http://searchmobilecomputing.techtarget.com/


A (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:35 PM
 

A Strategic Timeline for Next-Generation ERP

by System Administrator - Tuesday, 14 February 2017, 10:42 PM
 

A Strategic Timeline for Next-Generation ERP

by Rimini Street

Today's rapidly changing technology landscape and the onset of the "Hybrid IT Era" are driving the vision for a next-generation ERP platform. CIOs have a golden opportunity to lead ongoing business transformation within IT - transformation with the potential to deliver new growth for the business as a whole.

Please read the attached whitepaper.


A Well-Orchestrated Trust Model

by System Administrator - Tuesday, 13 October 2015, 1:44 PM
 

A Well-Orchestrated Trust Model

 

by Tempered Networks

According to a recent IDG survey, 76% of companies would feel more confident in enterprise security if they were able to 'cloak' vital systems and endpoints--whitelisting trusted devices and making them invisible to the shared network. That kind of confidence comes to life when you flip your trust model. In this report, we explore how these strategic thinkers can do just that, applying trusted overlay networks to conceal sensitive networks, endpoints or applications.
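A minimal sketch of the whitelisting idea (hypothetical device identifiers, not Tempered Networks' actual product logic): only devices enrolled in the trusted overlay get any response at all, so everything else on the shared network effectively cannot see the protected endpoint.

```python
# Devices explicitly enrolled in the trusted overlay (illustrative values).
TRUSTED_DEVICES = {"device-7f3a", "device-91bc"}

def handle_connection(device_id: str) -> str:
    """Decide how the cloaked endpoint reacts to an incoming connection."""
    if device_id not in TRUSTED_DEVICES:
        return "drop silently"            # no reply at all: the endpoint stays invisible
    return "establish encrypted tunnel"   # trusted peers communicate over the overlay

print(handle_connection("device-7f3a"))   # establish encrypted tunnel
print(handle_connection("device-0000"))   # drop silently
```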

Please read the attached whitepaper.


Advanced Threat Hunting

by System Administrator - Friday, 11 September 2015, 12:21 AM
 

eGuide to Advanced Threat Hunting

By Bit9

With the number of advanced attacks increasing every day—most of them going undiscovered by traditional detection and response solutions—truly hunting for threats within your environment can be a laborious task. To combat this, enterprises must focus on prioritizing endpoint data collection over detection, leveraging comprehensive threat intelligence, and expanding detection beyond the moment of compromise.

To combat this, enterprises must focus on:

  • Prioritizing endpoint data collection over detection: Businesses need to continuously record the critical data necessary while also maintaining the relationships of those data sets to fully scope an attack.
  • Leveraging comprehensive threat intelligence: Alongside continuous data collection, enterprises must possess the capability to layer threat intelligence and reputation over the data they collect to instantly classify and prioritize threats—accelerating threat discovery in the process.
  • Expanding detection beyond the moment of compromise: Businesses should deploy solutions that can hunt both past and present threats based off of a continuously recorded history—not just individual events.
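A minimal sketch (hypothetical event fields and indicators, not Bit9's product) of the second and third points above: threat intelligence layered over a continuously recorded event history, so both past and present activity can be checked rather than only the moment of compromise.

```python
# Known-bad indicators from a threat-intelligence feed (illustrative values).
threat_intel = {
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},
    "domains": {"bad-domain.example"},
}

# Continuously recorded endpoint history: process executions and DNS lookups.
recorded_events = [
    {"host": "wks-014", "timestamp": "2015-09-01T10:22:03Z",
     "process_hash": "44d88612fea8a8f36de82e1278abb02f", "dns_query": "update.vendor.example"},
    {"host": "srv-002", "timestamp": "2015-09-03T08:11:47Z",
     "process_hash": "ffffffffffffffffffffffffffffffff", "dns_query": "bad-domain.example"},
]

def hunt(events, intel):
    """Yield any recorded event, past or present, that matches an indicator."""
    for event in events:
        if event["process_hash"] in intel["hashes"] or event["dns_query"] in intel["domains"]:
            yield event  # prioritized for analyst review

for hit in hunt(recorded_events, threat_intel):
    print(hit["host"], hit["timestamp"])
```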

Tags: advanced threat hunting, traditional detection, endpoint data collection, threat intelligence, threat detection, networking, security, it management

Please read the attached whitepaper.


Agile Operations

by System Administrator - Tuesday, 13 October 2015, 1:56 PM
 

eGuide: Agile Operations

by CA Technologies

In the application economy, constant application updates are table stakes. To gain a competitive advantage, you must deliver the very best user experience by ensuring those improvements are based on real user feedback and application and infrastructure performance - from mobile to mainframe, on-premise or in the cloud. End-to-end monitoring solutions from CA can give your enterprise the holistic monitoring and in-depth management capabilities it needs to turn this feedback into valuable functions and reduce mean-time-to-recover.
Read this eGuide to learn how you can enhance user experience by leveraging real-time insights from your entire application and infrastructure to drive improvements.

Please read the attached whitepaper.


Agile Planning

by System Administrator - Thursday, 25 June 2015, 9:36 PM
 

Learn the Top Five Challenges to Agile Planning

LET’S START AN AGILE TEAM

To set the stage, let’s visualize an Agile team getting started. Your senior team has heard all about Agile and wants to gain from all the benefits – better products, shorter development cycles, happy customers and bigger returns. You, as the Agile evangelist, have been selected to lead this effort. You also have a new project just getting started. The project is going to be a new home monitoring system that will cut down electric usage in the average home by 93%. You can picture building your Agile development process and sharing your success with the rest of the company until everyone is bathing in Agile goodness. Not only will you demonstrate the power of Agile, but solve most of the world’s problems in one fell swoop. We’ll further assume that your team is already using a collaborative... 

Please read the attached whitepaper.


Application Modernization

by System Administrator - Friday, 12 September 2014, 12:03 AM
 

Application Modernization


Assurance (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:12 AM
 

CONCEPTS RELATING TO QUALITY ASSURANCE FOR MEASUREMENT PROCESSES

Measurement process: Set of operations used to determine the value of a quantity.

Metrological confirmation: Set of operations required to ensure that measuring equipment complies with the requirements for its intended use.

Metrological function: Function with organizational responsibility for defining and implementing the measurement control system.

Measurement control system: Set of interrelated or interacting elements necessary to achieve metrological confirmation and continual control of measurement processes.

Measuring equipment: Measuring instrument, software, measurement standard, reference material or auxiliary apparatus, or a combination thereof, necessary to carry out a measurement process.

Metrological characteristic: Distinguishing feature which can influence the results of measurement.


Securing enterprise data

by System Administrator - Wednesday, 18 February 2015, 7:58 PM
 

Five steps to securing enterprise data

by Warwick Ashford

Research has revealed that data loss is one of the top concerns of IT executives, according to data management firm Iron Mountain, which has compiled five steps for securing data to mark Data Protection Day.

The international initiative, now in its ninth year, aims to make consumers and businesses aware of the importance of safeguarding data, respecting privacy and building trust.

28 January was chosen because on that day in 1981 the Council of Europe adopted Convention 108 on the protection of individuals' personal data, the root of all privacy and data protection legislation.

Iron Mountain senior product and solutions marketing manager Jennifer Burl said businesses of all sizes can benefit from advice on how to improve the security of their data.

"According to the National Cyber Security Alliance, 50% of targeted cyber attacks are aimed at businesses with fewer than 2,500 employees," she added.

Burl said there are five steps businesses can take to keep data safe and secure and to avoid legal and regulatory problems.

Step 1: Know where your data resides

"You can't complete your security plan until you know exactly what you are protecting and where it is stored," said Burl.

Most businesses store data on multiple types of media: local disks, disk-based backup systems, off-site tape and the cloud. Each technology and format requires its own type of protection.

Step 2: Put a "need-to-know" policy into practice

To minimize the risk of human error (or curiosity), create policies that limit access to particular data sets.

Assign access based on tightly defined job descriptions. Also make sure access log entries are automated, so that no one who has accessed a particular data set goes unnoticed.
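A minimal sketch (hypothetical roles and data sets, not from the article) of what such a need-to-know policy with automated access logging could look like:

```python
import logging

logging.basicConfig(filename="data_access.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Which roles may touch which data sets (illustrative policy).
ACCESS_POLICY = {
    "payroll": {"hr_manager", "payroll_clerk"},
    "customer_pii": {"support_lead"},
}

def request_access(user: str, role: str, dataset: str) -> bool:
    """Grant access only on a need-to-know basis and log every attempt."""
    allowed = role in ACCESS_POLICY.get(dataset, set())
    logging.info("user=%s role=%s dataset=%s granted=%s", user, role, dataset, allowed)
    return allowed

request_access("jsmith", "payroll_clerk", "payroll")     # granted, and logged
request_access("jsmith", "payroll_clerk", "customer_pii") # denied, and logged
```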

Step 3: Strengthen your network security

"Your network is almost certainly protected by a firewall and antivirus software. But you need to make sure those tools are up to date and broad enough to do the job," said Burl.

New malware definitions are released daily, and antivirus software has to keep pace with them.

The bring-your-own-device philosophy is here to stay, and your IT team must extend its security umbrella over the smartphones and tablets that employees use for business purposes.

Step 4: Monitor and report on the life cycle of your data

Create a data life cycle management plan to ensure the secure destruction of old and obsolete company data.

As part of this process, businesses should:

Step 5: Educate everyone

"Data security is ultimately about people," said Burl. "Every employee must understand the risks and consequences of data breaches and know how to prevent them, especially with the rise of social engineering attacks."

"Talk to your employees about vulnerabilities, such as cleverly disguised malware web links in unsolicited email messages. Encourage them to speak up if their computers start behaving strangely."

Build a security culture in which everyone understands the critical value of your business data and the need to protect it. "Because when you think about it, every day is data protection day," said Burl.

Educating users to protect the economy

Content management firm Intralinks said many people bring bad security habits from home into the business, so user education is not just about protecting them, but also about protecting the economy.

Intralinks chief technology officer for Europe Richard Anstey said it can be counterproductive to tell people to use strong passwords, because it creates a false sense of security that people then carry into work.

"When it comes to very sensitive information, such as intellectual property, people need to know about very secure measures such as information rights management," he said.

According to Anstey, security is about knowing what the danger is and how to implement the appropriate level of protection.

"If we want a society with truly secure data, we have to start by making sure people know what value their data has; then they can make an informed decision about how to secure it," he said.

Too much emphasis on external threats

Encryption firm Egress has warned that many businesses are focusing too heavily on external threats.

A freedom of information (FOI) request by Egress to the UK Information Commissioner's Office revealed that 93% of data breaches occur as a result of human error.

Egress chief executive Tony Pepper said businesses should start looking closer to home to prevent data breaches.

"Mistakes such as losing an unencrypted device in the post or sending an email to the wrong person are hurting organizations," he said.

Pepper added that the FOI data shows a total of $7.7m has been spent on fines for mistakes made when handling sensitive information, while to date no fines have been imposed for technical failures that exposed confidential data.

"Human error will never be eradicated, because people will always make mistakes. Organizations therefore need to find ways of limiting the damage these mistakes cause," he said.

According to Egress, policy must be backed by easy-to-use technology that enables secure ways of working without hampering productivity, while providing a safety net for when users do make mistakes.

Businesses need a proactive approach to data security

Data governance firm Axway said businesses need to take a proactive approach to data security in the face of malicious hackers and data breaches.

Axway go-to-market programme vice-president Antoine Rizk said that, in an increasingly connected world, businesses need to proactively monitor their data flows to prevent costly data breaches.

"However, many large organizations are still waiting for something to go wrong before addressing the flaws in their security strategies, a move that failed in some of the most notorious security breaches of 2014," he said.

Axway predicts that in 2015 bring your own device will rapidly evolve into bring your own Internet of Things, with employees bringing wearable devices into the workplace.

"For that greater enterprise mobility to open windows of opportunity for businesses, without paving the way for hackers to access private data, security must evolve at the same speed as the devices themselves," said Rizk.

"Organizations also need to know what data employees are bringing into the office and what data they are taking out of it, to make sure malicious attacks and suspicious activity are blocked," he said.

It is important to highlight the risks on mobile platforms

Application protection firm Arxan said that, on Data Protection Day, it is important to highlight the growing risks on mobile platforms, especially in the banking and payments sector.

Arxan sales director for Europe Marcos Noctor said the firm predicts that security risks in the financial sector will be a key threat area in 2015.

"With this in mind, it is vital that mobile application security takes priority as banks, payment providers and customers look to do more on mobile devices," he said.

Arxan research revealed that 95% of the top 100 Android financial apps and 70% of iOS apps were hacked in the past year.

The company said: "We would advise banking and payment customers who are considering using a mobile financial app to take the following steps to increase security:

  • Download banking and payment apps only from certified app stores;

  • Ask your financial institution or payment provider whether its app is protected against reverse engineering;

  • Do not log in to email, bank or other sensitive accounts over public Wi-Fi. If that is unavoidable, because you spend a lot of time in cafés, hotels or airports, for example, pay for access to a virtual private network, which will considerably improve your privacy on public networks;

  • Ask your bank or mobile payment provider whether they have deployed automatic protections for the apps they have released to app stores. Do not rely only on mobile antivirus, antispam or your enterprise-wide device security solutions to protect the apps residing on your mobile device against hacking or malware attacks."


 


AUACODE

by System Administrator - Saturday, 30 August 2014, 12:31 AM
 

 

By Dr. Mario Cabrera Avivar

What is AUACoDe and what inspired it?

AUACoDe, the Asociación Uruguayo Andaluza para la Cooperación y el Desarrollo (Uruguayan-Andalusian Association for Cooperation and Development), is a non-profit civil association. It was inspired by a national character with identity and self-esteem, respectful and proud of its past, committed to its present and projected towards the future with a vision of tomorrow framed by an ethic of life.

When was it formed and by whom?

It was conceived in 1999, on the initiative of its current President, Dr. Mario Cabrera Avivar, together with a group of people from different walks of life connected to culture, the arts, the social and exact sciences, education and technology. It was formally constituted in April 2000 and was granted legal personality by resolution of the Minister of Education and Culture on 16 January 2001, being registered under number 8593, on page 70 of book 17.

Why and for what purpose was it created?

Because of our condition as descendants of Spanish Andalusians, with pride and by mandate of blood, with the awareness and conviction of the need to transcend our times, from within the Uruguayan state, characterized by a population that is a melting pot of predominantly Spanish immigrants, in a context such as the present one, in which the migration phenomenon has reversed for various reasons.

In order to:

  • Reaffirm the open and universal contribution of Andalusian values and cultural heritage to these lands, as part of the rich Spanish legacy, on the basis of the essential values of Spain and its regions as the main foundations of our identity, still shared with other migratory currents (including indigenous ones), as expressed in the motto of one of its autonomous communities: "Andalucía por sí, para España y la Humanidad" (Andalusia for itself, for Spain and Humanity)
  • Strengthen our self-esteem through intergenerational, interterritorial, interdisciplinary and inter-organizational activities.

What is its corporate purpose?

"To foster, promote, carry out and support, independently of any political, religious or philosophical current of thought, social, cultural, educational, scientific and technological cooperation and development activities at national, regional and international level...", between Uruguay and Spain, primarily with the Andalusian community, and to "foster the involvement of young people in the projected activities, as a factor of future change, promoting healthy environmental and human development and recognizing cooperation as a strategic priority for strengthening the local, national, regional and international capacities of Ibero-America, in order to contribute to its integral development".

When and why did it join the Federation of Spanish Institutions (FIE)?

It joined the FIE as a full member in June 2001, in order to:

  • Share, equally and fraternally with all its members, the desire for and pursuit of belonging, development and greatness for Uruguay and Spain. 
  • Complement and interact synergistically with all members and associates of the FIE institutions in their activities, which are mainly cultural and recreational.

What activities has it carried out and promoted since 2000?

  • Celebration of Andalusia Day on 28 February.
  • Ordinary meetings of the Board of Directors (monthly) and of the Ordinary Assembly.
  • Professional project activities, linked to the work of its members, in line with the following areas of activity.

What national and international links has it generated?

With individuals and public and private institutions in Uruguay, the Ibero-American region, Spain, Italy and Germany, and with transnational bodies such as PAHO/WHO, the Red Cross Federation and the European Union.

What does it feel the need to support?

The need to inform the general population, in particular children, young people, mothers and the elderly, in order to strengthen their identity, their self-esteem and their potential for sustainable social development, based on a genuine economy, in pursuit of general well-being, living life with health and joy.

Where is its official headquarters, and who should be contacted and how?

Its official headquarters is at the Ateneo de Montevideo, Plaza Cagancha 1157, where the Board of Directors meets. You can also contact its President, Dr. Mario Cabrera Avivar, at the postal address Cap. Videla 2891, CP 11600, Montevideo, tel. 709 4970, mobile (096) 100 000, or by email at auacode@gmail.com

How and why become a member?

You can join by telephone or email, providing your full name, nationality, and identity document type and number, at NO COST beyond contributing your ideas and your time, to make our institution a meeting point for developing and projecting our ideas of solidarity and of the future, respectful of our past and responsible towards our present and future. 

 

Director of AUACODE: Dr. Mario Cabrera Avivar

Website/Virtual Campus: http://auacode.org


Audit (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:10 AM
 

CONCEPTS RELATING TO AUDITS

Audit client: Organization or person requesting an audit.

Audit programme: Set of one or more audits planned for a specific time frame and directed towards a specific purpose.

Auditee: Organization being audited.

Audit: Systematic, independent and documented process for obtaining audit evidence and evaluating it objectively in order to determine the extent to which the audit criteria are fulfilled.

Audit findings: Results of the evaluation of the collected audit evidence against the audit criteria.

Audit criteria: Set of policies, procedures or requirements used as a reference.

Audit team: One or more auditors conducting an audit.

Audit evidence: Records, statements of fact or any other information that is relevant to the audit criteria and verifiable.

Technical expert: (Audit) person who provides specific knowledge of or expertise on the subject matter to be audited.

Auditor: Person with the competence to conduct an audit.

Audit conclusions: Outcome of an audit, provided by the audit team after considering the audit findings.


Multifactor User Authentication

by System Administrator - Friday, 26 June 2015, 6:21 PM
 

Tips for Multifactor User Authentication

Older than the web itself, multifactor authentication is an IT security method that requires people to provide multiple forms of identification or information to confirm the legitimacy of their identity for an online transaction, or in order to gain access to a corporate application. The goal of multifactor authentication methods is to make it harder for an adversary to exploit the login process to roam freely through personal or corporate networks and compromise computers in order to steal confidential information, or worse.
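One common second factor is a time-based one-time password generated on the user's device. The sketch below implements the standard TOTP algorithm (RFC 6238) with Python's standard library; the secret and the verification flow are illustrative, not taken from the attached whitepaper.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret: bytes, submitted_code: str) -> bool:
    # The password (first factor) is checked elsewhere; this only validates
    # the "something you have" factor generated on the user's device.
    return hmac.compare_digest(totp(secret), submitted_code)

shared_secret = b"example-shared-secret"              # illustrative value
print(totp(shared_secret), verify_second_factor(shared_secret, totp(shared_secret)))
```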

Please read the attached whitepaper.


Automated tests and continuous delivery

by System Administrator - Friday, 23 January 2015, 2:23 PM
 

CTO advocates automated tests and continuous delivery

by: James Denman

Automated testing and continuous development have become the driving force for CTO Andy Piper as Push Technologies evolves its middleware platform.

Andy Piper is CTO at London-based Push Technologies, which provides a Java-based middleware platform that helps U.K. developers working on applications that require a lot of messaging to a lot of users. Most of their clients publish statistical information for either financial instruments (like stocks and bonds) or for online gambling. Piper has pushed his development team forward by espousing continuous delivery and automated testing.

 

When it comes to functional testing, Piper says everything has to be automated. "Manual tests are almost valueless," he said. Manual tests take too much time, and he said he needs tests that are quick, clear and repeatable. Those requirements naturally lead to automated tests. He also pointed out that there's practically zero user interface to a middleware platform, which removes a lot of the need for user experience testing.

Conducting performance tests is one area where Piper sees benefits to manual testing. He pointed to Gil Tene's research on latency at Azul and explained that for performance testing, he's not looking for average behavior; he's analyzing the effect of the outliers. He said that using tools like HdrHistogram and jHiccup and analyzing the results intuitively works better for his team than trying to set up reliable automated performance tests.

Piper said the important aspects of functional testing are maintaining quality and moving quickly. "It's about enabling the developers to make changes more confidently," he said, "so they work more efficiently." Automation is an important part of keeping up with the pace at which his developers are able to make changes and making sure they get the feedback they need as soon as possible. But managing a large battery of automated tests can be challenging. 

Some tests break bad

Most of the tests are very straightforward, according to Piper; they either pass or fail and the results are very accurate. However, some tests have a tendency to fail when they should pass -- or they fail for the wrong reason. He calls these tests Heisentests, after the Heisenberg uncertainty principle. These tests are "a bit of a bugbear" for Piper right now.

The Heisentests are tricky because they can't be trusted. A failed result may be accurate and require a developer to recheck the work and fix something. Or a failed result might mean that some detail is slightly different than expected and everything is actually working as it should. Developers don't appreciate being sent on a wild goose chase, especially when the supposed target is an imaginary flaw in their code.

The Heisentests are a problem that persists at Push Technologies for the same reason that technical debt persists at many organizations: There aren't enough staff hours to fix the misfiring tests and meet project deadlines. However, the problem has reached a point where it must be addressed, and Piper is starting by having his team sort out the good tests from the bad. He said he has some testers working to sort the tests using JUnit categories. This way his developers will know which tests to question right away, and the team will know which tests to overhaul when they have time to pay down the technical debt.
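Piper's team sorts tests with JUnit categories; the sketch below shows the same idea with pytest markers (a substitute tool chosen for illustration, not what Push Technologies uses), so a failure in the flaky group is questioned before anyone chases a phantom defect.

```python
import pytest

# The "flaky" and "stable" markers would be registered in pytest.ini to avoid warnings.

@pytest.mark.stable
def test_message_roundtrip():
    # Deterministic check: a pass or fail here can be trusted at face value.
    assert "payload"[::-1][::-1] == "payload"

@pytest.mark.flaky
def test_reconnect_after_network_drop():
    # Known "Heisentest": may fail for environmental reasons, so it is
    # quarantined from the main gate and triaged separately.
    ...

# Main gate runs only trustworthy tests:   pytest -m "not flaky"
# Quarantined group is triaged on its own: pytest -m flaky
```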

Continuously delivering value

The search for efficiency has led Push Technologies to adopt continuous delivery practices. Piper said they use Maven for source code and Jenkins for automation, which seems to be a popular combination. Right now, every change that his team commits is automatically merged into a new shippable version of the platform as soon as it passes the battery of automated tests.

Piper deliberately chose continuous delivery over continuous deployment because "enterprise clients want to peg everything to particular releases." It's important for enterprise developers to update middleware at their own pace and to be able to rely on the platform to remain stable.

Push Technologies is working on a cloud release that will likely be aimed at midsize businesses. That version "will probably be more of a continuous deployment model," Piper said. He said that one of the challenges of moving to continuous development will be making sure all the code that goes into production is as hardened as it should be. "I love all my developers to death," he said, "But I'd still feel like I was being irresponsible if I don't keep a close eye on them."

Link: http://searchsoftwarequality.techtarget.com


Automating the accounts payable process

by System Administrator - Thursday, 4 September 2014, 1:09 PM
 

Automating the accounts payable process

The challenges of a paper-intensive AP process

A study carried out among people working in NHS Trusts, exploring attitudes towards the challenge of going paperless by 2018, shows very high awareness of the initiative. This awareness permeates all the groups surveyed—heads of Trust, healthcare professionals and IT decision-makers—all of whom are generally enthusiastic about it and recognise the broad range of benefits for their Trusts of going paperless, or at least paper light.

Please read the attached whitepaper


B (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:35 PM
 

B Corporation (Benefit Corporation)

by System Administrator - Wednesday, 10 September 2014, 8:54 PM
 

B Corporation (Benefit Corporation)

 

Posted by Margaret Rouse

A B Corporation (or Benefit Corporation) is a voluntary corporate form for for-profit companies that, in addition to wanting the tax benefits and legal protections that come with incorporating, also wish to take into consideration the best interests of society and the environment.

A benefit corporation (B corp) is a type of corporate structure recognized by some state governments in the United States.

In addition to being profitable, a benefit corporation assumes the legal responsibility of considering its impact on society and the environment.  The goal of the corporate structure is to encourage for-profit companies to identify social missions and demonstrate corporate sustainability efforts. In exchange, the corporation may be eligible for certain types of legal protection, bidding protection or tax benefits.

 

As of this writing, almost half the state governments in the United States allow a business to incorporate or re-incorporate as a benefit corporation. Although each state provides its own specific guidelines for incorporation, the general intent of the B corporation form is to allow a business to define non-financial goals in its founding documents. Such goals may include quantifiable levels of acceptable community involvement, sustainability efforts or charitable donations.

Benefit corporation as a corporate structure in the United States has been promoted by B Lab, a not-for-profit company that promotes the use of business to solve social and environmental problems internationally.  B Lab promotes the triple-bottom-line approach to business, placing equal emphasis on people, planet and profit. B Lab also offers a certification for companies that wish to be identified as being a benefit corporation; while the certification can provide public relations benefits, it does not affect the organization’s legal status.

Continue reading:

Link: http://searchcio.techtarget.com


Balanced Scorecards (BUSINESS)

by System Administrator - Wednesday, 25 June 2014, 4:12 PM
 

Use balanced scorecards to follow through on business strategy

by Barry Wilderman

In a recent article on this site, I explained the concept of strategy maps and how they can be effective in creating a business strategy and monitoring performance. Strategy maps show how an organization’s learning and growth initiatives impact the internal business processes that serve customers and determine financial results.

Balanced scorecards, another concept developed by Robert Kaplan and David Norton of Harvard University,  are the mechanism you use to fill in the details of the objectives that the organization has determined are essential to each of the four levels of the strategy map.

In the sample strategy map from the previous article, "Create High Quality Products" was listed as an internal business process. 

 

But what is the objective for creating high quality, and what is the target? These are the kinds of questions an organization attempts to answer and enter into a balanced scorecard template like the one in Figure 1, from the Balanced Scorecard Institute, a consultancy based in Cary, North Carolina.

For example, the internal business processes scorecard for "Create High Quality Products" might have the following elements:

  • Objective: Improve the quality of all manufactured products.
  • Measure: The number of products returned due to manufacturing defects
  • Target: For the one-year warranty period, fewer than three returns per 1,000 products shipped
  • Initiatives: Hire three new high-quality engineers. Implement a new supplier relationship management program.
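As a minimal sketch (the class and field names are illustrative, not from the article, and require Java 16+ for records), those four elements can be modeled as one small data structure per scorecard entry:

    import java.util.List;

    // Illustrative model of one balanced scorecard entry.
    record ScorecardEntry(String perspective, String objective, String measure,
                          String target, List<String> initiatives) {}

    public class ScorecardExample {
        public static void main(String[] args) {
            ScorecardEntry quality = new ScorecardEntry(
                "Internal business processes",
                "Improve the quality of all manufactured products",
                "Number of products returned due to manufacturing defects",
                "Fewer than 3 returns per 1,000 products shipped in the one-year warranty period",
                List.of("Hire three new high-quality engineers",
                        "Implement a new supplier relationship management program"));
            System.out.println(quality.objective() + " -> target: " + quality.target());
        }
    }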

 

BALANCED SCORECARD INSTITUTE AND “THE BALANCED SCORECARD” BY ROBERT KAPLAN AND DAVID NORTON (HARVARD BUSINESS REVIEW).

Figure 1. Balanced scorecard template.
 

Objectives, measures, targets and initiatives can be set at any level of the organization, starting with the individual. For example, a programmer in IT might want to increase his effectiveness in delivering subsystems. The measure could be the number of subsystems delivered, with a target of three in a given year. Initiatives would define each subsystem by title.

This would lead naturally to a set of initiatives by the head of programming, and then, in turn, by the CIO.

The balanced scorecard defines the terms of engagement. Poor performance becomes visible, as does superior performance.

Take a look at the example in Figure 2, which comes from Kaplan and Norton’s book about balanced scorecards. There is a full set of definitions for each of the four major areas of balanced scorecards and strategy.  Even though this example is relatively simple, care must be exercised.

 

BALANCED SCORECARD INSTITUTE AND “THE BALANCED SCORECARD” BY ROBERT KAPLAN AND DAVID NORTON (HARVARD BUSINESS REVIEW).

Figure 2. Expanded view of one company’s balanced scorecard.

Consider the information under Financial. The objective, "Increase Market Share," is a noble one. However, in balanced scorecards, objectives specify not measures or targets but overall concepts that are subject to interpretation. For example, one might think that this objective is all about revenue, but the stated measure is actually "Increase in Number of Clients." Market share in this case refers to the share of clients, not the share of revenue.

Furthermore, the target listed for this objective is 25% revenue, which somewhat contradicts the measure of the number of clients. So, is the market share goal about percentage of revenue or percentage of clients? You can have a large number of very small clients and only a modest increase in revenue.

The bottom line is to exercise care in how the components are put together.

Aligning strategy maps and balanced scorecards

Figure 3 is an example of a balanced scorecard and strategy map that line up well. It represents a powerful combination for showing strategy hierarchically, with significant detail around each objective.

 

BALANCED SCORECARD INSTITUTE AND “THE BALANCED SCORECARD” BY ROBERT KAPLAN AND DAVID NORTON (HARVARD BUSINESS REVIEW).

Figure 3. Example of strategy map and balanced scorecard that are aligned well.

Analyzing the diagram a bit further suggests certain implications of the regional airline’s strategies and balanced scorecard objectives, along with their relationship to each other:

  • Reducing the number of planes and optimizing routes are complex projects that might require project management software to execute effectively. Moreover, there is a need to understand risks and contingencies.
  • Multiple strategies may be under consideration, and not all of them can be done at the same time. A working committee must analyze when and if certain strategies can be included in the overall strategic plan.
  • Certain initiatives may contradict others. For example, on its face, reducing the number of planes might lower customer satisfaction because fewer choices will be on the schedule.

Balanced scorecards provide the necessary level of detail to implement the strategies defined in strategy maps. Leaving aside their relationship to strategy maps for a moment, using balanced scorecards is just good technique. All project steps should be subject to an analysis of goals, measures, targets and initiatives.

About the author:

Barry Wilderman has more than 30 years of experience as an industry analyst, researcher and consultant at such companies as Meta Group, Lawson Software, SalesOps Analytics and McKinsey and Co. He is currently president of Wilderman Associates. Contact him at Barry@WildermanAssociates.com and on Twitter @BarryWilderman.


 

Source: SearchFinancialApplications


Base de datos multimodelo

by System Administrator - Wednesday, 2 August 2017, 1:36 PM
 

Multimodel database (base de datos multimodelo)


Best Practices in BPM & Case Management

by System Administrator - Monday, 13 February 2017, 10:21 PM
 

Best Practices in BPM & Case Management

Choosing a case management software solution or deciding to switch to another vendor is nerve-wracking. As with any relationship, whether it's a personal or a business one, the elements that go into the decision require a great deal of thought.

But thanks to the advice in this white paper, you don't need to reinvent the wheel. It provides you with four important questions to ask potential relationship partners. Getting good answers will lead you to the best case management software for your individual situation.

Please read the attached whitepaper.


Best Practices in Cognitive Computing

by System Administrator - Monday, 13 February 2017, 10:08 PM
 

Best Practices in Cognitive Computing

Don't mistake cognitive computing for a science fiction artifact. Combining elements of machine learning, natural language processing, and artificial intelligence, cognitive computing holds much promise for the future of managing knowledge.

Applying computing power that learns from human behavior leads to better product and services design, which in turn promotes profitability. Learn how cognitive computing can help your business endeavors from this insightful white paper.

Please read the attached whitepaper.

 


Best Practices in Information Governance

by System Administrator - Thursday, 6 October 2016, 1:16 AM
 

Best Practices in Information Governance

Information Governance Grabs Center Stage

By Marydee Ojala

The old saying about any publicity being good publicity as long as your name is spelled correctly doesn't apply when the publicity is about a data breach. Bad guys breaking into your systems and stealing confidential information about customers and employees is never good publicity. No organization wants to incur the wrath of affected individuals or run afoul of government laws and regulations.

Good information governance policies and procedures help companies avoid data breaches. They also have the beneficial effect of increasing productivity, streamlining workflows, and efficiently managing data lifecycles. They facilitate the knowledge management practice of information sharing -- safely and legally. These white papers will provide guidance about putting information governance center stage at your organization.

Please read the attached whitepaper.


Better Business Bureau (BBB)

by System Administrator - Friday, 26 May 2017, 1:50 PM
 

Better Business Bureau (BBB)

The Better Business Bureau (BBB) is a non-profit accreditor of ethical businesses. The BBB also acts as a consumer watchdog for questionable sales tactics and scams.

Accreditation allows companies to be certified as legitimate and reputable businesses. For consumers, BBB offers free business reviews of over four million businesses and investigates complaints.

As an avenue for voluntary industry self-regulation, the BBB offers a business code that organizations can pledge to adhere to, paying a fee and receiving a BBB logo to display as a sign of reputability. The BBB also intermediates customer complaints in an official capacity. The organization resolves about 75 percent of more than 885,000 consumer complaints per year.

Fraudulent activities the BBB has dealt with, and raised awareness of, include telephone cruise contest frauds, "Can you hear me?" scams and various tech support scams.

Although the BBB is not affiliated with any government department and endorses no particular business, the organization itself isn't without controversy. The non-profit has been alleged to give higher ratings to businesses that pay a membership fee to the organization, a charge it denies.


Blockchain

by System Administrator - Friday, 28 April 2017, 3:48 PM
 

How blockchain works: An infographic explanation

by Emily McLaughlin

 

Understanding how a blockchain works is the first step toward taking advantage of the technology. Learn how a unit of value on a blockchain moves from party A to party B.

A blockchain is a type of distributed ledger that uses encryption to store permanent, tamper-proof records of transaction data. The data is stored across a peer-to-peer network, using a "consensus" principle to validate each transaction.

One of the main benefits of a blockchain system is that it promises to eliminate, or greatly reduce, friction and costs in a wide variety of applications, chiefly financial services, because it removes a central authority (for example, a clearing house) from the making and validating of transactions.

Blockchain technology underlies cryptocurrencies, specifically bitcoin and Ethereum, and is being explored as a foundational technology for a number of other systems of record, such as mobile payments, property registries and smart contracts.

How the blockchain works

To get a solid grasp of how to use blockchain in an enterprise setting, CIOs must first understand how a unit of value in a transaction moves from party A to party B. This infographic details how the blockchain works from the start of the transaction, through verification, to delivery.

How to implement blockchain

Although blockchain is expected to be adopted first in financial services, it has potential across a wide range of vertical industries; for example, the Office of the National Coordinator for Health Information Technology and NIST recently examined proposals for 70 different blockchain use cases in healthcare. But no matter the industry, for companies that see potential benefits in blockchain -- whether cost savings, greater efficiency in existing processes or revenue opportunities from a new line of business -- there is a rigorous standard implementation process to follow. In our step-by-step guide, Jeff Garzik, co-founder of blockchain software and services company Bloq, recommends that CIOs plan a blockchain implementation in four stages:

  • Stage 1: Identify a use case and map out a technology plan. Choosing suitable use cases is critical.
  • Stage 2: Build a proof of concept.
  • Stage 3: Run a field trial involving a limited production cycle with customer-facing data, then carry out further testing with more customer-facing products and data volumes.
  • Stage 4: Carry out a full-volume deployment in production.

Social impact of blockchain technology

Experts predict that the list of blockchain use cases, and the technology's impact on society, will keep growing. According to Don Tapscott, author, consultant and CEO of The Tapscott Group, blockchain's promise to change how wealth is created around the world is one of the most significant social impacts to watch.

At the DC Blockchain Summit in Washington, DC, Tapscott also suggested that blockchain will:

  • Allow people in the developing world who currently do not have bank accounts to participate in the digital economy.
  • Protect rights to property records.
  • Help create a sharing economy based on real exchange.
  • Improve the process of sending money to family members in foreign countries through electronic remittances.
  • Help consumers monetize data, including their own.
  • Reduce the costs of doing business.
  • Hold government officials accountable through smart contracts.

In the graphic below, U.S. Representative David Schweikert (of Arizona); Bart Chilton, former chairman of the U.S. Commodity Futures Trading Commission; Carl Lehmann, research director at 451 Research; and David Furlonger, a Gartner analyst, are quoted speaking this year about the impact of blockchain.

 

Digging even deeper

If you are getting up to speed on blockchain, here is a glossary of terms:

  • Bitcoin: A digital currency that is not backed by any country's central bank or government; traded for goods or services with vendors that accept bitcoins as payment.
  • Bitcoin mining: The act of processing transactions in the digital currency system; records of current bitcoin transactions, identified as blocks, are added to the record of past transactions, known as the blockchain.
  • Cryptocurrency: A subset of digital currencies; cryptocurrencies have no physical representation and use encryption to secure the processes involved in carrying out transactions.
  • Digital wallet: A software application, usually for a smartphone, that serves as an electronic version of a physical wallet.
  • Distributed ledger: A database in which parts of the database are stored in multiple physical locations and processing is distributed among multiple database nodes; blockchain systems are referred to as distributed ledgers.
  • Ethereum: A public, blockchain-based distributed computing platform with smart contract functionality; it helps execute peer-to-peer contracts using a cryptocurrency called ether.
  • Hash/hashing: The transformation of a string of characters into a usually shorter, fixed-length value or key that represents the original string (similar to creating a bitly link); see the short sketch after this list.
  • Remittance: A sum of money sent, especially by mail or electronic transfer, in payment for goods or services, or as a gift.
  • Smart contract: A computer program that directly controls the transfer of digital currencies or assets between parties under certain conditions; smart contracts are stored on the blockchain.
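As a small, illustrative sketch of the hashing idea above (using the JDK's built-in SHA-256; nothing here is specific to any particular blockchain):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class HashExample {
        public static void main(String[] args) throws Exception {
            // Inputs of any length map to a fixed-length, 32-byte SHA-256 digest.
            for (String input : new String[] {"a", "a much longer transaction record"}) {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                                             .digest(input.getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(input + " -> " + hex + " (" + digest.length + " bytes)");
            }
        }
    }

Changing even a single character of the input produces a completely different digest, which is what makes hashes useful as tamper-evident fingerprints of records.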

Dig deeper


Blockchain Technology

by System Administrator - Thursday, 23 March 2017, 12:39 PM
 

Why it's disruptive: Blockchain promises to make firms' back-end operations more efficient and cheaper. Eventually, it could replace companies altogether.

Executive's guide to implementing blockchain technology

By Laura Shin

The technology behind bitcoin is one of the internet's most promising new developments. Here's how businesses can use it to streamline operations and create new opportunities.

Blockchains are one of the most important technologies to emerge in recent years, with many experts believing they will change our world in the next two decades as much as the internet has over the last two.

Although it is early in its development, firms pursuing blockchain technology include IBM, Microsoft, Walmart, JPMorgan Chase, Nasdaq, Foxconn, Visa, and shipping giant Maersk. Venture capitalists have so far poured $1.5 billion into the space, with storied firms such as Andreessen Horowitz, Kleiner Perkins Caufield and Byers, and Khosla Ventures making bets on startups.

A blockchain is a golden record of the truth that creates trust among multiple parties.

 


Bob Metcalfe: Ethernet Inventor Still Rings the Changes

by System Administrator - Friday, 29 August 2014, 8:27 PM
 

 

Ethernet Inventor Bob Metcalfe Still Rings the Changes

 Posted by Martin Veitch

“It’s a great story,” says Bob Metcalfe, speaking down the line from his summer home in Maine, when I ask him if his family has British roots. The man who gave the world Ethernet has a bunch of great stories and, like the stand-up comedian he’s thinking of becoming (more of which later), he can improvise on seemingly any topic.

“We won the battle of Agincourt. We fought with the longbow that had a greater range and higher firing rate than the traditional bow and arrow. Four hundred Metcalfes slaughtered thousands of French. We were from Yorkshire but we blew our money and went to New York.”

He adds that the two-finger salute used by Brits to denote contempt for the recipient comes from the same page in history. The French cut off the fingers of captured archers and the English would show them two fingers to show their digits remained intact. In a clarification email he adds that he considers himself a Viking-American: “Marauding is my game.” It’s the sort of zig-zag way his thought processes go: a brilliant mind but restless in its computations.

I’m trying to get a psychological angle on what made Metcalfe because he’s an unusual character. The self-confidence, way with words and forays into venture capitalism might be classic Silicon Valley shtick, but who else decides spells in publishing and academia might be smart career moves after changing the world through computer networking?

His father was an aerospace test technician who never graduated college and Metcalfe has said in previous interviews that he didn’t get on well with Harvard where his dissertation was initially rejected in 1972, hinting there was a class divide.

“I still contend Harvard doesn’t like engineers much. They prefer the liberal arts. Even when they finally built an engineering school they had to call it the School of Engineering and Applied Sciences,” he says, spitting out the last few words.

That rejection (he finally received his PhD a year later) might have served to give him a thicker skin. He went to the renowned Xerox PARC research facility where his major achievement was the invention of Ethernet, the networking protocol that is the highway of the modern, hyper-connected world. Incidentally, he rebuts the notion of Xerox as a company unable to translate inventions (the graphical user interface, computer mouse and laser printing, for example) into real money. Instead, he says Xerox built a powerful printing business and spies some shifting of responsibilities.

“Usually [ex-Xerox] people say they failed but we worked there.”

He will always be associated with Ethernet but he generously shares credit with many others, even if he was the leading force.

“People tend to think it happened in a day, and it’s a myth I promulgated, but it’s been a 40-year effort. There was punctuated equilibrium. Slow and steady progress punctuated with some sort of breakthrough. [To be referred to as ‘the Father of Ethernet’ and such like] it’s a little bit cringing and I bend over backwards to include as many people as possible.”

However, it wasn’t the invention of Ethernet that brought him wealth but rather the ability to sell local area networking at 3Com, the company he co-founded in 1979.

“I had to learn sales quickly,” he says, and it was the years of long trips across North America, and later the world, that made 3Com a powerful force.

Another myth is that he was ousted from 3Com in some sort of “bloody boardroom battle”, he says.

“3Com’s board of directors twice decided that I shouldn’t be CEO. The board of directors did their job. Both times they chose somebody else and both times their judgment was vindicated.” He only left because he didn’t think it right to have a former CEO contender second-guessing the CEO.

Always quote-worthy, his digs at 3Com CEO Eric Benhamou weren’t based on animosity, he says.

“I think the world of Benhamou. I made a crack that he was successful despite not being very charismatic. To me it was a revelation that a person lacking charisma could be so successful. He still lacks charisma!”

I express surprise that his next move wasn’t to build another company but into computer-sector publishing, at IDG [this site is part of the IDG group] where he became a publisher, columnist and, later, a board member.

“[InfoWorld magazine editor-in-chief] Stewart Alsop asked me if I wanted to be his boss. Next thing, [the late IDG CEO] Pat McGovern called and invited me to visit corporate [in Framingham, Massachusetts] and San Mateo where InfoWorld was. I insisted on the title of CEO and publisher. Pat said, ‘You don’t want that: publishers sell ads to media buyers’, but it was the opportunity to learn a whole new business and hang out with my peeps. [Oracle CEO] Larry Ellison actually signed off insertion orders and laboured over the copy.”

Those were go-go days for tech publishers and Metcalfe says it didn’t feel like a slower or more conservative environment than tech itself.

“A printing press is much more high-tech than a personal computer. Then the web hit and I was at the heart of it. I watched as one publisher after another either succeeded or failed.”

Metcalfe made headlines himself after predicting the collapse of the internet in a column published in InfoWorld. I’d always suspected this stemmed from a controversialist streak designed to cook up debate, and Metcalfe concurs.

“I’d go much further and say it was a monumental publicity stunt,” he says. It was designed to court publicity for an imminent book, Internet Collapses and Other InfoWorld Punditry (“you can still buy it for $1 on Amazon”).

“People had made fun of [IBM founder] Tom Watson saying there would only be 11 computers in the world and Bill Gates saying you only needed 640K of RAM, and in that vein they made fun of me. It was a self-denying prophecy.”

Ever game, Metcalfe literally ate his words after whizzing them into an edible soupy sludge. Later he predicted the failure of wireless networks.

“In 1993, wireless was in one of those bubbles: the modems were bigger than PCs. I went too far in one of my columns and said it would never catch on… never say never.”

But, he says, the success of wireless only increases demand for Ethernet and back-haul networks. “LTE stands for ‘Leads To Ethernet’,” he quips.

In his writing, he was also among the first to take aim at Microsoft, criticising its business practices and foreshadowing its later conviction as a monopolist abusing its market power. Although some traced his criticisms back to a falling out over licensing, Metcalfe says there was nothing personal.

“It wasn’t Bill Gates; it was the twenty-something petty monopolists at Microsoft. [What I wrote] cost me my relationship with Bill Gates.”

He says he remains an admirer of Gates but recalls being in a room with Microsoft’s PR agency rep at the time of the brouhaha.

“She said how disappointed Bill Gates was. Disappointed! As if it was my job not to disappoint Bill Gates…”

However, the tensions between having been a tech industry star turned media all-rounder were becoming apparent.

“The unusual thing was that I’d crossed over to the dark side. It was confusing to people. I’d attack companies in my columns and then try to sell them ad pages.”

A conflict of interests, surely?

“It was a separation of church and state that took place entirely in my head,” he concedes with characteristic drollery.

“Before I continue I’d like to insist that I was right about Microsoft,” Metcalfe says with mock pomposity. “They were eventually convicted.”

To be just, Metcalfe also coined the term “extranet” and may have done the same for “ping”, as well as giving us Metcalfe’s Law, which states that the value of a network grows in proportion to the square of the number of potentially connected devices.
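Stated as a formula (a standard formulation of the law, not a quotation from the interview): a network of n devices supports n(n-1)/2 possible pairwise connections, so its value V is taken to scale roughly as the square of n:

    V ∝ n(n-1)/2 ≈ n²/2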

Returning to Microsoft, I ask him whether the US and the wider world is getting better at handling abuses of power in technology.

“We got better at it when we took down IBM and AT&T in the 1980s,” he says. “I think we’re getting worse now. The US has a bad government now and anti-trust has become anti-business.”

“Cronyism” in DC lets the powerful slip away, he says, but then the Europeans don’t get away scot-free either. He considers the recent “right to be forgotten” law relating to Google: “What a stupid thing that is.”

Regrets? He appears to have fewer than Sinatra although he beats himself up for not getting IBM to admit defeat on Token Ring, leaving the road open for a two-decade battle with Ethernet.

“IBM gave me two shots to convince them. My contention is that I hadn’t learned to sell yet. I wouldn’t have used the word ‘collision’ [to describe Ethernet traffic handling] and that was a mistake. That related to blood, breaking glass, like a car crash.”

He should have used the “mot juste”, he says, citing his recent discovery of the French term for an appropriate word.

He adds that today’s networking king of the hill Cisco “wouldn’t exist if I were a better person” although he admires the company and its CEO, John Chambers.

His current mission is helping beautiful Austin “become a better Silicon Valley” and he is enjoying his work to that end at the University of Texas. He says that he is living his life in 10-year cycles, having been an engineer/scientist (Ethernet/Xerox); entrepreneur/executive (3Com); publisher and pundit (IDG); venture capitalist; and Professor of Innovation (University of Texas).

In seven years’ time he might, he says, create a startup, picking up where he left off decades ago. Then again he might become a stand-up comic, he says, as if the two options were a ‘blue socks or red socks’ choice.

He could do the standup patter as he has something of the classic-period Steve Martin in his bearing, dryness, self-mocking and capacity for surprise. Say you were plumping for the former career move though, I ask.

“It’s a way off but if I were starting a company today it would be in computational biology. I know a bit about computation and I have a sense biology is about where computing was in 1980. All the trial and error is starting to give way to science and engineering.”

On the economy he is pessimistic and positive at the same time.

“It’s a bubble and it’s going to burst pretty soon but I like bubbles: they’re tools of innovation. There’s the debt bubble too. Everyone’s in debt, including the US to the tune of $17 trillion.”

I ask if he ever considered a career in politics but he says his contribution is limited to tweeting.

And with that our time is up. Metcalfe says he is getting ready to return to Texas after having the summer off and mentions that he was once a visiting professor in “the real Cambridge” in England.

“I loved it but in the end I was getting stir crazy and needed a change.”

I bet.

Martin Veitch is Editorial Director at IDG Connect

Link: http://www.idgconnect.com/abstract/8642/ethernet-inventor-bob-metcalfe-still-rings-changes


BPM in the Cloud

by System Administrator - Friday, 26 June 2015, 7:06 PM
 

Guide: BPM in the Cloud

BPM software and cloud computing make a fine pair, but is a move to the cloud the right fit for your organization? Uncover an expert list of considerations you should start with first.

Please read the attached guide.

 


Branch Office Recovery

by System Administrator - Wednesday, 10 September 2014, 9:12 PM
 

Eliminating the Challenge of Branch Office Recovery 

Nobody can afford to lose data. But managing the backup and recovery of data and services in far-flung locations can present many logistical and technology challenges that add complexity, expense, and risk. A new branch converged infrastructure approach allows IT to project virtual servers and data to the edge, providing for local access and performance while data is actually stored in centralized data centers. IT can now protect data centrally and restore branch operations in a matter of minutes versus days.

Please read the attached whitepaper


Bring Your Own Cloud (BYOC)

by System Administrator - Monday, 16 March 2015, 10:12 PM
 

Bring Your Own Cloud (BYOC)

Posted by Margaret Rouse

BYOC is a movement whereby employees and departments use their cloud computing service of choice in the workplace. Allowing employees to use a public cloud storage service to share very large files may be more cost-effective than rolling out a shared storage system internally.

BYOC (bring your own cloud) is the trend towards allowing employees to use the cloud service of their choice in the workplace.

In a small or mid-size business, allowing employees to use a public cloud storage service like Dropbox to share very large files may be more cost-effective than rolling out a shared storage system internally. Problems can occur, however, when employees fail to notify anyone when they use such services. The use of any shadow IT can pose security and compliance concerns in the workplace, and BYOC in particular can prevent business owners from knowing exactly where their company’s information is being stored, who has access to it and what it’s being used for.

To prevent BYOC from becoming a problem, businesses should implement policies that strictly define what personal cloud services can be used for work-related tasks (if any) and who needs to be notified when a personal cloud service is used.
Continue Reading About bring your own cloud (BYOC)

Building BI Dashboards: What to Do—and Not Do

by System Administrator - Monday, 19 January 2015, 1:38 PM
 

Building BI Dashboards: What to Do—and Not Do

BY ALAN R. EARLS

Business intelligence dashboards make it easier for corporate executives and other business users to understand and analyze data. But there are right ways and wrong ways to design them.

Please read the attached PDF


Business Drivers (BUSINESS)

by System Administrator - Wednesday, 3 September 2014, 7:14 PM
 

Cloud economics subject to business drivers, customer perception

by: Kristen Lee

What are the financial benefits of using the cloud? Don't expect any hard-and-fast formulas. Cloud economics turn out to be a local affair, dependent on a company's business drivers and constraints -- and the ability of CIOs to understand them.

At Health Management Systems Inc., "data is our life blood," said CIO Cynthia Nustad. The Irving, Texas-based Health Management Systems (HMS) analyzes petabytes of data for large healthcare programs to determine whether payments were made to the correct payee and for the right amount. Nustad, who joined HMS as CIO in February 2011, doesn't handle just a lot of data but a lot of highly sensitive data. So, when it comes to calculating the cost benefits of using the cloud for crunching data, the expense of transporting large data sets to the cloud is just one factor she weighs. Data security, of course, is another -- both real and perceived.

"It's always perception that we're battling, right?" Nustad said. "If a client perceives for any reason that there's less security, it's not worth the hassle to try to dissuade them, because it's always going to be a 'gotcha' if something does go bump in the night, God forbid."

Cloud-based business applications, however, are another story. "It's pretty easy to get a Salesforce, Silkroad, a Red Carpet … that are tuned to what the business team needs," she said. Indeed, HMS' use of SaaS predates her tenure, Nustad said, noting that these apps are now mature enough to either meet or beat any on-premises solutions she could come up with -- and they save her maintenance costs. "They are easy to get up and running, the value proposition is there and they fill a particular business need -- a win-win all the way around."

The potential cost-savings of cloud computing have long been touted as an obvious benefit of using this relatively new platform. And, to be sure, examples abound of companies that have saved millions of dollars in labor costs and upfront capital investment by migrating IT operations to the cloud. Even cloud security -- a cause of concern for many CIOs, not just those trading in super-sensitive data -- is gaining traction. Increasing numbers of companies are realizing that cloud-based security providers offer solutions that are not only cheaper but also better than what they could build and manage in-house.

 

Cynthia Nustad

But as Nustad made clear, any discussion of the economics of cloud is complicated. Hard-and-fast formulas for comparing the cost of cloud services versus in-house delivery of those services are difficult to come by, because for starters, the business models of cloud providers are often not transparent to customers. In addition, many CIOs, for reasons not always in their control, don't fully understand their own costs for providing IT services. Cultural factors also get in the way of calculating the economics of cloud, according to analysts and consultants who cover this field.

"A lot of IT departments are defensive about the use of cloud," said Forrester Research analyst James Staten. "They're worried that if the company starts using more cloud, they'll use less of the data center."

In those instances, the political overlay brings "bias into the analysis" of cloud economics, Staten said, with the result that internal IT staff may claim they're cheaper "when in reality they are not."

Perhaps the biggest reason for the lack of solid financial comparisons, however, is that the business's main motivator for using the cloud is usually not to save money, said David Linthicum, senior vice president at Cloud Technology Partners, a Boston software and services provider specializing in cloud migration services.

"The ability for the company to move into new markets, to acquire companies, to kind of change and shift its core processes around … that typically is where cloud pays off," Linthicum said. "So, even if you may not have direct or very obvious operational cost savings, the cloud may still be for you."

Forrester Research's Staten agrees. "It's pretty much across the board and universal that they use the cloud for agility first and foremost," he said, referring to business priorities. It's only later, after some of those benefits have been realized, that the question of cost savings comes up, and even that push for cost savings, he added, "is usually driven by the IT department ... [and] not usually driven by the business."

Nuanced approach to cloud economics

These complex and, at times, competing business needs often result in CIOs adopting a highly nuanced cloud strategy. While HMS, for example, relies on SaaS for some of its back-end business applications, the analytics it uses to weed out fraud, waste and abuse in healthcare payments, for example, is proprietary and deployed in-house.

"I think if you don't look at cloud and you don't look at the economics of cloud, they'll find another CIO who will."
Pat Smith, CIO

To crunch the data, Nustad said, her team mainly uses a combination of open source and vendor tools (from Teradata and Microstrategy), and the IBM DB2 mainframe software "is still, quite frankly, a cost-effective technology" for the task. Plus, she added, "the bandwidth doesn't exist" to move the data back and forth to the cloud.

"If I have data that I can't easily get at that's in a cloud app or on cloud infrastructure, then I've just disabled my business," she said.

Nustad's not the only one with a cloud economics strategy that is not just a matter of dollars and cents.

Pat Smith, CIO at Our Kids of Miami-Dade Monroe Inc., a not-for-profit serving abused and neglected children, said that she looks at cloud for "availability and reliability that would cost us a lot to duplicate."

 

Pat Smith

She too, however, has tweaked her cloud strategy to meet her company's needs. Smith plans to deploy Microsoft Office 365, and although this cloud service offers an archiving solution, she has decided to put the money into an on-premises archiving solution.

"We feel more comfortable," she said, keeping the archives on-premises. "We have a lot of e-discovery requirements like many organizations, so that's a non-negotiable item for us… . We feel like we have more control over it."

Cloud-first economics

But for some CIOs, parsing cloud economics is a moot exercise.

"It's never been about economics, it's always been about the benefits," said Jonathan Reichental, CIO for the city of Palo Alto. "I am solely focused on functionality and quality and those kinds of higher-value items."

Reichental is working on setting up a business registry for the California city, so that when people set up a business in Palo Alto, the registry has all its information: address, what the business does, revenue, number of staff, etc.

 

Jonathan Reichental

Ten years ago the city would have found a vendor and then built an infrastructure, he said. "The only conversation we're having today is who can provide this in the cloud and what's the user experience like," he said.

One thing is true for all CIOs: Sorting out the benefits of cloud services is a top priority. Our Kids' Smith thinks that what's happening with the cloud today is similar to what happened 10 years ago when CIOs needed to be looking at which services should be provided in-house and what services should be outsourced.

"I think cloud's in the same sphere right now," Smith said. "I think if you don't look at cloud and you don't look at the economics of cloud, they'll find another CIO who will."

Go to part two of this feature to read about expert advice for getting the most out of your cloud services. Steps required for sound cloud economics include: analyzing business "value drivers," nailing the contract, using cloud monitoring tools and, when in doubt, calling up your CIO peers.

Let us know what you think about the story; email Kristen Lee, features writer, or find her on Twitter @Kristen_Lee_34.

Link: http://searchcio.techtarget.com


Business Information

by System Administrator - Monday, 16 February 2015, 10:40 PM
 

Launching big data initiatives? Be choosy about the data

Thanks to open source technologies like Hadoop and lower data storage costs, more organizations are able to store multi-structured data sets from any number of internal and external sources. That's a good thing, because valuable insight could lurk in all that info. But how do organizations know what to keep and what to get rid of? It's a problem that the February issue of Business Information aims to solve.

In the cover story, SearchBusinessAnalytics reporter Ed Burns talks to businesses that have learned just what to tease from their data. Take marketing analytics services provider RichRelevance, which runs an online recommendation engine for major retailers such as Target and Kohl's. The company has two petabytes of customer and product data in its systems, and the amount keeps growing. To sift through it for shopping suggestions, RichRelevance looks at just four factors: browsing history, demographic data, the products available on a retailer's website and special promotions currently being offered. That way, it keeps its head above the rising tide of data.

And finding themselves surrounded by a sea of data, businesses are learning that it's increasingly important to know how to swim. Many turn to the waters of the data lake, hoping to cash in on the benefits the Hadoop-based data repository promises. But the data lake may not be as tranquil as it sounds, reporter Stephanie Neil writes. Data governance challenges abound, and changes in workplace culture will most likely be required to make it work.

The issue also features a brand-new column: insight from a CIO for CIOs, or would-be CIOs. The inaugural installment, by Celso Mello of Canadian home heating and cooling company Reliance Home Comfort, dishes up advice for those wishing to climb the corporate ladder to the C-level.

The issue also puts the spotlight on an IT manager at a Boston nonprofit who used the skills inherited from her political family to usher in a human capital management system upgrade. It also captures some of the wants and needs of BI professionals who attended TechTarget's 2014 BI Leadership Summit last December, and takes a look at the origins and prospects of the open source data processing engine Apache Spark. The issue closes with a few words by Craig Stedman, executive editor of SearchDataManagement and SearchBusinessAnalytics, on the hard work needed to put in place an effective business intelligence process.

Please read the attached whitepaper.


C (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:36 PM
 

Cadena de Bloques

by System Administrator - Thursday, 23 March 2017, 12:48 PM
 

Formation of a blockchain. The main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside the main chain.

Blockchain (Cadena de Bloques)

Source: Wikipedia

A blockchain, also known by the initials BC (from the English blockchain),[1][2][3][4][5] is a distributed database made up of chains of blocks designed to prevent the modification of data once it has been published, using trusted timestamping and a link to a previous block.[6] For this reason it is especially well suited to storing, in an ever-growing way, data that is ordered in time and cannot be modified or revised. This approach has several aspects:

  • Data storage: achieved by replicating the blockchain's information.
  • Data transmission: achieved peer-to-peer.
  • Data confirmation: achieved through a consensus process among the participating nodes. The most widely used type of algorithm is proof of work, in which there is an open, competitive and transparent process for validating new entries, called mining.

The blockchain concept was first applied in 2009 as part of Bitcoin.

The data stored in the blockchain is usually transactions (e.g., financial ones), which is why it is common to call the data transactions. However, it does not have to be. We can really consider that what is recorded are atomic changes to the state of the system. For example, a blockchain can be used to timestamp documents and secure them against tampering.[7]
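As a rough illustration of that linking idea, here is a minimal sketch in Java (the Block class, its field names and the use of SHA-256 are assumptions for illustration; this does not reproduce Bitcoin's actual block format):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Toy block: each block stores the hash of the previous block, so altering
    // any earlier block changes every hash that follows it.
    class Block {
        final long timestamp;
        final String data;
        final String previousHash;
        final String hash;

        Block(String data, String previousHash) throws Exception {
            this.timestamp = System.currentTimeMillis();
            this.data = data;
            this.previousHash = previousHash;
            this.hash = sha256(timestamp + data + previousHash);
        }

        static String sha256(String input) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                                         .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }
    }

    public class ToyChain {
        public static void main(String[] args) throws Exception {
            Block genesis = new Block("genesis", "0");
            Block next = new Block("Alice pays Bob 5 units", genesis.hash);
            // The link: the second block commits to the exact contents of the first.
            System.out.println(next.previousHash.equals(genesis.hash)); // prints true
        }
    }

Because each block's hash covers the previous block's hash, rewriting an old record would force every later block to be recomputed, which is what makes published entries effectively tamper-evident.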

 

Applications

The blockchain concept is used in the following fields:

  • In the field of cryptocurrencies, the blockchain is used as an unmodifiable public notary of the entire transaction system, in order to prevent a coin from being spent twice. For example, it is used in Bitcoin, Ethereum, Dogecoin and Litecoin, although each with its own particularities.[8]
  • In the field of name-registration databases, the blockchain is used as a notarized name registry, so that a name can only be used to identify the object that has actually registered it. It is an alternative to the traditional DNS system. For example, it is used in Namecoin.
  • As a distributed notary for different types of transactions, making them more secure, cheaper and traceable. For example, it is used for payment systems, bank transactions (making money laundering harder), remittances and loans.
  • As the basis of decentralized platforms that support the creation of peer-to-peer smart contract agreements. The goal of these platforms is to allow a network of peers to administer their own user-created smart contracts. First a contract is written as code and uploaded to the blockchain in a transaction. Once on the blockchain, the contract has an address through which it can be interacted with. Examples of this type of platform are Ethereum and Eris.

Classification

Blockchains can be classified based on access to the data stored in them:[7]

  • Public blockchain: One in which there are no restrictions either on reading the blockchain's data (which may be encrypted) or on submitting transactions for inclusion in the blockchain.
  • Private blockchain: One in which both access to the blockchain's data and the submission of transactions for inclusion are limited to a predefined list of entities.

Both types of chain should be seen as extreme cases; there can be intermediate cases.

Blockchains can also be classified based on the permissions for generating blocks in them:[7]

  • Permissionless blockchain: One in which there are no restrictions on which entities can process transactions and create blocks. This type of blockchain needs native tokens to provide incentives for users to maintain the system. Examples of native tokens are the new bitcoins obtained when building a block and the transaction fees. The reward for creating new blocks is a good measure of the security of a permissionless blockchain.
  • Permissioned blockchain: One in which transaction processing is carried out by a predefined list of subjects with known identities. For that reason they generally do not need native tokens, since native tokens serve to provide incentives for transaction processors. It is therefore typical for them to use proof of stake as the consensus protocol.

Public blockchains can be permissionless (e.g., Bitcoin) or permissioned (e.g., federated sidechains[9]). Private blockchains have to be permissioned.[9] Permissioned blockchains do not have to be private, since there are different ways of accessing the blockchain's data, for example:[7]

  • Reading the blockchain's transactions, perhaps with some restrictions (for example, a user may have access only to the transactions in which they are directly involved).
  • Proposing new transactions for inclusion in the blockchain.
  • Creating new blocks of transactions and adding them to the blockchain.

While the third form of access is restricted to a certain limited set of entities in permissioned blockchains, it is not obvious that the other forms of access should be restricted. For example, a blockchain for financial institutions would be a permissioned one, but it could:[7]

  • Grant (perhaps limited) read access to transactions and block headers to its clients, in order to provide a technological, transparent and reliable way of ensuring the security of clients' deposits.
  • Grant full read access to regulators to guarantee the necessary level of compliance.
  • Provide all entities with access to the blockchain's data with an exhaustive and rigorous description of the protocol, which should contain explanations of all possible interactions with the blockchain's data.

 

Sidechain

A sidechain is a blockchain that validates data from another blockchain, called the main chain. Its main use is to provide new functionality, possibly still in a testing period, while leaning on the trust offered by the main blockchain.[10][11] Sidechains work in a way similar to how traditional currencies operated under the gold standard.[12]

An example of a blockchain that uses sidechains is Lisk.[13] Because of Bitcoin's popularity and the enormous strength of its network in providing trust through its proof-of-work consensus algorithm, there is interest in using it as the main blockchain and building pegged sidechains that rely on it. A pegged sidechain is a sidechain whose assets can be imported from and exported to the other chain. This type of chain can be achieved in the following ways:[11]

  • Federated peg. A federated sidechain is a sidechain in which consensus is reached when a certain number of parties agree (semi-centralized trust). We therefore have to trust certain entities. This is the type of sidechain used by Liquid, the closed-source sidechain proposed by Blockstream.[14]
  • SPV peg, where SPV stands for Simplified Payment Verification. It uses SPV proofs. Essentially, an SPV proof is composed of a list of block headers demonstrating proof of work and a cryptographic proof that an output was created in one of the blocks on the list. This allows verifiers to check that a certain amount of work has been done for the output to exist. Such a proof can be invalidated by another proof demonstrating the existence of a chain with more work that does not include the block that created the output. Therefore no trust in third parties is required. It is the ideal form. To achieve it on Bitcoin, the algorithm would have to be modified, and it is difficult to reach consensus on such a modification. For that reason, a federated peg is used with bitcoin as a temporary measure.

References

  • An Integrated Reward and Reputation Mechanism for MCS Preserving Users' Privacy. Cristian Tanas, Sergi Delgado-Segura, Jordi Herrera-Joancomartí. 4 February 2016. Data Privacy Management, and Security Assurance. 2016. pp. 83-99.
  1. Economist Staff (2015-10-31). "Blockchains: The great chain of being sure about things". The Economist. Retrieved 18 June 2016. "[Subtitle] The technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the crypto currency."
  2. Morris, David Z. (2016-05-15). "Leaderless, Blockchain-Based Venture Capital Fund Raises $100 Million, And Counting". Fortune (magazine). Retrieved 2016-05-23.
  3. Popper, Nathan (2016-05-21). "A Venture Fund With Plenty of Virtual Capital, but No Capitalist". New York Times. Retrieved 2016-05-23.
  4. Brito, Jerry & Castillo, Andrea (2013). "Bitcoin: A Primer for Policymakers". Fairfax, VA: Mercatus Center, George Mason University. Retrieved 22 October 2013.
  5. Trottier, Leo (2016-06-18). "original-bitcoin" (self-published code collection). github. Retrieved 2016-06-18. "This is a historical repository of Satoshi Nakamoto's original bit coin sourcecode".
  6. "Blockchain". Investopedia. Retrieved 19 March 2016. "Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system."
  7. Public versus Private Blockchains, Part 1 and Part 2. BitFury Group in collaboration with Jeff Garzik. October 2015.
  8. "Particularidades Desarrollo Blockchain". Retrieved 7 March 2017.
  9. Digital Assets on Public Blockchains. BitFury Group. March 2016.
  10. La revolución de la tecnología de las cadenas de bloques y su impacto en los sectores económicos. Ismael Santiago Moreno, Professor of Finance, Universidad de Sevilla. October 2016.
  11. Enabling Blockchain Innovations with Pegged Sidechains. Adam Back et al. 2014.
  12. Cadenas laterales: el gran salto adelante. Majamalu, 11 April 2014, in Economía, Opinión.
  13. Lisk libera la primera criptomoneda modular con cadenas laterales. Bitcoin PR Buzz. May 2016.
  14. Liquid Recap and FAQ. Johnny Dilley. November 2015.

External links

Link: https://es.wikipedia.org


Calidad (CALIDAD)

by System Administrator - Thursday, 9 May 2013, 12:49 AM
 

CONCEPTS RELATING TO QUALITY

Requirement: Need or expectation that is stated, generally implied or obligatory.

Grade: Category or rank given to different quality requirements for products, processes or systems that have the same functional use.

Quality: Degree to which a set of inherent characteristics fulfils requirements.

Capability: Ability of an organization, system or process to realize a product that fulfils the requirements for that product.

Customer satisfaction: The customer's perception of the degree to which their requirements have been fulfilled.


Características (CALIDAD)

by System Administrator - Thursday, 9 May 2013, 12:58 AM
 

CONCEPTS RELATING TO CHARACTERISTICS

Characteristic: Distinguishing feature.

Dependability: Collective term used to describe availability performance and its influencing factors: reliability performance, maintainability performance and maintenance support performance.

Traceability: Ability to trace the history, application or location of whatever is under consideration.

Quality characteristic: Inherent characteristic of a product, process or system related to a requirement.


CISO

by System Administrator - Monday, 13 February 2017, 9:39 PM
 

CISO (chief information security officer)

Posted by: Margaret Rouse | Contributor(s): Emily McLaughlin, Taina Teravainen

The CISO (chief information security officer) is a senior-level executive responsible for developing and implementing an information security program, which includes procedures and policies designed to protect enterprise communications, systems and assets from both internal and external threats. The CISO may also work alongside the chief information officer to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.

The chief information security officer may also be referred to as the chief security architect, the security manager, the corporate security officer or the information security manager, depending on the company's structure and existing titles. When the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the chief security officer (CSO).

CISO role and responsibilities

Instead of waiting for a data breach or security incident, the CISO is tasked with anticipating new threats and actively working to prevent them from occurring. The CISO must work with other executives across different departments to ensure that security systems are working smoothly to reduce the organization's operational risks in the face of a security attack. 

The chief information security officer's duties may include conducting employee security awareness training, developing secure business and communication practices, identifying security objectives and metrics, choosing and purchasing security products from vendors, ensuring that the company is in regulatory compliance with the rules for relevant bodies, and enforcing adherence to security practices.

Other duties and responsibilities CISOs perform include ensuring the company's data privacy is secure, managing the Computer Security Incident Response Team and conducting electronic discovery and digital forensic investigations.

CISO qualifications and certifications

A CISO is typically an individual who is able to effectively lead and manage employees and who has a strong understanding of information technology and security, but who can also communicate complicated security concepts to technical and nontechnical employees. CISOs should have experience with risk management and auditing.

Many companies require CISOs to have advanced degrees in business, computer science or engineering, and to have extensive professional working experience in information technology. CISOs also typically have relevant certifications such as Certified Information Systems Auditor and Certified Information Security Manager, issued by ISACA, as well as Certified Information Systems Security Professional, offered by (ISC)2.

CISO salary

According to the U.S. Bureau of Labor Statistics, computer and information systems managers, including CISOs, earned a median annual salary of $131,600 as of May 2015. According to Salary.com, the annual median CISO salary is $197,362. CISO salaries appear to be increasing steadily, according to research from IT staffing firms. In 2016, IT staffing firm SilverBull reported the median CISO salary had reached $224,000. 


Link: http://searchsecurity.techtarget.com


 


Cloud IoT and IT Security

by System Administrator - Thursday, 2 July 2015, 7:57 PM
 

Cloud IoT and IT Security

More organizations are deploying Internet of Things devices and platforms to improve efficiency, enhance customer service, open up new business opportunities and reap other benefits. But the IoT can expose enterprises to new security threats, with every connected object becoming a potential entry point for attackers.

This eBook will discuss:

  • What to expect from IoT security standardization efforts;
  • Whether current generation systems, like mobile device management software, will help;
  • How to approach networking to keep corporate systems secure; and 
  • How to make sure the cloud components of your IoT implementations are secure.

Please read the attached ebook.               


Cloud Mechanics: Delivering Performance in Shared Environments

by System Administrator - Monday, 22 December 2014, 9:17 PM
 

Cloud Mechanics: Delivering Performance in Shared Environments

By: VMTurbo

Expedient Data Centers, a leader in Managed and Data Center Services with locations from Cleveland to Memphis to Boston, unpacks the mechanics of how it consistently meets Service Level Agreements for its customers. This whitepaper explores how service providers use VMTurbo to provide consistent performance across all workloads, as well as the three roles a responsible managed service provider (MSP) takes in order to accomplish that directive.

Please read the attached whitepaper.

 


Cloud Orchestrator

by System Administrator - Thursday, 29 October 2015, 8:23 PM
 

Cloud Orchestrator

Posted by Margaret Rouse

A cloud orchestrator is software that manages the interconnections and interactions among cloud-based and on-premises business units. Cloud orchestrator products use workflows to connect various automated processes and associated resources. The products usually include a management portal.

To orchestrate something is to arrange various components so they achieve a desired result. In an IT context, this involves combining tasks into workflows so the provisioning and management of various IT components and their associated resources can be automated. This endeavor is more complex in a cloud environment because it involves interconnecting processes running across heterogeneous systems in multiple locations. 

Cloud orchestration products can simplify the intercomponent communication and connections to users and other apps and ensure that links are correctly configured and maintained. Such products usually include a Web-based portal so that orchestration can be managed through a single pane of glass.

When evaluating cloud orchestration products, it is recommended that administrators first map the workflows of the applications involved. This step will help the administrator visualize how complicated the internal workflow for the application is and how often information flows outside the set of app components. This, in turn, can help the administrator decide which type of orchestration product will help automate workflow best and meet business requirements in the most cost-effective manner.  

Orchestration, in an IT context, is the automation of tasks involved with managing and coordinating complex software and services. In the cloud this is complicated further because processes and transactions have to cross multiple organizations, systems and firewalls.

The goal of cloud orchestration is to, insofar as is possible, automate the configuration, coordination and management of software and software interactions in such an environment. The process involves automating workflows required for service delivery. Tasks involved include managing server runtimes, directing the flow of processes among applications and dealing with exceptions to typical workflows.
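As a rough illustration of that idea, the sketch below strings a few automated tasks into one workflow and routes failures to an exception path. The step names and the in-memory "context" are invented for the example and do not correspond to any particular orchestration product's API.

# Minimal illustration of orchestration as a workflow of automated steps.
def provision_vm(ctx):
    ctx["vm"] = "vm-001"            # in practice, a call to an IaaS API

def configure_network(ctx):
    ctx["network"] = "vlan-42"      # in practice, attach the VM to a network

def deploy_app(ctx):
    if "vm" not in ctx or "network" not in ctx:
        raise RuntimeError("dependencies missing")
    ctx["app_url"] = "http://example.internal/app"

def rollback(ctx):
    print("exception path: releasing", ctx)

def run_workflow(steps, ctx):
    # The orchestrator's job: run each automated task in order and route
    # failures to an exception workflow instead of stopping cold.
    for step in steps:
        try:
            step(ctx)
        except Exception as err:
            print(f"{step.__name__} failed: {err}")
            rollback(ctx)
            return ctx
    return ctx

result = run_workflow([provision_vm, configure_network, deploy_app], {})
print(result)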

Vendors of cloud orchestration products include Eucalyptus, Flexiant, IBM, Microsoft, VMware and V3 Systems.

The term “orchestration” originally comes from the study of music, where it refers to the arrangement and coordination of instruments for a given piece.


Link: http://searchcloudapplications.techtarget.com


Cloud vs. on-premises

by System Administrator - Tuesday, 2 May 2017, 10:26 PM
 

Cloud vs. on-premises: Finding the right balance

By Sandra Gittlen

The process of figuring out which apps work in the cloud vs. on-premises doesn't yield the same results for everyone.

Greg Downer, senior IT director at Oshkosh Corp., a manufacturer of specialty heavy vehicles in Oshkosh, Wisc., wishes he could tip the balance of on-premises vs. cloud more in the direction of the cloud, which currently accounts for only about 20% of his application footprint. However, as a contractor for the Department of Defense, his company is beholden to strict data requirements, including where data is stored.

"Cloud offerings have helped us deploy faster and reduce our data center infrastructure, but the main reason we don't do more in the cloud is because of strict DoD contract requirements for specific types of data," he says.

In Computerworld's Tech Forecast 2017 survey of 196 IT managers and leaders, 79% of respondents said they have a cloud project underway or planned, and 58% of those using some type of cloud-based system gave their efforts an A or B in terms of delivering business value.

Downer counts himself among IT leaders bullish on the cloud and its potential for positive results. "While we don't have a written cloud-first statement, when we do make new investments we look at what the cloud can offer," he says.

Oshkosh has moved some of its back-office systems, including those supporting human resources, legal and IT, to the cloud. He says most of the cloud migration has been from legacy systems to software as a service (SaaS). For instance, the organization uses ServiceNow's SaaS for IT and will soon use it for facilities management.

According to the Forecast report, a third of respondents plan to increase spending on SaaS in the next 12 months.

Cordell Schachter, CTO of New York City's Department of Transportation, says he allies with the 22% of survey respondents who plan to increase investments in a hybrid cloud computing environment. The more non-critical applications he moves out of the city's six-year-old data center, the more room he'll have to support innovative new projects such as the Connected Vehicle Pilot Deployment Program, a joint effort with the U.S. Department of Transportation's Intelligent Transportation Systems Joint Program Office.

The Connected Vehicle project, in the second year of a five-year pilot, aims to use dedicated short-range communication coupled with a network of in-vehicle and roadway sensors to automatically notify drivers of connected vehicles of traffic issues. "If there is an incident ahead of you, your car will either start braking on its own or you'll get a warning light saying there's a problem up ahead so you can avoid a crash," Schachter says. The program's intent is to reduce the more than 30,000 vehicle fatalities that occur in the U.S. each year.

Supporting that communication network and the data it generates will require more than the internal data center, though. Schachter says the effort will draw on a hybrid of on-premises and cloud-based applications and infrastructure. He expects to tap a combination of platform as a service, infrastructure as a service, and SaaS to get to the best of breed for each element of the program.

"We can use the scale of cloud providers and their expertise to do things we wouldn't be able to do internally," he says, adding that all providers must meet NYC DOT's expectations of "safer, faster, smarter and cheaper."

Apps saved for on-premises

In fact, Schachter has walled off only a few areas that aren't candidates for the cloud -- such as emergency services and email. "NYC DOT is one of the most sued entities in New York City, and we constantly need to search our corpus of emails. We have shown a net positive by keeping that application on-premises to satisfy Freedom of Information Law requests as well as litigation," he says.

The City of Los Angeles also has its share of applications that are too critical to go into the cloud, according to Ted Ross, CIO and general manager of the city's Information Technology Agency. For instance, supervisory control and data acquisition (SCADA), 911 Dispatch, undercover police operations, traffic control and wastewater management are the types of data sets that will remain on-premises for the foreseeable future.

"The impact of an abuse is so high that we wouldn't consider these applications in our first round of cloud migrations. As you can imagine, it's critical that a hacker not gain access to release sewage into the ocean water or try to turn all streetlights green at the same time," he says.

The cloud does serve as an emergency backup to the $108 million state-of-the-art emergency operations center. "If anything happens to the physical facility, our software, mapping and other capabilities can quickly spin up in the cloud," he says, adding that Amazon Web Services and Microsoft Azure provide many compelling use cases.

The city, with more than 1,000 virtual servers on-premises, considers the cloud a cost-effective godsend. "We very much embrace the cloud because it provides an opportunity to lower costs, makes us more flexible and agile, offers off-site disaster recovery, empowers IT personnel, and provides a better user experience," he says.


As an early adopter of Google's Gmail in 2010, Ross appreciates the value of the cloud, so much so that in 2014, the city made cloud a primary business model, starting with SaaS, which he calls "a gateway drug to other cloud services."

Eventually, the city ventured into infrastructure as a service, including using "a lot of Amazon Web Services," which Ross describes as more invasive than SaaS and more in need of collaboration between the service provider and the network team. "You have to be prepared to have a shared security model and to take the necessary steps to enact it," he says. Cloud computing also requires additional network bandwidth to reduce latency and maximize performance, he adds.

Other reasons for saying no to the cloud

As much as Ross is a cloud promoter, he says he fully understands the 21% of respondents to Computerworld's Forecast survey who say they have no plans to move to the cloud. "I get worried when users simply want to spin up anything anywhere and are only concerned about functionality, not connectivity and security."

Ron Heinz, founder and managing director of venture capital firm Signal Peak Ventures, says there will always be a market for on-premises applications and infrastructure. For instance, one portfolio client that develops software for accountants found that 40% of its market don't want to move their workflow to the cloud.

Heinz attributes the hesitation to more mature accounting professionals and those with security concerns. "Everybody automatically assumes there is a huge migration to the cloud. But there will always be a segment that will never go to the cloud as long as you have strong virtual private networks and strong remote access with encrypted channels," he says.

Greg Collins, founder and principal analyst at analyst firm Exact Ventures, has found clients usually stick with on-premises when they are still depreciating their servers and other gear. "They have the attitude 'if it ain't broke, don't fix it,'" he says.

He also believes the cloud is still in its early days and will only grow as the installed base of on-premises equipment hits end of life.

Performance gains

"We have seen a significant shift in the last couple of years in the interest for public cloud," says Matthew L. Taylor, managing director of consulting firm Accenture Strategy. Accenture, a company of more than 394,000 employees, has most of its own applications hosted in the public cloud.

Many of his clients are not moving as fast. "I wouldn't say the majority of our clients' application loads are in the public cloud today; that's still the opportunity," he says.

Of the clients that have moved to the cloud, very few have gone back to on-premises. "If they did, it wasn't because the cloud-based capabilities were not ready; it was because the company wasn't ready and hadn't thought the migration, application or value case through," Taylor says, adding that others who floundered did so because they couldn't figure out how to wean off their legacy infrastructure and run it in tandem with the cloud.

Most of his clients have been surprised to find that lower service costs have not been the biggest benefit of the cloud. "In the end, savings don't come from technology tools, they come from operational shifts and performance gains," he says.

For instance, a bank in Australia that he wouldn't name moved a critical application to the cloud while keeping two other applications on-premises, which caused performance problems: the cloud app relied heavily on the on-premises applications, so performance slowed as they tried to communicate with one another. Once the bank moved all three applications to the cloud, it found the applications had never performed better, and downtime and maintenance improved.

Kas Naderi, senior vice president of Atlanticus Holdings Corp., a specialty finance company focused on underserved consumers in the U.S., U.K., Guam and Saipan, had a similar experience when the company "lifted and shifted" its entire application portfolio to the cloud. "Every one of our applications performed as good or better than in our data center, which had hardware that was ten years old," he says.

In 2014, the company took all existing applications and ran them "as is" in the cloud environment. Atlanticus relied on consulting firm DISYS to not only validate Atlanticus' migration approach, but also to help staff a 24-hour, "follow the sun" implementation. "They enabled us to accelerate our timeline," he says. In addition, DISYS, an Amazon Web Services partner, lent its expertise to explain what would and wouldn't work in Amazon's cloud.

Atlanticus deployed a federated cloud topology distributed among Amazon Web Services, Microsoft Azure, Zadara cloud storage, InContact Automatic Call Distribution, and Vonage phone system, with applications sitting where they operate best -- such as Microsoft Active Directory on Azure. The company front-ends Amazon Web Services with a private cloud that handles security tasks including intrusion detection/prevention and packet inspection. "There is an absolute need for private cloud services to encapsulate a level of security and control that might not be available in the public cloud," Naderi says.

In its next phase of cloud migration, Atlanticus will assess whether its legacy applications have SaaS or other cloud-based alternatives that perform even better. Having lifted everything into the cloud "as is," including legacy systems, the company will now look for better replacements for those legacy apps.

Oshkosh ran a similar exercise and found that cloud-based SharePoint outperformed on-premises SharePoint and improved functionality. For instance, the company has been able to create a space where external suppliers can interact with internal employees, safely exchanging critical information. "That was challenging for on-premises," Downer says.

He adds: "We also are using various CRM cloud applications within some segments, and have started to meet niche business requirements on the shop floor with cloud solutions."

Staffing the cloud

As organizations move to the cloud, they sometimes harbor the misconception that migration means they need fewer IT staff. These IT leaders say that's not the case. Instead, they've gotten more value out of their skilled workforce by retraining them to handle the demands of cloud services.

Greg Downer, senior IT director at specialty vehicle manufacturer Oshkosh Corp.: "We retrained our legacy people, which went well. For instance, we trained our BMC Remedy administrators on the ServiceNow SaaS. We're not just using 10% to 20% of a large on-premises investment, but getting the full value of the platform subscription we are paying for."

Kas Naderi, senior vice president of technology, specialty finance company Atlanticus Holdings Corp.: "Our staff used to be extended beyond the normal 40-hour week, handling ad-hoc requests, emergencies, upgrades, security, etc. We were blessed to have a very flexible and high-IQ staff and were happy to shift their day-to-day responsibilities away from upkeep and maintenance to leadership of how to best leverage these cloud-based platforms for better quality of service. We have become a lot more religious on operating system upgrades and security postures and a lot more strategic on documentation and predictability of services. We went from racking and stacking and maintaining the data center to a business purpose."

Ted Ross, general manager of information technology and CIO, city of Los Angeles: "Moving to the cloud requires a sizeable skills change, but it's also a force multiplier that lets fewer hands do a lot more. We're not a start-up; we're a legacy enterprise. Our data center had a particular set of processes and its own ecosystem and business model. We want to continue that professionalism, but make the pivot to innovative infrastructure. We still have to be smart about data, making sure it's encrypted at rest, and working through controls. The cloud expands our ecosystem considerably, but of course we still don't want to allow critical information into the hands of the wrong people."-- Sandra Gittlen


Link: http://www.computerworld.com


Cloud-Based Disaster Recovery on AWS

by System Administrator - Monday, 5 January 2015, 8:38 PM
 

Best Practices: Cloud-Based Disaster Recovery on AWS

This book explains Cloud-based Disaster Recovery in comparison to traditional DR, explains its benefits, discusses preparation tips, and provides an example of a globally recognized, highly successful Cloud DR deployment.

Please read the attached PDF

Using AWS for Disaster Recovery

by Jeff Barr

Disaster recovery (DR) is one of the most important use cases we hear about from our customers. Having your own DR site in the cloud, ready and on standby without paying for hardware, power, bandwidth, cooling, space or system administration, and being able to launch resources quickly when you really need them (when disaster strikes in your datacenter), makes the AWS cloud a strong fit for DR. You can quickly recover from a disaster and ensure business continuity for your applications while keeping costs down.

Disaster recovery is about preparing for and recovering from a disaster. Any event that has a negative impact on your business continuity or finances could be termed a disaster: hardware or software failure, a network outage, a power outage, physical damage to a building such as fire or flooding, human error, or some other significant event.

In that regard, we are very excited to release the Using AWS for Disaster Recovery whitepaper. The paper highlights various AWS features and services that you can leverage for your DR processes and shows different architectural approaches to recovering from a disaster. Depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) -- two commonly used industry terms when building a DR strategy -- you have the flexibility to choose the right approach for your budget. The approaches range from simple backup and restore from the cloud to a full-scale multi-site solution deployed both on-site and on AWS with data replication and mirroring.

The paper further provides recommendations on how you can improve your DR plan and leverage the full potential of AWS for your Disaster Recovery processes. 

The AWS cloud not only makes DR cost-effective but also makes it easy, secure and reliable. With APIs and the right automation in place, you can fire up and test whether your DR solution really works (every month, if you like) and be prepared ahead of time. You can reduce recovery times by quickly provisioning pre-configured resources (AMIs) when you need them, or cut over to an already provisioned DR site and then scale gradually as needed. You can bake the necessary security best practices into an AWS CloudFormation template and provision the resources in an Amazon Virtual Private Cloud (VPC), all at a fraction of the cost of conventional DR.
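As a hedged example of the "pre-configured AMI" approach mentioned above, the following boto3 sketch launches recovery instances from a pre-baked image into a recovery subnet. The AMI ID, subnet ID, region and instance type are placeholders, and the call requires valid AWS credentials.

# A minimal sketch of the "pilot light" idea: when disaster strikes, launch
# pre-configured AMIs into your recovery VPC with boto3. The AMI ID, subnet
# ID, region and instance type below are placeholders.
import boto3

def launch_dr_fleet(ami_id, subnet_id, count=2, instance_type="m5.large"):
    ec2 = boto3.client("ec2", region_name="us-west-2")
    resp = ec2.run_instances(
        ImageId=ami_id,
        SubnetId=subnet_id,
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "disaster-recovery"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

# Example (requires AWS credentials and real resource IDs):
# ids = launch_dr_fleet("ami-0123456789abcdef0", "subnet-0123456789abcdef0")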

Link: https://aws.amazon.com

AWS Architecture Blog

 


Cloudlet

by System Administrator - Tuesday, 14 February 2017, 11:31 AM
 

Cloudlet

Posted by: Margaret Rouse | Contributor(s): Kathleen Casey

A cloudlet is a small-scale data center or cluster of computers designed to quickly provide cloud computing services to mobile devices, such as smartphones, tablets and wearable devices, within close geographical proximity.

The goal of a cloudlet is to improve the response time of applications running on mobile devices by using low-latency, high-bandwidth wireless connectivity and by hosting cloud computing resources, such as virtual machines, physically closer to the mobile devices accessing them. This is intended to eliminate the wide area network (WAN) latency delays that can occur in traditional cloud computing models.

The cloudlet was specifically designed to support interactive and resource-intensive mobile applications, such as those for speech recognition, language processing, machine learning and virtual reality.
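To make the latency argument concrete, here is a small, purely illustrative Python sketch in which a mobile client measures round-trip time to a nearby cloudlet and to a distant public cloud endpoint and offloads to whichever answers faster. Both hostnames are hypothetical.

# Illustrative only: pick the offload target (nearby cloudlet vs. distant
# public cloud) by measuring connection round-trip latency.
import socket, time

def rtt_ms(host, port=443, timeout=1.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")          # unreachable targets lose automatically
    return (time.monotonic() - start) * 1000.0

CANDIDATES = {
    "cloudlet.local.example": "LAN cloudlet",
    "api.cloud.example.com": "public cloud region",
}

best = min(CANDIDATES, key=rtt_ms)
print(f"offloading to {CANDIDATES[best]} ({best}), rtt={rtt_ms(best):.1f} ms")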

 

Key differences between a cloudlet and a public cloud data center

A cloudlet is considered a form of cloud computing because it delivers hosted services to users over a network. However, a cloudlet differs from a public cloud data center, such as those operated by public cloud providers like Amazon Web Services, in a number of ways.

First, a cloudlet is self-managed by the businesses or users that employ it, while a public cloud data center is managed full-time by a cloud provider. Second, a cloudlet predominantly uses a local area network (LAN) for connectivity, versus the public Internet. Third, a cloudlet is employed by fewer, more localized users than a major public cloud service. Finally, a cloudlet contains only "soft state" copies of data, such as a cache copy, or code that is stored elsewhere.

The cloudlet prototype

A prototype implementation of a cloudlet was originally developed by Carnegie Mellon University as a research project, starting in 2009. The term cloudlet was coined by computer scientists Mahadev Satyanarayanan, Victor Bahl, Ramón Cáceres and Nigel Davies.


Link: http://searchcloudcomputing.techtarget.com


Common Vulnerabilities and Exposures (CVE)

by System Administrator - Thursday, 30 April 2015, 11:07 PM
 

Common Vulnerabilities and Exposures (CVE)


Compliance Audit

by System Administrator - Saturday, 14 March 2015, 1:53 PM
 

Compliance Audit

Posted by Margaret Rouse

A compliance audit is a comprehensive review of an organization's adherence to regulatory guidelines. Independent accounting, security or IT consultants evaluate the strength and thoroughness of compliance preparations. Auditors review security policies, user access controls and risk management procedures over the course of a compliance audit.

What, precisely, is examined in a compliance audit will vary depending upon whether an organization is a public or private company, what kind of data it handles and whether it transmits or stores sensitive financial data. For instance, SOX requirements mean that any electronic communication must be backed up and secured with reasonable disaster recovery infrastructure. Healthcare providers that store or transmit e-health records, like personal health information, are subject to HIPAA requirements. Financial services companies that transmit credit card data are subject to PCI DSS requirements. In each case, the organization must be able to demonstrate compliance by producing an audit trail, often generated by data from event log management software.

Compliance auditors will generally ask CIOs, CTOs and IT administrators a series of pointed questions over the course of an audit. These may include what users were added and when, who has left the company, whether user IDs were revoked and which IT administrators have access to critical systems. IT administrators prepare for compliance audits using event log managers and robust change management software to track and document authentication and controls in IT systems. The growing category of GRC (governance, risk management and compliance) software enables CIOs to quickly show auditors (and CEOs) that the organization is in compliance and not subject to costly fines or sanctions.
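As an illustration of turning event log data into answers to those auditor questions, the sketch below summarizes user-add and user-revoke events from a CSV export. The column names and action values are assumptions for the example, not any specific log manager's schema.

# A minimal sketch of building an audit-trail summary from an event log
# export. The CSV layout (timestamp,user,action) is an assumption.
import csv
from collections import defaultdict

def summarize_access_events(path):
    added, revoked = defaultdict(list), defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] == "user_added":
                added[row["user"]].append(row["timestamp"])
            elif row["action"] == "user_revoked":
                revoked[row["user"]].append(row["timestamp"])
    never_revoked = [u for u in added if u not in revoked]
    return {"added": dict(added), "revoked": dict(revoked),
            "added_but_never_revoked": never_revoked}

# print(summarize_access_events("access_events.csv"))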

 


Link: http://searchcompliance.techtarget.com


Conformity (CALIDAD)

by System Administrator - Thursday, 9 May 2013, 1:02 AM
 

CONCEPTS RELATED TO CONFORMITY

Defect: Non-fulfilment of a requirement related to an intended or specified use.

Nonconformity: Non-fulfilment of a requirement.

Conformity: Fulfilment of a requirement.

Release: Permission to proceed to the next stage of a process.

Preventive action: Action taken to eliminate the cause of a potential nonconformity or other potentially undesirable situation.

Corrective action: Action taken to eliminate the cause of a detected nonconformity or other undesirable situation.

Correction: Action taken to eliminate a detected nonconformity.

Rework: Action taken on a nonconforming product to make it conform to the requirements.

Repair: Action taken on a nonconforming product to make it acceptable for its intended use.

Regrade: Alteration of the grade of a nonconforming product so that it conforms to requirements differing from the initial ones.

Scrap: Action taken on a nonconforming product to preclude its originally intended use.

Concession: Permission to use or release a product that does not conform to specified requirements.

Deviation permit: Permission to depart from the originally specified requirements of a product prior to its realization.


Connection Broker

by System Administrator - Monday, 9 March 2015, 3:07 AM
 

5 ways a Connection Broker Simplifies Hosted Environments

With all the moving parts to think about when moving resources into the data center, a connection broker might be the last thing on your mind.

Waiting until you've designed the rest of your data center to consider the connection broker can be detrimental to the overall usability of your system. 

This is why we've created our new eBook, which outlines five scenarios where including a connection broker into your design from the get-go can future-proof and improve your hosted desktop solution.  

Download our new eBook and learn about:

  • Supporting mixed virtual and physical environments
  • Migrating between virtualization and hosted desktop solutions
  • Supporting a wide range of users and use cases
  • And more!

Please read the attached whitepaper


Converged Infrastructure

by System Administrator - Monday, 30 November 2015, 5:20 PM
 

Achieve Your IT Vision With Converged Infrastructure

Whether you've already deployed a converged system or have future deployment plans, you can maximize that investment with automation. This paper outlines 4 steps to reduce your IT complexity with converged infrastructure so your team gains the freedom to innovate and drive bottom-line results.

 

Please read the attached whitepaper.


Converged Infrastructures Deliver the Full Value of Virtualization

by System Administrator - Sunday, 27 December 2015, 3:27 PM
 

Converged Infrastructures Deliver the Full Value of Virtualization

By Ravi Chalaka | Hitachi Data Systems

Satisfied with your virtualization efforts?
You could be.

How does an organization modernize IT and get more out of infrastructure resources? That’s a question many CIOs ask themselves. With hundreds or even thousands of physical hardware resources, increasing complexity and massive data growth, you need new, reliable ways to deliver IT services in an on-demand, flexible and scalable fashion. You also must address requests for faster delivery of business services, competition for resources and trade-offs between IT agility and vendor lock-in.

Please read the attached whitepaper.


CRM Handbook

by System Administrator - Thursday, 9 July 2015, 4:31 PM
 

 

Please read the attached handbook.


Crowdsourced Testing

by System Administrator - Wednesday, 29 March 2017, 9:40 PM
 

The solution to speedy mobile app delivery? It's crowdsourced testing

Crowdsourced Testing is a web platform that connects companies specializing in software and website development with an international network of quality assurance professionals (testers) who try out their products, find defects and report them quickly so they can be fixed. The clients are the companies that pay for the service; the users are the testers responsible for the improvements. Crowdsourced Testing's testers are independent workers who work from home, all with prior experience in quality assurance for software products.

 

Sometimes you just need a lot of users playing with your app to find out how it's really working. Enter crowdsourced testing. It's the latest strategy to speed up your mobile dev.

At a time when the pressure to develop, test and release mobile apps quickly has never been more intense, the idea of crowdsourced testing is growing in popularity. The concept is simple: A crowdsourced testing company can offer thousands of testers in different locations around the world a wide swath of devices, and by literally throwing a "crowd" at the problem, testing that might take weeks with a small internal team can be done on a weekend, said Peter Blair, vice president of marketing at Applause. And it's an idea that has apparently caught hold. According to data from market research firm Gartner Group, there were 30 crowdsourced testing companies operating at the end of last year, offering fully vetted (qualified) testers, up from just 20 companies in 2015.

Priyanka Halder, director of quality assurance at HomeMe, is no stranger to crowdsourced testing. She participated in a number of "bug battles" at uTest, a software testing community that also offers crowdsourced testing opportunities. So when she joined the small startup HomeMe she immediately began thinking about a crowdsourced testing solution. 

"We're a pretty small company and we needed a larger number of people looking at our app and on a tight budget," she said. "This is the perfect model for us because we can't afford a big team on our site."


With crowdsourced testing it is all about the big team. Blair said Applause has over 250,000 fully vetted testers, most of whom are QA professionals with full-time jobs who do this on the side. These testers are located around the world, and are paired with "pretty much every mobile device you can think of," he said. So a crowdsourced customer wouldn't have to worry about having access to every single version of an Android phone, which Blair said is a huge selling point.

But the biggest issue, he said, is that companies are hungry to see how real users actually interface with their products. "People just do things that no system, no automation and no engineer could ever predict they'd do," he explained. "Customers who've used us just to augment their teams many times end up staying on because they like seeing the results of our exploratory testing," he said, and they can't get that information easily any other way.

Halder said she looked at a number of crowdsourced testing options before settling on Applause. The biggest plus for her was how easy it was to get the testing feedback and how mature the company's process was. "It can be a nightmare to coordinate how to get the information back from the testers. This ended up being a way for us to get more people actually using our app for less money and get all the feedback we need."

 


Link: http://searchsoftwarequality.techtarget.com

 

 

 


CROWDSOURCING FOR ENTERPRISE IT

by System Administrator - Tuesday, 14 July 2015, 6:45 PM
 

10 KEY QUESTIONS (AND ANSWERS) ON CROWDSOURCING FOR ENTERPRISE IT

A starting guide for augmenting technical teams with crowdsourced design, development and data science talent

A crowdsourcing platform is essentially an open marketplace for technical talent. The requirements, timelines, and economics behind crowdsourced projects are critical to successful outcomes. Different crowdsourcing communities offer an equally varied range of payments for open innovation challenges. Crowdsourcing is meritocratic: contributions are rewarded based on value. However, the cost-efficiencies of a crowdsourced model reside in the model's direct access to talent, not in the compensated value for that talent. Fair market value is expected for any work output. The major cost differences between legacy sourcing models and a crowdsourcing model are (1) the ability to directly tap into technical expertise, and (2) that costs are NOT based around time or effort.

Please read the attached whitepaper.


Customer Journey Map

by System Administrator - Thursday, 21 September 2017, 7:00 PM
 

Customer journey map


Customer Service Model

by System Administrator - Wednesday, 11 November 2015, 12:59 PM
 

Mastering the Modern Customer Service Model

by Wheelhouse Enterprises

Perfecting your in-house customer service system was never easy until now. The cloud has made customer service tools readily available and revolutionized how they are implemented. Our newest white paper details the most modern, up-to-date customer service tools for your organization. Whether you're looking for specific tools for your contact center or CRM, we have you covered.

Please read the attached whitepaper.



D (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:36 PM
 

Data center design standards bodies

by System Administrator - Thursday, 12 March 2015, 7:49 PM
 

Words to go: Data center design standards bodies

 by Meredith Courtemanche

Need a handy reference sheet of the various data center standards organizations? Keep this list by your desk as a reference.

Several organizations produce data center design standards, best practices and guidelines. This glossary lets you keep track of which body produces which standards, and what each acronym means.

Print or bookmark this page for a quick reference of the organizations and associated websites and standards that data center designers and operators need to know.

  • ASHRAE: The American Society of Heating, Refrigerating and Air-Conditioning Engineers produces data center standards and recommendations for heating, ventilation and air conditioning installations. The technical committee develops standards for data centers' design, operations, maintenance and energy efficiency. Data center designers should consult all technical documents from ASHRAE TC 9.9: Mission Critical Facilities, Technology Spaces and Electronic Equipment. www.ashrae.org
  • BICSI: The Building Industry Consulting Service International Inc. is a global association that covers cabling design and installation. ANSI/BICSI 002-2014, Data Center Design and Implementation Best Practices, covers electrical, mechanical and telecommunications structure in a data center, with comprehensive considerations from fire protection to data center infrastructure management. www.bicsi.org
  • BREEAM: The BRE Environmental Assessment Method (BREEAM) is an environmental standard for buildings in the U.K. and nearby countries, covering design, construction and operation. The code is part of a framework for sustainable buildings that takes into account economic and social factors as well as environmental. It is managed by BRE Global, a building science center focused on research and certification. http://www.breeam.org/
  • The Green Grid Association: The Green Grid Association is well-known for its PUE metric, defined as power usage effectiveness or efficiency. PUE measures how well data centers use power: the ratio of total building power divided by the power used by the IT equipment alone (a short worked example follows this list). The closer to 1 this ratio comes, the more efficiently a data center is consuming power. Green Grid also publishes metrics for water (WUE) and carbon (CUE) usage effectiveness based on the same concept. www.thegreengrid.org
  • IDCA: The International Data Center Authority is primarily known as a training institute, but also publishes a holistic data center design and operations ranking system: the Infinity Paradigm. Rankings cover seven layers of data centers, from location and facility through data infrastructure and applications. www.idc-a.org
  • IEEE: The Institute of Electrical and Electronics Engineers provides more than 1,300 standards and projects for various technological fields. Data center designers and operators rely on the Ethernet network cabling standard IEEE 802.3ba, as well as IEEE 802 standards, for local area networks such as IEEE 802.11 wireless LAN specifications. www.ieee.org
  • ISO: The International Organization for Standardization is an overarching international conglomeration of standards bodies. The ISO releases a wide spectrum of data center standards, several of which apply to facilities. ISO 9001 measures companies' quality control capabilities. ISO 27001 certifies an operation's security best practices, regarding physical and data security as well as business protection and continuity efforts. Other ISO standards that data center designers may require include environmental practices, such as ISO 14001 and ISO 50001. www.iso.org
  • LEED: The Leadership in Energy and Environmental Design is an international certification for environmentally conscious buildings and operations managed by the U.S. Green Building Council. Five rating systems -- building design, operations, neighborhood development and other areas -- award a LEED level -- certified, silver, gold or platinum -- based on amassed credits. The organization provides a data-center-specific project checklist, as the LEED standard includes adaptations for the unique requirements of data centers. www.usgbc.org
  • NFPA: The National Fire Protection Association publishes codes and standards to minimize and avoid damage from hazards, such as fire. No matter how virtualized or cloudified your IT infrastructure, fire regulations still govern your workloads. NFPA 75 and 76 standards dictate how data centers contain cold/cool and hot aisles with obstructions like curtains or walls. NFPA 70 requires an emergency power off button for the data center to protect emergency respondents. www.nfpa.org
  • NIST: The National Institute of Standards and Technology oversees measurements in the U.S. NIST's mission includes research on nanotechnology for electronics, building integrity and diverse other industries. For data centers, NIST offers recommendations on authorization and access. Refer to special publications 800-53, Recommended Security Controls for Federal Information Systems, and SP 800-63, Electronic Authentication Guideline. www.nist.gov
  • OCP: The Open Compute Project is known for its server and network design ideas. But OCP, started by Internet giant Facebook to promote open source in hardware, also branches into data center design. OCP's Open Rack and optical interconnect projects call for 21 inch rack slots and intra-rack photonic connections. OCP's data center design optimizes thermal efficiency with 277 Volt AC power and tailored electrical and mechanical components. www.opencompute.org
  • OIX: The Open IX Association focuses on Internet peering and interconnect performance from data centers and network operators, along with content creators, distribution networks and consumers. It publishes technical requirements for Internet exchange points and data centers that support them. The requirements cover designed resiliency and safety of the data center, as well as connectivity and congestion management. www.open-ix.org
  • Telcordia: Telcordia is part of Ericsson, a communications technology company. The Telcordia GR-3160 Generic Requirements for Telecommunications Data Center Equipment and Spaces particularly relates to telecommunications carriers, but the best practices for network reliability and organizational simplicity can benefit any data center that delivers applications to end users or hosts applications for third-party operators. The standard deals with environmental protection and testing for hazards, ranging from earthquakes to lightning surges. www.ericsson.com
  • TIA: The Telecommunications Industry Association produces communications standards that target reliability and interoperability. The group's primary data center standard, ANSI/TIA-942-A, covers network architecture and access security, facility design and location, backups and redundancy, power management and more. TIA certifies data centers to ranking levels on TIA-942, based on redundancy in the cabling system. www.tiaonline.org
  • The Uptime Institute: The Uptime Institute certifies data center designs, builds and operations on a basis of reliable and redundant operating capability to one of four tier levels. Data center designers can certify plans; constructed facilities earn tier certification after an audit; operating facilities can prove fault tolerance and sustainable practices. Existing facilities, which cannot be designed to meet tier level certifications, can still obtain the Management Operations Stamp of Approval from Uptime. www.uptimeinstitute.com
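As promised above, a short worked example of the PUE calculation; the kilowatt figures are invented for illustration.

# PUE = total facility power / IT equipment power (illustrative numbers).
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1500.0, 1000.0))   # 1.5: a third of the power goes to overhead
print(pue(1100.0, 1000.0))   # 1.1: closer to the ideal of 1.0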


 

 

 


Data Center Efficiency

by System Administrator - Wednesday, 26 August 2015, 7:17 PM
 

eGuide: Data Center Efficiency

APC by Schneider Electric

Data center efficiency is one of the cornerstones of an effective IT infrastructure. Data centers that deliver energy efficiency, high availability, density, and scalability create the basis for well-run IT operations that fuel the business. With the right approach to data center solutions, organizations have the potential to significantly save on costs, reduce downtime, and allow for future growth.

In this eGuide, Computerworld, CIO, and Network World examine recent trends and issues related to data center efficiency. Read on to learn how a more efficient data center can make a difference in your organization.

Please read the attached eGuide.


Data Citizen

by System Administrator - Sunday, 19 November 2017, 12:51 PM
 

Data Citizen

Posted by: Margaret Rouse

A data citizen is an employee who relies on digital information to make business decisions and perform job responsibilities.

In the early days of computing, it took a specialist with a strong background in data science to mine structured data for information. Today, business intelligence (BI) tools allow employees at every level of an organization to run ad hoc reports on the fly. Changes in how data can be analyzed and visualized allow workers who have no background in mathematics, statistics or programming to make data-driven decisions.

In both a government and data context, however, citizenship comes with responsibilities as well as rights. For example, a citizen who has been granted the right of free speech also has the responsibility to obey federal, state and local laws -- and an employee who has been granted the right to access corporate data also has a responsibility to support the company's data governance policies.

As data citizens increasingly expect more transparent, accessible and trustworthy data from their employers, it has become more important than ever for the rights and responsibilities of both parties to be defined and enforced through policy. To that end, data governance initiatives generally focus on high-level policies and procedures, while data stewardship initiatives focus on maintaining agreed-upon data definitions and formats, identifying data quality issues and ensuring that business users adhere to specified standards.

In addition to enforcing the data citizen's right to easily access trustworthy data, governance controls ensure that data is used in a consistent manner across the enterprise. To support ongoing compliance with external government regulations, as well as internal data policies, audit procedures should also be included in the controls.

Link: http://whatis.techtarget.com


Data Confabulation

by System Administrator - Tuesday, 12 May 2015, 12:30 AM
 

Data Confabulation

Posted by: Margaret Rouse

Data confabulation is a business intelligence term for the selective and possibly misleading use of data to support a decision that has already been made.

Within the volumes of big data there are often small bits of evidence that contradict even clearly data-supported facts. Generally, this data noise can be recognized for what it is and, in the context of the full body of data, it is clearly outweighed. When data is selectively chosen from vast sources, however, a picture can be created to support a desired view, decision or argument that a more rigorously controlled method would not support.

Data confabulation can be used both intentionally and unintentionally to promote the user’s viewpoint. When a decision is made before data is examined, there is a danger of falling prey to confirmation bias even when people are trying to be honest. The term confabulation comes from the field of psychology, where it refers to the tendency of humans to selectively remember, misinterpret or create memories to support a decision, belief or sentiment.
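A small, invented example makes the risk concrete: the same series of monthly figures supports growth when taken whole, but a hand-picked subset appears to show decline.

# Illustration of selective use of data: the full sample supports one
# conclusion, a cherry-picked subset suggests the opposite. Numbers invented.
import statistics

monthly_change = [1.2, 0.8, 1.5, -0.4, 1.1, 0.9, -0.2, 1.3, 1.0, 0.7, -0.1, 1.4]

full_view = statistics.mean(monthly_change)                       # ~ +0.77
cherry_picked = statistics.mean([x for x in monthly_change if x < 0])  # negative

print(f"all 12 months: {full_view:+.2f}")
print(f"only the 3 down months: {cherry_picked:+.2f}")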



Data Lake

by System Administrator - Thursday, 25 June 2015, 10:29 PM
 

 

Author: John O’Brien

It would be an understatement to say that the hype surrounding the data lake is causing confusion in the industry. Perhaps this is an inherent consequence of the data industry's need for buzzwords: it's not uncommon for a term to rise to popularity long before there is a clear definition and repeatable business value. We have seen this phenomenon many times as concepts including "big data," "data reservoir," and even the "data warehouse" first emerged in the industry. Today's newcomer to the data world vernacular, the "data lake," has drawn both the scrutiny of pundits who harp on the risk of digging a data swamp and the enthusiasm of those who see the concept's potential to have a profound impact on enterprise data architecture. As the data lake term begins to come off its hype cycle and face the pressures of pragmatic IT and business stakeholders, the demand for clear data lake definitions, use cases, and best practices continues to grow.

This paper aims to clarify the data lake concept by combining fundamental data and information management principles with the experiences of existing implementations to explain how current data architectures will transform into a modern data architecture. The data lake is a foundational component and common denominator of the modern data architecture, enabling and complementing specialized components such as enterprise data warehouses, discovery-oriented environments, and highly specialized analytic or operational data technologies within or external to the Hadoop ecosystem. The data lake has therefore become the metaphor for the transformation of enterprise data management, and its definition will continue to evolve according to established principles, drivers, and best practices as companies apply hindsight from real implementations.

Please read the attached guide.

 


Data Profiling

by System Administrator - Tuesday, 30 December 2014, 3:24 PM
 

Data Profiling

Posted by Margaret Rouse

Data profiling, also called data archeology, is the statistical analysis and assessment of data values within a data set for consistency, uniqueness and logic.

The data profiling process cannot identify inaccurate data; it can only identify business rule violations and anomalies. The insight gained by data profiling can be used to determine how difficult it will be to use existing data for other purposes. It can also be used to provide metrics to assess data quality and help determine whether or not metadata accurately describes the source data.

Profiling tools evaluate the actual content, structure and quality of the data by exploring relationships that exist between value collections both within and across data sets. For example, by examining the frequency distribution of different values for each column in a table, an analyst can gain insight into the type and use of each column. Cross-column analysis can be used to expose embedded value dependencies, and inter-table analysis allows the analyst to discover overlapping value sets that represent foreign key relationships between entities.
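The sketch below illustrates both kinds of analysis with pandas on two tiny, made-up tables: a per-column frequency distribution and a cross-table check of how completely one column's values are covered by another (a candidate foreign-key relationship).

# A small profiling sketch with pandas. Table and column names are made up.
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3, 4],
                       "customer_id": [10, 10, 11, 99]})
customers = pd.DataFrame({"customer_id": [10, 11, 12]})

# Column-level profile: the frequency distribution hints at type and use
print(orders["customer_id"].value_counts())

# Inter-table profile: what fraction of orders.customer_id exists in customers?
coverage = orders["customer_id"].isin(customers["customer_id"]).mean()
print(f"foreign-key coverage: {coverage:.0%}")   # 75% -> one orphaned value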

See also: data modeling, data dictionary, data deduplication

Link: http://searchdatamanagement.techtarget.com


Data Silo

by System Administrator - Monday, 20 July 2015, 4:59 PM
 

Data Silo

Posted by Margaret Rouse

A data silo is a repository of fixed data that an organization does not regularly use in its day-to-day operation.

So-called siloed data cannot exchange content with other systems in the organization. The expressions "data silo" and "siloed data" arise from the inherent isolation of the information: the data in a silo remains sealed off from the rest of the organization, like grain in a farm silo is closed off from the outside elements.

In recent years, data silos have faced increasing criticism as an impediment to productivity and a danger to data integrity. Data silos also increase the risk that current (or more recent) data will accidentally get overwritten with outdated (or less recent) data. When two or more silos exist for the same data, their contents might differ, creating confusion as to which repository represents the most legitimate or up-to-date version.

Cloud-based data, in contrast to siloed data, can continuously evolve to keep pace with the needs of an organization, its clients, its associates, and its customers. For frequently modified information, cloud backup offers a reasonable alternative to data silos, especially for small and moderate quantities of data. When stored information does not need to be accessed regularly or frequently, it can be kept in a single cloud archive rather than in multiple data silos, ensuring data integration (consistency) among all members and departments in the organization. For these reasons, many organizations have begun to move away from data silos and into cloud-based backup and archiving solutions.


Link: http://searchcloudapplications.techtarget.com

 


Database-as-a-Service (DBaaS)

by System Administrator - Monday, 16 February 2015, 3:42 PM
 

Why Database-as-a-Service (DBaaS)?

IBM Cloudant manages, scales and supports your fast-growing data needs 24x7, so you can stay focused on new development and growing your business.

Fully managed, instantly provisioned, and highly available

In a large organization, it can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility. Cloudant DBaaS enables instant provisioning of your data layer, so you can begin new development whenever you need to. Unlike Do-It-Yourself (DIY) databases, DBaaS solutions like Cloudant provide specific levels of data layer performance and uptime. The managed DBaaS capability can help reduce the risk of service delivery failure for you and your projects.

Build more. Grow more

With a fully managed NoSQL database service, you do not have to worry about the time, cost and complexity associated with database administration, architecture and hardware. Now you can stay focused on developing new apps and growing your business to new heights.

Who uses DBaaS?

Companies of all sizes, from start-ups to mega-users, use Cloudant to manage data for large or fast-growing web and mobile apps in ecommerce, online education, gaming, financial services, and other industries. Cloudant is best suited for applications that need a database to handle a massively concurrent mix of low-latency reads and writes. Its data replication and synchronization technology also enables continuous data availability, as well as offline app usage for mobile or remote users.

As a JSON document store, Cloudant is ideal for managing multi-structured or unstructured data. Advanced indexing makes it easy to enrich applications with location-based (geospatial) services, full-text search, and near real-time analytics.
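
As a rough illustration, the sketch below stores and reads one JSON document over a CouchDB-style HTTP API of the kind Cloudant exposes; the account URL, credentials and database name are placeholders, not real values:

# Store and read a JSON document over a CouchDB-style HTTP API (sketch).
import requests

BASE = "https://ACCOUNT.cloudant.com"   # placeholder account URL
AUTH = ("API_KEY", "API_SECRET")        # placeholder credentials
DB = "orders"

# Create the database; if it already exists the server answers 412, which is harmless here.
requests.put(f"{BASE}/{DB}", auth=AUTH)

# Documents are schemaless JSON, so fields can vary from one document to the next.
doc = {"_id": "order-1001", "customer": "ACME", "items": ["widget"], "total": 19.5}
resp = requests.put(f"{BASE}/{DB}/{doc['_id']}", json=doc, auth=AUTH)
print(resp.status_code, resp.json())

# Read it back.
print(requests.get(f"{BASE}/{DB}/order-1001", auth=AUTH).json())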

Please read the attached whitepaper.


Delivering Data Warehousing as a Cloud Service

by System Administrator - Wednesday, 8 July 2015, 9:27 PM
 

Delivering Data Warehousing as a Cloud Service

The current data revolution has made it imperative to provide more people with access to data-driven insights faster than ever before. That's not news. In spite of that, however, current technology often seems designed to make access to data as hard as possible.

That's certainly the case for conventional data warehouse solutions, which are so complex and inflexible that they require their own teams of specialists to plan, deploy, manage, and tune them. By the time the specialists have finished, it's nearly impossible for the actual users to figure out how to get access to the data they need.

Newer 'big data' solutions do not get rid of those problems. They require new skills and often new tools as well, making them dependent on hard-to-find operations and data science experts.

Please read the attached whitepaper.


Designing and Building an Open ITOA Architecture

by System Administrator - Tuesday, 16 June 2015, 10:51 PM
 

Designing and Building an Open ITOA Architecture

This white paper provides a roadmap for designing and building an open IT Operations Analytics (ITOA) architecture. You will learn about a new IT data taxonomy defined by the four data sources of IT visibility: wire, machine, agent, and synthetic data sets. After weighing the role of each IT data source for your organization, you can learn how to combine them in an open ITOA architecture that avoids vendor lock-in, scales out cost-effectively, and unlocks new and unanticipated IT and business insights.
Please read the attached whitepaper.

Desktop as a Service (DaaS)

by System Administrator - Wednesday, 11 November 2015, 6:29 PM
 

Desktop as a Service (DaaS)

Posted by Margaret Rouse

Desktop as a Service (DaaS) is a cloud service in which the back-end of a virtual desktop infrastructure (VDI) is hosted by a cloud service provider.

DaaS has a multi-tenancy architecture, and the service is purchased on a subscription basis. In the DaaS delivery model, the service provider manages the back-end responsibilities of data storage, backup, security and upgrades. Typically, the customer's personal data is copied to and from the virtual desktop during logon/logoff, and access to the desktop is device, location and network independent. While the provider handles all the back-end infrastructure costs and maintenance, customers usually manage their own desktop images, applications and security, unless those desktop management services are part of the subscription.

Desktop as a Service is a good alternative for small or mid-size businesses (SMBs) that want to provide their end users with the advantages a virtual desktop infrastructure offers, but find deploying a VDI in-house to be cost-prohibitive in terms of budget and staffing.

This definition is part of our Essential Guide: What you need to know about cloud desktops and DaaS providers

Link: http://searchvirtualdesktop.techtarget.com


DevOps

by System Administrator - Wednesday, 15 February 2017, 7:07 PM
 

DevOps

How to utilize it in your IT workspace

by TechTarget

Please read the attached whitepaper.

 


DevOps (PMI)

by System Administrator - Monday, 29 December 2014, 5:45 PM
 

Defining DevOps: it is easier to explain what it is not

by Jennifer Lent

Much has been written about what DevOps is: a way for developers and operations managers to collaborate; a set of best practices for managing applications in the cloud; an Agile idea built on continuous integration that enables frequent code releases.

According to Wikipedia: "DevOps is an English acronym of development and operations, referring to a software development methodology that focuses on communication, collaboration and integration between software developers and information technology (IT) operations professionals. DevOps is a response to the interdependence of software development and IT operations. Its goal is to help an organization produce software products and services quickly. Companies with very frequent releases may require DevOps skills. Flickr developed a DevOps system to meet a business requirement of ten deployments a day. Such systems are known as continuous deployment or continuous delivery, and are usually associated with lean startup methodologies. Working groups, professional associations and blogs have used the term since 2009."

The definition of DevOps covers all of these things and more. But given that the term has acquired buzzword status, it may be more interesting to ask not what DevOps is, but what it is not. In this article, SearchSoftwareQuality asked several software professionals exactly that. Here is what they said.

1. DevOps is not a job title.

Postings on job sites suggest otherwise, but DevOps is not a job title, said Agile consultant Scott Ambler. "DevOps manager? I don't know what that is." DevOps should not be a job role, he said. "DevOps is about developers understanding the reality of operations, and about the operations team understanding what development involves." DevOps, the concept, is an important aspect of software development and delivery, Ambler said. "But the DevOps job title is a symptom that the organizations hiring [DevOps managers] don't understand what DevOps really is. They don't get it yet."

Ambler's stance on DevOps goes against conventional wisdom. DevOps appeared on SearchCIO.com's list of 10 job titles you are likely to encounter.

2. DevOps is not a category of software tool.

DevOps is not about tools but about culture, said Patrick Debois in a presentation titled "DevOps: nonsense, tools and other smart things" at the GOTO Conference. Debois, who coined the term "DevOps" and founded a conference known as DevOpsDays, said tools play an important role in supporting the DevOps approach to software delivery and management, but DevOps is not about the tools themselves.

Ambler said the notion that there are "tools that do DevOps" reflects the current reality: DevOps, the buzzword, is still climbing toward the peak of the hype curve. "Every tool is a DevOps tool," he said, adding that as software vendors keep pushing their visions of DevOps, "a lot of the discussion is naive."

3. DevOps is not about solving an IT problem.

Despite its many meanings, DevOps is widely understood as a way to solve an IT problem: it lets development and operations collaborate on software delivery. But that is not its ultimate goal, said Damon Edwards, managing partner of the IT consultancy DTO Solutions in Redwood City, California. "The point of DevOps is to enable your business to react to market forces as quickly, efficiently and reliably as possible. Without the business, there is no other reason for us to be talking about DevOps problems, let alone spending time solving them," Edwards wrote on his blog.

Kevin Parker, a SearchSoftwareQuality expert, said the new challenge facing DevOps managers is all the attention the topic is getting from the business. "What used to be an arcane task of elaborate coordination and project management is now part diplomacy, part protector, and a good deal of innovation."

4. DevOps is not synonymous with continuous integration.

DevOps originated in Agile as a way to support the Agile practice of more frequent code releases. But DevOps is more than that, Ambler said. "Just because you practice continuous integration doesn't mean you are doing DevOps." He sees operations managers as key stakeholders that Agile teams need to work with in order to release software.

 

5. DevOps is not... going away.

Despite the misconceptions around it, DevOps is here to stay and remains important to successful software delivery. "Whether we call it DevOps or not, change and release management is undergoing an [exponential] expansion in importance," Parker said. There is substance behind DevOps, added Ovum analyst Michael Azoff. "Of course there is hype around DevOps. We are still in the first phase. It's where Agile was a couple of years ago."

Please read the attached whitepaper: "Top tips for DevOps testing: Achieve continuous delivery"

More news and tutorials:

Link: http://searchdatacenter.techtarget.com

 


Digital Marketing Plan

by System Administrator - Thursday, 17 September 2015, 7:00 PM
 

 

by Juan Carlos Muñoz | Marketing Manager, Interactive & CRM at Volvo Car España | Professor at ICEMD


Distributed Computing

by System Administrator - Monday, 10 August 2015, 10:13 PM
 

Distributed Computing

Posted by: Margaret Rouse

Distributed computing is a model in which components of a software system are shared among multiple computers to improve efficiency and performance. 

According to the narrowest of definitions, distributed computing is limited to programs with components shared among computers within a limited geographic area. Broader definitions include shared tasks as well as program components. In the broadest sense of the term, distributed computing just means that something is shared among multiple systems which may also be in different locations. 

In the enterprise, distributed computing has often meant putting various steps in business processes at the most efficient places in a network of computers. For example, in the typical distribution using the 3-tier model, user interface processing is performed in the PC at the user's location, business processing is done in a remote computer, and database access and processing is conducted in another computer that provides centralized access for many business processes. Typically, this kind of distributed computing uses the client/server communications model.
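
A minimal sketch of this client/server split, using Python's standard xmlrpc module; the pricing function and host names are hypothetical:

# --- server.py (the "business processing" tier, on a remote machine) ---
from xmlrpc.server import SimpleXMLRPCServer

def quote_price(quantity):
    # Hypothetical business rule executed on the server.
    return round(quantity * 9.99, 2)

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(quote_price)
server.serve_forever()

# --- client.py (the "user interface" tier, on the user's PC) ---
# from xmlrpc.client import ServerProxy
# print(ServerProxy("http://server-host:8000").quote_price(3))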

The Distributed Computing Environment (DCE) is a widely-used industry standard that supports this kind of distributed computing. On the Internet, third-party service providers now offer some generalized services that fit into this model.

Grid computing is a computing model involving a distributed architecture of large numbers of computers connected to solve a complex problem. In the grid computing model, servers or personal computers run independent tasks and are loosely linked by the Internet or low-speed networks. Individual participants may allow some of their computer's processing time to be put at the service of a large problem. The largest grid computing project is SETI@home, in which individual computer owners volunteer some of their multitasking processing cycles (while concurrently still using their computer) to the Search for Extraterrestrial Intelligence (SETI) project. This computer-intensive problem uses thousands of PCs to download and search radio telescope data.

There is a great deal of disagreement over the difference between distributed computing and grid computing. According to some, grid computing is just one type of distributed computing. The SETI project, for example, characterizes the model it’s based on as distributed computing. Similarly, cloud computing, which simply involves hosted services made available to users from a remote location, may be considered a type of distributed computing, depending on who you ask.

One of the first uses of grid computing was the breaking of a cryptographic code by a group that is now known as distributed.net. That group also describes its model as distributed computing.


Link: http://whatis.techtarget.com


Documentation (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:04 AM
 

CONCEPTS RELATING TO DOCUMENTATION

Information: Data that has meaning.

Document: Information and its supporting medium.

Specification: A document that states requirements.

Quality manual: A document that specifies an organization's quality management system.

Quality plan: A document that specifies which procedures and associated resources must be applied, who must apply them, and when they must be applied to a specific project, product or contract.

Record: A document that states results achieved or provides evidence of activities performed.


DRaaS

by System Administrator - Monday, 6 July 2015, 8:38 PM
 

7 Critical Questions to Demystify DRaaS

This whitepaper is not a sermon on Disaster Recovery and why you need it. You don't need a lesson in the perils of disasters or a theoretical "business case" that proves unpredictable events can damage your data and cost you thousands of dollars. In fact, if you were not already aware of the need for Disaster Recovery, you probably would not be reading this document.
 
Please read the attached whitepaper.

DSC pull server

by System Administrator - Thursday, 31 August 2017, 9:27 PM
 

DSC pull server

A DSC pull server (desired state configuration pull server) is an automation server that allows configurations to be maintained on many servers, computer workstations and devices across a network.

DSC pull servers use the declarative scripting of Microsoft Windows PowerShell DSC to keep software versions current and to monitor and control the configuration of computers and services and the environment they run in. This capability makes DSC pull servers very useful for administrators, allowing them to ensure reliability and interoperability between machines by preventing the configuration drift that can occur as individual machine settings are changed over time.

DSC pull servers use PowerShell on Windows Server 2012, and client servers must be running Windows Management Framework (WMF) 4. Microsoft has also developed PowerShell DSC for Linux.

Examples of how built-in DSC resources can automatically configure and manage a set of computers or devices:

  •     Enabling or disabling server roles and features.
  •     Managing registry settings.
  •     Managing files and directories.
  •     Starting, stopping, and managing processes and services.
  •     Managing groups and user accounts.
  •     Deploying new software.
  •     Managing environment variables.
  •     Running Windows PowerShell scripts.
  •     Fixing configurations that drift away from the desired state.
  •     Discovering the actual configuration state on a given client.

Link: http://whatis.techtarget.com


DuckDuckGo

by System Administrator - Saturday, 20 June 2015, 2:57 PM
 

DuckDuckGo

Posted by: Margaret Rouse

DuckDuckGo (DDG) is a general search engine designed to protect user privacy, while avoiding the skewing of search results that can happen because of personalized search (sometimes referred to as a filter bubble).

DDG does not track users – user IP addresses and other information are not logged. A log of search terms entered is maintained but the terms are not associated with particular users. Because DuckDuckGo does not record user information, it has no data to turn over to any third-party organizations.

Unlike Google, DuckDuckGo does not default to personalized search, which constrains search results based on information related to the user, such as location, preferences and history. Users may opt to boost results based on locality, for example, but it will not be done unless they specify that they want it to be. Results that appear to be from content mills are also filtered out of search engine results pages (SERP).

DuckDuckGo is sometimes referred to as a hybrid search engine because it compiles results from a variety of sources including its own crawler, DuckDuckBot, crowd-sourced sites such as Wikipedia, and partnerships with other search providers including Yahoo!, Yandex, Yelp, and Bing. 

Instant answers, which appear at the top of the results page, are available for queries involving many types of searches, including flight statuses, recipes, rhyming words, calculations and statistics -- among a wide variety of other possibilities. Instant answers also include functions, such as a stopwatch and a strong password generator.

The !bang feature allows users to search a particular website.  Typing “!Facebook” before a search term, for example, restricts the results to those found on that site.

DuckDuckGo was founded by Gabriel Weinberg in September 2008. Initially funded by Weinberg, the search engine received $3 million in venture capital in 2011 and is now supported by keyword-based advertising. The company's headquarters are in Paoli, Pennsylvania.

DuckDuckGo is available in most browsers, including Chrome, Firefox and Safari.


Link: http://whatis.techtarget.com


Dynamic Pricing

by System Administrator - Tuesday, 4 November 2014, 8:24 PM
 

Dynamic Pricing

Posted by: Margaret Rouse

Dynamic pricing, also called real-time pricing, is an approach to setting the cost for a product or service that is highly flexible. The goal of dynamic pricing is to allow a company that sells goods or services over the Internet to adjust prices on the fly in response to market demands. 

Changes are controlled by pricing bots, which are software agents that gather data and use algorithms to adjust pricing according to business rules. Typically, the business rules take into account such things as the customer's location, the time of day, the day of the week, the level of demand and competitors' pricing. With the advent of big data and big data analytics, however, business rules for price adjustments can be made more granular. By collecting and analyzing data about a particular customer, a vendor can more accurately predict what price the customer is willing to pay and adjust prices accordingly.
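
As a simple illustration, the sketch below shows the kind of rule-based adjustment a pricing bot might apply; every rule, threshold and number here is hypothetical:

# Hypothetical rule-based price adjustment (sketch).
def dynamic_price(base_price, hour, demand_index, competitor_price):
    price = base_price
    if 18 <= hour <= 22:          # evening peak
        price *= 1.10
    if demand_index > 0.8:        # high demand
        price *= 1.15
    # Undercut the competitor slightly, but never drop below 80% of base.
    price = min(price, competitor_price * 0.99)
    return round(max(price, base_price * 0.8), 2)

print(dynamic_price(100.0, hour=20, demand_index=0.9, competitor_price=130.0))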

Dynamic pricing is legal, and the general public has learned to accept dynamic pricing when purchasing airline tickets or reserving hotel rooms online.  The approach, which is sometimes marketed as a personalization service, has been less successful with online retail vendors. Dynamic pricing can be contrasted with fixed pricing, an approach to setting the selling price for a product or service that does not fluctuate.

 

See also: fair and reasonable price, consumption-based pricing model



E (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:37 PM
 

Easy to Build Workflows and Forms

by System Administrator - Thursday, 6 October 2016, 3:06 PM
 

K2 Special Edition: Easy to Build Workflows and Forms for Dummies

How can automated business workflows and forms drive efficiency? The right solution for business process transformation will make it easy (even for nontechnical users), while increasing efficiency and agility.

In this book, Easy to Build Workflows and Forms for Dummies, you’ll learn how to evaluate business application workflow solutions with key criteria. You’ll also explore different department use cases, examine how businesses can use a single workflow solution across the entire organization, and much more.

Please read the attached eBook


Effective Software Testing

by System Administrator - Saturday, 11 July 2015, 11:17 PM
 

Four tips for effective software testing

by Robin F. Goldsmith

To ensure success, follow software testing concepts

 

Regardless of development methodology or type of software testing, multiple factors can come into play that determine the effectiveness of software testing. Generally, testers do not pay conscious attention to these key software testing concepts. Too often, lack of conscious attention means these essential factors have been overlooked, even by experienced testers who may take too much for granted. Not applying these software testing concepts not only leads to less effective software testing, but the lack of awareness can make the tester oblivious to a test's diminished effectiveness.

Here are four fundamental factors that determine effective software testing.

1. Define expected software testing results independently

When you run a test, you enter inputs or conditions. (Conditions are a form of inputs that, in production, ordinarily are not explicitly entered, such as time of year. Part of running a test often involves additional actions to create the conditions.) The system under test acts on the inputs or conditions and produces actual results. Results include displayed textual or graphical data, signals, noises, control of devices, database content changes, transmissions, printing, changes of state, links, etc.

But actual results are only half the story for effective software testing. What makes the execution a test, rather than production, is that we get the actual results so we can determine whether the software is working correctly. To tell, we compare the actual results to expected software testing results, which are our definition of software testing correctness.

If I run a test and get actual results but have not defined expected software testing results, what do I tend to presume? Unless the actual results are somehow so outlandish that I can't help but realize they are wrong, such as when the system blows up, I'm almost certain to assume that the expected results are whatever I got for actual results, regardless of whether the actual results are correct.

When expected software testing results are not defined adequately, it is often impossible for the tester to ascertain accurately whether the actual results are right or wrong. Consider how many tests are defined in a manner similar to, "Try this function and see if it works properly." "Works properly" is a conclusion, but not a specific-enough expected result on which to base said conclusion. Yet testers often somewhat blindly take for granted that they can guess needed inputs or conditions and corresponding actual results.

For a test to be effective, we must define software testing correctness (expected software testing results) independently of actual results so the actual results do not unduly influence definition of the expected results. As a practical matter, we also need to define the expected results before obtaining the actual results, or our determination of the expected results probably will be influenced by the actual results. In addition, we need to document the expected results in a form that is not subject to subsequent conscious or unconscious manipulation.
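
A minimal sketch of this principle in Python: the expected result is derived from the requirement and written down before the test runs; the function under test, apply_discount, is hypothetical:

# Expected result defined independently of, and before, the actual result.
def apply_discount(amount, percent):
    return round(amount * (1 - percent / 100), 2)

def test_apply_discount_ten_percent():
    expected = 90.0                      # derived from the requirement, not from a prior run
    actual = apply_discount(100.0, 10)   # inputs/conditions
    assert actual == expected            # explicit comparison step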

2. Know the correct application results

Defining expected results independently of and before actual results is necessary but not sufficient. The expected results have to be correct. You have to know the correct application results in order to tell whether the product is producing it correctly.

In general, real business requirements are the basis for determining correct application results. However, too often real business requirements are inadequately identified. Moreover, most testing is based on demonstrating that the product meets its feature requirements, which means the product works as designed. Demonstrating the product works as designed is necessary but not sufficient for -- let alone the same as -- demonstrating that the product as designed satisfies the real business requirements and thereby accomplishes the value it should.

Some exploratory testers believe their main purpose is to ascertain how the software works, essentially investigating many different instances of, "What happens if I try this?" Although perhaps interesting and even sometimes enlightening, this approach is a form of defining actual results that quite intentionally omits consciously determining what the right answer should be.

Ultimately, tests need to demonstrate that products not only work as designed but in fact satisfy the real business requirements, which are the basis for the "right answers" and should include most quality factors. Most developers and most testers, exploratory and otherwise, focus on product requirements without adequately understanding the real business requirements the product must satisfy to provide value.

Exploratory testing is one of many methods to help identify wrong and missed requirements, but it's usually neither the most economical nor most effective means to do so. Simply detecting requirements issues doesn't automatically correct them; and corrections are easily lost or distorted when they are only in the tester's mind. Moreover, I find it unlikely that exploratory testers who explicitly don't want to know the requirements somehow magically can know better, based on typical requirements, what the right answers should be.

3. Application testers must compare actual to expected results

I'm frequently amazed how often application testers define correctly the right expected results, get actual results by running tests, and then don't take the final comparison step to make sure the actual results are correct (i.e., what was expected).

Of course, the most common reason this key comparison of actual to expected results is skipped is because the right expected results were not defined adequately. When expected results are not externally observable, who knows what the application testers are comparing against? Sometimes the application testers mistakenly assume the actual results are correct if they don't appear outlandish. Perhaps the tester makes a cursory comparison of mainly correct results but misses some of the few exceptions whose actual results differ from expected results.

I appreciate that comparing actual software testing results to the expected results can be difficult. Large volumes of tests can take considerable effort and become tedious, which increases the chances of missing something. Results that are complex can be very hard to compare accurately and may require skills or knowledge that the tester lacks.

Such situations can be good candidates for automation. A computer tool won't get tired and can consistently compare all elements of complex results. However, an automated test tool requires very precise expected results. An additional downside of automated tools is that they won't pick up on certain types of results that a human application tester might notice.

4. Follow software testing guidelines to avoid oversights

The fourth key to effective software testing deals with the common experience of overlooking things that can "fall through the cracks." The simple but not always easy way to reduce such oversights is to follow software testing guidelines that help the tester be more thorough. Software testing guidelines include checklists and templates meant to guide development or testing.

Consider the difference between going to the supermarket with and without a shopping list, which is an example of software testing guidelines. Without a list, you tend to spend more yet come home without some of the groceries you needed. With the shopping list, you get what you need and spend less because you're less likely to make impulse buys.

Software testing guidelines also help detect omissions that exploratory testers are much more likely to miss. By definition, exploratory testing is guided by executing the software as built. That tends to channel one's thinking in line with what's been built, which easily can lead away from realizing what hasn't been built that should have been. Software testing guidelines can help prompt attention to such items that following the product as built can obscure.

Link: http://searchsoftwarequality.techtarget.com


Electronic Contract Execution

by System Administrator - Monday, 6 July 2015, 8:44 PM
 

Pitching Paper: The Case for Electronic Contract Execution

Whether driven by fluctuations in order volumes or the push for greater profitability and efficiency, organizations and their employees must find new ways to be more effective with fewer resources. Over the past decade, well-known software categories such as customer relationship management (CRM), contract life-cycle management (CLM) and enterprise resource planning (ERP) have been deployed to streamline business processes and drive greater profitability. However, when it comes to executing transactions that require documents or forms, organizations fall back on the Stone Age practice of printing and moving paper, dropping out of their hyper-efficient infrastructure.

Please read the attached whitepaper.


Employee Investigations

by System Administrator - Monday, 20 October 2014, 2:00 PM
 

Simplifying Employee Investigations

Whether you are a small business owner, the head of HR, or in IT, employee investigations are a part of your daily life. In this whitepaper we'll discuss some of the real-world issues businesses face that result in employee investigations and the methodologies used to perform investigations, and then we'll look at why investigating proactively can help.

Please read the attached whitepaper.


Employee Monitoring Program (BUSINESS)

by System Administrator - Thursday, 4 September 2014, 1:56 AM
 

Implementing an Employee Monitoring Program

Security and risk professionals recognize the value and benefits of implementing an employee-monitoring program. Privacy advocates and legal and human resources professionals see potentially unwarranted invasion of employee privacy as a reason not to monitor, or at least to restrict monitoring to instances where enough "probable cause" exists to warrant tilting the balance between the privacy of an employee and the interests of the company. This document is intended to assist company executives in determining whether or not to implement employee activity monitoring.

Please read the attached whitepaper


Endpoint Security

by System Administrator - Wednesday, 16 September 2015, 7:13 PM
 

 

Endpoint Security

by Kaseya

To win the ongoing war against hackers and cyber criminals, IT professionals must do two things: Deploy and maintain endpoint security tools with the latest updates, and ensure the software applications running in their networks have the latest available patches. Failure to do either exposes their IT environments to cyber threats and their organizations to financial losses and embarrassment, while putting their jobs at risk. Keeping up with patches and updates, however, isn't easy. Learn more in this whitepaper.

Please read the attached whitepaper.


Enterprise Search

by System Administrator - Friday, 27 March 2015, 11:53 PM
 

Enterprise Search

Posted by: Margaret Rouse

Enterprise search is the organized retrieval of structured and unstructured data within an organization. Properly implemented, enterprise search creates an easily navigated interface for entering, categorizing and retrieving data securely, in compliance with security and data retention regulations. 

The quality of enterprise search results is reliant upon the description of the data by the metadata. Effective metadata for a presentation, for example, should describe what the presentation contains, who it was presented to, and what it might be useful for. Given the right metadata a user should be able to find the presentation through search using relevant keywords.

There are a number of kinds of enterprise search including local installations, hosted versions, and search appliances, sometimes called “search in a box.” Each has relative advantages and disadvantages. Local installations allow customization but require that an organization has the financial or personnel resources to continually maintain and upgrade the investment. Hosted search outsources those functions but requires considerable trust and reliance on an external vendor. Search appliances and cloud search, the least expensive options, may offer no customization at all.

Enterprise search software has increasingly turned to a faceted approach. Faceted search allows all of the data in a system to be reduced to a series of drop down menus, each narrowing down the total number of results, which allows users to narrow a search to gradually finer and finer criteria. The faceted approach improves upon the keyword search many users might think of (the Google model) and the structured browse model (the early Yahoo model). In the case of keyword search, if the end user doesn't enter the correct keyword or if records weren’t added in a way that considers what end users might be looking for, a searcher may struggle to find the data. Similarly, in a browsing model, unless the taxonomies created by the catalogers of an enterprise's information make intuitive sense to an end user, ferreting out the required data will be a challenge. 
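
A minimal sketch of faceted narrowing in Python; the documents and facet names are hypothetical, and a real engine would compute facet counts from its index rather than by scanning documents:

# Faceted narrowing sketch over a handful of hypothetical documents.
docs = [
    {"title": "Q3 sales deck",  "type": "presentation", "dept": "sales", "year": 2015},
    {"title": "Onboarding kit", "type": "document",     "dept": "hr",    "year": 2015},
    {"title": "Q3 forecast",    "type": "spreadsheet",  "dept": "sales", "year": 2014},
]

def facet_counts(results, facet):
    # What a drop-down menu would display for this facet.
    counts = {}
    for d in results:
        counts[d[facet]] = counts.get(d[facet], 0) + 1
    return counts

results = docs
print(facet_counts(results, "dept"))                      # e.g. {'sales': 2, 'hr': 1}
results = [d for d in results if d["dept"] == "sales"]    # user picks a facet value
results = [d for d in results if d["year"] == 2015]       # and narrows further
print([d["title"] for d in results])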

Enterprise search is complex. Issues of security, compliance and data classification can generally only be addressed by a trained knowledge retrieval expert. That complexity is further complicated by the complexity of an enterprise itself, with the potential for multiple offices, systems, content types, time zones, data pools and so on. Tying all of those systems together in a way that enables useful information retrieval requires careful preparation and forethought. 

Vendors of enterprise search products include Oracle, SAP, IBM, Google and Microsoft.

See also: enterprise content management (ECM), e-discovery, autoclassification



Enterprise-Grade File Sync and Share

by System Administrator - Tuesday, 10 March 2015, 10:25 PM
 

The Need for Enterprise-Grade File Sync and Share

An Osterman Research White Paper

“The Dropbox problem” is the term applied to the widespread and problematic use of consumer-focused file sync and share tools that was popularized by Dropbox, the most commonly used tool in this space. In fact, a search for “the Dropbox problem” in Google returns nearly 4,000 results. To be fair, however, there are a large and growing number of tools similar to Dropbox that are offered by a wide variety of cloud providers. Moreover, most of these tools work as advertised – most provide users with several gigabytes of cloud storage and allow them to synchronize any file across all of their desktop, laptop and mobile platforms automatically.

And therein lies the problem: these tools allow any file to be synchronized across any device by any corporate user without the oversight or control of that user’s IT function. This means that corporate financial information, employee records, customer financial information, embargoed press releases, and any other sensitive or confidential information can be synchronized to any user’s device without first being encrypted, without an audit trail established to track the data, without an ability to prevent critical information from being modified, without any control over who has access to this data, and without any control over where and by whom that data is stored. This creates enormous legal, regulatory, privacy and other risks for an organization that allows these tools to be used.

The good news is that most decision makers and influencers are at least beginning to take the problem seriously. 

Please read the attached whitepaper.


Entrust Datacard

by System Administrator - Thursday, 27 August 2015, 9:07 PM
 

Leave outdated traditional two-factor authentication in the past. Mobile authentication transforms a smartphone into a mobile smart credential. Now it's possible for users to securely identify themselves by simply clicking "OK" on their device, eliminating the hassle of a traditional hard token or physical card.

Video

Link: http://resources.cio.com


ERP

by System Administrator - Friday, 8 August 2014, 4:14 PM
 

Enterprise Resource Planning

 

Enterprise resource planning (ERP) is an industry term for the broad set of activities that helps an organization manage its business.

 

An important goal of ERP is to facilitate the flow of information so business decisions can be data-driven. ERP software suites are built to collect and organize data from various levels of an organization to provide management with insight into key performance indicators (KPIs) in real time.

ERP software modules can help an organization's administrators monitor supply chain, inventory, purchasing, finance, product lifecycle, projects, human resources and other mission-critical components of a business through a web portal or series of interconnected executive dashboards. In order for an ERP software deployment to be useful, however, it needs to be integrated with other software systems the organization uses. For this reason, deployment of a new ERP system in-house can involve considerable business process analysis, employee retraining and back-end information technology (IT) support for database integration, business intelligence and reporting.

 

Legacy ERP systems tend to be architected as large, complex homogeneous systems which do not lend themselves easily to a software-as-a-service (SaaS) delivery model. As more companies begin to store data in the cloud, however, ERP vendors are responding with cloud-based services to perform some functions of ERP -- particularly those relied upon by mobile users.  An ERP implementation that uses both on-premises ERP software and cloud ERP services is called two-tiered ERP. 

Link: http://searchsap.techtarget.com


Temporary Desktops

by System Administrator - Friday, 31 March 2017, 12:19 PM
 

How to deploy temporary desktops and not die trying

The scenarios are many and usually challenging: outsourcing of tasks, contact centers with many operators dedicated to specific tasks, offshoring, temporary partnerships with third parties to carry out projects of limited duration... The common factor in all of these scenarios is the need to deploy desktops, but without that deployment creating security risks or difficulties when it is time to scale. VMware executives propose a solution.

As companies choose to focus on their core business and handle the rest of their tasks as separate or remote operations, the infrastructure needed to support those operations becomes more complex. The particular challenges companies face with subcontracting or offshoring stem from geographic distribution, the type of user, the sensitivity of the information, service levels (SLAs) and operating costs. Outsourcing, offshoring (moving operations to other countries) and the ability to scale the infrastructure up and down according to the projects at hand (staff hired for specific projects or on business demand) are some of the most frequent scenarios in these cases, and the common factor in all of them is the need for flexible, economical and secure desktop deployment.

The Business Process Desktop can help companies meet the demands of these scenarios. It is a work environment for users with one or very few specific tasks or activities. This kind of deployment helps minimize the risk arising from challenges such as:

  • Information theft: The information these users handle can be valuable, such as customer databases, financial information, commercial strategy, credit card numbers, identities and home addresses, among others, and when work is outsourced that information is potentially exposed to other companies' personnel and to additional risks, such as malicious employees using it for their own benefit.
  • Regulatory compliance: Depending on the industry, the information may be subject by law to special handling, requiring security controls that are difficult to implement in a traditional physical desktop scheme.
  • Information loss: When information is stored locally on the user's device, problems can arise such as losing access to it temporarily or permanently.
  • Power failures: Failures of the electrical service at the facilities where employees work are increasingly damaging, and recovery plans such as contingency rooms or remote locations are usually difficult or costly to implement. Breaching service-level agreements because operations are interrupted can mean fines or lost customers.
  • Natural disasters: Events such as earthquakes and hurricanes can halt operations indefinitely.
  • Network failures: WAN links are known for reliability problems, especially over long distances.
  • Provisioning new users: Operations with dynamic capacity typically run into problems with provisioning and deprovisioning times.
  • Software upgrades: Operating systems and applications must be kept up to date to maintain worker productivity.
  • Remote support: Support cases at remote locations require costly maintenance visits from technicians.

To meet the needs of this type of customer, VMware proposes some concrete solutions:

  • Desktop virtualization: Horizon simplifies desktops and applications by moving them to the hybrid cloud and delivering them as a highly available managed service across multiple sites.
  • Converged storage: vSAN lets you use the local disks of physical servers with all the advantages of a SAN array, optimizing storage costs for this kind of solution.
  • Operations monitoring and capacity planning: vRealize Operations for Horizon lets you easily monitor every aspect of the operation, including end-to-end user experience, and model load growth to plan capacity increases.
  • Disaster recovery: Site Recovery Manager automates the recovery and startup of the server workloads behind the services that users consume from their virtual desktops if the primary data center goes down.
  • Automation and self-service: Horizon View can be integrated with vRealize Automation, VMware's hybrid cloud automation engine, to offer basic, repetitive tasks such as creating users, assigning profiles and assigning network shares through a self-service portal.

VMware's Business Process Desktop enables organizations facing the challenges detailed above to increase security and regulatory compliance by centralizing critical business information. At the same time it simplifies and centralizes desktop management, lowering operating costs. It also makes it possible to meet and exceed SLA levels by ensuring fast, uninterrupted end-user access to data and applications over the WAN. Finally, it can ensure that desktops are protected (backup, disaster recovery) and can be deployed on demand (as a service), in line with the changing dynamics of business.

MORE INFORMATION
Business Process Desktop Design Guide

 

Link: http://www.itsitio.com

 

 


Evaluating the different types of DBMS products

by System Administrator - Thursday, 22 January 2015, 6:36 PM
 

Evaluating the different types of DBMS products

Expert contributor Craig S. Mullins describes the types of database management system products on the market and outlines their strengths and weaknesses.

by: Craig S. Mullins

The database management system (DBMS) is the heart of today's operational and analytical business systems. Data is the lifeblood of the organization and the DBMS is the conduit by which data is stored, managed, secured and served to applications and users. But there are many different forms and types of DBMS products on the market, and each offers its own strengths and weaknesses. 

Relational databases, or RDBMSes, became the norm in IT more than 30 years ago as low-cost servers became powerful enough to make them widely practical and relatively affordable. But some shortcomings became more apparent in the Web era and with the full computerization of business and much of daily life. Today, IT departments trying to process unstructured data or data sets with a highly variable structure may also want to consider NoSQL technologies. Applications that require high-speed transactions and rapid response rates, or that perform complex analytics on data in real time or near real time, can benefit from in-memory databases. And some IT departments will want to consider combining multiple database technologies for some processing needs.

The DBMS is central to modern applications, and choosing the proper database technology can affect the success or failure of your IT projects and systems. Today's database landscape can be complex and confusing, so it is important to understand the types and categories of DBMSes, along with when and why to use them. Let this document serve as your roadmap.

DBMS categories and models

Until relatively recently, the RDBMS was the only category of DBMS worth considering. But the big data trend has brought new types of worthy DBMS products that compete well with relational software for certain use cases. Additionally, an onslaught of new technologies and capabilities are being added to DBMS products of all types, further complicating the database landscape.

The RDBMS: However, the undisputed leader in terms of revenue and installed base continues to be the RDBMS. Based on the sound mathematics of set theory, relational databases provide data storage, access and protection with reasonable performance for most applications, whether operational or analytical in nature. For more than three decades, the primary operational DBMS has been relational, led by industry giants such as Oracle, Microsoft (SQL Server) and IBM (DB2). The RDBMS is adaptable to most use cases and reliable; it also has been bolstered by years of use in industry applications at Fortune 500 (and smaller) companies. Of course, such stability comes at a cost: RDBMS products are not cheap.

Support for ensuring transactional atomicity, consistency, isolation and durability -- collectively known as the ACID properties -- is a compelling feature of the RDBMS. ACID compliance guarantees that all transactions are completed correctly or that a database is returned to its previous state if a transaction fails to go through.

Given the robust nature of the RDBMS, why are other types of database systems gaining popularity? Web-scale data processing and big data requirements challenge the capabilities of the RDBMS. Although RDBMSes can be used in these realms, DBMS offerings with more flexible schemas, less rigid consistency models and reduced processing overhead can be advantageous in a rapidly changing and dynamic environment. Enter the NoSQL DBMS.

The NoSQL DBMS: Where the RDBMS requires a rigidly defined schema, a NoSQL database permits a flexible schema, in which every data element need not exist for every entity. For loosely defined data structures that may also evolve over time, a NoSQL DBMS can be a more practical solution.
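
To illustrate the difference, the sketch below contrasts a rigid relational schema (using Python's built-in sqlite3) with a flexible document model; the table, fields and documents are hypothetical:

# Rigid relational schema vs. flexible document model (sketch).
import json
import sqlite3

# Relational: every row must fit the predeclared schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'ACME', 'info@acme.example')")

# Document (NoSQL-style): each document may carry different fields.
documents = [
    {"_id": 1, "name": "ACME", "email": "info@acme.example"},
    {"_id": 2, "name": "Initech", "loyalty_tier": "gold"},   # no email, extra field
]
print(json.dumps(documents, indent=2))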

Another difference between NoSQL and relational DBMSes is how data consistency is provided. The RDBMS can ensure the data it stores is always consistent. Most NoSQL DBMS products offer a more relaxed, eventually consistent approach (though some provide varying consistency models that can enable full ACID support). To be fair, most RDBMS products also offer varying levels of locking, consistency and isolation that can be used to implement eventual consistency, and many NoSQL DBMS products are adding options to support full ACID compliance.

So NoSQL addresses some of the problems encountered by RDBMS technologies, making it simpler to work with large amounts of sparse data. Data is considered to be sparse when not every element is populated and there is a lot of "empty space" between actual values. For example, think of a matrix with many zeroes and only a few actual values.

But while certain types of data and use cases can benefit from the NoSQL approach, using NoSQL databases can come at the price of eliminating transactional integrity, flexible indexing and ease of querying. Further complicating the issue is that NoSQL is not a specific type of DBMS, but a broad descriptor of four primary categories of different DBMS offerings:

  • Key-value
  • Document
  • Column store
  • Graph

Each of these types of NoSQL DBMS uses a different data model with different strengths, weaknesses and use cases to consider. A thorough evaluation of NoSQL DBMS technology requires more in-depth knowledge of each NoSQL category, along with the data and application needs that must be supported by the DBMS. 

The in-memory DBMS: One last major category of DBMS to consider is the in-memory DBMS (IMDBMS), sometimes referred to as a main memory DBMS. An IMDBMS relies mostly on memory to store data, as opposed to disk-based storage.

The primary use case for the IMDBMS is to improve performance. Because the data is maintained in memory, as opposed to on a disk storage device, I/O latency is greatly reduced. Mechanical disk movement, seek time and transfer to a buffer can be eliminated because the data is immediately accessible in memory.

An IMDBMS can also be optimized to access data in memory, as opposed to a traditional DBMS that is optimized to access data from disk. IMDBMS products can reduce overhead because the internal algorithms usually are simpler, with fewer CPU instructions.

A growing category of DBMS is the multi-model DBMS, which supports more than one type of storage engine. Many NoSQL offerings support more than one data model -- for example, document and key-value. RDBMS products are evolving to support NoSQL capabilities, such as adding a column store engine to their relational core.

Other DBMS categories exist, but are not as prevalent as relational, NoSQL and in-memory:

  • XML DBMSes are architected to support XML data, similar to NoSQL document stores. However, most RDBMS products today provide XML support.
  • A columnar database is a SQL database system optimized for reading a few columns of many rows at once (and is not optimized for writing data).
  • Popular in the 1990s, object-oriented (OO) DBMSes were designed to work with OO programming languages, similar to NoSQL document stores.
  • Pre-relational DBMSes include hierarchical systems -- such as IBM IMS -- and network systems -- such as CA IDMS -- running on large mainframes. Both still exist and support legacy applications.

Additional considerations      

As you examine the DBMS landscape, you will inevitably encounter many additional issues that require consideration. At the top of that list is platform support. The predominant computing environments today are Linux, Unix, Windows and the mainframe. Not every DBMS is supported on each of these platforms.

Another consideration is vendor support. Many DBMS offerings are open source, particularly in the NoSQL world. The open source approach increases flexibility and reduces initial cost of ownership. However, open source software lacks support unless you purchase a commercial distribution. Total cost of ownership can also be higher when you factor in the related administration, support and ongoing costs.

You might also choose to reduce the pain involved in acquisition and support by using a database appliance or deploying in the cloud. A database appliance is a preinstalled DBMS sold on hardware that is configured and optimized for database applications. Using an appliance can dramatically reduce the cost of implementation and support because the software and hardware are designed to work together.

Implementing your databases in the cloud goes one step further. Instead of implementing a DBMS at your shop, you can contract with a cloud database service provider to implement your databases using the provider's service.

The next step

If your site is considering a DBMS, it's important to determine your specific needs as well as examine the leading DBMS products in each category discussed here. Doing so will require additional details on each of the different types of DBMS, as well as a better understanding of the specific use cases for which each database technology is optimized. Indeed, there are many variables that need to be evaluated to ensure you make a wise decision when procuring database management system software.

About the author:
Craig S. Mullins is a data management strategist, researcher, consultant and author with more than 30 years of experience in all facets of database systems development. He is president and principal consultant of Mullins Consulting Inc. and publisher/editor of TheDatabaseSite.com. Email him at craig@craigmullins.com.

Email us at editor@searchdatamanagement.com and follow us on Twitter: @sDataManagement.

Next Steps

Learn about some database management rules of thumb from author Craig S. Mullins

Check out our Essential Guides to relational DBMS software and NoSQL database technologies

See why consultant William McKnight says you should give some thought to in-memory databases

This was first published in January 2015

Examination (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:07 AM
 

CONCEPTS RELATED TO EXAMINATION

Objective evidence: Data supporting the existence or truth of something.

Review: Activity undertaken to ensure the suitability, adequacy and effectiveness of the subject of the review in achieving established objectives.

Verification: Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled.

Validation: Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.

Inspection: Conformity evaluation by means of observation and judgment, accompanied as appropriate by measurement or testing.

Test: Determination of one or more characteristics according to a procedure.


Exit Strategy

by System Administrator - Friday, 19 December 2014, 5:40 PM
 

Exit Strategy

Posted by: Margaret Rouse

An exit strategy is a planned approach to terminating a situation in a way that will maximize benefit and/or minimize damage.

The idea of having a strategic approach can be applied to exiting any type of situation but the term is most often used in a business context in reference to partnerships, investments or jobs.

Understanding the most graceful exit strategy for establishing partnerships should be part of due diligence and vetting potential suppliers and service providers. In cloud services, for example, termination or early-withdrawal fees, cancellation notification and data extraction are just a few of the factors to be considered.

An entrepreneur's plan for exiting a startup might include selling the company at a profit or running the business as long as the return on investment (ROI) is attractive and simply terminating it when that ceases to be the case. In the stock market, an exit strategy might include a stop-loss order that instigates a sale when the value of a stock drops below a specified price.

In an employment context, exit strategies are becoming increasingly important not just for corporate executives but for all employees. People change jobs much more frequently than they did in the past, whether voluntarily or involuntarily through firing, downsizing or outsourcing. An employee's exit strategy might include negotiating a severance agreement, updating a resume, maintaining lists of potentially helpful contacts and saving enough money to cover a period of unemployment.

No matter what the context, creating an exit strategy should be an important part of any contingency plan and risk management strategy.

See also: supplier risk management

Related Terms

DEFINITIONS

  • Chief Risk Officer (CRO)

     - The chief risk officer (CRO) is the corporate executive tasked with assessing and mitigating significant competitive, regulatory and technological risks across the enterprise. (SearchCompliance.com)

  • micromanagement

     - Micromanagement is an approach to overseeing staff that is characterized by a need to control the actions of employees beyond what is useful or effective. (WhatIs.com)

  • third party

     - A third party is an entity that is involved in some way in an interaction that is primarily between two other entities. The third party may or may not be officially a part of the transaction betwee... (WhatIs.com)

GLOSSARIES

  • Project management

     - Terms related to project management, including definitions about project management methodologies and tools.

  • Internet applications

     - This WhatIs.com glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...

Link: http://whatis.techtarget.com


Extension Strategy

by System Administrator - Monday, 10 August 2015, 9:19 PM
 

Extension Strategy

Posted by: Margaret Rouse

An extension strategy is a practice used to increase the market share for a given product or service and thus keep it in the maturity phase of the marketing product lifecycle rather than going into decline. 

Extension strategies include rebranding, price discounting and seeking new markets. Rebranding is the creation of a new look and feel for an established product in order to differentiate the product from its competitors. At its simplest, rebranding may consist of creating updated packaging to change the perception of the product. 

More complex rebranding efforts can include new advertising strategies, extensive public relations (PR) and social media marketing campaigns.

Related Terms

Definitions

Glossaries

  • Business terms

    - Terms related to business, including definitions about project management and words and phrases about human resources, finance and vertical industries.

  • Internet applications

    - This WhatIs.com glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...

Link: http://whatis.techtarget.com

 


F (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:38 PM
 

Federated Identity Service Based on Virtualization

by System Administrator - Wednesday, 18 February 2015, 7:20 PM
 

Toward a Federated Identity Service Based on Virtualization

A Buyer’s Guide to Identity Integration Solutions, from Meta and Virtual Directories to a Federated Identity Service Based on Virtualization.

The world of identity and access management is expanding in all dimensions, with more users, more applications, more devices, and more diversity—and these multi-faceted demands are stretching the current landscape of IAM for most organizations and enterprises. The adoption of federation standards, such as SAML 2.0, OpenID Connect, and OAuth 2.0, promises a new way to combat rising complexity. However, the successful adoption of these technologies also requires a rationalization and consolidation of the identity infrastructure, which, for most sizable enterprises, is highly fragmented across multiple identity silos. While federation standards can bring secure and orderly access to the doors of the enterprise, organizations will still need a way to unlock those doors into their complex and often messy identity infrastructures. To ensure security these days, the entire diverse and distributed enterprise identity infrastructure must become one secure global service. A federated identity service based on virtualization is the answer for protecting today’s increasingly federated environments—and evolving them to meet future demands and opportunities. In this paper, we’ll look at how such a service helps you manage all this complexity and see how other solutions stack up.

Please read the attached whitepaper.


FIDO (Fast Identity Online) definition

by System Administrator - Wednesday, 26 August 2015, 9:18 PM
 

FIDO (Fast Identity Online) definition

Posted by: Margaret Rouse

FIDO (Fast ID Online) is a set of technology-agnostic security specifications for strong authentication. FIDO is developed by the FIDO Alliance, a non-profit organization formed in 2012.

FIDO specifications support multifactor authentication (MFA) and public key cryptography. A major benefit of FIDO-compliant authentication is the fact that users don't need to use complex passwords, deal with complex strong password rules or go through recovery procedures when they forget a password. Unlike password databases, FIDO stores personally identifying information (PII) such as biometric authentication data locally on the user's device to protect it. FIDO's local storage of biometrics and other personal identification is intended to ease user concerns about personal data stored on an external server in the cloud. By abstracting the protocol implementation with application programming interfaces (APIs), FIDO also reduces the work required for developers to create secure logins for mobile clients running different operating systems (OSes) on different types of hardware.

FIDO supports the Universal Authentication Framework (UAF) protocol and the Universal Second Factor (U2F) protocol. With UAF, the client device creates a new key pair during registration with an online service and retains the private key; the public key is registered with the online service. During authentication, the client device proves possession of the private key to the service by signing a challenge, which involves a user-friendly action such as providing a fingerprint, entering a PIN or speaking into a microphone. With U2F, authentication requires a strong second factor such as a Near Field Communication (NFC) tap or a USB security token.
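The registration and challenge-signing flow described above can be sketched in a few lines of Python with the `cryptography` package. This is an illustration of the general public-key pattern, not the actual FIDO message formats; the user-verification step (fingerprint, PIN) is omitted.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the device creates a key pair and keeps the private key;
# only the public key is sent to and stored by the online service.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Authentication: the service issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local user-verification step
# (fingerprint, PIN, spoken phrase -- omitted here)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge verified: user authenticated")
except InvalidSignature:
    print("verification failed")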

The history of the FIDO Alliance

In 2007, PayPal was trying to increase security by introducing MFA to its customers in the form of its one-time password (OTP) key fob: Secure Key. Although Secure Key was effective, adoption rates were low -- it was generally used only by a few security-conscious individuals. The key fob complicated authentication, and most users just didn't feel the need to use it.

In talks exploring the idea of integrating finger-scanning technology into PayPal, Ramesh Kesanupalli (then CTO of Validity Sensors) spoke to Michael Barrett (then PayPal's CISO). It was Barrett's opinion that an industry standard was needed that could support all authentication hardware. Kesanupalli set out from there to bring together industry peers with that end in mind. The FIDO Alliance was founded as the result of meetings among that group. The Alliance went public in February 2013, and since that time many companies have become members, including Google, ARM, Bank of America, MasterCard, Visa, Microsoft, Samsung, LG, Dell and RSA. Microsoft has announced the inclusion of FIDO for authentication in Windows 10.

Next Steps

The proliferation of smartphones and other mobile devices continues to call for standards that support multifactor authentication. Methods such as biometrics are being incorporated into smartphones and PCs to prevent identity theft. Today a variety of products are on the market, including the EMC RSA Authentication Manager, Symantec Verisign VIP, CA Strong Authentication and Vasco Identikey Digipass.

Continue Reading About FIDO (Fast Identity Online)


File sync-and-share

by System Administrator - Friday, 7 November 2014, 6:03 PM
 

For enterprise file sync-and-share, security is king

 

by Jake O'Donnell

IT should rest easy about where their data lives in the consumerization age, but there's no one-size-fits-all approach to reaching that peace of mind.

The thought of data ending up in the wrong hands can keep IT admins awake at night.

When it comes to enterprise file sync-and-share options, IT can take many different approaches to secure access on all devices.

Security should be top of mind when considering an enterprise file sync-and-share platform, said James Gordon, first VP of IT and operations at Needham Bank in its namesake city in Massachusetts.

"[Security] is the end-all, be-all," he said. "When the IT admin doesn't have the apps that people perceive they need to do their job efficiently on their device, you've created this yin-yang symbol of give and take, or fighting back."

Vendors small and large offer products for data collaboration, sharing and storage both through cloud and on-premises installations. Enterprises can also secure third-party apps through an enterprise mobility management (EMM) platform. Here's how three companies take each of these individual approaches.

On-premises and encryption security

U.S. companies now have a secure enterprise file sync-and-share option previously only available across the Atlantic.

Two years ago, Berger Group, a financial advisory organization with companies based in Italy and Switzerland, began to look for a secure way to store, transfer and edit confidential documents between its two companies and third parties like clients and legal counsel.

Berger Group researched data loss prevention vendors but found implementation and maintenance would require additional staff and changes to its infrastructure it couldn't afford, said Claudio Ciapetti, Berger Group's controller and IT operations manager.

Eventually the company found Boole Server, an enterprise file sync-and-share vendor based in Milan, Italy, with an on-premises product that provides encryption for data in transit, at rest, within applications and even when in use. In addition to 256-bit Advanced Encryption Standard, Boole Server uses a proprietary algorithm that applies a 2048-bit random encryption key to each file.

Enterprises hold the encryption keys for Boole Server, unlike some cloud-based enterprise file sync-and-share competitors such as Dropbox and Amazon Zocalo.
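The general pattern described here -- a unique random key for each file plus AES-256 for the data itself -- is often called envelope encryption. Below is a minimal, generic sketch in Python using the `cryptography` package; it illustrates the idea only and is not Boole Server's proprietary algorithm.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file_contents(plaintext: bytes) -> dict:
    """Encrypt one file with its own randomly generated 256-bit AES key."""
    file_key = AESGCM.generate_key(bit_length=256)  # unique key per file
    nonce = os.urandom(12)                          # 96-bit nonce for AES-GCM
    ciphertext = AESGCM(file_key).encrypt(nonce, plaintext, None)
    # In a real system the file key would itself be wrapped (encrypted)
    # with a master key that stays under the enterprise's control.
    return {"key": file_key, "nonce": nonce, "ciphertext": ciphertext}

def decrypt_file_contents(blob: dict) -> bytes:
    return AESGCM(blob["key"]).decrypt(blob["nonce"], blob["ciphertext"], None)

blob = encrypt_file_contents(b"confidential advisory memo")
assert decrypt_file_contents(blob) == b"confidential advisory memo"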

With Boole Server, Berger Group maintains ownership of its files even when accessed outside the company and sets restrictions on actions like copying, pasting and printing.

"We set it up to make sure a third party can connect with our server to look at documentation and make amendments, but still leave the document in our server," Ciapetti said.

Boole Server recently launched its product offerings in the U.S. after previously only being available in Europe. Boole Server is available in three versions: Small to Medium Business (SMB), Corporate and Enterprise. Storage space is capped at 1 TB for SMB and unlimited for Corporate and Enterprise. Enterprise customers receive an unlimited number of guest and user profiles per license while Corporate is capped at 1,000 and SMB at 150. Boole Server is available as a onetime purchase starting at $10,000 for SMB and Corporate and $25,000 for Enterprise, which includes two server licenses.

Securing highly regulated industries

Security is even more important in highly regulated industries, and one enterprise file sync-and-share company builds its products specifically for those industries.

Comfort Care Services Ltd., based in Slough, England, provides support for adults with mental health and learning disabilities to help them integrate back into communities after leaving hospital care. As recently as three years ago there were 15 corporate-issued laptops in the whole company and most other business was conducted on paper, said Gee Bafhtiar, director of IT operations at Comfort Care Services.

"It's cumbersome and takes an amazing amount of time to get data from one place to another," Bafhtiar said.

Comfort Care Services began its technological turnaround by implementing desktop virtualization from Terminal Service Plus, but still needed a quicker and more secure option for document editing, sharing and collaborating with external users.

When a patient sought to join Comfort Care Services, it previously took upward of a month to complete paperwork that involved sending medical records and support plans back and forth between the patient, Comfort Care Services and government commissioning bodies. While the company continues to use Terminal Service Plus, only internal users access the system.

The company considered Box and Citrix for enterprise file sync-and-share but found neither offered the granular control for auditing capabilities Comfort Care Services required, Bafhtiar said. Enter Workshare, which focuses on secure collaboration products and applications for highly regulated industries such as legal, government, finance and healthcare. The London-based company also allows customers to hold encryption keys.

Comfort Care Services uses Workshare Connect, a cloud application providing collaboration and file sharing among employees and outside parties with permitted access. It found Workshare Connect afforded more of the granular controls around access to specific internal and external users and tracking changes to documents it could not find with other platforms, Bafhtiar said.

At first, Comfort Care Services couldn't conduct remote wipes of files in Workshare if a device was lost or stolen or if an employee left the company. Workshare later added that capability.

"There's always a compromise that needs to be made but we found that we had to do a lot less compromising with Workshare," Bafhtiar said.

Through Workshare, Comfort Care Services can release an individual document to anybody it chooses by inviting them in and giving them access to that document for a limited amount of time. The company can see what changes are made and who made them for security and auditability.

Comfort Care Services has simplified documentation processing and cut the approval time for new patients in half. Employees can use Web and mobile versions of the Workshare app on laptops and mobile devices to securely edit and share documents.

Workshare is available in four formats that range from $30 to $175 per user per year. The formats include Protect for metadata removal and policies, Compare for document version management, Connect for secure file collaboration and Workshare Pro 8, which combines the other three formats into one platform.

EMM platforms secure cloud apps, repositories

Yet another option for IT is using file sync-and-share options directly from EMM platforms. Some of these include Citrix's ShareFile, AirWatch by VMware's Secure Content Locker, Good Technology's Secure Mobility Solution and MobileIron's Docs@Work.

MobileIron recently updated Docs@Work to allow companies to connect with cloud services including Box, Dropbox, Microsoft Office 365 and SharePoint Online. Users can search, download and save documents across all of those different services directly within the Docs@Work browser. From there, documents can be edited both locally on the device and remotely through the browser.

Secure Content Locker and ShareFile, by comparison, allow companies to integrate with content repositories for file access. ShareFile uses Personal Cloud Connectors to access Box, Dropbox, Google Drive and OneDrive accounts and allows users to edit files stored in content repositories like SharePoint and EMC Documentum.

MobileIron wants to ensure a consistent user experience across platforms with the update, said Needham Bank's Gordon, whose bank uses Docs@Work along with the rest of MobileIron's EMM platform.

Docs@Work helps Needham Bank employees securely access files within SharePoint on their iOS, Android and Windows Phone devices. It allows Gordon's IT department to keep track of which users access files and logs the access time.

"You're authenticating not only the user but the device, because the users are already enrolled with MobileIron certificates," Gordon said.

The connection of Docs@Work with these cloud applications is the first part of an overall security content and collaboration platform that MobileIron currently has in development and would like to roll out to customers within the next year. This includes file-level encryption for files located in those cloud platforms, the company said.

Docs@Work is not available as a standalone product and can only be purchased as part of the Gold and Platinum bundles of MobileIron's EMM platform. AirWatch Secure Content Locker and ShareFile, by comparison, are available standalone. There is no additional cost for existing MobileIron customers, and list pricing for the bundles starts at $4 per device per month.

Link: http://searchconsumerization.techtarget.com


Flash Storage for Database Workloads

by System Administrator - Wednesday, 14 October 2015, 2:47 PM
 

Flash Storage for Database Workloads

Flash storage technologies can immediately address the performance and I/O latency problems encountered by many database deployments.

Please read the attached whitepapers.


Fuzz Testing (Fuzzing)

by System Administrator - Thursday, 6 October 2016, 7:34 PM
 

Fuzz Testing (Fuzzing)

Fuzz testing or fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by inputting massive amounts of random data, called fuzz, to the system in an attempt to make it crash. If a vulnerability is found, a tool called a fuzz tester (or fuzzer), indicates potential causes. Fuzz testing was originally developed by Barton Miller at the University of Wisconsin in 1989.

Fuzzers work best for problems that can cause a program to crash, such as buffer overflow, cross-site scripting, denial of service attacks, format bugs and SQL injection. These schemes are often used by malicious hackers intent on wreaking the greatest possible amount of havoc in the least possible time. Fuzz testing is less effective for dealing with security threats that do not cause program crashes, such as spyware, some viruses, worms, Trojans and keyloggers.

Fuzz testing is simple and offers a high benefit-to-cost ratio. Fuzz testing can often reveal defects that are overlooked when software is written and debugged. Nevertheless, fuzz testing usually finds only the most serious faults. Fuzz testing alone cannot provide a complete picture of the overall security, quality or effectiveness of a program in a particular situation or application. Fuzzers are most effective when used in conjunction with extensive black box testing, beta testing and other proven debugging methods.
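A minimal fuzzer can be only a few lines long. The sketch below (Python, with a deliberately buggy `parse_record` function standing in for the real target) throws random byte strings at the target and records every input that raises an exception, so the crashing cases can be replayed and debugged later.

import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: a toy parser with a deliberate defect."""
    if len(data) > 2 and data[0] == 0xFF:
        return data[9000]  # out-of-range read crashes on short inputs
    return len(data)

def fuzz(target, iterations: int = 100_000, max_len: int = 64):
    crashes = []
    for _ in range(iterations):
        data = bytes(random.getrandbits(8) for _ in range(random.randint(0, max_len)))
        try:
            target(data)
        except Exception as exc:            # a crash the fuzzer has found
            crashes.append((data, repr(exc)))
    return crashes

found = fuzz(parse_record)
print(f"{len(found)} crashing inputs found")
if found:
    print("example:", found[0])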

"Fuzz testing is most useful for software that accepts input documents, images, videos or files that can carry harmful content. These are the serious bugs that it's worth investing to prevent." - David Molnar

Link: http://searchsecurity.techtarget.com

Continue Reading About fuzz testing (fuzzing)


G (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:40 PM
 

Management (QUALITY)

by System Administrator - Thursday, 9 May 2013, 12:47 AM
 

CONCEPTS RELATED TO MANAGEMENT

System: Set of mutually related or interacting elements.

Management: Coordinated activities to direct and control an organization.

Management system: System for establishing policy and objectives and for achieving those objectives.

Top management: Person or group of people who direct and control an organization at the highest level.

Quality policy: Overall intentions and direction of an organization with regard to quality, as formally expressed by top management.

Quality management system: Management system for directing and controlling an organization with regard to quality.

Quality management: Coordinated activities to direct and control an organization with regard to quality.

Quality objective: Something sought, or aimed for, related to quality.

Continual improvement: Recurring activity to increase the ability to fulfill requirements.

Quality improvement: Part of quality management focused on increasing the ability to fulfill quality requirements.

Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled.

Quality control: Part of quality management focused on fulfilling quality requirements.

Quality planning: Part of quality management focused on setting quality objectives and specifying the necessary operational processes and related resources to fulfill the quality objectives.

Effectiveness: Extent to which planned activities are carried out and planned results are achieved.

Efficiency: Relationship between the result achieved and the resources used.

 


Going Mobile with Electronic Signatures

by System Administrator - Monday, 6 July 2015, 8:55 PM
 

Going Mobile with Electronic Signatures

With more and more professionals using mobile phones and applications, the “mobile wave” is impacting every business function across the globe and transforming the way we do business. Technologies like mobile electronic signatures can help businesses finish business faster than ever before. Mobile electronic signatures let you and your contacts legally send and sign documents using mobile phones and other devices -- anytime, anywhere.

Please read the attached whitepaper.


Group Think

by System Administrator - Friday, 26 June 2015, 6:58 PM
 

Group Think

Posted by Margaret Rouse

Group think (also spelled groupthink) is a phenomenon that occurs when a group's need for consensus supersedes the judgment of individual group members.

Group think (also spelled groupthink) is a phenomenon that occurs when a group's need for consensus supersedes the judgment of individual group members. Group think often occurs when there is a time constraint and individuals put aside personal doubts so a project can move forward, or when one member of the group dominates the decision-making process.

In a group think scenario, consensus is often derived by social pressures or by work flow processes that cannot accommodate change. Group thinking, which carries a negative connotation, can be contrasted with  collaboration, a scenario in which individual group members are encouraged to be creative, speak out and weigh many options before arriving at a consensus.

In acceding to group think, group members often choose not to explore alternative solutions as part of the decision-making process, either because it is easier not to go with the flow or because they do not want to be perceived as troublemakers and lose status within the group. As such, group think can blind individuals from considering future consequences, warnings and risks that result from their choices.



Website Protection Guide

by System Administrator - Wednesday, 8 July 2015, 8:42 PM
 

Website Protection Guide

Learn how to:

• Justify the importance of protecting a website with solid business arguments.
• Explain why SSL technology is the foundation of website protection.
• Choose and install the SSL certificates best suited to your case.
• Adopt the practices needed to make your website secure and trustworthy.

Please read the attached eGuide


Guide to Patch Management Best Practices

by System Administrator - Tuesday, 23 December 2014, 2:33 PM
 

Guide to Patch Management Best Practices

With the sophistication and sheer volume of exploits targeting major applications and operating systems, the speed of assessment and deployment of security patches across your complex IT infrastructure is key to mitigating risks and remediating vulnerabilities. Here are the Lumension-recommended steps to cure your patch management headache.

Please read the attached whitepaper


Guide to Retirement Income

by System Administrator - Monday, 27 February 2017, 12:56 PM
 

Guide to Retirement Income

by Fisher Investments

Please read the attached paper.


H (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:41 PM
 

H-1B

by System Administrator - Wednesday, 23 August 2017, 4:27 PM
 

H-1B

Posted by: Margaret Rouse

H-1B is a United States Immigration Service visa classification that allows employers to hire highly skilled foreign workers who apply a body of specialized theoretical and practical knowledge. The applicant must hold a bachelor's degree, or its equivalent, in the specialty.

In addition to specialty occupations in fields such as science, medicine, healthcare, education, information technology and business, the visa also applies to foreign nationals seeking to perform services of exceptional merit and ability related to a Department of Defense (DOD) development project, or services as a fashion model of distinguished merit or ability.

To be eligible for an H-1B visa, a foreign national must have a sponsoring employer. The employer is required to declare or demonstrate that a U.S. worker will not be displaced by the H-1B applicant and must file a petition with U.S. Citizenship and Immigration Services (USCIS) on the foreign national's behalf. In 2015, two-thirds of the petitions granted were for employees in computer-related occupations.

Current law limits the annual number of qualified foreign workers who can obtain a new visa to 65,000, with an additional cap of 20,000 under the H-1B advanced degree exemption. Foreign employees of government research organizations, institutions of higher education or nonprofit research organizations may be exempt from the cap.

Applications for new visas are accepted each year on April 1. If the number of applications exceeds the congressionally approved cap after five days, a computerized selection process (sometimes referred to as a lottery) selects 20,000 advanced degree applications from the applicant pool. Applicants who are not selected are added to the regular pool, and the computerized selection process continues until the additional 65,000 visas have been granted.

The length of stay permitted by an H-1B visa is up to three years, but extensions are allowed. H-1B visa holders who want to continue working in the United States after six years, but who have not obtained permanent residency, must live outside the United States for one year before applying for a new H-1B visa. The maximum H-1B visa duration is ten years for exceptional U.S. Department of Defense work.

In 2017, bills to reform the H-1B program were introduced in both the House and the Senate.

Link: http://searchcio.techtarget.com


Hadoop 2 and YARN

by System Administrator - Thursday, 8 October 2015, 4:22 PM
 

Hadoop 2 definition

Posted by: Margaret Rouse

Apache Hadoop 2 (Hadoop 2.0) is the second iteration of the Hadoop framework for distributed data processing. 

Hadoop 2 adds support for running non-batch applications through the introduction of YARN, a redesigned cluster resource manager that eliminates Hadoop's sole reliance on the MapReduce programming model. Short for Yet Another Resource Negotiator, YARN puts resource management and job scheduling functions in a separate layer beneath the data processing one, enabling Hadoop 2 to run a variety of applications. Overall, the changes made in Hadoop 2 position the framework for wider use in big data analytics and other enterprise applications. For example, it is now possible to run event processing as well as streaming, real-time and operational applications. The capability to support programming frameworks other than MapReduce also means that Hadoop can serve as a platform for a wider variety of analytical applications.

Hadoop 2 also includes new features designed to improve system availability and scalability. For example, it introduced an Hadoop Distributed File System (HDFS) high-availability (HA) feature that brings a new NameNode architecture to Hadoop. Previously, Hadoop clusters had one NameNode that maintained a directory tree of HDFS files and tracked where data was stored in a cluster. The Hadoop 2 high-availability scheme allows users to configure clusters with redundant NameNodes, removing the chance that a lone NameNode will become a single point of failure (SPoF) within a cluster. Meanwhile, a new HDFS federation capability lets clusters be built out horizontally with multiple NameNodes that work independently but share a common data storage pool, offering better compute scaling as compared to Apache Hadoop 1.x.

Hadoop 2 also added support for Microsoft Windows and a snapshot capability that makes read-only point-in-time copies of a file system available for data backup and disaster recovery (DR). In addition, the revision offers all-important binary compatibility with existing MapReduce applications built for Hadoop 1.x releases.
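For a sense of what a classic MapReduce batch job looks like -- now just one of the application types Hadoop 2 can schedule under YARN -- here is the standard word-count example written as two Hadoop Streaming scripts in Python (file names are illustrative; each script reads standard input and writes tab-separated key/value pairs):

# mapper.py -- reads raw text on stdin, emits "word<TAB>1" pairs
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word.lower()}\t1")

# reducer.py -- input arrives sorted by key, so counts can be summed per word
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")

A job like this would typically be submitted with the hadoop-streaming JAR (for example, hadoop jar hadoop-streaming-*.jar -input ... -output ... -mapper mapper.py -reducer reducer.py); under Hadoop 2 the YARN ResourceManager and NodeManagers, rather than the old JobTracker, allocate the containers it runs in.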

Link: http://searchdatamanagement.techtarget.com

   

Apache Hadoop YARN (Yet Another Resource Negotiator) definition

Posted by: Margaret Rouse

Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster management technology.

YARN is one of the key features in the second-generation Hadoop 2 version of the Apache Software Foundation's open source distributed processing framework. Originally described by Apache as a redesigned resource manager, YARN is now characterized as a large-scale, distributed operating system for big data applications.

In 2012, YARN became a sub-project of the larger Apache Hadoop project. Sometimes called MapReduce 2.0, YARN is a software rewrite that decouples MapReduce's resource management and scheduling capabilities from the data processing component, enabling Hadoop to support more varied processing approaches and a broader array of applications. For example, Hadoop clusters can now run interactive querying and streaming data applications simultaneously with MapReduce batch jobs. The original incarnation of Hadoop closely paired the Hadoop Distributed File System (HDFS) with the batch-oriented MapReduce programming framework, which handles resource management and job scheduling on Hadoop systems and supports the parsing and condensing of data sets in parallel.

YARN combines a central resource manager that reconciles the way applications use Hadoop system resources with node manager agents that monitor the processing operations of individual cluster nodes.  Running on commodity hardware clusters, Hadoop has attracted particular interest as a staging area and data store for large volumes of structured and unstructured data intended for use in analytics applications. Separating HDFS from MapReduce with YARN makes the Hadoop environment more suitable for operational applications that can't wait for batch jobs to finish.

See also:  yacc (yet another compiler compiler)

Link: http://searchdatamanagement.techtarget.com

Please read the attached whitepaper about "The Hitchhiker's Guide to Hadoop 2"


How to move from Web to mobile business apps

by System Administrator - Thursday, 16 July 2015, 9:11 PM
 

How to move from Web to mobile business apps

by Amy Reichert

Moving from Web to mobile business apps goes beyond reaching out to customers. Mobile apps extend enterprise application functionality to mobile workers.

Solid mobile business apps offer only a subset of features available in enterprise Web applications. When moving from Web to mobile apps, a development team's biggest challenge is deciding which features to develop for mobile apps and how to deliver them. With the right set of functions in place, mobile business apps drive productivity, delight users and easily provide ROI.

It's difficult to create mobile business apps that remain useful to users after the majority of features have been stripped away. It's critical that the mobile application performs the basic features well -- and the more features it can handle, the better.

In order to present a mobile option quickly, many corporations attempt to outsource mobile application development. If you choose this road, be sure to explicitly define the requirements. Otherwise, you might find that your mobile business apps lack key functionality.

Determine the five main features required

The first rule of thumb is not to reduce a mobile user's productivity. In other words, figure out which application features customers use or need most in a mobile device. Remember, some features may not translate well, so they may require additional development effort and creativity to provide the most value. 

For example, a mobile version of an electronic health record (EHR) application for hospital physicians needs access to nearly all of the EHR's features. Physicians need the ability to enter and edit patient data, such as current medications, as well as view existing information. Features related to patient insurance, however, may not be as important while doctors are making their rounds.

Determine which features provide the core functionality of the application and reproduce them in the mobile version. Reproducing a full application is unrealistic, so it's critical to pick the application's top functional features.

Include offline features in your mobile app

The ability to work offline is a required feature for many mobile business apps. Mobile connectivity has improved over the years, but it's not perfect. Users may not be able to connect to the network for various reasons. Don't rely on your end users having steady Internet connections -- even for the duration of a single session.

Mobile applications that provide offline features allow users to continue working in the application even though the application is not connected. Users can store work until they are able to reconnect. This is similar to saving work on a laptop, and then connecting to upload or send data to another location.

An example is allowing a physician to create orders for a patient and cache them in a file until they are ready or able to connect and update the record. Users can save email or text documents and place them in "draft" status until they connect. In this manner, users can continue working and save their work to upload for another time.
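A minimal sketch of that store-and-forward pattern in Python (the `is_online` check and `send_to_server` call are hypothetical stand-ins for the app's real connectivity test and backend API):

import json, os

QUEUE_PATH = "pending_work.jsonl"   # local, on-device store (illustrative)

def save_locally(record: dict) -> None:
    # Append the record to a local file so nothing is lost while offline.
    with open(QUEUE_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def sync_pending(is_online, send_to_server) -> int:
    # Replay queued records once a connection is available again.
    if not is_online() or not os.path.exists(QUEUE_PATH):
        return 0
    with open(QUEUE_PATH, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    for record in records:
        send_to_server(record)      # e.g. POST to the backend API
    os.remove(QUEUE_PATH)
    return len(records)

save_locally({"patient": "12345", "order": "CBC panel", "status": "draft"})
synced = sync_pending(lambda: True, lambda r: print("uploading", r))
print(f"{synced} queued record(s) synced")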

Provide configuration options

Another consideration when planning to move an application from Web to mobile is how many configuration options to retain. As with features, the development team needs to narrow down the available options. Determine which configuration options customers use the most and which match up with the features selected for the mobile application version. Make sure a feature isn't included without its configuration options.

Similar to features, not all configuration options are necessary. However, to avoid reducing the mobile application's usefulness, it is critical to include configuration settings related to the included features. Providing useful and valuable features is essential for the success of mobile business apps.

Streamline the user experience

The user experience on a mobile device is different than Web applications. But it's not just because the screen is smaller. Without proper planning, smaller screen sizes can force users to scroll excessively or click through too many screens -- both of which are distractions to avoid.

More importantly, mobile business apps need to be simple to understand and learn. Try to keep the mobile version visually similar to the original Web version, using consistent wording and iconography. Try to keep menu options in the same order to prevent users from having to go on a treasure hunt to find them. The simpler it is for the end user, the more productive they will be.

If a software development team needs to move an application from Web to mobile, it's valuable to take the time and determine which application features need to be present in the mobile version, and then create a development plan and timeline. Keep the end users in mind and how they plan to use the mobile version. Many times, slapping together a mobile version that only allows users to view data or records is not useful enough. Build what the application customers need rather than being restricted by available development time. The essential mobile application must be feature rich, function in similar ways to the Web app, and -- above all -- not create additional work or negatively affect a user's productivity.


How to Write Effective Titles and Headlines

by System Administrator - Thursday, 3 September 2015, 7:41 PM
 

How to Write Effective Titles and Headlines

 

The thing about content marketing is that to do it right, you need a plan. To make that plan, you have to really understand what drives readers to click on, read, share, or interact with your content’s titles or headlines.

In this guide, we’re going to share data analysis from two established leaders in the space, Outbrain and HubSpot, to help you gain insights on what makes headlines successful.

The data comes from a sample of more than 3.3 million paid link headlines from the pool of English language paid links that ran across Outbrain’s network of 100,000+ publisher sites with supporting data from HubSpot’s Marketing blog.

In this ebook, you’ll see data that explains:

  • Why headlines matter
  • What copy compels people to click
  • How to increase engagement and conversions
  • The optimal length for your titles and headlines
  • How headlines affect search and promotion

Please read the attached eBook.


HPE Hyper Converged

by System Administrator - Tuesday, 28 March 2017, 1:04 PM
 


HPE unveils a new SimpliVity appliance

by Peter Sayer

After the OmniCube comes the HPE SimpliVity 380 with OmniStack

Two months after acquiring SimpliVity for US$650 million, Hewlett Packard Enterprise is beginning to reshape the company's converged infrastructure offering in its own image. 

SimpliVity’s hyperconverged infrastructure appliance, the OmniCube, replaces storage switches, cloud gateways, high-availability shared storage, and appliances for backup and deduplication, WAN optimization, and storage caching. The company also offers OmniStack, the software powering the OmniCube, packaged for other vendors’ hardware.

Now HPE has qualified that software on its workhorse ProLiant DL380 server and will sell it as the snappily titled HPE SimpliVity 380 with OmniStack, Mark Linesch, the vice president for global strategy and operations of HPE's enterprise group, said Tuesday at the Cebit trade show in Hanover, Germany.

SimpliVity's website already lists the 380 among the product options, alongside versions of OmniStack tailored for Dell PowerEdge, Lenovo System x, and Cisco UCS servers for which it provided first-line support, handing off hardware matters to the vendors.

The website still lists the OmniCube for sale, too.

Linesch said HPE will continue to provide the same support for that hardware as SimpliVity did, although going forward, it hopes to see more customers on the ProLiant version.

SimpliVity used to guarantee that the OmniCube would offer a 90 percent capacity saving across production and backup storage while improving application performance, and HPE will offer the same guarantee for the SimpliVity 380, Linesch said. 

Three versions of the SimpliVity 380 are available, with five, nine or 12 SSDs of a capacity of 1.9 terabytes each. The servers have dual Intel E5-2600 v4 (Broadwell) processors, and customers can configure them with up to 44 cores. Depending on how much RAM is ordered, usable memory will range from about 140 GB to 1.4 TB. Depending on the configuration requested by the customer, the total cost will vary between $26,000 and $100,000, an HPE spokesman said.

Last November, HPE released a software update for another converged appliance built on the ProLiant 380, the Hyper Converged 380. Building on the existing stack of VMware virtualization software and HPE management tools, the update added integrated analytics and multi-tenant workspaces to simplify the management of servers as a single resource pool.

HPE's two hyperconvergence product lines will undergo some convergence of their own at some point in the future, combining the best features of the SimpliVity 380 and the Hyper Converged 380 into a new product line, Linesch said. However, HPE will continue to sell the existing products, at least according to the slide he showed.

This story has been corrected to give the correct capacity for the SSDs in the eighth paragraph.


Peter Sayer covers European public policy, artificial intelligence, the blockchain, and other technology breaking news for the IDG News Service.

Link: http://www.networkworld.com


I (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:42 PM
 

IBM Predictive Customer Intelligence

by System Administrator - Thursday, 3 September 2015, 7:16 PM
 

IBM Predictive Customer Intelligence

by IBM

Create personalized, relevant customer experiences with a focus on driving new revenue.

Please read the attached whitepaper.


Improving Server Performance and Security

by System Administrator - Tuesday, 23 December 2014, 2:23 PM
 

Improving Server Performance and Security

Server systems are, by definition, more important than individual endpoints. They must provide services to hundreds, or even thousands, of endpoints and, naturally, must be secure. Traditional anti-virus (AV) solutions can provide protection for servers. However, constantly running AV processes, along with potentially frequent signature updates, can consume resources that could otherwise be used to provide application services to users. Read this evaluation by Tolly, commissioned by Lumension, as they dive into the impact on server resources of an alternative application control solution compared with traditional AV solutions from Microsoft Corp., Symantec Corp. and Trend Micro, Inc.

Please read the attached whitepaper


Improving the Management and Governance of Unstructured Data

by System Administrator - Friday, 26 June 2015, 6:06 PM
 

Improving the Management and Governance of Unstructured Data

Maximize efficiency with deeper insight to data value and automated, policy-based compliance, retention & disposition.


Incident Response: How to Fight Back

by System Administrator - Wednesday, 7 January 2015, 4:02 PM
 

 

Incident Response: How to Fight Back

Highly public breaches at companies such as Target, Evernote and Living Social, which collectively compromised more than 200 million customer records, are pushing many organizations to develop in-house incident response (IR) capabilities to prevent such data breaches.

IR teams, typically operating under a formalized IR plan, are designed to detect, investigate and, when necessary, remediate organizational assets in the event of a critical incident. SANS conducted a survey focused on the current state of IR during May and June 2014, polling security professionals from more than 19 industries and various-sized companies and organizations. The goal was to get a clearer picture of what IR teams are up against today—the types of attacks they see and what defenses they have in place to detect and respond to these threats. In addition, the survey measured the IR teams’ perceived effectiveness and obstacles to incident handling.

Of the 259 survey respondents, 88% work in an IR role, making this a target audience for soliciting close to real-time data on the current state of IR. Respondents represented 13 different regions and countries and work in management (28%), or as security analysts (29%), incident responders (13%) and forensic examiners (7%). This broad representation helps shed light on both present and future IR capabilities.

Please read the attached whitepaper.


Indirect Competition

by System Administrator - Thursday, 13 August 2015, 4:41 PM
 

Indirect Competition

Posted by: Margaret Rouse

Indirect competition is the conflict between vendors whose products or services are not the same but that could satisfy the same consumer need. 

The term contrasts with direct competition, in which businesses are selling products or services that are essentially the same. Cloud storage providers are direct competitors, for example, as are manufacturers of notebook computers.

However, in recent years, desktop computer sales have dropped as many consumers purchased notebooks instead. Sellers of desktop PCs and notebooks are indirect competitors. 

In the 1960s, Theodore Levitt wrote a highly influential article called "Marketing Myopia" for the Harvard Business Review, recommending that businesses take a much broader view of the competitive environment. Levitt argued that the market's central organizing element is human needs and that the satisfaction of those needs should be the focus of businesses. Products and services are transient but human needs are not. From that perspective, the distinction between direct and indirect competition is unimportant.

Related Terms

Definitions

Glossaries

  • Business terms

    - Terms related to business, including definitions about project management and words and phrases about human resources, finance and vertical industries.

  • Internet applications

    - This WhatIs.com glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...


Industrial Internet of Things (IIoT)

by System Administrator - Tuesday, 7 April 2015, 6:57 PM
 

Industrial Internet of Things (IIoT)

Posted by Margaret Rouse

IIoT harnesses the sensor data, machine-to-machine communication and automation technologies that have existed in industrial settings for years.

The Industrial Internet of Things (IIoT) is the use of Internet of Things (IoT) technologies in manufacturing.

Also known as the Industrial Internet, IIoT incorporates machine learning and big data technology, harnessing the sensor data, machine-to-machine (M2M) communication and automation technologies that have existed in industrial settings for years. The driving philosophy behind the IIoT is that smart machines are better than humans at accurately, consistently capturing and communicating data. This data can enable companies to pick up on inefficiencies and problems sooner, saving time and money and supporting business intelligence efforts. In manufacturing specifically, IIoT holds great potential for quality control, sustainable and green practices, supply chain traceability and overall supply chain efficiency.
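As a hedged sketch of the machine-to-machine side, the snippet below publishes periodic sensor readings to an MQTT broker using the third-party `paho-mqtt` package; the broker host and topic are invented, and real deployments add TLS, authentication and local buffering.

import json, random, time
import paho.mqtt.client as mqtt   # third-party package: paho-mqtt

BROKER = "broker.example.com"     # hypothetical plant-floor MQTT broker
TOPIC = "plant1/line3/press7/temperature"

# paho-mqtt 1.x style constructor; 2.x also expects a CallbackAPIVersion argument.
client = mqtt.Client()
client.connect(BROKER, 1883)

# Publish a sensor reading every few seconds; downstream analytics
# subscribe to the topic and watch for drift or anomalies.
for _ in range(5):
    reading = {"ts": time.time(), "celsius": 180 + random.uniform(-2, 2)}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(5)

client.disconnect()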

A major concern surrounding the Industrial IoT is interoperability between devices and machines that use different protocols and have different architectures. The nonprofit Industrial Internet Consortium, founded in 2014, focuses on creating standards that promote open interoperability and the development of common architectures.

Continue Reading About Industrial Internet of Things (IIoT)

Link: http://searchmanufacturingerp.techtarget.com


Infographic: US employees concerned about BYOD reimbursement

by System Administrator - Monday, 13 July 2015, 5:22 PM
 

US employees concerned about BYOD reimbursement


Information Governance Best Practice: Adopt a Use Case Approach

by System Administrator - Monday, 16 February 2015, 4:10 PM
 

Information Governance Best Practice: Adopt a Use Case Approach

by Debra Logan, Alan Dayley and Sheila Childs


Massive data growth, new data types, litigation, regulatory scrutiny and privacy/information risks have all created an urgent need for information governance. IT professionals considering MDM, e-discovery, information archiving or cloud migration should start implementing information governance now.

Overview

Impacts
  • Data migration projects present an opportunity for legal and IT professionals to eliminate redundant, outdated and trivial data, by up to 60% in some cases, decreasing data management costs and reducing legal and regulatory risks.
  • Master data management (MDM), data quality, archiving, enterprise content management (ECM), records management or e-discovery system implementation can be used as a starting point for chief information officers (CIOs) to create specific information governance policies and set the stage for using information assets to drive business growth.
  • Increasing concerns about data security, privacy, personally identifiable information, intellectual property (IP) protection and e-discovery mean that IT has new business partners, such as chief legal and chief risk officers, to assist with its information governance efforts.
Recommendations
  • Use data migration and system retirement as an opportunity to undertake an information governance program, especially "defensible deletion" or legacy information clean up.
  • Focus on MDM, enterprise content management and data quality projects if your organization is seeking cost optimization or the business benefits associated with growth enablement, service improvement, reduced risk or regulatory compliance.
  • Avoid wasting time and money on overlapping and redundant efforts by bringing the information governance projects that are proliferating in the areas of privacy and data security together now.
Analysis

Interest in information governance among Gartner clients continues to be strong with "information management" or "information governance" being the topic of over 1,900 inquiries in the six months to September 2013.

Organizations have been talking about information governance for quite a few years, but it is only now that we see more widespread understanding of what it takes to accomplish it. Information governance is starting to expand beyond the traditional litigation and regulatory retention requirements (for risk and cost control) into possible business value propositions. These ideas have finally broken through the ingrained mentality that many had about storage being "inexpensive" and that it was easier to simply keep information than to delete it, or that search technology would allow enterprises to forgo the effort and expense of organizing themselves and devoting resources to governance (see Note 1 for Gartner's definitions of "governance" and "information governance" and how these relate to overall corporate governance).

While more and more organizations are talking about information governance, they are also realizing that governance is technically complex, organizationally challenging and politically sensitive. In addition, it is often difficult to get executive-level sponsorship for governance programs because, in general, executives do not recognize the need for governance — not least because the effects of a lack of information governance are not as readily apparent as other pressing IT concerns. This is starting to change, however, as executives realize that many kinds of difficulties — such as failing to comply with regulatory regimes, excessive litigation costs and a lack of decision-making transparency — are, in fact, failures that have a root cause in poor information governance.

An approach to information governance based on specific use cases is one way to break through these barriers to adoption. This impact assessment presents different information governance use cases, all of which can be used as starting points for larger programs. This approach is one that has been proven successful by many organizations, and our impacts and recommendations can help your enterprise to achieve the same early success in beginning — or continuing — its information governance program (see Note 2 for examples).

Information governance is a topic of interest both inside and outside IT. CIOs, chief data officers, infrastructure managers, chief information security officers, risk and compliance officers and general counsel can use this research to make decisions about where to start their information governance programs.

Figure 1. Impacts and Top Recommendations for Information Governance Use Cases

ECM = enterprise content management; CIO = chief information officer; IP = intellectual property; MDM = master data management | Source: Gartner (November 2013)

Impacts and Recommendations

Data migration projects present an opportunity for legal and IT professionals to eliminate redundant, outdated and trivial data, by up to 60% in some cases, decreasing data management costs and reducing legal and regulatory risks

Data migration and IT infrastructure modernization are two of the most common information governance use cases. There are a number of variations on this use case, such as migrating file shares to ECM or SharePoint, files to cloud storage (including file sync and share services), and moving data from legacy storage to more modern and cost-effective platforms.

Clients who undertake analysis of existing data stores always tell us that redundant, outdated, trivial and risky data represents between 15% and 60% of what they have (see the Evidence section).

Another example is the migration of legacy enterprise information archiving systems to next-generation, on-premises or SaaS products or services. Enterprise information archiving systems are the target system type in many migrations. Archiving solves several problems that cannot be handled in native email systems, social media systems or by using file shares as primary storage. Archiving systems have been put in place as solutions for storage management, e-discovery, compliance, indexing, search and business or market analysis.

There are two primary use cases here:

  1. The migration of email or files from the email system or from file shares to an archiving system.
  2. The migration from one archiving system to another.

In the process of moving files from one location to another, many enterprises take the opportunity to create rules that allow data to be identified, classified and assessed for ongoing retention or for deletion. In practice, what has happened over the years is that companies have "over-retained" email and files, and migration presents an opportunity to delete data that no longer has any business value and doesn't need to be retained for legal or regulatory purposes.
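
To make this concrete, the sketch below shows one way such a rule might be expressed in code. It is purely illustrative and not part of the Gartner research: the three-year retention window and the field names (lastModified, litigationHold, requiredByRegulation, hasBusinessValue) are assumptions chosen for the example; real rules would come from policies agreed with legal and compliance.

// Illustrative sketch only: a simple "defensible deletion" rule applied during a migration.
// All thresholds and field names are assumptions for the example, not a vendor's or Gartner's API.
const RETENTION_YEARS = 3

function classifyForMigration(item, now) {
    const ageInYears = (now - item.lastModified) / (1000 * 60 * 60 * 24 * 365)

    if (item.litigationHold) {
        return 'retain'      // never delete data that is under a legal hold
    }
    if (item.requiredByRegulation) {
        return 'retain'      // keep anything with a regulatory retention requirement
    }
    if (ageInYears > RETENTION_YEARS && !item.hasBusinessValue) {
        return 'delete'      // redundant, outdated or trivial data becomes a deletion candidate
    }
    return 'migrate'         // everything else moves to the target system
}

// Example: an old file with no business value, no hold and no regulatory requirement
console.log(classifyForMigration(
    { lastModified: new Date('2010-01-15').getTime(), litigationHold: false,
      requiredByRegulation: false, hasBusinessValue: false },
    Date.now()
))  // -> 'delete'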

The Recommended Reading section has more advice on the legal and regulatory implications of legacy application retirement.

Recommendations:

  • Use data migration and system retirement as an opportunity to undertake an information governance program, especially "defensible deletion" or legacy information clean up.
  • Storage managers or other IT professionals who are considering any archiving scenario should work with legal and compliance professionals to create rules for retaining only the data that is necessary, usually no more than three years' worth, or that which has had a "litigation hold" placed on it. In many cases legal will have asked for the data to be held, but never rescinded the litigation hold, even though the matter is no longer ongoing.
  • When moving files to an ECM system or SharePoint, organizations should include a component of data classification and tagging, again with the involvement of legal and compliance users.
  • Use hardware refreshes and storage redesign projects as an opportunity to introduce information governance to IT.

MDM, data quality, archiving, ECM, records management or e-discovery system implementation can be used as a starting point for CIOs to create specific information governance policies and set the stage for using information assets to drive business growth.

Information governance can be proactive or reactive. Many organizations find themselves in the position of having to retrospectively apply policy and assign responsibility for data, because that was not done at the outset of the project or when the data was created. Proactive information governance takes place at the time of system planning or process creation. The types of projects that lend themselves well to setting up governance structures, roles and policies include MDM, data quality, application archiving and retirement, ECM, records management, e-discovery data collection, business analytics, social analytics and social media compliance.

Determining decision rights and responsibilities — along with accountability for setting policy, implementing policy and enforcing policy — should all be part of the project plan for any of these systems. Having carried out this work for one type of project will enable you to extend it to other systems, both old and new, within your enterprise. As a best practice it is essential that these projects be linked and that governance methods be consistent across the full range of information types, irrespective of system of origin or where the data ends up.

Another best practice is the creation of data stewards, giving specific responsibility and accountability to individuals who have an ongoing responsibility for managing the data. Driving revenue, improving service and decreasing time to market are the business benefits that are often sought when implementing MDM, ECM, data quality and e-discovery projects. The starting point for any proactive information governance program is an effort to value the information as an asset.

Questions that make good starting points include:

  • "What is the most critical business information we have?"
  • "What information is shared across business processes on an enterprise wide basis?"
  • "Where is our intellectual property?"

To get maximum leverage and value from customer data that is the subject of an MDM project, one must also consider how that data will be used, who gets to use it and how, as well as the legalities of doing so.

Recommendations:

  • When planning MDM, ECM, data quality and e-discovery projects or programs, use Gartner's methodology (see "Toolkit: Information Governance Project") to identify stakeholders and assess their roles in the management of the data, according to a standard responsible, accountable, consulted and informed (RACI) chart (a simple illustrative sketch follows this list). The two main questions that need to be answered initially are:
    • "Who is responsible for information decisions and policy?"
    • "Who is responsible for data-related policy and processes?"
  • In order to eliminate duplication of effort and data redundancy, or the need for reconciliation, ensure that implementation of policy, workflow, data dictionaries, business glossaries, taxonomies, reference data and other organizational and definitional elements of information governance is led by business subject matter experts and accessed by all governance programs and personnel.
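
As a purely illustrative aside, the RACI assignments for the two questions above can be captured in a simple data structure, as sketched below; the roles and decisions shown are hypothetical examples and are not taken from Gartner's toolkit.

// Illustrative sketch only: a hypothetical RACI chart expressed as plain data.
const raciChart = {
    'Information decisions and policy': {
        responsible: 'Chief Data Officer',
        accountable: 'CIO',
        consulted: ['General Counsel', 'Chief Risk Officer'],
        informed: ['Business unit leaders']
    },
    'Data-related policy and processes': {
        responsible: 'Data stewards',
        accountable: 'Chief Data Officer',
        consulted: ['IT security', 'Records management'],
        informed: ['All employees']
    }
}

// Example usage: list who is accountable for each decision
for (const [decision, roles] of Object.entries(raciChart)) {
    console.log(decision + ' -> accountable: ' + roles.accountable)
}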

Increasing concerns about data security, privacy, personally identifiable information, IP protection and e-discovery mean that IT has new business partners, such as chief legal and chief risk officers, to assist with its information governance efforts

According to Gartner's annual privacy survey, organization spending on privacy programs around consumers or citizens is as follows:

  • 36% spend $10 or more per employee per year.
  • 32% spend $100 or more per employee per year.
  • 11% spend $1,000 or more per employee per year.

Table 1 contains selected data from Fulbright and Jaworski's Annual Litigation Trends Survey (2012).

 

  • Companies spending more than $1 million on litigation: 46% (2010), 54% (2012)
  • Large companies spending more than $1 million on litigation: 26% (2010), 81% (2012)
  • Companies that had at least one regulatory proceeding commenced against them: 37% (2010), 42% (2012)
  • Companies that dealt with at least one investigation in 2012, by industry sector:
    • Energy: 58%
    • Technology/communications: 47%
    • Retail/wholesale: 40%
    • Healthcare: 33%
    • Insurance: 40%

Source: Gartner (November 2013)

Compliance managers trying to understand the regulations that will apply to them can be overwhelmed by global regulatory proliferation, and this is further complicated by regulations that conflict with each other. This creates serious legal and compliance risks.

Corporate governance, security breach notification, privacy and data protection, and industry-specific regulations — such as money-laundering or bribery laws — have added layer upon layer of compliance to IT processes and activities. Typically, a new regulation or other binding requirement (such as payment card industry compliance) is followed by a revised corporate and departmental policy, which is then translated into a new set of controls that must be maintained by someone in the IT organization. Over time, these controls begin to overlap and audits are conducted by separate groups of internal auditors, regulatory examiners and assessors from business partners — with each group issuing its own questionnaire and requiring its own report.

There is no way to stay in compliance, safeguard privacy, protect IP or decrease litigation costs while responding to the appropriate legal challenges and regulatory requests outside of a unified information governance framework.

Recommendations:

  • Work with the legal department to compile a list of regulations.
  • Complete a compliance risk assessment to prioritize regulatory compliance efforts.
  • Map regulations to policies and controls to identify overlaps, redundancies and gaps in policies, controls and records retention requirements (a simple illustrative sketch follows this list).
  • Redesign policies and controls so they can meet multiple regulations without unnecessary duplication.
  • Implement technology that can provide metadata and content analysis of information and support policy creation by providing snapshots of your organization's data.
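
The sketch below is a toy illustration of the mapping exercise described in the list above; the regulation and control names are hypothetical, and a real mapping would be built and maintained with legal and compliance input.

// Illustrative sketch only: map regulations to internal controls to spot overlaps
// (one control relied on by several regulations) and gaps (a regulation with no control).
const regulationToControls = {
    'Breach notification law': ['CTRL-ENCRYPTION', 'CTRL-INCIDENT-RESPONSE'],
    'Data protection regulation': ['CTRL-ENCRYPTION', 'CTRL-RETENTION'],
    'Anti-bribery law': []  // gap: no control mapped yet
}

const controlUsage = {}
for (const [regulation, controls] of Object.entries(regulationToControls)) {
    if (controls.length === 0) {
        console.log('Gap: no control covers "' + regulation + '"')
    }
    for (const control of controls) {
        controlUsage[control] = (controlUsage[control] || 0) + 1
    }
}
for (const [control, count] of Object.entries(controlUsage)) {
    if (count > 1) {
        console.log('Overlap: ' + control + ' supports ' + count + ' regulations')
    }
}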


© 2013 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner’s prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see “Guiding Principles on Independence and Objectivity.”

Link: http://www.gartner.com 


Infrastructure (IT Infrastructure)

by System Administrator - Thursday, 13 April 2017, 8:55 PM
 

Infrastructure (IT Infrastructure)

Posted by: Margaret Rouse | Contributor: Clive Longbottom

Infrastructure is the foundation or framework that supports a system or organization. In computing, infrastructure is composed of physical and virtual resources that support the flow, storage, processing and analysis of data. Infrastructure may be centralized within a data center, or it may be decentralized and spread across several data centers that are either controlled by the organization or by a third party, such as a colocation facility or cloud provider.

In a data center, infrastructure includes the power, cooling and building elements necessary to support hardware. On the internet, infrastructure also includes transmission media, such as network cables, satellites, antennas, routers, aggregators, repeaters and other devices that control data transmission paths. Cloud computing provides a flexible IT infrastructure in which resources can be added and removed as workloads change.

The way IT infrastructures are created is continually changing. Today, some vendors provide pre-engineered blocks of compute, storage and network equipment that optimize the IT hardware and virtualization platform into a single system that can be easily interconnected to other systems. This modular approach is called converged infrastructure.

Regardless of how it is created, an IT infrastructure must provide a suitable platform for all the necessary IT applications and functions an organization or individual requires. Viewing IT infrastructure as a single entity can result in better effectiveness and more efficiency. It allows resources to be optimized for different workloads, and the impact of any changes on interrelated resources to be more readily understood and handled.

Infrastructure management is sometimes divided into categories of systems management, network management, and storage management. Hands-off infrastructure management uses a software-defined approach to management and automation to minimize the need for physical interaction with infrastructure components.

Types of infrastructures

An immutable infrastructure is an approach to managing services and software deployments on IT resources wherein components are replaced rather than changed. An application or service is effectively redeployed each time any change occurs.

A composable infrastructure is a framework that treats physical compute, storage and network fabric resources as services. Resources are logically pooled so that administrators don't have to physically configure hardware to support a specific software application.

A dynamic infrastructure is a framework that can automatically provision and adjust itself as workload demands change. IT administrators can also choose to manage these resources manually.

Critical infrastructure is a framework whose assets are so essential that their continued operation is required to ensure the security of a given nation, its economy, and the public’s health and/or safety.

A contact center infrastructure is a framework composed of the physical and virtual resources that a call center facility needs to operate effectively. Infrastructure components include automatic call distributors, integrated voice response units, computer-telephony integration and queue management.

A cloud infrastructure includes an abstraction layer that virtualizes resources and logically presents them to users over the internet through application program interfaces and API-enabled command-line or graphical interfaces.

Dark infrastructure is the part of a framework that is composed of undocumented but active software or services whose existence and function are unknown to system administrators -- despite the fact that it may be integral to the continued operation of documented infrastructure.

A cloud storage infrastructure is a framework composed of the hardware and software that support the computing requirements of a private or public cloud storage service.

Link: http://go.techtarget.com


Innovation Process Management (IPM)

by System Administrator - Thursday, 2 July 2015, 7:46 PM
 

innovation process management (IPM) definition

Posted by Margaret Rouse

Innovation process management (IPM) refers to the management of processes used to spur product and organizational innovation. The purpose of innovation process management is to trigger the creative capabilities of employees, create an environment that encourages innovation and develop repeatable processes that make innovation an integral part of the workplace.

According to the consultancy Gartner Inc., companies that can successfully manage and maintain innovation within the workplace can increase revenue, improve operational effectiveness, and pursue new business models.

Common tools or strategies used to elicit this creativity from employees include brainstorming, virtual prototyping, product lifecycle management, idea management, product line planning, portfolio management and more.

Innovation processes often fall into two categories: "pushed" or "pulled." A pushed process is one in which a company has access to existing or emerging technologies and tries to find a profitable application for them. A pulled process is one in which the company focuses on areas where customers' needs are not being met and then finds a solution.

An important aspect of keeping innovation, especially IT innovation, alive within a company is cultivating and maintaining an innovative culture.

One type of innovation culture is a formulaic innovation culture. A formulaic innovation management style instills a vision throughout the workplace and continually supports that vision through operational processes that enable employees to take measured risks. New ideas are encouraged and can come from anyone within the company; when good ideas do surface, they are supported through one of the company's time-tested processes. The possible drawbacks to this type of business innovation management are that companies can begin to value the system over the breakthroughs, and that the culture within the organization can become complacent.

Another type of innovation culture is an entrepreneurial innovation culture. This type of innovation culture is rare and usually features, especially early on in the company's maturity, a single innovator or leader. Steve Jobs, the cofounder of Apple Inc., was an example of the single-leader-inspired innovation culture, as is Mark Zuckerberg, chairman and CEO of Facebook. These companies are usually willing to take risks that most companies would not; they strive for major disruption rather than incremental growth, and they use emerging and disruptive technologies to change how a certain product or service is used. One possible drawback is that the company can rely too heavily on the innovative leader.

Gartner's recommendation to IT leaders interested in launching an innovation management program is to follow a disciplined approach. Here are five steps Gartner recommends IT leaders and their companies take to develop an innovation management program:

  1. Strategize and plan: Agree on a vision for the initiative that is in line with business goals. Then establish the resources and budget, and integrate the vision with IT and business plans.
  2. Develop governance: Establish a process for making decisions. This includes identifying and engaging stakeholders, agreeing on who is in charge and what the flow for decision making is, and also having feedback mechanisms in place.
  3. Drive change management: Have systems by which people can communicate and socialize via multiple channels; get buy-in from stakeholders at all levels; and assess which open innovation initiatives and cultural shifts will help the company optimize contributions to innovation.
  4. Execute: Make sure to draw from a wide range of sources to generate ideas for innovations that will transform the business, align the initiatives with business goals, and then update and drive new elements of the initiatives in response to changing business requirements.
  5. Measure and improve: Once the innovative initiative is in place, monitor and measure how it has affected business outcomes. It is also important to seek feedback from stakeholders and to continue to study innovation best practices and case studies from other organizations. Also make sure to continually drive improvements through process changes and upgrades.

Link: http://searchcio.techtarget.com


Insecure File Sharing

by System Administrator - Thursday, 17 September 2015, 8:35 PM
 

Breaking Bad: The Risk of Insecure File Sharing

by Intralinks

Data leakage and loss from negligent file sharing and information collaboration practices is becoming just as significant a risk as data theft. Just like malicious threats from hackers and others, data leakage through the routine and insecure sharing of information is a major threat to many organizations. Being able to securely share valuable corporate data is a critical requirement for all organizations, but especially regulated companies like financial services and life sciences firms.

Many companies have few provisions in place – process, governance, and technology – to adequately protect data. Yet, more and more sensitive information is being shared outside the organization, often without the knowledge or approval of CIOs or GRC professionals who are arguably losing control. Employees are 'behaving badly' – they acknowledge risky behavior and in turn experience the consequences of risky behavior regularly.

For the first time, the study Breaking Bad: The Risk of Insecure File Sharing explores the link between organizational and individual behavior when using increasingly popular file sync-and-share solutions. As shown in this research, organizations are not responding to the risk of ungoverned file-sharing practices among employees as well as with external parties, such as business partners, contractors, vendors and other stakeholders.

Consumer grade file-sharing cloud applications are popular with both employees and organizations because they make it possible for busy professionals to work efficiently together.

However, the findings in this report identify the holes in document and file level security in part caused by their expanded use. The goal is to provide solutions to reduce the risk created by employees' document and file sharing practices. More than 1,000 IT and IT security practitioners were surveyed in the United States, United Kingdom and Germany. The majority of respondents are at the supervisor level or above with expertise and understanding of their organization's use of file-sharing solutions and overall information security and data privacy policies and strategies.

Following are the key takeaways from this study...

Please read the attached whitepaper.


Integrating Big Data into Business Processes and Enterprise Systems

by System Administrator - Wednesday, 10 September 2014, 9:18 PM
 

Integrating Big Data into Business Processes and Enterprise Systems

In the paper, "Integrate Big Data into Your Business Processes and Enterprise Systems" you'll learn how to drive maximum value with an enterprise approach to Big Data. Topics discussed include:

  • How to ensure that your Big Data projects will drive clearly defined business value
  • The operational challenges each Big Data initiative must address
  • The importance of using an enterprise approach for Hadoop batch processing

Please read the attached whitepaper


Integrating Physical Layer Management Systems into Today’s Networks

by System Administrator - Tuesday, 4 November 2014, 5:40 PM
 

Integrating Physical Layer Management Systems into Today’s Networks

BY INDUSTRY PERSPECTIVES

DAMON DEBENEDICTIS
TE Connectivity

Damon DeBenedictis has had a 17-year career at TE Connectivity, managing copper and fiber product portfolios that have led to market-changing technologies for data centers, office networks, and broadcast networks.

Physical layer management (PLM) systems provide complete visibility into the physical state of the network at any given time, but integrating such systems into a network and business processes may seem like a complex project. Where does one start? When do you integrate PLM and how do you do it? In this article, we'll look at PLM and at some key considerations when integrating a PLM system into a network.

Breaking down a PLM system

A PLM system is a tool that network managers use to access and catalogue real-time status information about their physical layer networks. PLM systems bring layer 1 to the same visibility as layers 2-7 by including intelligent connectors on patch cords and intelligent ports on patch panels. The solution software reports the state of every network connection: whether or not it is connected, how much bandwidth a circuit can carry, and the type of circuit (i.e., Cat5/6 Ethernet or single- or multi-mode fiber). The PLM system also provides circuit mapping, alarming, and reporting.
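
To give a feel for the data involved, the snippet below sketches the kind of per-connection record such a system might expose; the field names and values are hypothetical and do not represent any vendor's actual API.

// Illustrative sketch only: a hypothetical per-connection record from a PLM system.
const connectionRecord = {
    panel: 'Rack 12 / Patch panel 3',
    port: 24,
    connected: true,                  // whether a patch cord is currently plugged in
    circuitType: 'single-mode fiber', // e.g., Cat5/6 Ethernet, single- or multi-mode fiber
    bandwidthGbps: 10,                // how much bandwidth the circuit can carry
    remoteEnd: 'Rack 07 / Switch 2, port 15'
}

console.log(connectionRecord.panel + ' port ' + connectionRecord.port +
    (connectionRecord.connected ? ' is connected to ' + connectionRecord.remoteEnd : ' is free'))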

Areas of consideration prior to integration

The key opportunity for implementing a PLM system arises when there is a new data center or data center expansion project. This is the time to consider PLM.

There are two basic ways to integrate a PLM system into a network:

  1. Use the PLM system’s own application and database;
  2. Use a middleware API in the PLM system to integrate its output into an existing network management system.

The decision about which route to take depends on the network manager’s tolerance for using an additional management system on top of the others he or she is already using, and whether or not it’s worth the effort to adopt a new system.

Two ways to integrate: the pros and cons of both

The advantage to using the PLM system’s own application and database is that it manages the entire physical layer, mapping circuits, issuing work orders, reserving ports for new connections, reporting on circuit and patch panel inventories, and other functions. However, using a new application may require some duplication of effort as the manager compares the PLM system’s output with the output of other management systems. In addition, the PLM application will require process changes to employee workflows as a new work order system is integrated.

With the middleware approach, the manager need not change anything about employee workflows. However, the value of the input is limited to what the target management system can accept. For example, if the management system doesn’t understand the network at the patch cord level, then patch cord status and locations will not be available to the network manager.

Choosing between the two, what’s right for you?

One key to deciding between the application and middleware approaches is to determine whether or not the existing work order and documentation systems are working well. Large carriers use existing or homegrown software tools to manage their networks. Frequently, these systems include work order management systems that automatically email work orders to the appropriate technicians. In smaller organizations, however, network documentation may be done manually on spreadsheets. Either way, these manual data entry tools are fraught with errors and are very labor-intensive.

If a company has a robust work order management system and simply wants to add awareness of the physical network to its suite of tools, then integrating PLM middleware into an existing management system is the way to go. But for companies that struggle with work order management, using the PLM application will be well worth whatever changes must take place in employee workflows.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

 

Link: http://www.datacenterknowledge.com


Intel® RealSense™ SDK

by System Administrator - Wednesday, 4 November 2015, 3:52 PM
 

Building Gesture Recognition Web Apps with Intel® RealSense™ SDK

by Jimmy Wei | Intel Corporation

In this article, we will show you how to build a web application that can detect various types of gestures using the Intel® RealSense™ SDK and front facing (F200) camera.

Editorial Note

This article is in the Product Showcase section for our sponsors at CodeProject. These reviews are intended to provide you with information on products and services that we consider useful and of value to developers.

Introduction

In this article, we will show you how to build a web application that can detect various types of gestures using the Intel® RealSense™ SDK and front facing (F200) camera. Gesture recognition will give users of your application another innovative means for navigation and interface interaction. You will need basic knowledge of HTML, JavaScript*, and jQuery in order to complete this tutorial.

Hardware Requirements

  • 4th generation (or later) Intel® Core™ processor
  • 150 MB free hard disk space
  • 4 GB RAM
  • Intel® RealSense™ camera (F200)
  • Available USB3 port for the Intel RealSense camera (or dedicated connection for integrated camera)

Software Requirements

  • Microsoft Windows* 8.1 (or later)
  • A web browser such as Microsoft Internet Explorer*, Mozilla Firefox*, or Google Chrome*
  • The Intel RealSense Depth Camera Manager (DCM) for the F200, which includes the camera driver and service, and the Intel RealSense SDK. Go here to download components.
  • The Intel RealSense SDK Web Runtime. Currently, the best way to get this is to run one of the SDK’s JavaScript samples, which can be found in the SDK install directory. The default location is C:\Program Files (x86)\Intel\RSSDK\framework\JavaScript. The sample will detect that the web runtime is not installed, and prompt you to install it.

Setup

Please make sure that you complete the following steps before proceeding further.

  1. Plug your F200 camera into a USB3 port on your computer system.
  2. Install the DCM.
  3. Install the SDK.
  4. Install the Web Runtime.
  5. After installing the components, navigate to the location where you installed the SDK (we’ll use the default path):

C:\Program Files (x86)\Intel\RSSDK\framework\common\JavaScript

You should see a file called realsense.js. Please copy that file into a separate folder. We will be using it in this tutorial. For more information on deploying JavaScript applications using the SDK, click here.

Code Overview

For this tutorial, we will be using the sample code outlined below. This simple web application displays the names of gestures as they are detected by the camera. Please copy the entire code below into a new HTML file and save this file into the same folder as the realsense.js file. Alternatively, you can download the complete web application by clicking on the code sample link at the top of the article. We will go over the code in detail in the next section.

The Intel RealSense SDK relies heavily on the Promise object. If you are not familiar with JavaScript promises, please refer to this documentation for a quick overview and an API reference.

Refer to the Intel RealSense SDK documentation to find more detail about SDK functions referenced in this code sample. The SDK is online, as well as in the doc directory of your local SDK install.

<html>
<head>
    <title>RealSense Sample Gesture Detection App</title>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <script type="text/javascript" src="https://autobahn.s3.amazonaws.com/autobahnjs/latest/autobahn.min.jgz"></script>
    <script type="text/javascript" src="https://www.promisejs.org/polyfills/promise-6.1.0.js"></script>
    <script type="text/javascript" src="realsense.js"></script>
    <script>
        var sense, hand_module, hand_config
        var rs = intel.realsense

        function DetectPlatform() {
            rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
                if (info.nextStep == 'ready') {
                    Start()
                }
                else if (info.nextStep == 'unsupported') {
                    $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
                    $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
                }
                else if (info.nextStep == 'driver') {
                    $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
                }
                else if (info.nextStep == 'runtime') {
                    $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="https://software.intel.com/en-us/realsense/webapp_setup_v6.exe">here</a> </b>')
                }
            }).catch(function (error) {
                $('#info-area').append('Error detected: ' + JSON.stringify(error))
            })
        }

        function Start() {
            rs.SenseManager.createInstance().then(function (instance) {
                sense = instance
                return rs.hand.HandModule.activate(sense)
            }).then(function (instance) {
                hand_module = instance
                hand_module.onFrameProcessed = onHandData
                return sense.init()
            }).then(function (result) {
                return hand_module.createActiveConfiguration()
            }).then(function (result) {
                hand_config = result
                hand_config.allAlerts = true
                hand_config.allGestures = true
                return hand_config.applyChanges()
            }).then(function (result) {
                return hand_config.release()
            }).then(function (result) {
                sense.captureManager.queryImageSize(rs.StreamType.STREAM_TYPE_DEPTH)
                return sense.streamFrames()
            }).catch(function (error) {
                console.log(error)
            })
        }

        function onHandData(sender, data) {
            for (g = 0; g < data.firedGestureData.length; g++) {
                $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
            }
        }

    $(document).ready(DetectPlatform)
    </script>
</head>

<body>
    <div id="info-area"></div>
    <div id="gesture-area"></div>
</body>
</html>

The screenshot below is what the app looks like when you run it and present different types of gestures to the camera.

 

Detecting the Intel® RealSense™ Camera on the System

Before we can use the camera for gesture detection, we need to see if our system is ready for capture. We use the detectPlatform function for this purpose. The function takes two parameters: the first is an array of runtimes that the application will use and the second is an array of cameras that the application will work with. We pass in ['hand'] as the first argument since we will be working with just the hand module and ['front'] as the second argument since we will only be using the F200 camera.

The function returns an info object with a nextStep property. Depending on the value that we get, we can determine if the camera is ready for usage. If it is, we call the Start function to begin gesture detection. Otherwise, we output an appropriate message based on the string we receive back from the platform.

If there were any errors during this process, we output them to the screen.

rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
    if (info.nextStep == 'ready') {
        Start()
    }
    else if (info.nextStep == 'unsupported') {
        $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
        $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
    }
    else if (info.nextStep == 'driver') {
        $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
    }
    else if (info.nextStep == 'runtime') {
        $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="https://software.intel.com/en-us/realsense/webapp_setup_v6.exe">here</a> </b>')
    }
}).catch(function (error) {
    $('#info-area').append('Error detected: ' + JSON.stringify(error))
})

Setting Up the Camera for Gesture Detection

rs.SenseManager.createInstance().then(function (instance) {
    sense = instance
    return rs.hand.HandModule.activate(sense)
})

You need to follow a sequence of steps to set up the camera for gesture detection. First, create a new SenseManager instance and enable the camera to detect hand movement. The SenseManager is used to manage the camera pipeline.

To do this, we will call the createInstance function. The callback returns the instance that we just created, which we store in the sense variable for future use. We then call the activate function to enable the hand module, which we will need for gesture detection.

.then(function (instance) {
    hand_module = instance
    hand_module.onFrameProcessed = onHandData
    return sense.init()
})

Next, we need to save the instance of the hand tracking module that was returned by the activate function into the hand_module variable. We then assign our own custom callback function, onHandData, to the onFrameProcessed property so that it is invoked whenever new frame data is available. Finally, we initialize the camera pipeline for processing by calling the init function.

.then(function (result) {
    return hand_module.createActiveConfiguration()
})

To configure the hand tracking module for gesture detection, you have to create an active configuration instance. This is done by calling the createActiveConfiguration function.

.then(function (result) {
    hand_config = result
    hand_config.allAlerts = true
    hand_config.allGestures = true
    return hand_config.applyChanges()
})

The CreateActiveConfiguration function returns the instance of the configuration, which is stored in the hand_config variable. We then set the allAlerts property to true to enable all alert notifications. The alert notifications give us additional details such as the frame number, timestamp, and the hand identifier that triggered the alert. We also set the allGestures property to true, which is needed for gesture detection. Finally, we call the applyChanges function to apply all parameter changes to the hand tracking module. This makes the current configuration active.

.then(function (result) {
    return hand_config.release()
})

We then call the release function to release the configuration.

.then(function (result) {
    sense.captureManager.queryImageSize(rs.StreamType.STREAM_TYPE_DEPTH)
    return sense.streamFrames()
}).catch(function (error) {
    console.log(error)
})

Finally, the next sequence of functions sets up the camera to start streaming frames. When new frame data is available, the onHandData function will be invoked. If any errors were detected, we catch them and log all errors to the console.

The onHandData function

function onHandData(sender, data) {
    for (g = 0; g < data.firedGestureData.length; g++) {
        $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
    }
}

The onHandData callback is the main function where we check to see if a gesture has been detected. Remember this function is called whenever there is new hand data and that some of the data may or may not be gesture-related data. The function takes in two parameters, but we use only the data parameter. If gesture data is available, we iterate through the firedGestureData array and get the gesture name from the name property. Finally, we output the gesture name into the gesture-area div, which displays the gesture name on the web page.

Note that the camera remains on and continues to capture gesture data until you close the web page.

Conclusion

In this tutorial, we used the Intel RealSense SDK to create a simple web application that uses the F200 camera for gesture detection. We learned how to detect whether a camera is available on the system and how to set up the camera for gesture recognition. You could modify this example by checking for a specific gesture type (e.g., thumbsup or thumbsdown) using if statements and then writing code to handle that specific use case.
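
As a starting point, here is a minimal sketch of that modification. The gesture name strings 'thumb_up' and 'thumb_down' are assumptions for the example; log data.firedGestureData[g].name first to confirm the names your SDK version actually reports, and then replace the onHandData function from the tutorial with something like this:

function onHandData(sender, data) {
    for (var g = 0; g < data.firedGestureData.length; g++) {
        var gesture = data.firedGestureData[g].name

        if (gesture === 'thumb_up') {
            // Assumed gesture name; handle your "approve" use case here
            $('#gesture-area').append('Thumbs up detected<br />')
        }
        else if (gesture === 'thumb_down') {
            // Assumed gesture name; handle your "reject" use case here
            $('#gesture-area').append('Thumbs down detected<br />')
        }
        else {
            // Fall back to simply displaying any other gesture, as in the tutorial
            $('#gesture-area').append(gesture + '<br />')
        }
    }
}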

About the Author

Jimmy Wei is a software engineer and has been with Intel Corporation for over 9 years.


License

  • No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Link: http://www.codeproject.com

 

Intrapreneur (Intrapreneurship)

by System Administrator - Thursday, 23 April 2015, 7:54 PM
 

Intrapreneur (Intrapreneurship)


ISO 9001:2015​

by System Administrator - Tuesday, 17 November 2015, 6:27 PM
 

Let's talk about ISO 9001:2015

by Normas9000.com

Nigel Croft, Chair of the ISO subcommittee that developed and revised the standard: "[ISO 9001:2015] is an evolutionary rather than a revolutionary process. [...] We are bringing ISO 9001 firmly into the 21st century. The earlier versions of ISO 9001 were quite prescriptive, with many requirements for documented procedures and records. In the 2000 and 2008 editions, we focused more on managing processes and less on documentation. [...] We have now gone a step further, and ISO 9001:2015 is even less prescriptive than its predecessor, focusing instead on performance. We have achieved this by combining the process approach with risk-based thinking, and employing the Plan-Do-Check-Act (PDCA) cycle at all levels in the organization."

Why has ISO 9001 been revised?

ISO management system standards are reviewed for effectiveness and sustainability approximately every five to eight years.

ISO 9001:2015 replaces the previous editions; certification bodies will have up to three years to migrate certificates to the new version.

ISO 9000, which lays down the concepts and language used throughout the ISO 9000 family of standards, has also been revised and a new edition is available.

The standard has been revised with the following considerations in mind:

  • Greater emphasis on services.
  • Higher expectations from interested parties.
  • Better integration with other management system standards.
  • Adaptation to increasingly complex supply chains.
  • Globalization.

The structure of the standard has also changed. ISO 9001:2015, and every future management system standard, will follow the new common structure for management system standards. This will help organizations with integrated management systems.

"Knowing that today's organizations will have several management standards in place, we have designed the 2015 version to be easily integrated with other management systems. The new version also provides a solid base for sector-quality standards (automotive, aerospace, medical industries, etc.) and takes into account the needs of regulators." Nigel Croft, Chair of the ISO subcommittee.

Summary of the main changes to ISO 9001

  • The standard places greater emphasis on the process approach.
  • The standard calls for risk-based thinking.
  • There is greater flexibility regarding documentation.
  • The standard focuses more on interested parties.

This version of the standard is based on seven quality management principles:

ISO 9001:2008 is based on quality principles that are generally used by top management as a guide for quality improvement. They are defined in ISO 9000 and ISO 9004. However, these principles have been revised in the 2015 version. The new version of the standard is based on seven principles, which include:

  1. Customer focus.
  2. Leadership.
  3. Engagement of people.
  4. Process approach.
  5. Improvement (merging the former system and process approaches).
  6. Evidence-based decision making.
  7. Relationship management.

The first two principles, customer focus and leadership, have not changed since the 2008 version. The third principle, involvement of people, has been renamed engagement of people. The fourth principle, process approach, remains the same. The fifth has been merged into the fourth, which is why the number of principles drops to seven.

The process approach is used in the 2008 version of ISO 9001 to develop, implement and improve the effectiveness of a quality management system. However, ISO 9001:2015 provides an additional perspective on the process approach and clarifies why it is essential to adopt it in every management process of the organization. Organizations are now required to determine every process needed for the quality management system and to maintain the documented information necessary to support the operation of those processes.

Risk-based thinking is also one of the main changes in the new version of ISO 9001. The standard no longer includes the concept of preventive action, but two other sets of requirements cover the concept and include requirements on risk management.

Documentation flexibility is another important change in ISO 9001:2015. The terms "documents" and "records" have been replaced by "documented information". There are no specific requirements for documented procedures; however, processes should be documented in order to demonstrate conformity.

A stronger focus on interested parties is another change in ISO 9001:2015. The new version of the standard frequently raises the subject of interested parties, which in this context refers to stakeholders, both internal and external to the organization, with an interest in the quality management process. The standard requires organizations to focus not only on customer requirements, but also on the requirements of other interested parties, such as employees, suppliers and so on, that can affect the quality management system.

The new standard has 10 clauses:

  • Scope.
  • Normative references.
  • Terms and definitions.
  • Context of the organization.
  • Leadership.
  • Planning.
  • Support.
  • Operation.
  • Performance evaluation.
  • Improvement.

ISO 9001:2015 - Just published!

by Maria Lazarte

The latest edition of ISO 9001, ISO's flagship quality management systems standard, has just been published. This concludes over three years of revision work by experts from nearly 95 participating and observing countries to bring the standard up to date with modern needs.

With over 1.1 million certificates issued worldwide, ISO 9001 helps organizations demonstrate to customers that they can offer products and services of consistently good quality. It also acts as a tool to streamline their processes and make them more efficient at what they do. Acting ISO Secretary-General Kevin McKinley explains: “ISO 9001 allows organizations to adapt to a changing world. It enhances an organization’s ability to satisfy its customers and provides a coherent foundation for growth and sustained success.”

The 2015 edition features important changes, which Nigel Croft, Chair of the ISO subcommittee that developed and revised the standard, refers to as an “evolutionary rather than a revolutionary” process. “We are just bringing ISO 9001 firmly into the 21st century. The earlier versions of ISO 9001 were quite prescriptive, with many requirements for documented procedures and records. In the 2000 and 2008 editions, we focused more on managing processes, and less on documentation.

“We have now gone a step further, and ISO 9001:2015 is even less prescriptive than its predecessor, focusing instead on performance. We have achieved this by combining the process approach with risk-based thinking, and employing the Plan-Do-Check-Act cycle at all levels in the organization.

“Knowing that today’s organizations will have several management standards in place, we have designed the 2015 version to be easily integrated with other management systems. The new version also provides a solid base for sector-quality standards (automotive, aerospace, medical industries, etc.), and takes into account the needs of regulators.”

As the much anticipated standard comes into being, Kevin McKinley concludes, “The world has changed, and this revision was needed to reflect this. Technology is driving increased expectations from customers and businesses. Barriers to trade have dropped due to lower tariffs, but also because of strategic instruments like International Standards. We are seeing a trend towards more complex global supply chains that demand integrated action. So organizations need to perform in new ways, and our quality management standards need to keep up with these expectations. I am confident that the 2015 edition of ISO 9001 can help them achieve this.”

The standard was developed by ISO/TC 176/SC 2, whose secretariat is held by BSI, ISO member for the UK. “This is a very important committee for ISO,” says Kevin, “one that has led the way in terms of global relevance, impact and utilization. I thank the experts for their hard effort.”

ISO 9001:2015 replaces previous editions and certification bodies will have up to three years to migrate certificates to the new version.

ISO 9000, which lays down the concepts and language used throughout the ISO 9000 family of standards, has also been revised and a new edition is available.

Learn all about the new ISO 9001:2015 in our five-minute video.