Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías




Project Reports and the PMBOK

by System Administrator - Sunday, 19 October 2014, 3:32 PM

Project Reports and the PMBOK

by Lynda Bourne

One of the less well understood aspects of the PMBOK® Guide 5th Edition is its significant refinement of the way project data is transformed into useful project reports, mainly because the relevant information is distributed across the Guide.  This short article maps the flow.

The starting point is Section 3.8 – Project Information


Figure 3-5. Project Data, Information and Report Flow
© 2013 Project Management Institute

This section recognises that information changes in character as it is processed:

  • Work performance data is the raw observations and measurements made during the execution of the project work. Data has little direct value.
  • Work performance information is created when the data is analyzed and assessed in context. This information is used to help control the work and also forms the basis of project reports.
  • Work performance reports are the physical or electronic representation of work performance information compiled in project documents and used for project decision making.
  • Work performance reports are also distributed or made available through the project communication processes with the intention of influencing and informing the actions of stakeholders (both internal and external).

This overall flow is defined in more detail in each of the PMBOK’s knowledge areas.

The actual ‘work’ of the project is defined in process ‘4.3 Direct and Manage Project Work’ (and a limited number of other processes).  These ‘work’-focused processes all have ‘work performance data’ as an output.

Assessing and analysing the data is part of the controlling process in each of the specialist knowledge areas from ‘scope’ to ‘stakeholders’. For example, process ‘13.4 Control Stakeholder Engagement’ has work performance data as an input and work performance information as an output (as do all of the other controlling processes).

These nine sets of specific performance information pertaining to scope, time, cost, etc., are brought together in process ‘4.4 Monitor and Control Project Work’, to produce work performance reports as an output.


Figure 4-9. Monitor and Control Project Work Data Flow Diagram
© 2013 Project Management Institute
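As a sketch of the hand-off described above, the following toy pipeline moves raw work performance data through analysis into a compiled report; the task names, fields, and report format are invented for illustration and are not taken from the PMBOK® Guide:

```python
# Toy pipeline: work performance data -> information -> report.
# All names and fields here are hypothetical.

raw_data = [  # work performance data: raw observations from executing the work
    {"task": "Design", "planned_hours": 40, "actual_hours": 52},
    {"task": "Build", "planned_hours": 80, "actual_hours": 75},
]

def analyze(data):
    # Controlling processes: analyze raw data in context to produce
    # work performance information.
    return [
        {"task": d["task"], "variance": d["actual_hours"] - d["planned_hours"]}
        for d in data
    ]

def compile_report(info):
    # 4.4 Monitor and Control Project Work: compile the information
    # into a work performance report used for decision making.
    lines = [f"{i['task']}: {i['variance']:+d}h vs plan" for i in info]
    return "WORK PERFORMANCE REPORT\n" + "\n".join(lines)

print(compile_report(analyze(raw_data)))
```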

These reports are of course used for internal project management purposes, but also form a key element of project communication.

A key element within this process is knowledge management: gathering and using lessons learned. This aspect is discussed in our White Paper: Lessons Learned.

Process 10.2 Manage Communications receives the work performance reports as an input, and uses communication technology, models and methods to create, disseminate, store and ultimately dispose of performance reports and other project communications based on this information.

The art of effective communication is getting the right information to the right stakeholder, in the right format, at the right time!  For more on this see: The three types of stakeholder communication.

So in summary, the PMBOK® Guide 5th Edition has the flow from raw data to useful reports fairly well defined; the only challenge is knowing where to look! Hopefully the 6th Edition will do a better job of ‘joining up the dots’.




Rackspace Carina

by System Administrator - Thursday, 29 October 2015, 2:18 PM

Rackspace Carina brings 'zero infrastructure' Docker deployment to public cloud



by System Administrator - Friday, 27 February 2015, 11:27 AM

RASP helps apps protect themselves, but is it ready for the enterprise?

by Nicole Laskowski

A new technology called runtime application self-protection is being touted as a next big thing in application security. But not everyone is singing its praises.

In the application economy, a perimeter defense is no longer a good offense. With the proliferation of mobile devices and cloud-based technologies, perimeters are all but disappearing, according to Joseph Feiman, an analyst with Gartner Inc. "The more we move from place to place with our mobile devices, the less reliable perimeter-based technology becomes," he said.

Firewalls and intrusion prevention systems, which enterprises spent an estimated $9.1 billion on last year, still serve a vital purpose. But, given the enterprise infrastructure's growing sprawl, CIOs should be thinking about security breadth as well as security depth and how to scale their strategies down to the applications themselves, even building in a strikingly human feature: self-awareness.

A new tool for the application security toolbox known as runtime application self-protection (RASP) could help CIOs get there, but, according to one expert, it's no silver bullet.


Guarding the application

The security measures many CIOs have in place don't do much to safeguard actual applications, according to Feiman. Network firewalls, identity access management, intrusion detection or endpoint protection provide security at different levels, but none of them can see beyond the application layer. "Can you imagine a person who walks out of the house and into the city always surrounded by bodyguards because he has no muscles and no skills," Feiman said. "That is a direct analogy with the application." Strip away features like perimeter firewalls, and the application is basically defenseless.

Defenseless applications leave enterprises vulnerable to external -- and internal -- threats. "High-profile security breaches illustrate the growing determination and sophistication of attackers," said Johann Schleier-Smith, CTO at if(we), a social and mobile technology company based in San Francisco. "They have also forced the industry to confront the limitations of traditional security measures."


Application security testing tools help detect flaws and weaknesses, but the tools aren't comprehensive, Feiman said during a Gartner Security and Risk Management Summit last summer. Static application security testing, for example, analyzes source, binary or byte code to uncover bugs but only before the application is operational. Dynamic application security testing, on the other hand, simulates attacks on the application while it's operational and analyzes the response but only for Web applications that use HTTP, according to Gary McGraw, CTO of the software security consulting firm Cigital Inc.

Even when taken together, these two technologies still can't see what happens inside the application while it's operational. And, according to Feiman's research report Stop Protecting Your Apps; It's Time for Apps to Protect Themselves, published in September 2014, static and dynamic testing, whether accomplished with premises-based tools or purchased as a service, can be time-consuming and hard to scale as the enterprise app portfolio multiplies.

Is RASP the answer?

That's why Feiman is keeping an eye on a budding technology Gartner calls RASP or runtime application self-protection. "It is the only technology that has complete insight into what's going on in the application," he said.

RASP, which can be applied to Web and non-Web applications, doesn't affect the application design itself; instead, detection and protection features are added to the servers an application runs on. "Being a part of the virtual machine, RASP sees every instruction being executed, and it can see whether a set of instructions is an attack or not," he said. The technology works in two modes: It can be set to diagnostic mode to sound an alarm; or it can be set to self-protection mode to "stop an execution that would lead to a malicious exploit," Feiman said.
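A toy sketch of the two modes might look as follows; real RASP products instrument the runtime itself (the JVM or CLR) rather than wrapping calls in application code, and the injection check here is deliberately simplistic:

```python
# Toy illustration of RASP's two modes; the detection logic is a crude
# stand-in for the instruction-level inspection a real product performs.
import re

SUSPICIOUS = re.compile(r"'|--|;")  # naive SQL-injection indicators

def rasp_guard(query, mode="diagnostic"):
    # Inspect an "instruction" (here, a SQL string) before it executes.
    if SUSPICIOUS.search(query):
        if mode == "self-protection":
            # Self-protection mode: stop the execution outright.
            raise RuntimeError("RASP: blocked suspected injection")
        # Diagnostic mode: sound an alarm but let execution continue.
        print(f"RASP alert: suspicious query: {query!r}")
    return query
```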

The technology is offered by a handful of vendors. Many, such as Waratek, founded in 2009, are new to the market, but CIOs will recognize at least one vendor getting into the RASP game: Hewlett-Packard. Currently, RASP technology is built for the two popular managed runtimes: the Java virtual machine and the .NET Common Language Runtime. Additional implementations are expected to be rolled out as the technology matures.

While Feiman pointed to the technology's "unmatched accuracy," he did note a couple of challenges: The technology is language dependent, which means it has to be implemented separately for the Java virtual machine and the .NET CLR. And because RASP sits on the application server, it consumes CPU cycles. "Emerging RASP vendors report 2% to 3% of performance overhead, and some other evidence reports 10% or more," Feiman wrote in Runtime Application Self-Protection: Technical Capabilities, published in 2012.

Is it ready for primetime?

Not everyone is ready to endorse RASP. "I don't think it's ready for primetime," said Cigital's McGraw. RASP isn't a bad idea in principle, he said, "but in practice, it's only worked for one or two weak categories of bugs."

The statement was echoed by if(we)'s Schleier-Smith: "What remains to be seen is whether the value RASP brings beyond Web application firewalls and other established technologies offsets the potential additional complexity," he said.

CIOs may be better off creating an inventory of applications segmented by type -- mobile, cloud-based, Web-facing. "And choose the [security] technology stack most appropriate for the types of applications found in their portfolio," McGraw said.

Even Feiman stressed that CIOs need to find a use case for the technology and consider how aggressive in general the organization is when adopting emerging technologies. For more conservative organizations, investing in RASP could still be two to five years out, he said.

To strengthen application security right now, McGraw urged CIOs to remember the power of static testing, which works on all kinds of software. And he suggested they investigate how thoroughly tools such as static and dynamic testing are being utilized by their staff. "The security people are not really testing people," he said, referring to software developers. "So when they first applied dynamic testing to security, nobody bothered to check how much of the code was actually tested. And the answer was: Not very much."

An even better strategy: Rather than place too much emphasis on RASP or SAST or DAST, application security should start with application design. "Half of software security issues are design problems and not silly little bugs," McGraw said.

Let us know what you think of the story; email Nicole Laskowski, senior news writer, or find her on Twitter @TT_Nicole.



Real-Time Analytics

by System Administrator - Thursday, 6 October 2016, 1:52 AM

Real-Time Analytics

Real-time analytics is the use of, or the capacity to use, data and related resources as soon as the data enters the system. The adjective real-time refers to a level of computer responsiveness that a user senses as immediate or nearly immediate, or that enables a computer to keep up with some external process (for example, presenting visualizations of Web site activity as it constantly changes).

Technologies that support real-time analytics include:

  • Processing in memory (PIM) -- a chip architecture in which the processor is integrated into a memory chip to reduce latency. 
  • In-database analytics -- a technology that allows data processing to be conducted within the database by building analytic logic into the database itself. 
  • Data warehouse appliances -- combination hardware and software products designed specifically for analytical processing. An appliance allows the purchaser to deploy a high-performance data warehouse right out of the box.
  • In-memory analytics -- an approach to querying data when it resides in random access memory (RAM), as opposed to querying data that is stored on physical disks. 
  • Massively parallel processing (MPP) -- the coordinated processing of a program by multiple processors that work on different parts of the program, with each processor using its own operating system and memory.
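As a minimal illustration of the in-memory and in-database ideas above, the following uses SQLite's `:memory:` database so the table lives in RAM and the analytic logic runs inside the database engine rather than pulling rows out for processing; the table and data are invented:

```python
# Minimal sketch of in-memory, in-database analytics with SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")  # the database lives entirely in RAM
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("ana", 10.0), ("ana", 5.0), ("ben", 7.5)],
)

# The aggregation is pushed into the engine, close to the data.
rows = conn.execute(
    "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('ana', 15.0), ('ben', 7.5)]
conn.close()
```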

Applications of real-time analytics

In CRM (customer relations management), real-time analytics can provide up-to-the-minute information about an enterprise's customers and present it so that quicker and more accurate business decisions can be made -- perhaps even within the time span of a customer interaction. In a data warehouse context, real-time analytics supports unpredictable, ad hoc queries against large data sets. Another application is in scientific analysis such as the tracking of a hurricane's path, intensity, and wind field, with the intent of predicting these parameters hours or days in advance.



Please read the attached whitepaper: May the Data Be With You: Download Our Customer Data E-book.



Real-World Time Management

by System Administrator - Friday, 11 September 2015, 1:12 AM

Real-World Time Management

by Roy Alexander and Michael S. Dobson

Most of us dream about having a few extra hours in our day for taking care of business, relaxing, or engaging in the activities we most enjoy. But how can we make the most of our time when it seems as though there aren’t enough hours in the day? This instructive guide to time management is full of tips, techniques, and commonsense advice that will make anyone more productive. 

In this newly updated edition of Real-World Time Management, Michael Dobson includes invaluable tips on setting priorities, tricks for staying on track, keeping a closed-door policy, avoiding interrupters, and techniques for reducing stress through time management. Readers will also learn how to handle distractions, stop procrastinating, delegate tasks, deal with meetings, and manage time effectively while traveling. Instructive and helpful, Real-World Time Management will help all readers organize their time—no matter how hectic their lives may seem. 

About the Author 

Roy Alexander (New York, NY) heads his own consulting firm in New York City and is particularly noted for his sales and communications consultations in energy-related fields. 

Michael S. Dobson (New York, NY) is a consultant and popular seminar leader in project management, communications and personal success. He is the president of his own consulting firm whose clients include Calvin Klein Cosmetics and the Department of Health and Human Services. He is the author of several books including Managing Up  (978-0-8144-7042-8).

Please read the attached eBook.



Relentless Incrementalism

by System Administrator - Monday, 6 July 2015, 10:13 PM

Relentless Incrementalism

Posted by: Margaret Rouse

Relentless incrementalism is a process in which something substantial is built through the accumulation of small but incessant additions. 

Relentless incrementalism is often recommended as an approach to accomplishing a daunting goal. A seemingly impossible objective may be achieved by steadily working towards it, perhaps by completing subtasks or sharing the work among multiple individuals. The essential components of relentless incrementalism are:

  • Getting started and accomplishing even small tasks or work segments regularly, and
  • Not stopping until the goal is achieved.

The concept of relentless incrementalism derives from economics and social policy and is now used in various areas of information technology and business management. Applied to a large effort like enterprise security, for example, the approach helps businesses start on a fundamental level and build on the initial efforts, decreasing their vulnerability as they do so. 

Relentless incrementalism is also an effective time management approach. Because it emphasizes the importance of accomplishing even small tasks regularly, it can help prevent employees from feeling overwhelmed by large projects. 

Agile project management, which is an iterative approach, can be considered an implementation of relentless incrementalism.




SaaS-based LDAP

by System Administrator - Monday, 12 January 2015, 8:19 PM

Learn the Benefits of a SaaS-based LDAP

By: JumpCloud

As an IT admin, you like the control that LDAP gives you over servers and IT applications. Unfortunately, like most admins, you dislike the amount of time required to install, configure, and manage a full LDAP implementation. Add in cloud infrastructure and the complexity level increases significantly.

Download this new guide to learn how moving to a Directory-as-a-Service provides innovative organizations with tighter security, reduced expenses, and increased agility. Don’t let the complexity of LDAP stop you from safely taking advantage of the cloud era. Get the knowledge you need to keep up with this quickly changing landscape.

Please read the attached whitepaper.


Sales Funnel

by System Administrator - Wednesday, 19 October 2016, 12:33 AM

Sales Funnel

A sales funnel, also called a purchase funnel, is a visual representation of how a sale proceeds in a linear fashion from customer awareness to customer action.

The funnel, which is sometimes referred to as a sales grinder, illustrates the idea that every sale begins with a large number of prospective customers and ends with a much smaller number of people who actually make a purchase. The number of levels assigned to a sales funnel will vary by company but generally, sales funnels are divided into four sections -- those who are aware of a company, those who have had contact with a company, those who have repeated contact with a company and those who have made a purchase.

Companies use various metrics to analyze and score leads and prospects in the funnel in order to evaluate the success of their sales team. Examples include quantifying the value of every sales opportunity in the funnel, determining the optimal flow rate -- which refers to the average amount of time leads are in each stage of the funnel -- and evaluating the average percentage of deal closings, also known as the win rate.
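The win rate and flow rate metrics mentioned above can be computed straightforwardly; the opportunity records and field names below are hypothetical, invented for illustration:

```python
# Sketch of two funnel metrics: win rate and average time-in-stage.
from datetime import date

opportunities = [
    {"stage": "won", "entered": date(2016, 9, 1), "exited": date(2016, 9, 15)},
    {"stage": "lost", "entered": date(2016, 9, 3), "exited": date(2016, 9, 10)},
    {"stage": "won", "entered": date(2016, 9, 5), "exited": date(2016, 9, 25)},
    {"stage": "contact", "entered": date(2016, 9, 20), "exited": date(2016, 9, 30)},
]

# Win rate: share of closed deals that were won.
closed = [o for o in opportunities if o["stage"] in ("won", "lost")]
win_rate = sum(o["stage"] == "won" for o in closed) / len(closed)

# Flow rate: average number of days a lead spends in a funnel stage.
days = [(o["exited"] - o["entered"]).days for o in opportunities]
avg_days_in_stage = sum(days) / len(days)

print(f"win rate: {win_rate:.0%}, avg days in stage: {avg_days_in_stage:.2f}")
```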

In today's Age of the Customer, the journey a customer takes is less likely to be linear. For that reason, some experts maintain that the traditional sales funnel is obsolete. Others contend that the sales funnel is still a valuable tool as long as marketing and sales teams understand two things -- that today's qualified sales lead may enter the funnel closer to the bottom than they would have ten years ago and that marketing's role is changing. In the past, finding leads (the top -- and broadest part of the funnel) was typically the responsibility of marketing departments while sales was responsible for nurturing leads and guiding prospects through the sales funnel. Today, a successful company relies on both sales and marketing to guide the customer through the sales funnel and build customer loyalty, taking advantage of content marketing, customer data analytics and the two-way communication that social media marketing provides.


"The sales funnel -- that inverse-pyramid-shaped image that depicts the pool of prospects that marketers are supposed to entice then shunt over to sales -- has become a key tool in nurturing prospects, then converting them into customers." - Lauren Horwitz






Security Intelligence

by System Administrator - Wednesday, 11 November 2015, 12:49 PM

A Framework to Surface Cyber Threats via Security Intelligence

by LogRhythm

The Security Intelligence Maturity Model (SIMM) provides a systematic guide for an organization to assess and actively achieve a heightened security posture. Find out where your organization sits on the SIMM and identify gaps that need to be filled.

Please read the attached whitepaper.



Private Cloud Security

by System Administrator - Thursday, 8 January 2015, 3:04 PM

Safeguard private cloud security with strict procedures and tools

by Brad Casey

Ensuring private cloud security starts with examining the security of the network the cloud resides on. Depending on the nature of the specific private cloud, this can take many forms. However, there are protocols and controls common to most networks.

The first step in maintaining security in your private cloud is planning ahead. Implement protocols and procedures for accessing data in the private cloud during its planning stages. If the cloud is only meant to be accessed internally, the company clearly must ensure that these services cannot be reached from outside. But if private cloud resources need to be accessible while staff members are away from the corporate network, decide how the data will be protected and put an authentication mechanism in place. Also, determine what restrictions, if any, should be placed on access to resources. If several people have access to resources, creating multiple virtual machines (VMs) and running multiple applications, the private cloud can become seriously overloaded in compute terms, threatening its security. So plan ahead to mitigate this risk, and enforce your protocols.

When building a private cloud, whether in addition to or instead of a public cloud, make sure your company has the security staff to mitigate the risks. The staff securing the environment must be prepared to respond accordingly during catastrophic events.

Test your private cloud's security

Start by performing periodic Wireshark or TShark captures on the physical machines hosting the virtualized infrastructure. Once administrators have a general idea of which types of traffic should and should not be entering or leaving the network, they can easily script around it. It is also a good way to develop a baseline of normal network behavior. For example, if network administrators know that no DHCP server exists in their private cloud, yet "DHCP OFFER" messages start appearing in a Wireshark capture, it is crucial that they investigate further.

When using Wireshark inside a private cloud environment, make sure the capture is performed from a host machine. This allows a more thorough capture of network traffic, rather than simply capturing traffic from within a virtual machine.

Also, audit the system logs pertaining to the private cloud environment frequently. There are numerous hardware appliances and software applications that perform robust automated log analysis, complete with alert messages and alarm triggers. For example, if an individual logs into the private cloud at 2 a.m. on a Saturday, an automated system might consider this irregular and record it as such. However, these systems are only as good as the people who built them and can never fully replace an experienced pair of human eyes that knows what it is looking for. So an experienced professional who is comfortable performing audits should be allowed to do them frequently.
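A minimal sketch of such an automated check, flagging the 2 a.m.-on-a-Saturday case described above; the login records and the business-hours window are assumptions made for illustration:

```python
# Sketch of an automated log check that flags off-hours access.
from datetime import datetime

def is_irregular(ts, start_hour=7, end_hour=19):
    # Weekend (Saturday=5, Sunday=6) or outside 07:00-19:00 counts as irregular.
    return ts.weekday() >= 5 or not (start_hour <= ts.hour < end_hour)

logins = [
    ("alice", datetime(2015, 1, 5, 9, 30)),  # Monday 09:30 -> normal
    ("bob", datetime(2015, 1, 10, 2, 0)),    # Saturday 02:00 -> irregular
]

alerts = [user for user, ts in logins if is_irregular(ts)]
print(alerts)  # ['bob']
```

As the article notes, a check like this is only a first filter: it encodes its creators' assumptions and should supplement, not replace, an experienced human auditor.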

Is moving to the public cloud worth it?

Many organizations are moving to the public cloud because offloading the cost and responsibility of maintaining their own cloud infrastructure is considered well worth their time and money. But is it the best move for security? Well, yes and no.

Many companies feel less vulnerable to DoS and other attacks because their infrastructure resides in, say, one of Amazon Web Services' massive data centers. The provider is responsible if an organization's infrastructure falls victim to an attack; the company itself would be the one calling in system and network administrators over the weekend and devoting large amounts of time and resources to mitigating an attack on a private cloud. Advantage: public cloud.

On the other hand, companies that decide to move to the public cloud have little, if any, idea of where their data resides and how it is being handled. When a company uses a public cloud, it has no root access to the physical machine its data resides on, so nefarious individuals with root access to a given box can wreak havoc on the company's data. For now, there are still pros and cons to both public and private clouds.

About the author: Brad Casey is a former expert contributor. He holds a Master of Science in Information Assurance from the University of Texas at San Antonio and has extensive experience in penetration testing, public key infrastructure, VoIP and network packet analysis. He is also knowledgeable in systems administration, Active Directory and Windows Server 2008. He spent five years doing security assessment testing in the U.S. Air Force, and in his spare time he can be found poring over Wireshark captures and playing with various Linux distributions in virtual machines.


Self-Service Business Intelligence (BI)

by System Administrator - Wednesday, 3 February 2016, 5:40 PM

Self-Service Business Intelligence (BI)

Posted by Margaret Rouse

Self-service business intelligence (SSBI) is an approach to data analytics that enables business users to access and work with corporate data even though they do not have a background in statistics, business intelligence or data mining. Allowing end users to make decisions based on their own queries and analyses frees up the organization's business intelligence (BI) and information technology (IT) teams from creating the majority of reports and allows them to focus on other tasks that will help the organization reach its goals.

Because self-service BI software is used by people who may not be tech-savvy, it is imperative that the user interface (UI) for BI software be intuitive, with a dashboard and navigation that is user friendly. Ideally, training should be provided to help users understand what data is available and how that information can be used to make data-driven decisions to solve business problems, but once the IT department has set up the data warehouse and data marts that support the business intelligence (BI) system, business users should be able to query the data and create personalized reports with very little effort. 

While self-service BI encourages users to base decisions upon data instead of intuition, the flexibility it provides can cause unnecessary confusion if there is not a data governance policy in place. Among other things, the policy should define what the key metrics for determining success are, what processes should be followed to create and share reports, what privileges are necessary for accessing confidential data and how data quality, security and privacy will be maintained. 



Serious Games | Juegos Serios

by System Administrator - Wednesday, 24 May 2017, 12:41 PM


Serious Games | Juegos serios

De Wikipedia, la enciclopedia libre

 Los juegos serios (del inglés "serious game"), también llamados "juegos formativos", son juegos diseñados para un propósito principal distinto del de la pura diversión. Normalmente, el adjetivo "serio" pretende referirse a productos utilizados por industrias como la de defensa, educación, exploración científica, sanitaria, urgencias, planificación cívica, ingeniería, religión y política.


Descripción general

El término «juego serio» ha existido desde mucho antes de la entrada en el mundo del entretenimiento de los dispositivos informáticos y electrónicos. En 1970, Clark Abt ya definió este término en su libro Serious Games, publicado por Viking Press.[1] En este libro, Abt habla principalmente de los juegos de mesa y de los juegos de cartas, pero proporciona una definición general que puede aplicarse con facilidad a los juegos de la era informática:

Reducido a su esencia formal, un juego es una actividad entre dos o más personas con capacidad para tomar decisiones que buscan alcanzar unos objetivos dentro de un contexto limitado. Una definición más convencional es aquella en la que un juego es un contexto con reglas entre adversarios que intentan conseguir objetivos. Nos interesan los juegos serios porque tienen un propósito educativo explícito y cuidadosamente planeado, y porque no están pensados para ser jugados únicamente por diversión.

En 2005, Mike Zyda abordó este término de una forma actualizada y lógica en un artículo publicado en la revista ''Computer'' de la IEEE Computer Society que llevaba por título «From Visual Simulation to Virtual Reality to Games».[2] Zyda define primero «juego» y continúa a partir de aquí:

  • Game: a physical or mental contest, played according to specific rules, with the goal of amusing or rewarding the participant.
  • Video game: a mental contest, played with a computer in accordance with certain rules, for amusement, recreation, or winning a stake.
  • Serious game: a mental contest, played in accordance with specific rules, that uses entertainment to further government or corporate training, with objectives in education, health, public policy, and strategic communication.

Games with a purpose other than entertainment were being made long before the term "serious game" came into use with the Serious Games Initiative in 2002. The continued failure of educational entertainment titles to be profitable, together with games' growing technical capability to deliver realistic scenarios, led to a re-examination of the concept of serious games in the late 1990s. During this time a number of scholars began to examine the usefulness of games for other purposes, contributing to the growing interest in applying them to new ends. In addition, the ability of games to contribute to training expanded with the development of multiplayer gaming. In 2002, the Woodrow Wilson International Center for Scholars in Washington, D.C. founded the Serious Games Initiative to encourage the development of games addressing policy and management issues. More specialized groups appeared in 2004, such as Games for Change, focused on social issues and social change, and Games for Health, which addresses health care applications.

There is no single definition of the term "serious game," although it is understood to refer to games used in areas such as training, advertising, simulation, and education. Alternative definitions include concepts from gaming and technology, as well as notions drawn from non-entertainment applications. Serious games are also beginning to include dedicated video game hardware, as with video games for improving health and fitness.

Video games are a tool worth considering for cognitive-affective stimulation: they favor learning and self-esteem and strengthen creativity and digital skills, while also generating motivation and entertainment. Video games represent a mode of teaching that the educational community should take advantage of, given the many emotional elements they integrate, their sensory stimulation, and the possibility of immersion through the virtual environments in which they unfold.[3]

Serious games are aimed at a wide audience, from primary and secondary school students to professionals and consumers. Serious games can be of any genre, use any game technology, and be developed for any platform. Some consider them a kind of educational entertainment, although most of the community resists that term.

A serious game may be a simulation that has the look and feel of a game but is related to events or processes that have nothing to do with games, such as military or business operations (even though many popular entertainment games are based on military or business operations). Games are made to provide an entertaining, self-reinforcing context with which to motivate, educate, and train players. Marketing and advertising are other goals of these games. The heaviest users of serious games appear to be the United States government and medical professionals (a claim not backed by business intelligence).[citation needed] Other commercial sectors are also actively pursuing the development of these kinds of tools.



The idea of using games in education dates back to the days before computers, but the first serious game is generally considered to be Army Battlezone, a failed project led by Atari in 1980 that was designed to use the Battlezone arcade video game for military training. In recent years, the United States government and military have periodically sought out video game developers to create low-cost simulations that are both accurate and entertaining. Video game developers' experience with game mechanics and game design makes them ideal candidates for building these kinds of simulations, which cost millions of dollars less than traditional simulations, which often require special hardware or complete facilities for their use.

Outside of government, there is considerable interest in games about education, professional training, health care, advertising, and public policy. For example, website games are, in the words of Henry Jenkins, director of MIT's comparative media studies program, "very political games created outside the corporate system" that are "raising issues through the media while using the unique properties of games to engage people from a new perspective." These games, Jenkins has said, constitute "radical works of fiction." Michigan State University offers a master's degree and a graduate certificate in serious game design.[4] In Europe, the University of Salford created a master's program in creative games in 2005.[5]


Video game developers are accustomed to developing games quickly and are adept at creating games that simulate, to varying degrees, functional entities such as radar and combat vehicles. Using existing infrastructure, game developers can create games that simulate battles, procedures, and events at a fraction of the cost of traditional government contractors.

Traditional simulators typically cost millions of dollars to develop and deploy, and they generally require specialized hardware. The average cost of serious games is far lower. Instead of the large volumes of media or computers that high-end simulators need, serious games require nothing more than a DVD or CD-ROM, exactly like traditional video games. Distributing them is a matter of mailing them out or providing access through a dedicated website.

Finally, while serious games are meant to train or educate users, they are also meant to entertain. Video game developers are experienced at making games fun and engaging, since their livelihood depends on it. In the course of simulating events and procedures, developers automatically inject doses of entertainment and playability into their applications.


Classification and subgroups of serious games

Although the classification of serious games has yet to be consolidated, there are a number of terms whose use is common enough to merit inclusion here.

  • Advergaming: from "advertising" and "game," the practice of using video games to advertise a brand, product, organization, or idea.
  • Edutainment: a term combining "education" and "entertainment," applied to programs that teach through the use of play.
  • Game-based learning (educational games): games whose objective is to improve learning. They are generally designed to balance the subject matter against gameplay and the player's ability to retain and apply that subject matter in the real world.[6] This type of game is used in the corporate world to develop employees' skills in areas such as customer service and negotiation.
  • Edumarket games: when a serious game combines several aspects (for example, those of advergaming and edutainment, or others related to news and persuasion), the application is said to be an edumarket game, a term combining "education" and "marketing." One example is Food Force, a game with objectives in the areas of news, persuasion, and edutainment.
  • News games: journalistic games that report on recent events or offer editorial commentary.
  • Simulators or simulation video games: games used to acquire or practice skills, or to teach effective behavior in the context of simulated situations or conditions. Widely used examples include vehicle-driving simulators (cars, trains, planes, etc., such as FlightGear), company-management simulators (for example, Transport Tycoon), and general business simulators, which help develop strategic thinking and teach users the principles of micro- and macroeconomics and business administration (for example, Virtonomics).
  • Persuasive games: games used as persuasive technology.
  • Organizational-dynamic games: games that teach and reflect the dynamics of organizations at three levels: individual, group, and cultural.
  • Games for health: games designed as psychological therapy, or for cognitive training or physical rehabilitation.
  • Art games: games used to express artistic ideas, or art created using video games as a medium.
  • Militainment: a term combining "military" and "entertainment," referring to games funded by the military or that otherwise reproduce military operations with a high degree of accuracy.

Julian Alvarez and Olivier Rampnoux (of the European Center for Children's Products at the University of Poitiers) have tried to classify serious games into five main categories: advergaming, edutainment, edumarket games, protest games (which the authors call diverted games), and simulation games.[7]

Examples of serious games

Hotzone: a networked multiplayer simulator that uses video game technology to train emergency teams, firefighters, and civil protection personnel to respond to hazardous situations. The main focus of the simulation is communication, observation, and critical decision-making.

Food Force: an educational game developed under the supervision of the United Nations World Food Programme, in which the objective is to end the famine caused by an armed conflict in a given region. Between challenges, videos are shown that describe the situation in these countries and how the UN responds to it.

Re-Mission: a completely free video game created for HopeLab, an organization that supports cancer patients. It lets players fight the disease by showing a nanobot, "Roxxi," eradicating cancer cells through chemotherapy. The game provides information about different types of cancer and also helps players release anger and feelings of rejection toward the disease. It is an action game with high educational value that raises awareness about cancer.

Merchants: a video game created for training in negotiation and conflict-management skills. Players, immersed in 15th-century Venice, face different situations through which they put into practice the theoretical content taught during the game. Navieros is a Gamelearn product.

Triskelion: a video game created for training in time management and personal productivity. Players become Robert Wise, a character through whom they must follow the clues that reveal the secret of the Order of Wisdom. Triskelion is a Gamelearn product.

GABALL (Game Based Language Learning): a game for SME managers aimed at fostering the internationalization of their companies. The GABALL project aims to improve the competencies and skills of managers of SMEs and micro-enterprises for launching internationalization processes in internal and external markets (Brazil) through e-commerce platforms. GABALL also targets students in the final years of higher education who may become entrepreneurs and/or are promoting entrepreneurial projects. Its ultimate objective is to improve their cultural and foreign-language competencies and thereby optimize the use of e-marketing and e-commerce tools, the building of relationships through electronic media supported by social networks, and the promotion of entrepreneurship.[8]

Save the PKU Planet: a video game that teaches children with phenylketonuria how to manage their condition correctly, mainly by controlling low-protein foods, distinguishing permitted foods from prohibited ones, and encouraging intake of the substitute dietary supplement that is rich in amino acids and other nutrients but free of phenylalanine. Developed by Teaching Innovation Project number 10-120 of the University of Granada in collaboration with the Alicia Foundation and FX Animation. Available and adapted for children with phenylketonuria in Spain, Denmark, the United Kingdom, and the United States. Also available with Catalan subtitles.

DonostiON: a casual video game developed by Ikasplay centered on the main day of the Tamborrada of San Sebastián, held every January 20 in the capital of Gipuzkoa. The game promotes the drumming culture that fills the city during those days.

Hackend: a free game from INCIBE for learning about cybersecurity in business in an entertaining way. You help a business owner named Max solve nine cases in which the security of his small company is compromised. It is a graphic adventure in which Max, with your invaluable help, pursues the cybercriminals until he catches them.

Alternate reality serious games

Alternate reality games (serious ARGs) are serious games grounded in the real world that use a whole range of audiovisual resources to develop a story that is constantly shaped by the intervention of its participants. Their objectives center on encouraging attitudes of self-improvement, harnessing game dynamics to intensify learning, and fostering collaborative environments and communication to solve the problems and puzzles posed.

World Without Oil is an alternate reality serious game (serious ARG) in which players are responsible for managing the problem that our unbridled thirst for oil poses for our economy, climate, and quality of life, and for providing solutions by working collaboratively and creatively.


  1. Abt, C.: Serious Games, New York: Viking Press, 1970.
  2. Zyda, M.: "From Visual Simulation to Virtual Reality to Games," Computer, 38, 2005, pp. 25-32.
  3. Marcano Lárez, Beatriz (2006): "Estimulación emocional de los videojuegos: efectos en el aprendizaje," in García Carrasco, Joaquín (ed.), Estudio de los comportamientos emocionales en la red, Revista electrónica Teoría de la Educación.
  4. Michigan State University's master's program in serious game design.
  5. University of Salford's master's program in creative games.
  6. Marc Prensky's book Digital Game-Based Learning was the first major publication to define this term: official website of Digital Game-Based Learning by Marc Prensky.
  7. Alvarez, J. and Rampnoux, O.: "Serious Game: Just a question of posture?," in Artificial & Ambient Intelligence (AISB '07), 2007, pp. 420-423.
  8. The GABALL project.




Serverless Computing (Function as a Service)

by System Administrator - Wednesday, 15 February 2017, 6:57 PM

Serverless Computing (Function as a Service)

Posted by: Margaret Rouse

Serverless computing is an event-driven application design and deployment paradigm in which computing resources are provided as scalable cloud services. In traditional application deployments, the server's computing resources represent fixed and recurring costs, regardless of the amount of computing work that is actually being performed by the server. In a serverless computing deployment, the cloud customer only pays for service usage; there is never any cost associated with idle or down time.

Serverless computing does not eliminate servers, but instead seeks to emphasize the idea that computing resource considerations can be moved into the background during the design process. The term is often associated with the NoOps movement, and the concept may also be referred to as "function as a service (FaaS)" or "runtime as a service (RaaS)."

One example of public cloud serverless computing is the AWS Lambda service. Developers can drop in code, create backend applications, create event handling routines and process data – all without worrying about servers, virtual machines (VMs), or the underlying compute resources needed to sustain an enormous volume of events because the actual hardware and infrastructure involved are all maintained by the provider. AWS Lambda can also interact with many other Amazon services, allowing developers to quickly create and manage complex enterprise-class applications with almost no consideration of the underlying servers.
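To make the "drop in code" idea concrete, here is a minimal sketch of a Lambda-style function in Python: the platform invokes a handler per event, and no server or lifecycle code appears anywhere. The handler name and the sample event fields are illustrative assumptions, not part of any particular deployment:

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload (for example, fields from an
    # API Gateway request); 'context' exposes runtime metadata and is
    # unused in this sketch.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally the same function can be exercised by calling `handler({"name": "Ada"}, None)`; in a serverless deployment, the provider performs that call for every incoming event and scales the number of concurrent invocations itself.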



Seven Wastes

by System Administrator - Thursday, 25 May 2017, 12:11 AM

Seven Wastes

Posted by: Margaret Rouse

The seven wastes are categories of unproductive manufacturing practices. The seven wastes are an integral part of lean production, a just-in-time production model that seeks to limit overproduction, unnecessary wait times and excess inventory.

The idea of categorizing seven wastes is credited to engineer Taiichi Ohno, the father of the Toyota Production System (TPS). Although the classifications were intended to improve manufacturing, they can be adapted for most types of workplaces.

Following are the seven wastes, as categorized by Taiichi Ohno:

  • Overproduction -- Manufacture of products in advance or in excess of demand wastes money, time and space.
  • Waiting -- Processes are ineffective and time is wasted when one process waits to begin while another finishes. Instead, the flow of operations should be smooth and continuous. According to some estimates, as much as 99 percent of a product's time in manufacture is actually spent waiting.
  • Transportation -- Moving a product between manufacturing processes adds no value, is expensive and can cause damage or product deterioration.
  • Inappropriate processing -- Overly elaborate and expensive equipment is wasteful if simpler machinery would work as well.
  • Excessive inventory -- This wastes resources through the costs of storage and maintenance.
  • Unnecessary motion -- Resources are wasted when workers have to bend, reach or walk distances to do their jobs. Workplace ergonomics assessment should be conducted to design a more efficient environment.
  • Defects -- Quarantining defective inventory takes time and costs money.

Since the categories of waste were established, others have been proposed for addition, including:

  • Underutilization of employee skills -- Although employees are typically hired for a specific skill set, they always bring other skills and insights to the workplace that should be acknowledged and utilized.
  • Unsafe workplaces and environments -- Employee accidents and health issues as a result of unsafe working conditions waste resources.
  • Lack of information or sharing of information -- Research and communication are essential to keep operations working to capacity.
  • Equipment breakdown -- Poorly maintained equipment can result in damage and cost resources of both time and money.
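For teams that track waste observations in software, Ohno's categories map naturally onto a small fixed vocabulary; the Python sketch below (the key names and the tagging helper are our own illustrative choices, not part of TPS) rejects tags outside that vocabulary so that reports stay comparable:

```python
# The seven wastes as a fixed vocabulary for tagging observations.
SEVEN_WASTES = {
    "overproduction": "making products ahead of or beyond demand",
    "waiting": "idle time while one process waits on another",
    "transportation": "moving product between processes without adding value",
    "inappropriate_processing": "elaborate equipment where simpler would do",
    "excessive_inventory": "storage and maintenance costs of surplus stock",
    "unnecessary_motion": "workers bending, reaching or walking to do their jobs",
    "defects": "time and money spent quarantining defective inventory",
}

def tag_observation(note: str, category: str) -> dict:
    # Only accept tags from the fixed vocabulary.
    if category not in SEVEN_WASTES:
        raise ValueError(f"unknown waste category: {category}")
    return {"note": note, "category": category}
```

Extending the scheme with the proposed additional wastes would simply mean adding entries to the dictionary, which is the practical appeal of treating the categories as data.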



SIP Trunking (Session Initiation Protocol trunking)

by System Administrator - Friday, 23 January 2015, 7:37 PM

SIP trunking (Session Initiation Protocol trunking)

Posted by Margaret Rouse

Session Initiation Protocol (SIP) trunking is the use of voice over IP (VoIP) to facilitate the connection of a private branch exchange (PBX) to the Internet.


In effect, the Internet replaces the conventional telephone trunk, allowing an enterprise to communicate with fixed and mobile telephone subscribers worldwide. (SIP is an IETF standard for initiating interactive multimedia user sessions; a trunk is a line or link that can carry many signals at once, connecting major switching centers or nodes in a communications system.)

In order to take advantage of SIP trunking, an enterprise must have a PBX that connects to all internal end users, an Internet telephony service provider (ITSP) and a gateway that serves as the interface between the PBX and the ITSP. One of the most significant advantages of SIP trunking is its ability to combine data, voice and video in a single line, eliminating the need for separate physical media for each mode. The result is reduced overall cost and enhanced reliability for multimedia services. With SIP trunking, subscribers can:

  • Initiate and receive local calls
  • Initiate and receive long-distance calls
  • Make emergency calls (911)
  • Access directory assistance
  • Use fixed and mobile telephone sets
  • Employ e-mail and texting
  • Browse the World Wide Web.
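Because SIP is a plain-text protocol (defined in RFC 3261), the signaling that sets up a call over a SIP trunk is easy to picture. The sketch below assembles a minimal INVITE request in Python; the addresses, branch and tag values are illustrative placeholders, not a working trunk configuration:

```python
def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a minimal SIP INVITE request as CRLF-delimited text.

    Header values such as the Via branch, From tag and CSeq are
    hard-coded placeholders for illustration only.
    """
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP pbx.example.com:5060;branch=z9hG4bKabc123",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag=1928301774",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # Like HTTP, SIP terminates each header with CRLF and ends the
    # header section with a blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_invite("alice@example.com", "bob@example.net",
                       "a84b4c76@pbx.example.com")
```

In a real deployment the PBX-to-ITSP gateway generates and routes messages like this one; a genuine INVITE would also carry an SDP body describing the media streams, which this sketch omits.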



Skills to Look for in IT Project Managers

by System Administrator - Tuesday, 13 January 2015, 12:07 PM

Skills to Look for in IT Project Managers

By Sharon Florentine

Good IT project management can make or break key business initiatives, but finding top talent means identifying a unique mix of technical know-how and soft skills. Here are eight skills to look for when hiring project management professionals.

Skills to Look for in IT Project Managers

As the economy continues to climb out of recession, demand for project management professionals has skyrocketed. Finding the right project management talent for mission-critical IT projects can be difficult, as the role requires a unique mix of technical and soft skills.

In addition to the usual suspects -- attention to detail, focus on process, time management and capability to multitask, for instance -- there are some less obvious, but equally crucial, skills that separate the good from the great. Here, our experts weigh in on what to look for when hiring IT project managers.

Ability to Manage Resource Conflicts

"No matter how big your business, no matter how large your company, you always have these kinds of resource allocation conflicts. You're always limited by costs, by technology constraints, by time and by personnel availability. The project managers who can decide how to best allocate limited resources to the projects that will have the greatest positive business impact are very valuable," says Tushar Patel, vice president of marketing at Innotas, a cloud-based project portfolio management solutions company.

Familiarity With a Variety of Technical Platforms/Methodologies


For IT project managers familiarity with the standard way in which software and applications are developed, designed, built and delivered is a necessary skill, says Patel. Nowadays, most IT organizations are using the agile development methodology, so that's an important framework to understand.

"In the past, agile was only used by software development teams, but more than half of the companies we talk to today are applying the agile methodology to an increasing number of their technical projects. So, concepts like iteration, sprints, scrum, and how to translate changing requirements into end-user functionality based on customer feedback are some of the skills IT project managers must possess," says Patel. Of course, every organization interprets agile differently, so project managers must also understand how agile is used and applied in the organization they're working for.

A Focus on Business Strategy and Agility


A project management team that's focused on how projects contribute to a company's growth, innovation and the greater business strategy rather than simply on completing discrete tasks can give businesses a major competitive advantage, says Patel. "Using agile concepts outside of the IT department to create business agility is critical for good project management," he says.

"You want project managers to understand not just how to be responsive to customers and markets, but to do so even when your market changes, or your internal strategy changes; to do so if your company's acquired, if your company's acquiring another, getting a new CEO - any number of major changes. Project managers must be able to show they have the ability to turn on a dime. To manage the business' priorities in the face of sweeping change," Patel says.

It used to be enough that businesses were quick to react when markets changed, but nowadays, project managers must be proactive and anticipate every possible change and shift that could happen and how those could affect not just their projects, but their business as a whole, he says.

"One of the traits we're evangelizing is being predictive - forecasting the need to be flexible and adaptive; planning staffing, costs, time constraints and the like as much as six months out and determining which projects will be the key to success then," he says. "It's not easy, for sure, but this is something good project managers must do."

Excellent Communication Skills

Communication is obviously a must-have when hiring project management talent, according to Hallie Yarger, regional recruiting director, Midwest region for Mondo, a digital marketing and tech talent sourcing and consulting firm. Project managers must be able to reach people from all different backgrounds, with all different personalities, and to be able to quickly and concisely inform employees, executives, customers and all other stakeholders about the status of the project. "Communication skills are a no-brainer for PMs, but the key is that these skills be multi-dimensional, touching on both internal and external stakeholders," says Yarger.

Management Skills


Hand-in-hand with communication skills are management skills. Project managers must be able to navigate tough situations and make difficult decisions based on the needs of the business without being political. Being able to understand and empathize with stakeholders that may have different viewpoints, personalities, communication styles and needs is difficult when projects are going smoothly -- being able to do so in times of crisis is incredibly valuable for a project manager.

"You almost have to have a little bit of a psychology background to figure out how to effectively motivate, push and cajole each person involved to make sure projects are completed on time and with a minimum of conflict," Yarger says.

Ability to Accurately Assess Risk

"With every IT project, there are risks involved", says Yarger. Risks that resources are allocated to certain projects and not others, risk that projects will not meet the expectations and standards set by clients and stakeholders, risks that deadlines will be missed and projects won't be delivered on time. However, a good project manager should be able to assess and mitigate all these by prioritizing the value of each asset, while minimizing the risk of project failures by ensuring the right team members have the tools, knowledge and information they need.

Speaking the Right Language

Especially in IT, trust is a key factor in establishing rapport as a project manager. Software developers, in particular, can be a finicky bunch, according to Yarger, so it's crucial to find project management professionals with the street cred to manage and motivate developers.

"You have to find someone with whom software developers will gladly work and who they will respect; someone who's familiar with the languages and platforms they're using, who knows the ins-and-outs of the software development lifecycle (SDLC), who understands their challenges and strengths - someone who can talk the talk and walk the walk," Yarger says.

Global Experience or Vertical Experience

Today's global, digital economy means that some projects will be handled by teams in distinct geographical locations. Yarger points out that project managers with experience working with or managing offshore teams, or who've worked on projects in other countries, are in especially high demand.

"What our clients are demanding right now are project managers with global experience, as well as experience in verticals like healthcare -- especially EHR/EMR experience -- and finance, for issues like regulatory compliance," says Yarger.



SLAs for the Cloud

by System Administrator - Thursday, 13 July 2017, 9:49 AM




Slicing the Big Data Analytics Stack

by System Administrator - Wednesday, 10 September 2014, 9:06 PM

Slicing the Big Data Analytics Stack

In this special report we have provided a deeper view into a series of technical tools and capabilities that are powering the next generation of big data analytics. From the pipes and platforms to the analytical interfaces and data management tools, we hope to help you develop a better ear for tuning out the big data noise. The goal is to empower you to make strong decisions in a noisy world of options, all of which seem to promise similar end results.

Please read the attached whitepaper


Software Asset Management: Pay Attention or Pay Up

by System Administrator - Friday, 12 September 2014, 1:17 AM

Software Asset Management: Pay Attention or Pay Up

There is a wide range of options for managing software assets, from in-house solutions to the cloud to managed services providers. Read this whitepaper to learn about:

  • Using SAM to inform software investments
  • Avoiding software fees and fines
  • What's the best approach to SAM?
  • Making the most of volume-based licensing
  • Hands-free SAM: How vendors can unburden IT leaders

 Please read the attached whitepaper.



Software Defined Networking

by System Administrator - Wednesday, 25 February 2015, 3:55 PM

Software Defined Networking (Source: Intel)

Software Defined Networking: in pursuit of network automation

by Frost & Sullivan

Software Defined Network (SDN) es uno de los temas más candentes en el mercado de networking, con más de US$250 millones de capital de riesgo en “startups” y más de US$1.5 billones en adquisiciones relacionadas con este cambio de arquitectura en el mundo. SDN representa un nuevo paradigma capaz de volver las redes más eficientes, escalables, ágiles y dinámicas, mediante el aumento de programación y automatización.

Los beneficios son reducción de los costos operativos, una notable mejora en el rendimiento de las redes, aprovisionamiento más rápido, y la promesa de una arquitectura abierta, basada en estándares, lo que permite una mayor variedad de proveedores para las empresas que adoptan el SDN.

El principal beneficio obtenido por las empresas es la capacidad de crear una facilidad de gestión, en particular para las empresas que ya están avanzando hacia un centro de datos virtualizado, lo que implica una búsqueda de soluciones SDN para implementar en sus centros de datos en los próximos años. Se destacan también en la búsqueda por estas soluciones empresas que quieren mejorar flexibilidad, agilidad y simplificación de la gestión a través de ofertas en nube.
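The programmability SDN promises usually surfaces as a controller API that accepts flow rules instead of per-device CLI commands. As a minimal sketch, assuming a hypothetical controller with a REST endpoint that accepts JSON flow rules (the field names here are illustrative, not any specific controller's schema):

```python
import json

def make_flow_rule(switch_id, src_ip, dst_ip, out_port, priority=100):
    """Build a flow-rule payload for a hypothetical SDN controller REST API."""
    return {
        "switch": switch_id,
        "priority": priority,
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }

# Forward traffic between two hosts out port 2 of one switch.
rule = make_flow_rule("00:00:00:00:00:01", "10.0.0.1", "10.0.0.2", 2)
payload = json.dumps(rule)  # in practice this would be POSTed to the controller
print(payload)
```

The point of the sketch is that provisioning becomes a programmatic, repeatable operation, which is what enables the automation and faster provisioning described above.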

Please read the attached whitepaper.


Software Interactive eGuide

by System Administrator - Wednesday, 17 December 2014, 2:02 PM

Software Interactive eGuide

The way that enterprises procure and use software is changing rapidly. Many organizations have grown tired of traditional software licensing models, which are often complex and expensive, and are looking to cloud computing and Software-as-a-Service (SaaS) as viable alternatives. However, these new approaches pose their own sets of challenges.

In this eGuide, Computerworld along with sister publications InfoWorld and IDG News Service look at recent trends and advancements in cloud and SaaS models. Read on to learn the best approaches for your organization.

Please read the attached eGuide.


Specification by Example (SBE)

by System Administrator - Tuesday, 30 December 2014, 3:21 PM

Specification by Example (SBE)

Posted by Margaret Rouse

Specification by example (SBE) is a user-driven contextual approach to defining software requirements. This approach encourages communication between a project's business owners and the software development team while also aligning software specifications with user acceptance testing.


SBE requires business stakeholders to provide realistic scenarios for how the software will be used and those examples are used to determine the scope of the project. This approach has two major benefits -- it encourages communication between the business owners of a project and the software development team and it helps the developers align software specifications with user acceptance testing (UAT). When done right, the specifications can be validated through automated software tests that run frequently. 

In order for SBE to succeed, it's important that the business owners provide the development team with precise examples that illustrate how slices of the system should behave. It's equally important for the development team to make sure each specification by example is testable. SBE may deliver less than optimal outcomes if examples focus on how the software works rather than on the business goals it seeks to achieve. This is where communication and collaboration become key. For example, if the business stakeholders spend too much time describing  how they would like an online form to be formatted, it is up to the SBE project manager to bring the focus of the conversation back to how the data that is entered in the form will be used to drive productivity, profitability and business growth. When SBE is implemented appropriately, it can simplify design, reduce unnecessary code in development and speed deployment by shortening or eliminating feedback loops. 

SBE is often used in iterative software development methodologies such as Agile, Scrum and Extreme Programming (XP). Depending upon the implementation, the examples that the business owners provide may also be referred to as executable requirements or use cases. What the team decides to call project artifacts is not important -- the only thing that matters is that the team agrees upon a common language and uses it consistently. It is equally important that documentation be created and updated throughout the project to ensure that code can be maintained or updated easily when the project is over. SBE project managers call this "living documentation." Whatever the team decides to call the project's documentation, it should serve as a way for the IT team to demonstrate additional business value when change is required.

As a concept, SBE is credited to Gojko Adzic, a software development consultant who wrote a book in 2011 entitled  "Specification by Example: How Successful Teams Deliver the Right Software."  In the real world, the concepts presented in the book may also be referred to as example-driven development (EDD) or behavior-driven development (BDD), two similar approaches that are also the subjects of books.  

See also: functional specification



SQL-on-Hadoop tools

by System Administrator - Friday, 11 September 2015, 2:37 AM

Evaluating SQL-on-Hadoop tools? Start with the use case

by Jack Vaughan

In a Q&A, Clarity Solution Group CTO Tripp Smith says to base SQL-on-Hadoop software decisions on actual workloads. Some Hadoop tools target batch jobs, while others are intended for interactive ones.

The flowering of the Hadoop ecosystem is both a blessing and a curse for prospective users. The numerous technologies revolving around the distributed processing framework augment the functionality found in Hadoop itself. But there are so many to choose from that evaluating them and finding the right one can be difficult. That's particularly true in the emerging SQL-on-Hadoop space, where tools such as Drill, Hawq, Hive, Impala and Presto vie for attention.

To get a better view of them, SearchDataManagement recently turned to Tripp Smith, CTO at Clarity Solution Group LLC, a Chicago-based data management and analytics consultancy that works with user organizations on Hadoop deployments and other big data projects. In an interview, Smith said the path to selecting among the surge of SQL-on-Hadoop tools begins with understanding use cases.

Hadoop has been around for a while, but in terms of going mainstream, it still seems very new to a lot of people. And when they seek to tame Hadoop to gain business benefits from big data, it often turns into a multiyear effort.



Tripp Smith: I think SQL interfaces to Hadoop are helping to bridge that gap. They also enhance portability for business logic from legacy applications, both to Hadoop and to different execution engines that now run within the Hadoop platform. We saw it start with the introduction of Hive. A lot of very smart folks at Facebook introduced that to the Hadoop ecosystem, and now the concept has expanded in a lot of different directions, not the least of which are Spark SQL, Impala and Presto, the latter also [coming] out of Facebook.

What SQL is doing for Hadoop is to bring kind of a common language for the average business user working on the legacy analytics platforms, as well as to the seasoned engineers and data scientists. It's easier now to trade off information and data processing between different components when you have Agile data teams using SQL on Hadoop.

By most counts, there are even more Hadoop tools than we've just talked about. What parameters do you look at when trying to evaluate products in this wide group of tools?


Smith: What you find is that the decision you make on SQL-on-Hadoop tools should be based on the use cases that you have. We look at Hadoop through the lens of what we call MESH -- that's a strategic architecture framework for 'mature, enterprise-strength Hadoop.' It looks at data management and analytical capabilities, as well as data governance capabilities and platform components.

Tool selection and approaches vary depending on the nuance of the problem you're trying to solve -- depending on whether you're looking at doing more of an extract, transform and load or to do extract, load and transform data integration, or you're looking at a real-time data integration use case, or whether you're looking at interactive queries. Each of the tools has a specialization. But that is where there's still a lot that needs to be fleshed out.
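Smith's use-case-driven selection can be sketched as a simple decision helper. This is an illustrative mapping only, based on the tool characterizations in the interview, and is not the MESH framework or a definitive recommendation:

```python
def suggest_engine(workload):
    """Rough, illustrative mapping of workload type to a SQL-on-Hadoop engine.

    A sketch of use-case-driven selection, not a substitute for evaluating
    grammar coverage, maturity and performance against real workloads.
    """
    if workload == "batch_etl":
        # Hive: longest-established, relatively mature, batch-oriented
        return "Hive"
    if workload == "interactive_query":
        # Impala and Presto target low-latency, interactive analytics
        return "Impala or Presto"
    if workload == "in_memory_pipeline":
        # Spark SQL: SQL step embedded in a Spark processing pipeline
        return "Spark SQL"
    return "evaluate against your use case"
```

In practice the decision also has to weigh the discovery work Smith describes: how the candidate fits the existing infrastructure and the process for introducing new components into the IT portfolio.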

What are the steps people take as they walk through the process of choosing between these new technologies?

Smith: Most of the people we work with are not 'greenfield' -- they're into managing these tools without arbitrarily increasing their portfolio diversity. Admittedly, that may be a buzzword-full answer. But usually, they have an idea of how to judge how their workloads fit with the different SQL-on-Hadoop tools.

They will find that some of these tools have a limited type of [SQL] grammar for the things they want to do. I would throw Impala, as it first emerged, into that group. It was leading the pack around performance but maybe providing a limited subset of capabilities. Hive has been around the longest, and is relatively mature for the Hadoop ecosystem -- that is probably more focused to your data integration batch processing workload.

In each case, there is a bit of discovery required around taking your business use cases, what your infrastructure is today [and] where the new Hadoop components would fit in within the context of managing an IT portfolio. You have to have a process to introduce new components for your analytical workloads.

Jack Vaughan is SearchDataManagement's news and site editor. Email him, and follow us on Twitter: @sDataManagement.




Stakeholder

by System Administrator - Friday, 5 December 2014, 9:34 PM

Stakeholder

A stakeholder is any person with a legitimate interest, or stake, in the actions of an organization. 

R. Edward Freeman and David L. Reed defined the term stakeholder in their 1983 article, Stockholders and Stakeholders: A New Perspective on Corporate Governance, as "any group or individual who can affect the achievement of an organization's objectives or who is affected by the achievement of an organization's objectives."  Traditionally, stockholders are the most important people in a company and business decisions are made to increase the value of the stock.  Freeman and Reed proposed that there are other people who are just as important and good business decisions align everyone's interests with those of the stockholders. In this context, "everyone" might include employees, suppliers, customers and business partners as well as unions, government agencies or trade associations. 

Quite literally, a stakeholder is a person who holds the prize in a contest or the money in a bet. According to Freeman and Reed, the term stakeholder was first used in business in an internal memorandum at the Stanford Research Institute in 1963 and had the more narrow meaning of "those groups without whose support the organization would cease to exist." 




Storage Configuration Guide (II)

by System Administrator - Tuesday, 16 June 2015, 9:35 PM

Storage Configuration Guide

Learn how to meet specific storage requirements ranging from backup to virtual server infrastructures. With the ever-changing and fast moving IT market, it is more critical than ever to efficiently design a storage system as it provides an underpinning for all elements within the IT infrastructure.

Please read the attached whitepaper.


Strategic Asset Management

by System Administrator - Wednesday, 10 September 2014, 9:22 PM

The Path to Strategic Asset Management

Companies today look for ways to gain more control and accuracy when it comes to fixed asset data. Fixed assets can represent a significant sum on the balance sheets of many organizations. This white paper introduces best practices for integrating fixed assets management technology into your organization as part of a strategic asset management initiative. These best practices are based on lessons learned over the course of decades of successfully implementing and integrating GAAP and tax depreciation and fixed assets management solutions in both SMBs and Fortune 500 companies.
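The GAAP depreciation the whitepaper refers to can be illustrated with the simplest method, straight-line depreciation, which expenses an equal share of an asset's depreciable cost each year. This is a generic accounting sketch, not the vendor's product:

```python
def straight_line_schedule(cost, salvage, useful_life_years):
    """Straight-line depreciation: equal annual expense over the useful life.

    Returns a list of (year, annual_expense, ending_book_value) tuples.
    """
    annual = (cost - salvage) / useful_life_years
    book_value = cost
    schedule = []
    for year in range(1, useful_life_years + 1):
        book_value -= annual
        schedule.append((year, round(annual, 2), round(book_value, 2)))
    return schedule

# A $10,000 asset with a $1,000 salvage value over 3 years: $3,000 per year.
sched = straight_line_schedule(10_000, 1_000, 3)
```

Tracking even this simple schedule across thousands of assets, and in parallel under a different tax depreciation method, is exactly the bookkeeping burden that fixed assets management software automates.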

Please read the attached whitepaper.


System z mainframe

by System Administrator - Tuesday, 13 January 2015, 10:20 PM

New System z mainframe may lift IBM's cloud, mobile fortunes


by Ed Scannell

IBM looks to pivot in a new direction with a revitalized mainframe aimed at mobile and cloud markets along with a rumored major reorganization.

To revive its sagging hardware fortunes, IBM will introduce a new member to its System z series of mainframes with a major technology overhaul. It is intended to lure new users that need more muscle for applications involving cloud, analytics and, in particular, mobile applications.

The new z13, as it's being referred to, is designed from the ground up to more efficiently handle transaction processing, including the billions of transactions conducted by users with a wide assortment of mobile devices, a source close to the company said. Big Blue has reportedly spent five years and $1 billion developing the new system, quietly beta testing it among some 60 global accounts.

The system, to be introduced this week, features a new eight-core processor with a wider instruction pipeline, more memory and support processors than any of its zSeries predecessors, improved multi-threading, larger caches and a "significantly improved intelligent I/O system" that dramatically improves the performance of applications involved with transaction processing, according to sources close to the company.

"The whole thing is tuned for better performance, especially its souped-up intelligent I/O, where you can have dedicated channels for individual types of I/O," said one source who requested anonymity. "Essentially [IBM has] tuned this for environments focused on mobility, analytics and the cloud."

To further improve the system's capabilities for mobile transactions, IBM reportedly focused on improving security for the system, coming up with new cryptography technology, similar to that used by vendors such as Apple and major credit card companies, according to sources.

"They have implemented some new forms -- plural -- of encryption, similar to what is used in Chrome and Firefox, as well as Apple's technology for messaging," one IT industry source who works with IBM said. "[IBM], I think wisely, have adapted the security schemes here to meet users' needs, which increasingly have to do with mobility and credit card transaction processing."

Given the deal IBM signed with Apple last year to distribute the latter's mobile products to corporate users, and the emphasis IBM will put on the new mainframe's mobile transaction capabilities, synching up with Apple's security technology may be more than a coincidence.

Also not so coincidental may be the timing of this week's announcement, which comes shortly before the company reports its 2014 revenues and earnings later this month. Sales of the company's proprietary Power series of servers have stumbled badly over the past five or six quarters, mainframe sales have dipped over the past quarter or two as part of their natural sales cycle, and fourth-quarter hardware numbers figure to be down again. Company officials may be looking for some good hardware news to distract Wall Street's attention from further bad news, and the new system could be the answer.

IBM may also talk about its intent to promote the system's appeal to Millennials. With aging mainframe veterans retiring in ever larger numbers, Big Blue wants to make it clear to 20-somethings that they could have a lucrative career working in mainframe environments, as opposed to lower-end distributed environments. Company officials will reportedly talk about a new jobs board that matches younger workers with new job opportunities in the mainframe area.

"[IBM] is trying to overcome this fear, the psychological barrier that Millennials have toward mainframes," according to another source familiar with the company's plans.

There have also been reports that IBM may be edging toward a major reorganization, one that would give clearer focus and emphasis to mobility, analytics, security and, of course, cloud. How the newly designed mainframe figures into this corporate realignment will be interesting. The reorganization is being driven by IBM CEO Ginni Rometty, whose performance has been under close scrutiny by Wall Street and the company's corporate accounts over the past year.

It will be ironic -- or perhaps poetically just -- depending on your perspective, if IBM's hardware resurgence is led by a mainframe that makes a big splash in the mobile market.



Page: (Previous)   1  2  3  4  5  6  7  8  9  10  11  ...  63  (Next)