Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías

All categories




Customer Service Model

by System Administrator - Wednesday, 11 November 2015, 12:59 PM

Mastering the Modern Customer Service Model

by Wheelhouse Enterprises

Perfecting your in-house customer service system was never easy until now. The cloud has made customer service tools readily available and revolutionized how they are implemented. Our newest white paper details the tools needed to build the most modern, up-to-date customer service operation for your organization. Whether you're looking for specific tools for your contact center or CRM, we have you covered.

Please read the attached whitepaper.


Data center design standards bodies

by System Administrator - Thursday, 12 March 2015, 7:49 PM

Words to go: Data center design standards bodies

 by Meredith Courtemanche

Need a handy reference sheet of the various data center standards organizations? Keep this list by your desk as a reference.

Several organizations produce data center design standards, best practices and guidelines. This glossary lets you keep track of which body produces which standards, and what each acronym means.

Print or bookmark this page for a quick reference of the organizations and associated websites and standards that data center designers and operators need to know.

  • ASHRAE: The American Society of Heating, Refrigerating and Air-Conditioning Engineers produces data center standards and recommendations for heating, ventilation and air conditioning installations. Its technical committee develops standards for data centers' design, operations, maintenance and energy efficiency. Data center designers should consult all technical documents from ASHRAE TC 9.9: Mission Critical Facilities, Technology Spaces and Electronic Equipment.
  • BICSI: The Building Industry Consulting Service International Inc. is a global association that covers cabling design and installation. ANSI/BICSI 002-2014, Data Center Design and Implementation Best Practices, covers electrical, mechanical and telecommunications structure in a data center, with comprehensive considerations from fire protection to data center infrastructure.
  • BREEAM: The BRE Environmental Assessment Method (BREEAM) is an environmental standard for buildings in the U.K. and nearby countries, covering design, construction and operation. The code is part of a framework for sustainable buildings that takes into account economic and social factors as well as environmental. It is managed by BRE Global, a building science center focused on research and certification.
  • The Green Grid: The Green Grid Association is best known for its power usage effectiveness (PUE) metric. PUE measures how well a data center uses power as the ratio of total building power divided by the power used by the IT equipment alone. The closer this ratio comes to 1, the more efficiently the data center is consuming power. Green Grid also publishes metrics for water (WUE) and carbon (CUE) usage effectiveness based on the same concept.
  • IDCA: The International Data Center Authority is primarily known as a training institute, but also publishes a holistic data center design and operations ranking system: the Infinity Paradigm. Rankings cover seven layers of data centers, from location and facility through data infrastructure and applications.
  • IEEE: The Institute of Electrical and Electronics Engineers provides more than 1,300 standards and projects for various technological fields. Data center designers and operators rely on the Ethernet network cabling standard IEEE 802.3ba, as well as IEEE 802 standards, for local area networks such as IEEE 802.11 wireless LAN specifications.
  • ISO: The International Organization for Standardization is an international federation of national standards bodies. ISO releases a wide spectrum of standards, several of which apply to data center facilities. ISO 9001 assesses companies' quality control capabilities. ISO 27001 certifies an operation's security best practices, covering physical and data security as well as business protection and continuity efforts. Other ISO standards that data center designers may require cover environmental practices, such as ISO 14001 and ISO 50001.
  • LEED: Leadership in Energy and Environmental Design is an international certification for environmentally conscious buildings and operations managed by the U.S. Green Building Council. Five rating systems, covering building design, operations, neighborhood development and other areas, award a LEED level (certified, silver, gold or platinum) based on amassed credits. The organization provides a data-center-specific project checklist, as the LEED standard includes adaptations for the unique requirements of data centers.
  • NFPA: The National Fire Protection Association publishes codes and standards to minimize and avoid damage from hazards such as fire. No matter how virtualized or cloud-based your IT infrastructure, fire regulations still govern your workloads. NFPA 75 and 76 standards dictate how data centers contain cool and hot aisles with obstructions such as curtains or walls. NFPA 70 requires an emergency power off button for the data center to protect emergency responders.
  • NIST: The National Institute of Standards and Technology oversees measurements in the U.S. NIST's mission includes research on nanotechnology for electronics, building integrity and diverse other industries. For data centers, NIST offers recommendations on authorization and access. Refer to special publications 800-53, Recommended Security Controls for Federal Information Systems, and SP 800-63, Electronic Authentication Guideline.
  • OCP: The Open Compute Project is known for its server and network design ideas. But OCP, started by Internet giant Facebook to promote open source in hardware, also branches into data center design. OCP's Open Rack and optical interconnect projects call for 21 inch rack slots and intra-rack photonic connections. OCP's data center design optimizes thermal efficiency with 277 Volts AC power and tailored electrical and mechanical components.
  • OIX: The Open IX Association focuses on Internet peering and interconnect performance from data centers and network operators, along with content creators, distribution networks and consumers. It publishes technical requirements for Internet exchange points and the data centers that support them. The requirements cover designed resiliency and safety of the data center, as well as connectivity and congestion.
  • Telcordia: Telcordia is part of Ericsson, a communications technology company. The Telcordia GR-3160 Generic Requirements for Telecommunications Data Center Equipment and Spaces particularly relates to telecommunications carriers, but its best practices for network reliability and organizational simplicity can benefit any data center that delivers applications to end users or hosts applications for third-party operators. The standard deals with environmental protection and testing for hazards, ranging from earthquakes to lightning surges.
  • TIA: The Telecommunications Industry Association produces communications standards that target reliability and interoperability. The group's primary data center standard, ANSI/TIA-942-A, covers network architecture and access security, facility design and location, backups and redundancy, power management and more. TIA certifies data centers to ranking levels on TIA-942, based on redundancy in the cabling.


  • The Uptime Institute: The Uptime Institute certifies data center designs, builds and operations on the basis of reliable, redundant operating capability to one of four tier levels. Data center designers can certify plans; constructed facilities earn tier certification after an audit; operating facilities can prove fault tolerance and sustainable practices. Existing facilities, which cannot be designed to meet tier-level certifications, can still obtain the Management & Operations (M&O) Stamp of Approval from the Uptime Institute.
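Since PUE is just a ratio, the metric described above can be sketched in a few lines of Python (the figures below are illustrative, not from any real facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt entering the building reaches
    the IT equipment; typical data centers fall somewhere above that.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative example: 1,500 kW entering the building, 1,000 kW to IT gear.
print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

WUE and CUE follow the same pattern, dividing annual water usage or carbon emissions by IT equipment energy.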


Data Center Efficiency

by System Administrator - Wednesday, 26 August 2015, 7:17 PM

eGuide: Data Center Efficiency

APC by Schneider Electric

Data center efficiency is one of the cornerstones of an effective IT infrastructure. Data centers that deliver energy efficiency, high availability, density, and scalability create the basis for well-run IT operations that fuel the business. With the right approach to data center solutions, organizations have the potential to significantly save on costs, reduce downtime, and allow for future growth.

In this eGuide, Computerworld, CIO, and Network World examine recent trends and issues related to data center efficiency. Read on to learn how a more efficient data center can make a difference in your organization.

Please read the attached eGuide.


Data Citizen

by System Administrator - Sunday, 19 November 2017, 12:51 PM

Data Citizen

Posted by: Margaret Rouse

A data citizen is an employee who relies on digital information to make business decisions and perform job responsibilities.

In the early days of computing, it took a specialist with a strong background in data science to mine structured data for information. Today, business intelligence (BI) tools allow employees at every level of an organization to run ad hoc reports on the fly. Changes in how data can be analyzed and visualized allow workers with no background in mathematics, statistics or programming to make data-driven decisions.

In both a government and data context, however, citizenship comes with responsibilities as well as rights. For example, a citizen who has been granted the right of free speech also has the responsibility to obey federal, state and local laws -- and an employee who has been granted the right to access corporate data also has a responsibility to support the company's data governance policies.

As data citizens increasingly expect more transparent, accessible and trustworthy data from their employers, it has become more important than ever for the rights and responsibilities of both parties to be defined and enforced through policy. To that end, data governance initiatives generally focus on high-level policies and procedures, while data stewardship initiatives focus on maintaining agreed-upon data definitions and formats, identifying data quality issues and ensuring that business users adhere to specified standards.

In addition to enforcing the data citizen's right to easily access trustworthy data, governance controls ensure that data is used in a consistent manner across the enterprise. To support ongoing compliance with external government regulations, as well as internal data policies, audit procedures should also be included in the controls.



Data Confabulation

by System Administrator - Tuesday, 12 May 2015, 12:30 AM

Data Confabulation

Posted by: Margaret Rouse

Data confabulation is a business intelligence term for the selective and possibly misleading use of data to support a decision that has already been made.

Within the volumes of big data, there are often small bits of evidence that contradict even clearly data-supported conclusions. Generally, this data noise can be seen for what it is and, in the context of the body of data, it is clearly outweighed. When data is selectively chosen from vast sources, however, a picture can be created to support a desired view, decision or argument that a more rigorously controlled method would not support.

Data confabulation can be used both intentionally and unintentionally to promote the user’s viewpoint. When a decision is made before data is examined, there is a danger of falling prey to confirmation bias even when people are trying to be honest. The term confabulation comes from the field of psychology, where it refers to the tendency of humans to selectively remember, misinterpret or create memories to support a decision, belief or sentiment.
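A toy Python sketch (with fabricated numbers) shows how cherry-picking a subset can invert the conclusion the full data supports:

```python
# Toy illustration (fabricated numbers): monthly change in support tickets.
monthly_change = [-3, -1, 4, 5, 6, 7, 2, 8, 5, 6, 4, 3]

# Full-sample view: the average change is clearly positive.
overall = sum(monthly_change) / len(monthly_change)

# Cherry-picked view: selecting only the two negative months
# "supports" the opposite conclusion.
cherry_picked = [x for x in monthly_change if x < 0]
biased = sum(cherry_picked) / len(cherry_picked)

print(round(overall, 2))  # 3.83 -> trend is up
print(biased)             # -2.0 -> "trend is down", by selection alone
```

Both numbers come from the same data set; only the selection rule differs, which is exactly the risk the term describes.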

Related Terms


  • de-anonymization (deanonymization)

    - De-anonymization is a method used to detect the original data that was subjected to processes that make it impossible -- or at least harder -- to identify the personally identifiable information (PII)...

  • data anonymization

    - The purpose of data anonymization is to make its source untraceable. Data anonymization processes include encryption, substitution, shuffling, number and date variance, and nulling out data.

  • change management

    - Change management is a systematic approach to dealing with change, both from the perspective of an organization and on the individual level.


  • Business intelligence - business analytics

    - Terms related to business intelligence, including definitions about business analytics and words and phrases about gathering, storing, analyzing and providing access to business data.

  • Internet applications

    - This glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...


Data Lake

by System Administrator - Thursday, 25 June 2015, 10:29 PM


Author: John O’Brien

It would be an understatement to say that the hype surrounding the data lake is causing confusion in the industry. Perhaps this is an inherent consequence of the data industry's need for buzzwords: it is not uncommon for a term to rise to popularity long before there is a clear definition and repeatable business value. We have seen this phenomenon many times as concepts including "big data," the "data reservoir," and even the "data warehouse" first emerged in the industry. Today's newcomer to the data world's vernacular, the "data lake," is a term that has endured both the scrutiny of pundits who harp on the risk of digging a data swamp and the vision of those who see the concept's potential to profoundly affect enterprise data architecture. As the data lake term comes off its hype cycle and faces the pressures of pragmatic IT and business stakeholders, the demand for clear data lake definitions, use cases, and best practices continues to grow.

This paper aims to clarify the data lake concept by combining fundamental data and information management principles with the experience of existing implementations to explain how current data architectures will transform into a modern data architecture. The data lake is a foundational component and common denominator of the modern data architecture, enabling and complementing specialized components such as enterprise data warehouses, discovery-oriented environments, and highly specialized analytic or operational data technologies within or external to the Hadoop ecosystem. The data lake has thus become a metaphor for the transformation of enterprise data management, and its definition will continue to evolve according to the principles, drivers, and best practices that emerge as companies apply hindsight.

Please read the attached guide.



Data Profiling

by System Administrator - Tuesday, 30 December 2014, 3:24 PM

Data Profiling

Posted by Margaret Rouse

Data profiling, also called data archeology, is the statistical analysis and assessment of data values within a data set for consistency, uniqueness and logic.

The data profiling process cannot identify inaccurate data; it can only identify business rule violations and anomalies. The insight gained by data profiling can be used to determine how difficult it will be to use existing data for other purposes. It can also be used to provide metrics to assess data quality and help determine whether or not metadata accurately describes the source data.

Profiling tools evaluate the actual content, structure and quality of the data by exploring relationships that exist between value collections both within and across data sets. For example, by examining the frequency distribution of different values for each column in a table, an analyst can gain insight into the type and use of each column. Cross-column analysis can be used to expose embedded value dependencies, and inter-table analysis allows the analyst to discover overlapping value sets that represent foreign key relationships between entities.
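The column-level checks described above can be sketched with the Python standard library alone; the table and field names here are invented for illustration:

```python
from collections import Counter

# Toy table (fabricated rows) as a list of dicts, one dict per record.
rows = [
    {"id": 1, "country": "UY", "age": 34},
    {"id": 2, "country": "UY", "age": 41},
    {"id": 3, "country": "AR", "age": 34},
    {"id": 4, "country": "AR", "age": None},
]

def profile_column(rows, column):
    """Basic profile: frequency distribution, distinct count, null count."""
    values = [r[column] for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "frequencies": Counter(non_null),   # value frequency distribution
        "distinct": len(set(non_null)),     # cardinality
        "nulls": values.count(None),        # completeness check
        "unique": len(non_null) == len(set(non_null)),  # candidate key?
    }

print(profile_column(rows, "id")["unique"])         # True  -> possible key
print(profile_column(rows, "country")["distinct"])  # 2
print(profile_column(rows, "age")["nulls"])         # 1
```

Real profiling tools extend this idea with cross-column and inter-table analysis, but the core operation is the same: summarize actual values and compare them against expectations.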

See also: data modeling, data dictionary, data deduplication



Data Silo

by System Administrator - Monday, 20 July 2015, 4:59 PM

Data Silo

Posted by Margaret Rouse

A data silo is a repository of fixed data that an organization does not regularly use in its day-to-day operation.

So-called siloed data cannot exchange content with other systems in the organization. The expressions "data silo" and "siloed data" arise from the inherent isolation of the information. The data in a silo remains sealed off from the rest of the organization, like grain in a farm silo is closed off from the outside elements.

In recent years, data silos have faced increasing criticism as an impediment to productivity and a danger to data integrity. Data silos also increase the risk that current (or more recent) data will accidentally get overwritten with outdated (or less recent) data. When two or more silos exist for the same data, their contents might differ, creating confusion as to which repository represents the most legitimate or up-to-date version.

Cloud-based data, in contrast to siloed data, can continuously evolve to keep pace with the needs of an organization, its clients, its associates, and its customers. For frequently modified information, cloud backup offers a reasonable alternative to data silos, especially for small and moderate quantities of data. When stored information does not need to be accessed regularly or frequently, it can be kept in a single cloud archive rather than in multiple data silos, ensuring data integration (consistency) among all members and departments in the organization. For these reasons, many organizations have begun to move away from data silos and into cloud-based backup and archiving solutions.


Database-as-a-Service (DBaaS)

by System Administrator - Monday, 16 February 2015, 3:42 PM

Why Database-as-a-Service (DBaaS)?

IBM Cloudant manages, scales and supports your fast-growing data needs 24x7, so you can stay focused on new development and growing your business.

Fully managed, instantly provisioned, and highly available

In a large organization, it can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility. Cloudant DBaaS helps to enable instant provisioning of your data layer, so that you can begin new development whenever you need. Unlike Do-It-Yourself (DIY) databases, DBaaS solutions like Cloudant provide specific levels of data layer performance and uptime. The managed DBaaS capability can help reduce the risk of service delivery failure for you and your projects.

Build more. Grow more

With a fully managed NoSQL database service, you do not have to worry about the time, cost and complexity associated with database administration, architecture and hardware. Now you can stay focused on developing new apps and growing your business to new heights.

Who uses DBaaS?

Companies of all sizes, from startups to mega-scale users, use Cloudant to manage data for large or fast-growing web and mobile apps in e-commerce, online education, gaming, financial services, and other industries. Cloudant is best suited for applications that need a database to handle a massively concurrent mix of low-latency reads and writes. Its data replication and synchronization technology also enables continuous data availability, as well as offline app usage for mobile or remote users.

As a JSON document store, Cloudant is ideal for managing multi-structured or unstructured data. Advanced indexing makes it easy to enrich applications with location-based (geospatial) services, full-text search, and near-real-time analytics.
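As a rough illustration of the document-store model (this is not Cloudant's actual API; the document fields and keys are invented), a JSON document plus the simplest possible secondary index might look like this in Python:

```python
import json

# Hypothetical JSON document, as a document store would hold it.
doc = {
    "_id": "order:1001",
    "customer": "ACME",
    "items": [{"sku": "A-12", "qty": 2}],
    "location": {"lat": -34.9, "lon": -56.2},
}

# A document store keeps documents addressable by key...
store = {doc["_id"]: doc}

# ...and builds secondary indexes over fields so queries avoid full scans.
# A dict keyed by field value is the simplest possible such index.
by_customer = {}
for d in store.values():
    by_customer.setdefault(d["customer"], []).append(d["_id"])

print(json.dumps(store["order:1001"]["items"]))  # lookup by primary key
print(by_customer["ACME"])                       # lookup via secondary index
```

Production systems such as Cloudant layer replication, full-text, and geospatial indexes on top of this same basic pattern.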

Please read the attached whitepaper.


Delivering Data Warehousing as a Cloud Service

by System Administrator - Wednesday, 8 July 2015, 9:27 PM

Delivering Data Warehousing as a Cloud Service

The current data revolution has made it an imperative to provide more people with access to data-driven insights faster than ever before. That's not news. But in spite of that, current technology seems almost to exist to make it as hard as possible to get access to data.

That's certainly the case for conventional data warehouse solutions, which are so complex and inflexible that they require their own teams of specialists to plan, deploy, manage, and tune them. By the time the specialists have finished, it's nearly impossible for the actual users to figure out how to get access to the data they need.

Newer 'big data' solutions do not get rid of those problems. They require new skills and often new tools as well, making them dependent on hard-to-find operations and data science experts.

Please read the attached whitepaper.


Designing and Building an Open ITOA Architecture

by System Administrator - Tuesday, 16 June 2015, 10:51 PM

Designing and Building an Open ITOA Architecture

This white paper provides a roadmap for designing and building an open IT Operations Analytics (ITOA) architecture. You will learn about a new IT data taxonomy defined by the four data sources of IT visibility: wire, machine, agent, and synthetic data sets. After weighing the role of each IT data source for your organization, you can learn how to combine them in an open ITOA architecture that avoids vendor lock-in, scales out cost-effectively, and unlocks new and unanticipated IT and business insights.
Please read the attached whitepaper.

Desktop as a Service (DaaS)

by System Administrator - Wednesday, 11 November 2015, 6:29 PM

Desktop as a Service (DaaS)

Posted by Margaret Rouse

Desktop as a Service (DaaS) is a cloud service in which the back-end of a virtual desktop infrastructure (VDI) is hosted by a cloud service provider.

DaaS has a multi-tenancy architecture and the service is purchased on a subscription basis. In the DaaS delivery model, the service provider manages the back-end responsibilities of data storage, backup, security and upgrades. Typically, the customer's personal data is copied to and from the virtual desktop during logon/logoff, and access to the desktop is device-, location- and network-independent. While the provider handles all the back-end infrastructure costs and maintenance, customers usually manage their own desktop images, applications and security, unless those desktop management services are part of the subscription.

Desktop as a Service is a good alternative for small or mid-size businesses (SMBs) that want to give their end users the advantages a virtual desktop infrastructure offers, but find deploying a VDI in-house cost-prohibitive in terms of budget and staffing.

This definition is part of our Essential Guide: What you need to know about cloud desktops and DaaS providers




by System Administrator - Wednesday, 15 February 2017, 7:07 PM


How to utilize it in your IT workspace

by TechTarget

Please read the attached whitepaper.



DevOps (PMI)

by System Administrator - Monday, 29 December 2014, 5:45 PM

Defining DevOps: better to explain what it is not

by Jennifer Lent

Much has been written about what DevOps is: a way for developers and operations managers to collaborate; a set of best practices for managing cloud applications; an Agile idea built on continuous integration that enables frequent code releases.

According to Wikipedia: "DevOps is an acronym for development and operations, referring to a software development methodology centered on communication, collaboration and integration between software developers and IT operations professionals. DevOps is a response to the interdependence of software development and IT operations. Its goal is to help an organization produce software products and services quickly. Companies with very frequent releases may require DevOps expertise. Flickr developed a DevOps system to meet a business requirement of ten deployments per day. Systems of this kind are known as continuous deployment or continuous delivery, and are often associated with lean startup methodologies. Working groups, professional associations and blogs have used the term since 2009."

The definition of DevOps covers all of these things and more. But given that the term has acquired buzzword status, it may be more interesting to ask not what DevOps is, but what it is not. For this article, SearchSoftwareQuality asked several software professionals exactly that. Here is what they said.

1. DevOps is not a job title.

Job-board postings suggest otherwise, but DevOps is not a job title, said Agile consultant Scott Ambler. "DevOps manager? I don't know what that is." DevOps should not be a job role, he said. "DevOps is about developers understanding the reality of operations, and about the operations team understanding what development involves." DevOps, the concept, is an important aspect of software development and delivery, Ambler said. "But the DevOps job title is a symptom that the organizations hiring [DevOps managers] don't understand what DevOps really is. They don't get it yet."

Ambler's position on DevOps runs against conventional wisdom. DevOps appeared on a list of 10 job titles you are likely to encounter, according to

2. DevOps is not a software tool category.

DevOps is not about tools but about culture, said Patrick Debois in a presentation titled "DevOps: nonsense, tools and other smart things" at the GOTO Conference. Debois, who coined the term "DevOps" and founded a conference known as DevOpsDays, said tools play an important role in supporting the DevOps approach to software delivery and management, but DevOps is not about the tools themselves.

Ambler said the notion that there are "tools that do DevOps" reflects the current reality: DevOps, the buzzword, is still climbing toward the peak of the hype curve. "Every tool is a DevOps tool," he said, adding that as software vendors keep pushing their DevOps visions, "much of the discussion is naive."

3. DevOps is not about solving an IT problem.

Despite its many meanings, DevOps is widely understood as a way to solve an IT problem: letting development and operations collaborate on software delivery. But that is not its ultimate goal, said Damon Edwards, managing partner of IT consultancy DTO Solutions in Redwood City, California. "The point of DevOps is to enable your business to react to market forces as quickly, efficiently and reliably as possible. Without the business, there is no other reason for us to be talking about DevOps problems, much less spending time solving them," Edwards wrote on his blog.

Kevin Parker, a SearchSoftwareQuality expert, said the new challenge facing DevOps managers is all the attention the topic is getting from the business. "What used to be an arcane task of elaborate coordination and project management is now part diplomacy, part gatekeeper, and a good deal of innovation."

4. DevOps is not synonymous with continuous integration.

DevOps originated in Agile as a way to support the Agile practice of more frequent code releases. But DevOps is more than that, Ambler said. "The fact that you practice continuous integration does not mean you are doing DevOps." He sees operations managers as key stakeholders that Agile teams need to work with to release software.


5. DevOps is not... going away.

Despite the misconceptions surrounding it, DevOps is here to stay and remains important to successful software delivery. "Whether we call it DevOps or not, change and release management is undergoing an [exponential] expansion in importance," Parker said. There is real substance behind DevOps, added Ovum analyst Michael Azoff. "Of course there is hype around DevOps. We are still in the first phase. It's where Agile was a couple of years ago."

Please read the attached whitepaper: "Top tips for DevOps testing: Achieve continuous delivery"


Digital Marketing Plan

by System Administrator - Thursday, 17 September 2015, 7:00 PM


by Juan Carlos Muñoz | Marketing Manager, Interactive & CRM at Volvo Car España | Professor at ICEMD


Distributed Computing

by System Administrator - Monday, 10 August 2015, 10:13 PM

Distributed Computing

Posted by: Margaret Rouse

Distributed computing is a model in which components of a software system are shared among multiple computers to improve efficiency and performance. 

According to the narrowest of definitions, distributed computing is limited to programs with components shared among computers within a limited geographic area. Broader definitions include shared tasks as well as program components. In the broadest sense of the term, distributed computing just means that something is shared among multiple systems which may also be in different locations. 

In the enterprise, distributed computing has often meant putting various steps in business processes at the most efficient places in a network of computers. For example, in the typical distribution using the 3-tier model, user interface processing is performed in the PC at the user's location, business processing is done in a remote computer, and database access and processing is conducted in another computer that provides centralized access for many business processes. Typically, this kind of distributed computing uses the client/server communications model.

The Distributed Computing Environment (DCE) is a widely-used industry standard that supports this kind of distributed computing. On the Internet, third-party service providers now offer some generalized services that fit into this model.

Grid computing is a computing model involving a distributed architecture of large numbers of computers connected to solve a complex problem. In the grid computing model, servers or personal computers run independent tasks and are loosely linked by the Internet or low-speed networks. Individual participants may allow some of their computer's processing time to be put at the service of a large problem. The largest grid computing project is SETI@home, in which individual computer owners volunteer some of their multitasking processing cycles (while concurrently still using their computer) to the Search for Extraterrestrial Intelligence (SETI) project. This computer-intensive problem uses thousands of PCs to download and search radio telescope data.
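The grid model of independent, loosely coupled work units can be sketched with Python's multiprocessing module, using local processes as stand-ins for grid nodes (the prime-counting task is an arbitrary example of embarrassingly parallel work):

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Independent work unit: count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split one large problem into independent chunks, as a grid would;
    # here the "nodes" are local processes rather than volunteer PCs.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(4) as pool:
        partials = pool.map(count_primes, chunks)
    print(sum(partials))  # total primes below 100,000: 9592
```

Projects like SETI@home follow the same shape at global scale: the coordinator hands out chunks, each participant computes its piece independently, and partial results are combined at the end.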

There is a great deal of disagreement over the difference between distributed computing and grid computing. According to some, grid computing is just one type of distributed computing. The SETI project, for example, characterizes the model it’s based on as distributed computing. Similarly, cloud computing, which simply involves hosted services made available to users from a remote location, may be considered a type of distributed computing, depending on who you ask.

One of the first uses of grid computing was the breaking of a cryptographic code by a group that is now known as distributed.net. That group also describes its model as distributed computing.

Related Terms



  • Software applications

    - Terms related to software applications, including definitions about software programs for vertical industries and words and phrases about software development, use and management.

  • Internet applications

    - This glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...



Picture of System Administrator

Documentation (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:04 AM


Information: data that has meaning.

Document: information and its supporting medium.

Specification: a document that establishes requirements.

Quality manual: a document that specifies an organization's quality management system.

Quality plan: a document that specifies which procedures and associated resources must be applied, who must apply them, and when they must be applied to a specific project, product or contract.

Record: a document that states results achieved or provides evidence of activities performed.

Picture of System Administrator


by System Administrator - Monday, 6 July 2015, 8:38 PM

7 Critical Questions to Demystify DRaaS

This whitepaper is not a sermon on Disaster Recovery and why you need it. You don't need a lesson in the perils of disasters or a theoretical "business case" that proves unpredictable events can damage your data and cost you thousands of dollars. In fact, if you were not already aware of the need for Disaster Recovery, you probably would not be reading this document.
Please read the attached whitepaper.
Picture of System Administrator

DSC pull server

by System Administrator - Thursday, 31 August 2017, 9:27 PM

DSC pull server

A DSC pull server (desired state configuration pull server) is an automation server that allows configurations to be maintained on many servers, computer workstations and devices across a network.

DSC pull servers use the declarative scripting of Microsoft Windows PowerShell DSC to keep software at the current version and to monitor and control the configuration of computers, services and the environment they run in. This capability makes DSC pull servers very useful for administrators, allowing them to ensure reliability and interoperability between machines by stopping the configuration drift that can occur as individual machine settings change over time.

DSC pull servers run on Windows Server 2012 or later, and client machines must be running Windows Management Framework (WMF) 4.0. Microsoft has also developed PowerShell DSC for Linux.

Examples of what built-in DSC resources can configure and manage on a set of computers or devices:

  •     Enabling or disabling server roles and features.
  •     Managing registry settings.
  •     Managing files and directories.
  •     Starting, stopping, and managing processes and services.
  •     Managing groups and user accounts.
  •     Deploying new software.
  •     Managing environment variables.
  •     Running Windows PowerShell scripts.
  •     Fixing configurations that drift away from the desired state.
  •     Discovering the actual configuration state on a given client.
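DSC configurations themselves are written in PowerShell, but the pull server's core reconcile idea behind the last two bullets can be sketched language-neutrally. The following Python sketch is illustrative only; all names and state values are hypothetical:

```python
# Sketch of desired-state reconciliation: compare the actual state of a
# client against the declared desired state, report the drift, and apply
# the desired values to correct it.

desired = {"service_web": "running", "feature_iis": "enabled"}

def find_drift(actual: dict, desired: dict) -> dict:
    """Return the settings whose actual value differs from the desired one."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def reconcile(actual: dict, desired: dict) -> dict:
    """Apply the desired value for every drifted setting."""
    corrected = dict(actual)
    corrected.update(find_drift(actual, desired))
    return corrected

actual = {"service_web": "stopped", "feature_iis": "enabled"}
print(find_drift(actual, desired))   # → {'service_web': 'running'}
print(reconcile(actual, desired))
```

A real pull server repeats this loop on a schedule, which is what prevents individual machine changes from accumulating into configuration drift.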


Picture of System Administrator

DuckDuckGo (DDG)
by System Administrator - Saturday, 20 June 2015, 2:57 PM


Posted by: Margaret Rouse

DuckDuckGo (DDG) is a general search engine designed to protect user privacy, while avoiding the skewing of search results that can happen because of personalized search (sometimes referred to as a filter bubble).

DDG does not track users – user IP addresses and other information are not logged. A log of search terms entered is maintained but the terms are not associated with particular users. Because DuckDuckGo does not record user information, it has no data to turn over to any third-party organizations.

Unlike Google, DuckDuckGo does not default to personalized search, which constrains search results based on information related to the user, such as location, preferences and history. Users may opt to boost results based on locality, for example, but it will not be done unless they specify that they want it to be. Results that appear to be from content mills are also filtered out of search engine results pages (SERP).

DuckDuckGo is sometimes referred to as a hybrid search engine because it compiles results from a variety of sources including its own crawler, DuckDuckBot, crowd-sourced sites such as Wikipedia, and partnerships with other search providers including Yahoo!, Yandex, Yelp, and Bing. 

Instant answers, which appear at the top of the results page, are available for queries involving many types of searches, including flight statuses, recipes, rhyming words, calculations and statistics -- among a wide variety of other possibilities. Instant answers also include functions, such as a stopwatch and a strong password generator.

The !bang feature allows users to search a particular website.  Typing “!Facebook” before a search term, for example, restricts the results to those found on that site.
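The !bang routing described above could be sketched as follows; the mapping and function names are invented for illustration and are not DuckDuckGo's actual implementation:

```python
# Hypothetical sketch of !bang parsing: a leading bang token selects a
# site restriction, and the rest of the query becomes the search terms.
BANGS = {"!facebook": "facebook.com", "!w": "en.wikipedia.org"}

def parse_bang(query: str):
    """Split a query into (site restriction, remaining search terms)."""
    parts = query.split()
    if parts and parts[0].lower() in BANGS:
        return BANGS[parts[0].lower()], " ".join(parts[1:])
    return None, query  # no bang: treat the whole string as search terms

print(parse_bang("!Facebook reunion photos"))  # → ('facebook.com', 'reunion photos')
```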

DuckDuckGo was founded by Gabriel Weinberg in September 2008. Initially funded by Weinberg, the search engine received $3 million in venture capital in 2011 and is now supported by keyword-based advertising. The company's headquarters are in Paoli, Pennsylvania.

DuckDuckGo is available in most browsers, including Chrome, Firefox and Safari.

Tekzilla compares DuckDuckGo and Google search:

Part of the Software applications glossary


Picture of System Administrator

Dynamic Pricing

by System Administrator - Tuesday, 4 November 2014, 8:24 PM

Dynamic Pricing

Posted by: Margaret Rouse

Dynamic pricing, also called real-time pricing, is an approach to setting the cost for a product or service that is highly flexible. The goal of dynamic pricing is to allow a company that sells goods or services over the Internet to adjust prices on the fly in response to market demands. 

Changes are controlled by pricing bots, which are software agents that gather data and use algorithms to adjust pricing according to business rules. Typically, the business rules take into account such things as the customer's location, the time of day, the day of the week, the level of demand and competitors' pricing. With the advent of big data and big data analytics, however, business rules for price adjustments can be made more granular. By collecting and analyzing data about a particular customer, a vendor can more accurately predict what price the customer is willing to pay and adjust prices accordingly.
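A pricing bot's business rules might look, in highly simplified form, like the following Python sketch; the rule values and function name are invented for illustration:

```python
# Simplified pricing-bot rules: adjust a base price using time of day,
# demand level and a competitor's current price.

def dynamic_price(base: float, hour: int, demand: float, competitor: float) -> float:
    """Return an adjusted price. demand is a level from 0.0 to 1.0."""
    price = base
    if 17 <= hour <= 21:                    # evening peak surcharge
        price *= 1.10
    price *= 1 + 0.5 * (demand - 0.5)       # scale up or down with demand
    price = min(price, competitor * 0.99)   # undercut the competitor slightly
    return round(price, 2)

print(dynamic_price(base=100.0, hour=19, demand=0.8, competitor=130.0))
```

A real bot would evaluate rules like these continuously against live market data, and with big data analytics the inputs could extend to per-customer signals such as location and purchase history.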

Dynamic pricing is legal, and the general public has learned to accept dynamic pricing when purchasing airline tickets or reserving hotel rooms online.  The approach, which is sometimes marketed as a personalization service, has been less successful with online retail vendors. Dynamic pricing can be contrasted with fixed pricing, an approach to setting the selling price for a product or service that does not fluctuate.


See also: fair and reasonable price, consumption-based pricing model

Related Terms


  • employee advocacy

     - Employee advocacy is the promotion of an organization by its staff members. A business may ask employees to actively promote the organization, often through social media, as an element of their jobs.

  • critical success factors

     - Critical success factors are a limited number of key variables or conditions that have a tremendous impact on how successfully and effectively an organization meets its mission or its strategic goals.

  • unsystemic risk (unsystematic risk)

     - Unsystemic risk (also known as unsystematic risk) is a type of investment risk that is specific to an industry or organization.


  • Business terms

     - Terms related to business, including definitions about project management and words and phrases about human resources, finance and vertical industries.

  • Internet applications

     - This glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...


Picture of System Administrator


by System Administrator - Thursday, 2 May 2013, 9:37 PM
Picture of System Administrator

Easy to Build Workflows and Forms

by System Administrator - Thursday, 6 October 2016, 3:06 PM

K2 Special Edition: Easy to Build Workflows and Forms for Dummies

How can automated business workflows and forms drive efficiency? The right solution for business process transformation will make it easy (even for nontechnical users), while increasing efficiency and agility.

In this book, Easy to Build Workflows and Forms for Dummies, you’ll learn how to evaluate business application workflow solutions with key criteria. You’ll also explore different department use cases, examine how businesses can use a single workflow solution across the entire organization, and much more.

Please read the attached eBook

Picture of System Administrator

Effective Software Testing

by System Administrator - Saturday, 11 July 2015, 11:17 PM

Four tips for effective software testing

by Robin F. Goldsmith

To ensure success, follow software testing concepts


Regardless of development methodology or type of software testing, multiple factors can come into play that determine the effectiveness of software testing. Generally, testers do not pay conscious attention to these key software testing concepts. Too often, lack of conscious attention means these essential factors have been overlooked, even by experienced testers who may take too much for granted. Not applying these software testing concepts not only leads to less effective software testing, but the lack of awareness can make the tester oblivious to a test's diminished effectiveness.

Here are four fundamental factors that determine effective software testing.

1. Define expected software testing results independently

When you run a test, you enter inputs or conditions. (Conditions are a form of inputs that, in production, ordinarily are not explicitly entered, such as time of year. Part of running a test often involves additional actions to create the conditions.) The system under test acts on the inputs or conditions and produces actual results. Results include displayed textual or graphical data, signals, noises, control of devices, database content changes, transmissions, printing, changes of state, links, etc.

But actual results are only half the story for effective software testing. What makes the execution a test, rather than production, is that we get the actual results so we can determine whether the software is working correctly. To tell, we compare the actual results to expected software testing results, which are our definition of software testing correctness.

If I run a test and get actual results but have not defined expected software testing results, what do I tend to presume? Unless the actual results are somehow so outlandish that I can't help but realize they are wrong, such as when the system blows up, I'm almost certain to assume that the expected results are whatever I got for actual results, regardless of whether the actual results are correct.

When expected software testing results are not defined adequately, it is often impossible for the tester to ascertain accurately whether the actual results are right or wrong. Consider how many tests are defined in a manner similar to, "Try this function and see if it works properly." "Works properly" is a conclusion, but not a specific-enough expected result on which to base said conclusion. Yet testers often somewhat blindly take for granted that they can guess needed inputs or conditions and corresponding actual results.

For a test to be effective, we must define software testing correctness (expected software testing results) independently of actual results so the actual results do not unduly influence definition of the expected results. As a practical matter, we also need to define the expected results before obtaining the actual results, or our determination of the expected results probably will be influenced by the actual results. In addition, we need to document the expected results in a form that is not subject to subsequent conscious or unconscious manipulation.
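The principle can be illustrated with a short, hypothetical test harness: the expected values are written down by hand from the business rule before the code under test ever runs, so the actual results cannot influence them. The discount function stands in for any system under test.

```python
# System under test (hypothetical): "10% off orders of $100 or more".
def apply_discount(total: float) -> float:
    return total * 0.90 if total >= 100 else total

# Expected results defined up front, independently, from the business
# rule itself, not copied from the program's output.
expected = {99.0: 99.0, 100.0: 90.0, 250.0: 225.0}

for amount, want in expected.items():
    got = apply_discount(amount)   # actual result
    assert got == want, f"input {amount}: expected {want}, got {got}"
print("all expected results matched")
```

Recording the `expected` table in a file or test script before execution also satisfies the documentation point above: the expected results exist in a form that cannot be quietly adjusted to match whatever the program happened to produce.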

2. Know the correct application results

Defining expected results independently of and before actual results is necessary but not sufficient. The expected results have to be correct. You have to know the correct application results in order to tell whether the product is producing them correctly.

In general, real business requirements are the basis for determining correct application results. However, too often real business requirements are inadequately identified. Moreover, most testing is based on demonstrating that the product meets its feature requirements, which means the product works as designed. Demonstrating the product works as designed is necessary but not sufficient for -- let alone the same as -- demonstrating that the product as designed satisfies the real business requirements and thereby accomplishes the value it should.

Some exploratory testers believe their main purpose is to ascertain how the software works, essentially investigating many different instances of, "What happens if I try this?" Although perhaps interesting and even sometimes enlightening, this approach is a form of defining actual results that quite intentionally omits consciously determining what the right answer should be.

Ultimately, tests need to demonstrate that products not only work as designed but in fact satisfy the real business requirements, which are the basis for the "right answers" and should include most quality factors. Most developers and most testers, exploratory and otherwise, focus on product requirements without adequately understanding the real business requirements the product must satisfy to provide value.

Exploratory testing is one of many methods to help identify wrong and missed requirements, but it's usually neither the most economical nor most effective means to do so. Simply detecting requirements issues doesn't automatically correct them; and corrections are easily lost or distorted when they are only in the tester's mind. Moreover, I find it unlikely that exploratory testers who explicitly don't want to know the requirements somehow magically can know better, based on typical requirements, what the right answers should be.

3. Application testers must compare actual to expected results

I'm frequently amazed how often application testers define correctly the right expected results, get actual results by running tests, and then don't take the final comparison step to make sure the actual results are correct (i.e., what was expected).

Of course, the most common reason this key comparison of actual to expected results is skipped is that the right expected results were not defined adequately. When expected results are not externally observable, who knows what the application testers are comparing against? Sometimes the application testers mistakenly assume the actual results are correct if they don't appear outlandish. Perhaps the tester makes a cursory comparison of mainly correct results but misses some of the few exceptions whose actual results differ from expected results.

I appreciate that comparing actual software testing results to the expected results can be difficult. Large volumes of tests can take considerable effort and become tedious, which increases the chances of missing something. Results that are complex can be very hard to compare accurately and may require skills or knowledge that the tester lacks.

Such situations can be good candidates for automation. A computer tool won't get tired and can consistently compare all elements of complex results. However, an automated test tool requires very precise expected results. An additional downside of automated tools is that they won't pick up on certain types of results that a human application tester might notice.

4. Follow software testing guidelines to avoid oversights

The fourth key to effective software testing deals with the common experience of overlooking things that can "fall through the cracks." The simple but not always easy way to reduce such oversights is to follow software testing guidelines that help the tester be more thorough. Software testing guidelines include checklists and templates meant to guide development or testing.

Consider the difference between going to the supermarket with and without a shopping list, which is an example of software testing guidelines. Without a list, you tend to spend more yet come home without some of the groceries you needed. With the shopping list, you get what you need and spend less because you're less likely to make impulse buys.

Software testing guidelines also help detect omissions that exploratory testers are much more likely to miss. By definition, exploratory testing is guided by executing the software as built. That tends to channel one's thinking in line with what's been built, which easily can lead away from realizing what hasn't been built that should have been. Software testing guidelines can help prompt attention to such items that following the product as built can obscure.


Picture of System Administrator

Electronic Contract Execution

by System Administrator - Monday, 6 July 2015, 8:44 PM

Pitching Paper: The Case for Electronic Contract Execution

Whether due to fluctuations in order volumes or the drive for greater profitability and efficiency, organizations and their employees must find new ways to be more effective with fewer resources. Over the past decade, well-known software categories such as customer relationship management (CRM), contract life-cycle management (CLM) and enterprise resource planning (ERP) have been deployed in order to streamline business processes and drive greater profitability. However, when it comes to executing transactions that require documents or forms, organizations fall back to the Stone Age practice of printing and moving paper, dropping out of their hyper-efficient infrastructure.

Please read the attached whitepaper.

Picture of System Administrator

Employee Investigations

by System Administrator - Monday, 20 October 2014, 2:00 PM

Simplifying Employee Investigations

Whether you are the small business owner, head of HR, or in IT, employee investigations are a part of your daily life. In this whitepaper we’ll discuss some of the real-world issues businesses face that result in employee investigations, the methodologies used to perform investigations, and then we’ll look at why investigating proactively can help.

Please read the attached whitepaper.

Picture of System Administrator

Employee Monitoring Program (BUSINESS)

by System Administrator - Thursday, 4 September 2014, 1:56 AM

Implementing an Employee Monitoring Program

Security & Risk professionals recognize the value and benefits of implementing an employee-monitoring program. Privacy advocates and Legal and Human Resources professionals see potentially unwarranted invasion of employee privacy as a reason not to monitor, or at least to restrict monitoring to instances where enough "probable cause" exists to warrant tilting the balance between the privacy of an employee and the interests of the company. This document is intended to assist company executives in determining whether or not to implement employee activity monitoring.

Please read the attached whitepaper

Picture of System Administrator

Endpoint Security

by System Administrator - Wednesday, 16 September 2015, 7:13 PM


Endpoint Security

by Kaseya

To win the ongoing war against hackers and cyber criminals, IT professionals must do two things: Deploy and maintain endpoint security tools with the latest updates, and ensure the software applications running in their networks have the latest available patches. Failure to do either exposes their IT environments to cyber threats and their organizations to financial losses and embarrassment, while putting their jobs at risk. Keeping up with patches and updates, however, isn't easy. Learn more in this whitepaper.

Please read the attached whitepaper.

Picture of System Administrator

Enterprise Search

by System Administrator - Friday, 27 March 2015, 11:53 PM

Enterprise Search

Posted by: Margaret Rouse

Enterprise search is the organized retrieval of structured and unstructured data within an organization. Properly implemented, enterprise search creates an easily navigated interface for entering, categorizing and retrieving data securely, in compliance with security and data retention regulations. 

The quality of enterprise search results is reliant upon the description of the data by the metadata. Effective metadata for a presentation, for example, should describe what the presentation contains, who it was presented to, and what it might be useful for. Given the right metadata a user should be able to find the presentation through search using relevant keywords.

There are a number of kinds of enterprise search including local installations, hosted versions, and search appliances, sometimes called “search in a box.” Each has relative advantages and disadvantages. Local installations allow customization but require that an organization has the financial or personnel resources to continually maintain and upgrade the investment. Hosted search outsources those functions but requires considerable trust and reliance on an external vendor. Search appliances and cloud search, the least expensive options, may offer no customization at all.

Enterprise search software has increasingly turned to a faceted approach. Faceted search reduces all of the data in a system to a series of drop-down menus, each narrowing the total number of results, allowing users to refine a search to progressively finer criteria. The faceted approach improves upon the keyword search many users might think of (the Google model) and the structured browse model (the early Yahoo model). In the case of keyword search, if the end user doesn't enter the correct keyword, or if records weren't added in a way that considers what end users might be looking for, a searcher may struggle to find the data. Similarly, in a browsing model, unless the taxonomies created by the catalogers of an enterprise's information make intuitive sense to an end user, ferreting out the required data will be a challenge.
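The narrowing behavior of faceted search can be sketched in a few lines of Python; the records and facet names here are invented for illustration:

```python
# Minimal sketch of faceted narrowing: each selected facet value
# filters the remaining result set further.

records = [
    {"type": "presentation", "dept": "sales", "year": 2014},
    {"type": "presentation", "dept": "hr",    "year": 2015},
    {"type": "report",       "dept": "sales", "year": 2015},
]

def narrow(results: list, facet: str, value) -> list:
    """Apply one facet selection, shrinking the result set."""
    return [r for r in results if r.get(facet) == value]

hits = narrow(records, "type", "presentation")   # 2 results remain
hits = narrow(hits, "dept", "sales")             # 1 result remains
print(hits)
```

Each facet corresponds to a metadata field, which is why the quality of faceted search, like keyword search, ultimately depends on the quality of the metadata describing the data.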

Enterprise search is complex. Issues of security, compliance and data classification can generally only be addressed by a trained knowledge retrieval expert. That complexity is further complicated by the complexity of an enterprise itself, with the potential for multiple offices, systems, content types, time zones, data pools and so on. Tying all of those systems together in a way that enables useful information retrieval requires careful preparation and forethought. 

Vendors of enterprise search products include Oracle, SAP, IBM, Google and Microsoft.

See also: enterprise content management (ECM), e-discovery, autoclassification


  • virtual payment terminal - Virtual terminals allow sellers to take credit card payments online for orders made online or over the phone without requiring a card reader device.
  • compensating control - Compensating controls were introduced in PCI DSS 1.0 to give organizations an alternative to the requirements for encryption. The alternative is sometimes considered a loophole that creates a security risk.
  • cloud computing - What is cloud computing? To understand cloud computing, examine public, private and hybrid cloud, as well as PaaS, SaaS and IaaS cloud models.


  • Customer data management - Terms related to customer data management, including customer data integration (CDI) technology definitions and words and phrases about data quality and data governance.
  • Internet applications - This glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...
