Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías





by System Administrator - Saturday, 1 June 2013, 3:14 PM
  • infomediary - a Web site that provides specialized information on behalf of producers of goods and services and their potential customers.

IBM Predictive Customer Intelligence

by System Administrator - Thursday, 3 September 2015, 7:16 PM

IBM Predictive Customer Intelligence

by IBM

Create personalized, relevant customer experiences with a focus on driving new revenue.

Please read the attached whitepaper.


IBM z13s mainframe

by System Administrator - Thursday, 18 February 2016, 6:23 PM

IBM unveils z13s mainframe focused on security and hybrid clouds

By John Ribeiro


Brian David Flores, cryptographic hardware verification engineer, holding IBM's new z13s microprocessor chip. Credit: IBM

IBM has unveiled its new z13s mainframe, which it claims offers encryption at twice the speed of previous mid-range systems without compromising performance.

The company, which sold its x86 server business to Lenovo, continues to invest in new mainframe designs to handle new compute challenges. In January last year it launched the z13, its first new mainframe in almost three years, with a new processor design, faster I/O and the ability to address up to 10TB of memory. The design of the z13 was focused on real-time encryption and embedded analytics.

IBM said the z13s, targeted at mid-size organizations and described as the new entry point for the company's z Systems, has "updated cryptographic and tamper-resistant hardware-accelerated cryptographic coprocessor cards with faster processors and more memory," allowing clients to process twice as many high-volume, cryptographically protected transactions as before without compromising performance.

The company is also packaging with the mainframe threat monitoring based on behavior analytics and multi-factor authentication at the z/OS operating system level, and has also announced more independent software vendors that have integrated their software applications with the z Systems under IBM's partnership program called "Ready for IBM Security Intelligence."

The multi-factor authentication for z/OS, the first time such authentication has been integrated into the OS rather than offered as add-on software, requires privileged users to enter a second form of identification like a PIN or randomly generated token to access the system.
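The "randomly generated token" style of second factor described above is commonly implemented as a time-based one-time password (TOTP). The following is an illustrative RFC 6238-style sketch, not IBM's z/OS implementation; the shared secret and time values are made up.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = at // step                      # number of elapsed time steps
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret: bytes, submitted: str, now: int) -> bool:
    """Accept the current or previous time step to tolerate clock drift."""
    return any(hmac.compare_digest(totp(secret, now - drift), submitted)
               for drift in (0, 30))

# Hypothetical enrollment: the secret is shared when the user registers.
secret = b"per-user-shared-secret"
now = 1_700_000_000
code = totp(secret, now)
ok = verify_second_factor(secret, code, now)
```

The key property is that the privileged user and the system derive the same short-lived code from a shared secret, so a stolen password alone is not enough to log in.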

The z Systems Cyber Security Analytics offering, being developed by IBM Research, learns user behavior and alerts administrators if it detects unusual patterns on the platform.

The ISVs IBM has partnered with are BlackRidge Technology, RSM Partners and Forcepoint, which offer technologies in the areas of identity-based network security, application readiness and penetration testing, and endpoint security of devices.

Although hybrid clouds offer flexibility to customers, they also present new vulnerabilities as more than half of all attackers come from the inside, IBM said. To avoid the impact of human error or meddling in operations, IBM said it is integrating its mainframe with its security technologies that address privileged identity management, sensitive data protection and integrated security intelligence.

The z13s will come in two models – the N10 and N20, IBM said in its FAQ on the mainframe. The N10 can be configured with up to 10 configurable cores and up to 1TB of memory, while the N20 can scale up to 20 configurable cores and up to 4TB of memory.

IBM plans to make the new z13s available in March this year. The company did not disclose the pricing of the new mainframe.




Identity-as-a-Service (IDaaS)

by System Administrator - Wednesday, 5 November 2014, 2:22 PM

Top Six Things to Consider with an Identity-as-a-Service (IDaaS) Solution

Solve your enterprise security, identity, and password problems with Identity-as-a-Service.

Unified identity management with an Identity-as-a-Service solution (IDaaS) can help your enterprise solve security, password, and identity problems. Download the best practices paper: Top Six Things to Consider with an Identity-as-a-Service Solution. You'll discover how an IDaaS can help you:

  • Drive user productivity. Users have access from any device to all applications and resources - making them happier and more productive.
  • Enhance IT efficiency. IT can access security features across heterogeneous IT environments using existing infrastructure - no new processes, tools, or servers required.
  • Improve security. Establish a single point of control (and monitoring) for improved access security through multi-factor authentication.
  • Mitigate risk and comply with regulations. IT teams are able to establish granular user accountability and demonstrate how they are being prescriptive about controlling user access.
  • Lower total cost of ownership (TCO). Time and infrastructure savings can reduce identity-related TCO by more than 50%.

Please read the attached whitepaper.


IIoT: Industrial Internet of Things

by System Administrator - Monday, 13 February 2017, 11:12 PM

Industrial Internet of Things (IIoT)

The Industrial Internet of Things (IIoT) is the use of Internet of Things (IoT) technologies in manufacturing.

Also known as the Industrial Internet, IIoT incorporates machine learning and big data technology, harnessing the sensor data, machine-to-machine (M2M) communication and automation technologies that have existed in industrial settings for years.

The driving philosophy behind the IIoT is that smart machines are better than humans at capturing and communicating data accurately and consistently. This data can enable companies to pick up on inefficiencies and problems sooner, saving time and money and supporting business intelligence efforts. In manufacturing specifically, IIoT holds great potential for quality control, sustainable and green practices, supply chain traceability and overall supply chain efficiency.

A major concern surrounding the Industrial IoT is interoperability between devices and machines that use different protocols and have different architectures. The nonprofit Industrial Internet Consortium, founded in 2014, focuses on creating standards that promote open interoperability and the development of common architectures.



Improving Server Performance and Security

by System Administrator - Tuesday, 23 December 2014, 2:23 PM

Improving Server Performance and Security

Server systems are, by definition, more important than individual endpoints. They must provide services to hundreds, or even thousands, of endpoints and, naturally, must be secure. Traditional anti-virus (AV) solutions can provide protection for servers. However, constantly running AV processes, along with potentially frequent signature updates, can consume resources that could otherwise be used to provide application services to users. Read this evaluation by Tolly, commissioned by Lumension, as they dive into the impact on server resources of the alternative application control solution compared with traditional AV solutions from Microsoft Corp., Symantec Corp. and Trend Micro, Inc.

Please read the attached whitepaper.


Improving the Management and Governance of Unstructured Data

by System Administrator - Friday, 26 June 2015, 6:06 PM

Improving the Management and Governance of Unstructured Data

Maximize efficiency with deeper insight to data value and automated, policy-based compliance, retention & disposition.


In-Memory Analytics

by System Administrator - Friday, 16 January 2015, 5:51 PM

In-Memory Analytics

Posted by Margaret Rouse

In-memory analytics queries data residing in a computer’s random access memory (RAM) rather than data stored on physical disks. This results in vastly shortened query response times.

In-memory analytics is an approach to querying data when it resides in a computer’s random access memory (RAM), as opposed to querying data that is stored on physical disks.  This results in vastly shortened query response times, allowing business intelligence (BI) and analytic applications to support faster business decisions.

As the cost of RAM declines, in-memory analytics is becoming feasible for many businesses. BI and analytic applications have long supported caching data in RAM, but older 32-bit operating systems provided only 4 GB of addressable memory.  Newer 64-bit operating systems, with up to 1 terabyte (TB) addressable memory (and perhaps more in the future), have made it possible to cache large volumes of data -- potentially an entire data warehouse or data mart -- in a computer’s RAM.

In addition to providing incredibly fast query response times, in-memory analytics can reduce or eliminate the need for data indexing and storing pre-aggregated data in OLAP cubes or aggregate tables.  This reduces IT costs and allows faster implementation of BI and analytic applications. It is anticipated that as BI and analytic applications embrace in-memory analytics, traditional data warehouses may eventually be used only for data that is not queried frequently.
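The approach can be illustrated with SQLite's in-memory mode, where an entire table is held in RAM and an aggregate query never touches disk. This is a minimal sketch with invented sales data, not a production BI deployment.

```python
import sqlite3

# ":memory:" creates a database that lives entirely in RAM, with no disk file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("north", 80.0), ("south", 200.0)])

# A typical BI-style aggregation, answered entirely from memory.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
```

Because no I/O is involved, response times for queries like this are bounded by CPU and memory bandwidth rather than disk seek times, which is the core advantage the article describes.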




Incident Response: How to Fight Back

by System Administrator - Wednesday, 7 January 2015, 4:02 PM


Incident Response: How to Fight Back

Highly public breaches at companies such as Target, Evernote and Living Social, which collectively compromised more than 200 million customer records, are pushing many organizations to develop in-house incident response (IR) capabilities to prevent such data breaches.

IR teams, typically operating under a formalized IR plan, are designed to detect, investigate and, when necessary, remediate organizational assets in the event of a critical incident. SANS conducted a survey focused on the current state of IR during May and June 2014, polling security professionals from more than 19 industries and various-sized companies and organizations. The goal was to get a clearer picture of what IR teams are up against today—the types of attacks they see and what defenses they have in place to detect and respond to these threats. In addition, the survey measured the IR teams’ perceived effectiveness and obstacles to incident handling.

Of the 259 survey respondents, 88% work in an IR role, making this a well-placed audience for soliciting close to real-time data on the current state of IR. Respondents represented 13 different regions and countries and work in management (28%) or as security analysts (29%), incident responders (13%) and forensic examiners (7%). This broad representation helps shed light on both present and future IR capabilities.

Please read the attached whitepaper.


Indirect Competition

by System Administrator - Thursday, 13 August 2015, 4:41 PM

Indirect Competition

Posted by: Margaret Rouse

Indirect competition is the conflict between vendors whose products or services are not the same but that could satisfy the same consumer need. 

The term contrasts with direct competition, in which businesses are selling products or services that are essentially the same. Cloud storage providers are direct competitors, for example, as are manufacturers of notebook computers.

However, in recent years, desktop computer sales have dropped as many consumers purchased notebooks instead. Sellers of desktop PCs and notebooks are indirect competitors. 

In the 1960s, Theodore Levitt wrote a highly influential article called "Marketing Myopia" for the Harvard Business Review, recommending that businesses take a much broader view of the competitive environment. Levitt argued that the market's central organizing element is human needs and that the satisfaction of those needs should be the focus of businesses. Products and services are transient but human needs are not. From that perspective, the distinction between direct and indirect competition is unimportant.



Industrial Internet of Things (IIoT)

by System Administrator - Tuesday, 7 April 2015, 6:57 PM

Industrial Internet of Things (IIoT)

Posted by Margaret Rouse

IIoT harnesses the sensor data, machine-to-machine communication and automation technologies that have existed in industrial settings for years.

The Industrial Internet of Things (IIoT) is the use of Internet of Things (IoT) technologies in manufacturing.

Also known as the Industrial Internet, IIoT incorporates machine learning and big data technology, harnessing the sensor data, machine-to-machine (M2M) communication and automation technologies that have existed in industrial settings for years. The driving philosophy behind the IIoT is that smart machines are better than humans at accurately, consistently capturing and communicating data. This data can enable companies to pick up on inefficiencies and problems sooner, saving time and money and supporting business intelligence efforts. In manufacturing specifically, IIoT holds great potential for quality control, sustainable and green practices, supply chain traceability and overall supply chain efficiency.
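As a toy stand-in for the machine-learning models IIoT platforms run over M2M sensor streams, a simple standard-deviation rule can surface the kind of inefficiency or fault the paragraph describes. The readings and threshold below are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold):
    """Flag readings more than `threshold` sample standard deviations from
    the mean, a toy substitute for a trained anomaly-detection model."""
    mu, sigma = mean(readings), stdev(readings)
    return [r for r in readings if abs(r - mu) > threshold * sigma]

# Simulated vibration readings from one machine; the last one is suspect.
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 35.0]
anomalies = flag_anomalies(readings, threshold=1.5)
```

In a real deployment the readings would arrive continuously over an M2M protocol and the model would be considerably more sophisticated, but the principle is the same: machines capture the data consistently, and software spots the deviation before a human would.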

A major concern surrounding the Industrial IoT is interoperability between devices and machines that use different protocols and have different architectures. The nonprofit Industrial Internet Consortium, founded in 2014, focuses on creating standards that promote open interoperability and the development of common architectures.




Infographic: US employees concerned about BYOD reimbursement

by System Administrator - Monday, 13 July 2015, 5:22 PM

US employees concerned about BYOD reimbursement


Information Governance Best Practice: Adopt a Use Case Approach

by System Administrator - Monday, 16 February 2015, 4:10 PM

Information Governance Best Practice: Adopt a Use Case Approach

by Debra Logan, Alan Dayley and Sheila Childs


Massive data growth, new data types, litigation, regulatory scrutiny and privacy/information risks have all created an urgent need for information governance. IT professionals considering MDM, e-discovery, information archiving or cloud migration should start implementing information governance now.


  • Data migration projects present an opportunity for legal and IT professionals to eliminate redundant, outdated and trivial data, by up to 60% in some cases, decreasing data management costs and reducing legal and regulatory risks.
  • Master data management (MDM), data quality, archiving, enterprise content management (ECM), records management or e-discovery system implementation can be used as a starting point for chief information officers (CIOs) to create specific information governance policies and set the stage for using information assets to drive business growth.
  • Increasing concerns about data security, privacy, personally identifiable information, intellectual property (IP) protection and e-discovery mean that IT has new business partners, such as chief legal and chief risk officers, to assist with its information governance efforts.
  • Use data migration and system retirement as an opportunity to undertake an information governance program, especially "defensible deletion" or legacy information clean up.
  • Focus on MDM, enterprise content management and data quality projects if your organization is seeking cost optimization, or the business benefits associated with growth enablement, service improvement, reduced risk or regulatory compliance.
  • Avoid wasting time and money on overlapping and redundant efforts by bringing the information governance projects that are proliferating in the areas of privacy and data security together now.

Interest in information governance among Gartner clients continues to be strong with "information management" or "information governance" being the topic of over 1,900 inquiries in the six months to September 2013.

Organizations have been talking about information governance for quite a few years, but it is only now that we see more widespread understanding of what it takes to accomplish it. Information governance is starting to expand beyond the traditional litigation and regulatory retention requirements (for risk and cost control) into possible business value propositions. These ideas have finally broken through the ingrained mentality that many had about storage being "inexpensive" and that it was easier to simply keep information than to delete it, or that search technology would allow enterprises to forgo the effort and expense of organizing themselves and devoting resources to governance (see Note 1 for Gartner's definitions of "governance" and "information governance" and how these relate to overall corporate governance).

While more and more organizations are talking about information governance, they are also realizing that governance is technically complex, organizationally challenging and politically sensitive. In addition, it is often difficult to get executive-level sponsorship for governance programs because, in general, executives do not recognize the need for governance — not least because the effects of a lack of information governance are not as readily apparent as other pressing IT concerns. This is starting to change, however, as executives realize that many kinds of difficulties — such as failing to comply with regulatory regimes, excessive litigation costs and a lack of decision-making transparency — are, in fact, failures that have a root cause in poor information governance.

An approach to information governance based on specific use cases is one way to break through these barriers to adoption. This impact assessment presents different information governance use cases, all of which can be used as starting points for larger programs. This approach is one that has been proven successful by many organizations, and our impacts and recommendations can help your enterprise to achieve the same early success in beginning — or continuing — its information governance program (see the Note 2 for examples).

Information governance is a topic of interest both inside and outside IT. CIOs, chief data officers, infrastructure managers, chief information security officers, risk and compliance officers and general counsel can use this research to make decisions about where to start their information governance programs.

Figure 1. Impacts and Top Recommendations for Information Governance Use Cases

ECM = enterprise content management; CIO = chief information officer; IP = intellectual property; MDM = master data management | Source: Gartner (November 2013)

Impacts and Recommendations

Data migration projects present an opportunity for legal and IT professionals to eliminate redundant, outdated and trivial data, by up to 60% in some cases, decreasing data management costs and reducing legal and regulatory risks

Data migration and IT infrastructure modernization are two of the most common information governance use cases. There are a number of variations on this use case, such as migrating file shares to ECM or SharePoint, files to cloud storage (including file sync and share services), and moving data from legacy storage to more modern and cost-effective platforms.

Clients who undertake analysis of existing data stores always tell us that redundant, outdated, trivial and risky data represents between 15% and 60% of what they have (see the Evidence section).

Another example is the migration of legacy enterprise information archiving systems to next-generation, on-premises or SaaS products or services. Enterprise information archiving systems are the target system type in many migrations. Archiving solves several problems that cannot be handled in native email systems, social media systems or by using file shares as primary storage. Archiving systems have been put in place as solutions for storage management, e-discovery, compliance, indexing, search and business or market analysis.

There are two primary use cases here:

  1. The migration of email or files from the email system or from file shares to an archiving system.
  2. The migration from one archiving system to another.

In the process of moving files from one location to another, many enterprises take the opportunity to create rules that allow data to be identified, classified and assessed for ongoing retention or for deletion. In practice, what has happened over the years is that companies have "over-retained" email and files and migration presents an opportunity to delete data that no longer has any business value and doesn't need to be retained for legal or regulatory purposes.

The Recommended Reading section has more advice on the legal and regulatory implications of legacy application retirement.


  • Use data migration and system retirement as an opportunity to undertake an information governance program, especially "defensible deletion" or legacy information clean up.
  • Storage managers or other IT professionals who are considering any archiving scenario should work with legal and compliance professionals to create rules for retaining only the data that is necessary, usually no more than three years' worth, or that which has had a "litigation hold" placed on it. In many cases legal will have asked for the data to be held, but never rescinded the litigation hold, even though the matter is no longer ongoing.
  • When moving files to an ECM system or SharePoint, organizations should include a component of data classification and tagging, again with the involvement of legal and compliance users.
  • Use hardware refreshes and storage redesign projects as an opportunity to introduce information governance to IT.
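The retention rule in the second recommendation above can be sketched in a few lines. This is a simplified illustration only: real defensible-deletion policies require legal review, and the three-year window, field names and sample items here are assumptions.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=3 * 365)   # assumed three-year retention window

def should_delete(item, today):
    """Defensible-deletion rule: never delete data under an active
    litigation hold; otherwise delete once the retention window passes."""
    if item["litigation_hold"]:
        return False
    return today - item["created"] > RETENTION

today = date(2015, 1, 1)
inventory = [
    {"id": "old-files",  "created": date(2010, 6, 1), "litigation_hold": False},
    {"id": "held-files", "created": date(2010, 6, 1), "litigation_hold": True},
    {"id": "recent",     "created": date(2014, 6, 1), "litigation_hold": False},
]
to_delete = [item["id"] for item in inventory if should_delete(item, today)]
```

Encoding the rule this way also documents it: the litigation-hold check comes first precisely because, as the text notes, holds are often never rescinded and must be verified before anything is purged.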

MDM, data quality, archiving, ECM, records management or e-discovery system implementation can be used as a starting point for CIOs to create specific information governance policies and set the stage for using information assets to drive business growth.

Information governance can be proactive or reactive. Many organizations find themselves in the position of having to retrospectively apply policy and assign responsibility for data, because that was not done at the outset of the project or when the data was created. Proactive information governance takes place at the time of system planning or process creation. The types of projects that lend themselves well to setting up governance structures, roles and policies include MDM, data quality, application archiving and retirement, ECM, records management, e-discovery data collection, business analytics, social analytics and social media compliance.

Determining decision rights and responsibilities — along with accountability for setting policy, implementing policy and enforcing policy — should all be part of the project plan for any of these systems. Having carried out this work for one type of project will enable you to extend it to other systems, both old and new, within your enterprise. As a best practice it is essential that these projects be linked and that governance methods be consistent across the full range of information types, irrespective of system of origin or where the data ends up.

Another best practice is the creation of data stewards, giving specific responsibility and accountability to individuals who have an ongoing responsibility for managing the data. Driving revenue, improving service and decreasing time to market are the business benefits that are often sought when implementing MDM, ECM, data quality and e-discovery projects. The starting point for any proactive information governance program must be an effort to value the information as an asset.

Questions that make good starting points include:

  • "What is the most critical business information we have?"
  • "What information is shared across business processes on an enterprise wide basis?"
  • "Where is our intellectual property?"

To get maximum leverage and value from customer data that is the subject of an MDM project, one must also consider how that data will be used, who gets to use it and how, as well as the legalities of doing so.


  • When planning MDM, ECM, data quality and e-discovery projects or programs, use Gartner's methodology (see "Toolkit: Information Governance Project") to identify stakeholders and assess their roles in the management of the data, according to a standard responsible, accountable, consulted and informed (RACI) chart. The two main questions that need to be answered initially are:
    • "Who is responsible for information decisions and policy?"
    • "Who is responsible for data-related policy and processes?"
  • In order to eliminate duplication of effort and data redundancy, or the need for reconciliation, ensure that implementation of policy, workflow, data dictionaries, business glossaries, taxonomies, reference data and other organizational and definitional elements of information governance are led by business subject matter experts and accessed by all governance programs and personnel.
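A RACI chart is, at bottom, a small data structure, so the stakeholder assessment above can be recorded and queried directly. The activities and role names below are illustrative, not taken from Gartner's toolkit.

```python
# Each governance activity maps to who is Responsible, Accountable,
# Consulted and Informed. Activities and role names are illustrative.
raci = {
    "information policy": {
        "R": "CIO", "A": "Chief Data Officer",
        "C": "General Counsel", "I": "CISO",
    },
    "data-related processes": {
        "R": "Data Steward", "A": "CIO",
        "C": "Compliance Officer", "I": "Business Owner",
    },
}

def accountable_for(activity):
    """Answer 'who is accountable?' for a given governance activity."""
    return raci[activity]["A"]
```

Keeping the chart in a shared, machine-readable form makes it easy to check that every activity has exactly one accountable owner before the project starts.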

Increasing concerns about data security, privacy, personally identifiable information, IP protection and e-discovery mean that IT has new business partners, such as chief legal and chief risk officers, to assist with its information governance efforts

According to Gartner's annual privacy survey, organization spending on privacy programs around consumers or citizens is as follows:

  • 36% spend $10 or more per employee per year.
  • 32% spend $100 or more per employee per year.
  • 11% spend $1,000 or more per employee per year.

Table 1 contains selected data from Fulbright and Jaworski's Annual Litigation Trends Survey (2012). The table's figures are not reproduced here; its rows covered:

  • Companies spending more than $1 million on litigation
  • Large companies spending more than $1 million on litigation
  • Companies that had at least one regulatory proceeding commenced against them
  • Companies that dealt with at least one investigation in 2012, by industry sector: energy, technology/communications, retail/wholesale, healthcare and insurance

Source: Gartner (November 2013)

Compliance managers trying to understand the regulations that will apply to them can be overwhelmed by global regulatory proliferation, and this is further complicated by regulations that conflict with each other. This creates serious legal and compliance risks.

Corporate governance, security breach notification, privacy and data protection, and industry-specific regulations — such as money-laundering or bribery laws — have added layer upon layer of compliance to IT processes and activities. Typically, a new regulation or other binding requirement (such as payment card industry compliance) is followed by a revised corporate and departmental policy, which is then translated into a new set of controls that must be maintained by someone in the IT organization. Over time, these controls begin to overlap and audits are conducted by separate groups of internal auditors, regulatory examiners and assessors from business partners — with each group issuing its own questionnaire and requiring its own report.

There is no way to stay in compliance, safeguard privacy, protect IP or decrease litigation costs while responding to the appropriate legal challenges and regulatory requests outside of a unified information governance framework.


  • Work with the legal department to compile a list of regulations.
  • Complete a compliance risk assessment to prioritize regulatory compliance efforts.
  • Map regulations to policies and controls to identify overlaps, redundancies and gaps in policies, controls and records retention requirements.
  • Redesign policies and controls so they can meet multiple regulations without unnecessary duplication.
  • Implement technology that can provide metadata and content analysis of information and support policy creation by providing snapshots of your organization's data.
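The mapping and gap analysis in the recommendations above amount to a simple set computation. The regulation and control names below are invented for illustration.

```python
# Map each regulation to the controls that satisfy it, then surface gaps
# (regulations with no control) and overlaps (controls shared by several
# regulations). All names here are invented for illustration.
reg_to_controls = {
    "breach-notification law": {"incident-response plan", "audit logging"},
    "privacy regulation": {"audit logging", "data-retention policy"},
    "anti-bribery law": set(),
}

# A regulation with an empty control set is a compliance gap.
gaps = sorted(reg for reg, ctrls in reg_to_controls.items() if not ctrls)

# A control appearing under more than one regulation is an overlap, which
# is a candidate for consolidation into a single shared control.
overlaps = sorted({ctrl
                   for a in reg_to_controls
                   for b in reg_to_controls if a < b
                   for ctrl in reg_to_controls[a] & reg_to_controls[b]})
```

Even a spreadsheet-sized version of this mapping makes the redesign step concrete: overlapping controls can be merged to serve multiple regulations, and gaps show where new controls are needed.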


© 2013 Gartner, Inc. and/or its affiliates. All rights reserved.



Infrastructure (IT Infrastructure)

by System Administrator - Thursday, 13 April 2017, 8:55 PM

Infrastructure (IT Infrastructure)

Posted by: Margaret Rouse | Contributor: Clive Longbottom

Infrastructure is the foundation or framework that supports a system or organization. In computing, infrastructure is composed of physical and virtual resources that support the flow, storage, processing and analysis of data. Infrastructure may be centralized within a data center, or it may be decentralized and spread across several data centers that are either controlled by the organization or by a third party, such as a colocation facility or cloud provider.

In a data center, infrastructure includes the power, cooling and building elements necessary to support hardware. On the internet, infrastructure also includes transmission media, such as network cables, satellites, antennas, routers, aggregators, repeaters and other devices that control data transmission paths. Cloud computing provides a flexible IT infrastructure in which resources can be added and removed as workloads change.

The way IT infrastructures are created is continually changing. Today, some vendors provide pre-engineered blocks of compute, storage and network equipment that optimize the IT hardware and virtualization platform into a single system that can be easily interconnected to other systems. This modular approach is called converged infrastructure.

Regardless of how it is created, an IT infrastructure must provide a suitable platform for all the necessary IT applications and functions an organization or individual requires. Viewing IT infrastructure as a single entity can result in better effectiveness and more efficiency. It allows resources to be optimized for different workloads, and the impact of any changes on interrelated resources to be more readily understood and handled.

Infrastructure management is sometimes divided into the categories of systems management, network management and storage management. Hands-off infrastructure management uses a software-defined approach to management and automation to minimize the need for physical interaction with infrastructure components.

Types of infrastructures

An immutable infrastructure is an approach to managing services and software deployments on IT resources wherein components are replaced rather than changed. An application or service is effectively redeployed each time any change occurs.
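The replace-rather-than-change idea above can be sketched in a few lines of Python. This is an illustrative model only; the Server class and deploy() helper are invented for the example and do not correspond to any particular tool's API.

```python
# Minimal sketch of immutable infrastructure: a change never mutates a
# running "server"; it always produces a brand-new replacement.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: instances cannot be modified after creation
class Server:
    image: str    # e.g. a machine-image or container-image identifier
    version: int


def deploy(current: Server, new_image: str) -> Server:
    """Roll out a change by replacing the server, never editing it in place."""
    return Server(image=new_image, version=current.version + 1)


fleet = Server(image="app:1.0", version=1)
fleet = deploy(fleet, "app:1.1")  # the old instance is discarded, not patched
```

Because the old instance is never touched, a failed rollout can be undone simply by pointing traffic back at the previous image.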

A composable infrastructure is a framework that treats physical compute, storage and network fabric resources as services. Resources are logically pooled so that administrators don't have to physically configure hardware to support a specific software application.

A dynamic infrastructure is a framework that can automatically provision and adjust itself as workload demands change. IT administrators can also choose to manage these resources manually.

A critical infrastructure is a framework whose assets are so essential that their continued operation is required to ensure the security of a given nation, its economy, and the public's health and/or safety.

A contact center infrastructure is a framework composed of the physical and virtual resources that a call center facility needs to operate effectively. Infrastructure components include automatic call distributors, integrated voice response units, computer-telephony integration and queue management.

A cloud infrastructure includes an abstraction layer that virtualizes resources and logically presents them to users over the internet through application program interfaces and API-enabled command-line or graphical interfaces.
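As a hedged illustration of such an API-driven abstraction layer, the sketch below builds the kind of JSON request body a client might POST to a provisioning endpoint. The endpoint path and field names are invented for the example and do not belong to any real provider's API.

```python
# Hypothetical cloud-provisioning request builder: the caller describes the
# resource it wants, and the abstraction layer (here just a function) turns
# that description into an API request body.
import json


def make_provision_request(name: str, cpus: int, memory_gb: int) -> str:
    """Build the JSON body a client would send to e.g. POST /v1/instances."""
    return json.dumps({
        "name": name,
        "resources": {"cpus": cpus, "memory_gb": memory_gb},
    })


body = make_provision_request("web-01", cpus=2, memory_gb=4)
```

The point of the abstraction is that the caller never specifies *which* physical host serves the request; it only declares what it needs.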

Dark infrastructure is that part of a framework that is composed of undocumented but active software or services whose existence and function are unknown to system administrators -- despite the fact that it may be integral to the continued operation of documented infrastructure.

A cloud storage infrastructure is a framework composed of the hardware and software that support the computing requirements of a private or public cloud storage service.



Innovación (KW)

by System Administrator - Thursday, 2 May 2013, 6:06 PM

The KW Project is new.

It has important synergies with related efforts, such as encyclopedias (Wiki), ontologies (Protégé), Web 2.0/3.0 and search (Google), but it differs in proactively managing and fully integrating the added value of End Users, without requiring them to know how to program.

The development of key components such as XML, Ajax and web services, object-oriented databases, and the wide availability of Java and .Net programmers provide an ideal framework for developing software applications compatible with the project. Hardware manufacturers (Intel, AMD, nVidia, IBM, etc.) will be ready to accompany this new wave of applied knowledge.


Innovation Process Management (IPM)

by System Administrator - Thursday, 2 July 2015, 7:46 PM

innovation process management (IPM) definition

Posted by Margaret Rouse

Innovation process management (IPM) refers to the management of processes used to spur product and organizational innovation. The purpose of innovation process management is to trigger the creative capabilities of employees, create an environment that encourages innovation and develop repeatable processes that make innovation an integral part of the workplace.

According to the consultancy Gartner Inc., companies that can successfully manage and maintain innovation within the workplace can increase revenue, improve operational effectiveness, and pursue new business models.

Common tools or strategies used to elicit this creativity from employees include brainstorming, virtual prototyping, product lifecycle management, idea management, product line planning, portfolio management and more.

Innovation processes often fall into two categories: "pushed" or "pulled." A pushed process is one in which a company has access to an existing or emerging technology and tries to find a profitable application for it. A pulled process is one in which the company focuses on areas where customers' needs are not met and then finds a solution.

An important aspect of keeping innovation, especially IT innovation, alive within a company is cultivating and maintaining an innovative culture.

One type of innovation culture is a formulaic innovation culture. A formulaic innovation management style instills a vision throughout the workplace and continually supports that vision through operational processes that enable employees to take measured risks. New ideas are encouraged and can come from anyone within the company, and when a good idea does surface, it is supported through one of the company's time-tested processes. The possible drawbacks to this type of business innovation management are that companies can begin to value the system over the breakthroughs, and the culture within the organization can become complacent.

Another type of innovation culture is an entrepreneurial innovation culture. This type of innovation culture is rare and usually features, especially early in the company's maturity, a single innovator or leader. Steve Jobs, the cofounder of Apple Inc., was an example of the single-leader-inspired innovation culture, as is Mark Zuckerberg, chairman and CEO of Facebook. Such companies are usually willing to take risks that most companies would not: they strive for major disruption rather than incremental growth, and they use emerging and disruptive technologies to change how a certain product or service is used. One possible drawback is that the company can rely too heavily on the innovative leader.

Gartner's recommendation to IT leaders interested in launching an innovation management program is to follow a disciplined approach. Here are five steps Gartner recommends IT leaders and their companies take to develop an innovation management program:

  1. Strategize and plan: Agree on a vision for the initiative that is in line with business goals. Then establish the resources and budget, and integrate the vision with IT and business plans.
  2. Develop governance: Establish a process for making decisions. This includes identifying and engaging stakeholders, agreeing on who is in charge and what the flow for decision making is, and also having feedback mechanisms in place.
  3. Drive change management: Have systems by which people can communicate and socialize via multiple channels; get buy-in from stakeholders at all levels; and assess which open innovation initiatives and cultural shifts will help the company optimize contributions to innovation.
  4. Execute: Make sure to draw from a wide range of sources to generate ideas for innovations that will transform the business, align the initiatives with business goals, and then update and drive new elements of the initiatives in response to changing business requirements.
  5. Measure and improve: Once the innovative initiative is in place, monitor and measure how it has affected business outcomes. It is also important to seek feedback from stakeholders and to continue to study innovation best practices and case studies from other organizations. Also make sure to continually drive improvements through process changes and upgrades.



Insecure File Sharing

by System Administrator - Thursday, 17 September 2015, 8:35 PM

Breaking Bad: The Risk of Insecure File Sharing

by Intralinks

Data leakage and loss from negligent file sharing and information collaboration practices is becoming just as significant a risk as data theft. Just like malicious threats from hackers and others, data leakage through the routine and insecure sharing of information is a major threat to many organizations. Being able to securely share valuable corporate data is a critical requirement for all organizations, but especially regulated companies like financial services and life sciences firms.

Many companies have few provisions in place -- process, governance, and technology -- to adequately protect data. Yet more and more sensitive information is being shared outside the organization, often without the knowledge or approval of CIOs or GRC professionals, who are arguably losing control. Employees are 'behaving badly': they acknowledge engaging in risky behavior and regularly experience its consequences.

For the first time, the study Breaking Bad: The Risk of Insecure File Sharing explores the link between organizational and individual behavior when using increasingly popular file sync-and-share solutions. As shown in this research, organizations are not responding to the risk of ungoverned file-sharing practices among employees as well as with external parties, such as business partners, contractors, vendors and other stakeholders.

Consumer grade file-sharing cloud applications are popular with both employees and organizations because they make it possible for busy professionals to work efficiently together.

However, the findings in this report identify the holes in document and file level security in part caused by their expanded use. The goal is to provide solutions to reduce the risk created by employees' document and file sharing practices. More than 1,000 IT and IT security practitioners were surveyed in the United States, United Kingdom and Germany. The majority of respondents are at the supervisor level or above with expertise and understanding of their organization's use of file-sharing solutions and overall information security and data privacy policies and strategies.

Following are the key takeaways from this study...

Please read the attached whitepaper.


Insomnio en adultos mayores

by System Administrator - Sunday, 15 June 2014, 2:20 PM

09 JUN 14 | Literature review


Insomnia in older adults
Insomnia is the difficulty in obtaining sufficient, restorative sleep; it is one of the most frequent disorders in geriatric patients.

Drs. José Antonio Navarro-Cabrera, Rogelio Domínguez-Moreno, Mario Morales-Esponda, Ingrid Yosheleen Guzmán-Santos
Archivos de Medicina General de México, Year 2 • Number 6 • April/June 2013


Insomnia is the difficulty in obtaining sufficient, restorative sleep, and it is one of the most frequent disorders in geriatric patients (10-50%). Most cases correspond to secondary or comorbid insomnia, with depression and anxiety the most frequently associated disorders, although it may also be present in neurodegenerative diseases, among others; insomnia should therefore be viewed as a symptom of a large number of pathologies, some of them underdiagnosed. Insomnia affects the patient's social, physical and mental spheres. There is no single defined etiology and it is generally multifactorial, so its treatment must be multifactorial as well. Diagnosis is based on a detailed clinical history, and only some patients with specific pathologies will require ancillary studies. Care must be taken with certain hypnotic drugs in geriatric patients, which may predispose them to accidents or intoxication.

Keywords: insomnia, sleep, older adult.


As the aging process advances, all bodily functions and systems are altered. The circadian rhythm is no exception and undergoes multiple physiological changes. The word circadian comes from the Latin "circa" (around) and "dies" (day), and circadian rhythms are defined as biological rhythms of about 24 hours. Examples of circadian rhythms include hormone secretion, core body temperature and the sleep-wake cycle.1 The aim of this paper is to provide a general review of insomnia in older adults. Given its high frequency and the little attention it receives, it is important to note that most etiologies of insomnia in this group are secondary to comorbid processes that may be underdiagnosed, which is why insomnia should be approached as a symptom and not a disease.

Definition and classification

Insomnia, one of the most frequent sleep disturbances, can be defined as the presence of one or more of the following manifestations: difficulty initiating or maintaining sleep, early awakening, or non-restorative or low-quality sleep despite a favorable environment and adequate opportunity to sleep. The definition also requires that these clinical manifestations have a significant impact on daytime functioning. Insomnia is classified as primary, which has no defined cause, or secondary (comorbid), which is the most frequent form in the population over 65 and may be caused by underlying pathologies, drug side effects or certain environmental conditions. By duration, insomnia is classified as acute when it lasts less than 1 month, subacute when it lasts between 1 and 6 months, and chronic when it lasts more than 6 months.2,3
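The duration cut-offs above (acute under 1 month, subacute 1 to 6 months, chronic over 6 months) can be expressed as a small helper. The function name and the month-based input are choices made for this sketch, not part of the original classification.

```python
# Duration-based insomnia classification as described in the text:
# < 1 month: acute; 1-6 months: subacute; > 6 months: chronic.
def classify_insomnia(duration_months: float) -> str:
    if duration_months < 1:
        return "acute"
    if duration_months <= 6:
        return "subacute"
    return "chronic"
```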


Currently, owing to increased life expectancy, adults over 65 represent a large share of the population; it is estimated that by 2030 this group will make up 25% of the population. The prevalence of insomnia in older adults is 10 to 50%; in the United States, 10-13% of these suffer from chronic insomnia and 25-35% have transient or occasional insomnia. In Mexico, figures of 36% have been reported.4 Foley and colleagues studied 9,000 older adults and found that 42% had difficulty maintaining sleep, while 28% had difficulty falling asleep. In the same study, three years later, sleep problems had resolved in 15% of patients, but 5% of those without insomnia at baseline presented it at follow-up. Most studies indicate that insomnia in the elderly is more prevalent in women than in men.5-7

One of the main problems with insomnia in older adults is that very few cases are diagnosed: 70% of those who suffer from it never mention it to their physician, 26% discuss it briefly during a consultation for another reason, and only 5% seek a consultation for insomnia; consequently, only some patients receive adequate treatment. The economic costs of insomnia and related disorders are high; in the US, direct costs are estimated at about 14 billion dollars, and indirect costs, such as absenteeism and reduced productivity, at about 28 billion dollars.

Risk factors

Many studies have found a higher prevalence of insomnia among older people exposed to a number of concomitant risk factors such as polypharmacy, chronic degenerative diseases and institutionalization. Prevalence in women is higher in the postmenopausal years. In institutionalized elderly people, the risk of sleep disruption increases through a combination of abnormalities in underlying physiological functions (for example, incontinence and nocturia) and external environmental factors (such as sleep interruption by nursing-home staff, noise, etc.). They also have greater light exposure at night, which can suppress melatonin and increase nighttime wakefulness. Longitudinal data suggest that reduced physical activity is a risk factor for the development of insomnia in the elderly. Insomnia in old age can also result from lifestyle changes related to retirement, divorce, widowhood, occupation and low socioeconomic status, and from the higher incidence of medical problems that cause comorbid insomnia.9-11

Sleep disruption in the older adult

The circadian rhythm of sleep is controlled by an internal pacemaker located in the suprachiasmatic nucleus of the anterior hypothalamus. Its proper synchronization over the 24-hour day depends on two main types of stimuli: external stimuli, or "zeitgebers", which are light and the activities a person performs, and internal rhythms, chiefly represented by melatonin and core body temperature. As an individual ages, synchronization by both kinds of stimuli is impaired: the older adult is less exposed to external stimuli, and the internal rhythms grow weaker. This produces a breakdown in sleep architecture, reflected in the following parameters: decreased slow-wave (deep) sleep; frequent awakenings of 2 to 15 seconds that may occur with leg movements; increased duration of the first REM period together with shortened REM latency; and redistribution of circadian rhythms across the 24-hour day. When this occurs, it is common for the elderly to rate the quality of their sleep negatively.12

Secondary or comorbid insomnia

Insomnia can be caused by medical or psychiatric conditions or by the side effects of some drugs; this form is known as secondary or comorbid insomnia. The most common causes in the elderly are anxiety, depression, arthritis, chronic pain, diabetes, gastroesophageal reflux, congestive heart failure, cancer, nocturia, chronic obstructive pulmonary disease, sleep-related breathing disorders, Alzheimer's disease, Parkinson's disease, neurological deficit related to stroke, restless legs syndrome and periodic limb movements during sleep. According to Ohayon and colleagues, 65% of secondary insomnia in the elderly is associated with psychiatric disorders such as depression and anxiety.13,14 It has been reported that 60-90% of patients with Parkinson's disease have sleep disorders; the prevalence of insomnia in these patients has been estimated at 30%, usually characterized by fragmented sleep and early awakenings. These disorders are caused by several factors: the neurodegenerative process, motor symptoms, depression and medications. However, Diederich and his team found that the degree of motor dysfunction, the dopaminergic dose and age are factors independent of insomnia.15,16

Loss of or damage to neuronal pathways in the suprachiasmatic nucleus contributes to the onset of insomnia in elderly patients with Alzheimer's dementia. Likewise, many substances and medications can interfere with sleep, which, together with the physiological changes of this age, makes these patients more susceptible to insomnia17 (Table 1).

Table 1. Medications and other substances that can contribute to the development of insomnia in the elderly



Taken from: Wolkove N, Elkholy O, Baltzan M, Palayew M. Sleep and aging: 1. Sleep disorders commonly found in older people. CMAJ 2007;176:1299-304.

Clinical picture

Insomnia is a symptom, not a disease, so its secondary clinical manifestations vary depending on the associated condition. Some characteristics of the pathologies associated with secondary insomnia are shown in Table 2.

Table 2. Characteristic symptoms of some comorbid insomnia disorders


Taken from: Doghramji K. The Evaluation and Management of Insomnia. Clin Chest Med 2010;31:327-39.

When older patients face insomnia, the clinical consequences include fatigue, mood disturbance, daytime sleepiness, cognitive impairment, tension headache, diminished intellect, confusion, psychomotor retardation, irritability and an increased risk of injury, all of which can endanger quality of life, cause accidents, and create social and economic burdens for caregivers.18

Depending on when it occurs, insomnia can present as:

  1. initial insomnia (difficulty falling asleep, prolonged sleep latency)
  2. intermediate or maintenance insomnia (awakenings during sleep and difficulty falling back asleep)
  3. terminal insomnia (premature or early awakening).19

Initial insomnia is significantly more common in patients with affective disorders, anxiety disorders, personality disorders and Alzheimer's dementia. Difficulty maintaining sleep (intermediate insomnia) is common in sleep-related breathing disorders, in which nocturnal hypoxemia, dyspnea and nocturia affect the duration and quality of nighttime sleep. Terminal insomnia usually occurs with periodic limb movements during sleep, which produce frequent, very brief awakenings and result in fragmented, non-restorative sleep.20,21

When evaluating a patient with insomnia, one must remain open to possible associations with insomnia-producing pathologies (for example, gastroesophageal reflux); the interview should therefore be directed at identifying symptoms and signs that may lead to a primary diagnosis.22


Evaluation of insomnia is based primarily on a detailed clinical history, and only a small proportion of cases require sleep studies. Severity, duration, frequency and daytime sequelae must be taken into account. For sleep difficulty to be considered clinically relevant, it must be present at least three times per week. The clinical history should gather certain important data, such as environmental, family, occupational and social factors, comorbidities, events during sleep, consumption of insomnia-producing substances, characteristics of the insomnia (latency, sleep duration and number of awakenings), activities before bedtime, daytime impact, perpetuating factors and prior treatments. The bed partner should also be asked about associated symptoms and signs such as snoring, gasping or coughing (sleep-related breathing disorders), leg movements or kicking during sleep (sleep-related movement disorders), and behavior or vocalization during sleep (parasomnias), among others, since these can point to specific pathologies.23-25

The physical examination should focus on detecting risk factors, for example some related to sleep apnea (obesity, restricted airway opening, etc.), as well as on detecting comorbid medical conditions such as pulmonary, cardiac, rheumatologic, neurologic, endocrine (particularly thyroid) and gastrointestinal diseases.26

A large number of instruments evaluate different sleep variables. The choice of instrument should be based on its accessibility, the physician's experience and the patient's condition.27

At a minimum, the following should be completed:

A) A medical/psychiatric medication questionnaire (to identify comorbid diseases and medication use).

B) The Epworth Sleepiness Scale or the Pittsburgh Sleep Quality Index.

C) A two-week sleep diary, which will help identify sleep and wake times, patterns, and day-to-day variability.28

A sleep diary helps obtain current information on sleep timing, time spent in bed, nighttime awakenings, and the daily variability of these parameters. Psychological assessment is useful for ruling out psychopathological symptoms; the Hospital Anxiety and Depression Scale (HADS), the Hamilton Depression Rating Scale (HDRS), the State-Trait Anxiety Inventory (STAI) and the Anxiety Screening Questionnaire (ASQ-15) can be used for this purpose.29

The Epworth Sleepiness Scale (ESS) and the Pittsburgh Sleep Quality Index (PSQI) are the most widely used instruments today. The ESS is an inventory designed to assess an individual's level of daytime sleepiness; it adequately distinguishes between falling asleep and merely feeling tired. The PSQI assesses sleep quality over a 1-month period and can provide useful information on sleep disturbances in general.30,31
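As an illustration of how the ESS is totalled (eight situations, each rated 0 to 3, summed to a 0-24 score), a minimal sketch follows. The function names are invented for this example, and the >10 cut-off used here is a commonly cited threshold, not taken from this article.

```python
# Sketch of Epworth Sleepiness Scale scoring: eight item ratings (0-3 each)
# are summed to a total of 0-24.
def epworth_score(ratings: list) -> int:
    """Sum the eight ESS item ratings, validating the expected ranges."""
    assert len(ratings) == 8 and all(0 <= r <= 3 for r in ratings)
    return sum(ratings)


def suggests_excessive_sleepiness(score: int) -> bool:
    """Commonly cited interpretation: a total above 10 suggests
    excessive daytime sleepiness (assumed threshold, not from this text)."""
    return score > 10
```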

There are also instruments with more objective measures that, however, are not used routinely because of their high cost and limited availability.

Polysomnography is a study that assesses sleep cycles and stages by recording brain waves, muscle electrical activity, eye movements, respiratory rate, blood pressure, blood oxygen saturation and heart rate; it is useful for evaluating other concomitant sleep disorders. According to the American Academy of Sleep Medicine, it can be applied in specific cases when the diagnosis is in doubt, including suspected sleep-related breathing disorders such as obstructive sleep apnea, periodic limb movement disorders, uncertain initial diagnosis, poor response to treatment, and awakenings with violent behavior. Another objective instrument is actigraphy, a non-invasive method in which a small sensor, usually worn on the non-dominant arm, records periods of rest and activity; it is used to measure the various sleep times. Its use is controversial, although in some studies the results are similar to those obtained by polysomnography, and extending the recording beyond seven days can improve the reliability of actigraphic estimates.32


Treatment of insomnia is necessarily multifactorial and includes non-pharmacological treatment (cognitive and behavioral therapy) and pharmacological treatment (benzodiazepines, non-benzodiazepines and antidepressants).33

For short-term treatment of insomnia, various drugs can be used, bearing in mind that benzodiazepines may predispose geriatric patients to falls and accidents (Table 3).

Table 3. First-line drugs in the treatment of insomnia


Taken from: Sateia MJ, Pigeon WR. Identification and management of insomnia. Med Clin N Am 2004;88:567-96.

Non-pharmacological therapy includes sleep education, sleep hygiene, relaxation techniques, stimulus control and sleep restriction. Sleep hygiene refers to avoiding precipitating and perpetuating factors of insomnia; it involves behavioral therapy and psychosocial support for affected patients. Sleep hygiene measures include:

  • Avoid alcohol, tobacco, caffeine, heavy dinners, daytime naps and intense exercise before going to bed.
  • Remove sleep disturbances such as pets, television, excessive heat, light, and noise from outside or from the bed.
  • Keep a set bedtime and, preferably, go to bed only when sleepy.
  • Use the bed only for sleeping or sexual activity.
  • If sleep does not come within 20 minutes, get out of bed and do some relaxing activity, repeating the cycle as necessary.

Although these measures are ineffective in comorbid insomnia, they can help once the underlying pathology has been treated and the insomnia persists; in that case, perpetuating factors should also be sought.

As mentioned above, the most common cause of insomnia in the elderly is depression, so pharmacological treatment should be designed to provide short-term relief of sleep problems alongside long-term antidepressant treatment; for the latter, selective serotonin reuptake inhibitors can be used. Symptomatic treatment is also indicated in medical conditions that warrant it, and in some it may be enough to eliminate the insomnia; for example, dopaminergic agents such as pramipexole, ropinirole and levodopa are prescribed for restless legs syndrome and periodic limb movements. In recent years, melatonin has been shown to help in the treatment of elderly patients, especially those with reduced endogenous production. Other drugs that may help include L-tryptophan, valerian, kava, and antihistamines such as diphenhydramine and hydroxyzine; however, studies are needed to confirm their efficacy, and some have been associated with serious adverse effects.34,35


Insomnia is one of the most frequent disorders in older adults, with comorbid or secondary insomnia the most prevalent form in these patients; it should therefore be considered a symptom and not a disease, and its primary causes should be sought and treated. The clinical picture is broad, with symptoms such as daytime sleepiness, irritability, cognitive impairment, motor clumsiness and fatigue, among others, that affect patients' functioning. Diagnosis is based on a detailed clinical history, and only some cases require special sleep studies. Treatment of insomnia is multifactorial, including non-pharmacological treatment (cognitive and behavioral therapy) and pharmacological treatment (benzodiazepines, non-benzodiazepines and antidepressants).


  • 1 Ayalon G. Med Clin N Am 2004; 88:737–50
  • 2 López A, Lemus A, Manterola C, Ramírez J. Repercusiones médicas, sociales y económicas del insomnio. Arch Neurocien 2009; 4:266-72.
  • 3 Pascual B, Gómez S. Historia clínica básica y tipos de insomnio. Vigilia-Sueño 2006; 18:9-15.
  • 4 Alvarado R. Frecuencia del insomnio en México. Arch Neurocien 1997; 2:114-21.
  • 5 Foley DJ, Monjan AA, Brown SL, Simonsick EM, Wallace RB, Blazer DG. Sleep complaints among elderly persons: an epidemiologic study of three communities. Sleep 1995;18:425–32.
  • 6 Foley DJ, Monjan A, Simonsick EM, Wallace RB, Blazer DG. Incidence and remission of insomnia among elderly adults: an epidemiologic study of 6,800 persons over three years. Sleep 1999; 22:366–72.
  • 7 Ancoli-Israel S, Roth T. Characteristics of insomnia in the United States: results of the 1991 National Sleep Foundation Survey. Sleep 1999; 22:347–53.
  • 8 Walsh JK. Clinical and socioeconomic correlates of insomnia. J Clin Psychiatry 2004; 65:41–5.
  • 9 Paniagua MA, Paniagua EW. The Demented Elder with Insomnia. Clin Geriatr 2008; 24:69-81.
  • 10 Roth T, Roehrs T, Pies R. Insomnia: Pathophysiology and implications for treatment. Sleep Medicine Reviews 2007; 11:71–9.
  • 11 Holbrook AM. The diagnosis and management of insomnia in clinical practice: a practical evidence-based approach. Can Med Assoc J 2000; 162: 216-20.
  • 12 Cruz M, Hernández Y, Morera B, Fernández Z, Rodríguez JC. Trastornos del sueño en el adulto mayor en la comunidad. Rev Ciencias Médicas 2008; 12:1614-18.
  • 13 Foley D, Ancoli-Israel S, Britz P, Walsh J: Sleep disturbances and chronic disease in older adults: results of the 2003 National Sleep Foundation Sleep in America Survey. J Psychosom Res 2004; 56:497–502.
  • 14 Ohayon MM, Roth T. What are the contributing factors for insomnia in the general population? J Psychosomatic Res 2001; 51:745–55.
  • 15 Ancoli-Israel S. Insomnia in the elderly: a review for the primary care practitioner. Sleep 2000; 23:23–30.
  • 16 Diederich NJ, Vaillant M, Mancuso G. Progressive sleep ‘destructuring’ in Parkinson’s disease. A polysomnographic study in 46 patients. Sleep Med 2005; 6:313-8.
  • 17 Wolkove N, Elkholy O, Baltzan M, Palayew M. Sleep and aging: 1. Sleep disorders commonly found in older people. CMAJ 2007; 176:1299-304.
  • 18 Avidan AY. Sleep changes and disorders in the elderly patient. Curr Neurol Neurosci Rep 2002; 2:178–85.
  • 19 Trujillo Z. Insomnio en el paciente geriátrico. Arc Neurocien 1997; (2)2:122-7.
  • 20 Benca RM, Obermeyer WH, Thisted RA, Gillin JC. Sleep and psychiatric disorders. A metaanalysis. Arch Gen Psychiatry 1992; 49(8):651– 68.
  • 21 Factor SA, McAlarney T, Sanchez-Ramos JR, Weiner WJ. Sleep disorders and sleep effect in Parkinson’s disease. Mov Disord 1990; 5(4):280– 5.
  • 22 Del río Portilla IY. Estrés y sueño. Rev Mex Neuroci 2006; 7:15-20.
  • 23 Schutte-Rodin S, Broch L, Buysse D, Dorsey C, Sateia M. Clinical Guideline for the Evaluation and Management of Chronic Insomnia in Adults. J Clin Sleep Med 2008; 5:487-504.
  • 24 Todd Arnedt J, Conroy D, Aloia M. Evaluation of insomnia patients. Sleep Med Clin 2006; 1:319-32.
  • 25 Lichstein K, Durrence H, Taylor DJ. Quantitative criteria for insomnia. Behav Res Ther 2003; 41:427-45.
  • 26 Mai E, Buysse D. Insomnia: Prevalence, impact, pathogenesis, differential diagnosis, and evaluation. Sleep Med Clin 2008; 3:167-74.
  • 27 Littner M, Kushida C, Wise M. Standards of Practice Committee of the American Academy of Sleep Medicine. Practice parameters for clinical use of the multiple sleep latency test and the maintenance of wakefulness test. Sleep 2005; 28:113-21.
  • 28 Chesson AL, Anderson WM, Littner M. Practice parameters for the nonpharmacologic treatment of chronic insomnia. An American Academy of Sleep Medicine report. Standards of Practice Committee of the American Academy of Sleep Medicine. Sleep 1999; 22:1128-33.
  • 29 Kryger M, Roth T, Dement W. Principles and Practice of Sleep Medicine. Philadelphia:WB Saunders; 2000. p. 521-5.
  • 30 Buysse DJ, Reynolds CF, Monk TH, Bemlall SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: A new instrument for psychiatric practice and research. Psychiatry Res 1989; 28:193-213.
  • 31 Johns MW. A new method for measuring daytime sleepiness: the Epworth Sleepiness Scale. Sleep 1991; 14:540–45.
  • 32 Sivertsen B, Omvik S, Havik OE, Pallesen S, Bjorvant B, Nielsen GH. A comparison of actigraphy and polysomnography in older adults treated for chronic primary insomnia. Sleep 2006; 29:1353-8.
  • 33 Sateia MJ, Pigeon WR. Identification and management of insomnia. Med Clin N Am 2004; 88:567-96.
  • 34 Zhdanova IV, Wurtman RJ, Regan MM, Taylor JA, Shi JP, Leclair OU. Melatonin treatment for age-related insomnia. J Clin Endocrinol Metab 2001; 86:4727–30.
  • 35 Curry D, Eisenstein R, Walsh JK. Pharmacologic management of insomnia: past, present, and future. Psychiatr Clin N Am 2006; 29:871–93.


Picture of System Administrator

Integrating Big Data into Business Processes and Enterprise Systems

by System Administrator - Wednesday, 10 September 2014, 9:18 PM

Integrating Big Data into Business Processes and Enterprise Systems

In the paper, "Integrate Big Data into Your Business Processes and Enterprise Systems" you'll learn how to drive maximum value with an enterprise approach to Big Data. Topics discussed include:

  • How to ensure that your Big Data projects will drive clearly defined business value
  • The operational challenges each Big Data initiative must address
  • The importance of using an enterprise approach for Hadoop batch processing

Please read the attached whitepaper.

Picture of System Administrator

Integrating Physical Layer Management Systems into Today’s Networks

by System Administrator - Tuesday, 4 November 2014, 5:40 PM

Integrating Physical Layer Management Systems into Today’s Networks


TE Connectivity

Damon DeBenedictis has had a 17-year career at TE Connectivity, managing copper and fiber product portfolios that have led to market-changing technologies for data centers, office networks, and broadcast networks.

Physical layer management (PLM) systems provide complete visibility into the physical state of the network at any given time, but integrating such a system into a network and its business processes may seem like a complex project. Where do you start? When do you integrate PLM, and how? In this article, we’ll look at PLM and at some key considerations when integrating a PLM system into a network.

Breaking down a PLM system

A PLM system is a tool that network managers use to access and catalogue real-time status information about their physical layer networks. PLM systems bring layer 1 to the same visibility as layers 2-7 by including intelligent connectors on patch cords and intelligent ports on patch panels. The solution software reports the state of every network connection: whether or not it is connected, how much bandwidth a circuit can carry, and the type of circuit (i.e., Cat5/6 Ethernet or single- or multi-mode fiber). The PLM system also provides circuit mapping, alarming, and reporting.

Areas of consideration prior to integration

The key opportunity for implementing a PLM system arises when there is a new data center or data center expansion project. This is the time to consider PLM.

There are two basic ways to integrate a PLM system into a network:

  1. Use the PLM system’s own application and database;
  2. Use a middleware API in the PLM system to integrate its output into an existing network management system.

The decision about which route to take depends on the network manager’s tolerance for using an additional management system on top of the others he or she is already using, and whether or not it’s worth the effort to adopt a new system.

Two ways to integrate: the pros and cons of both

The advantage to using the PLM system’s own application and database is that it manages the entire physical layer, mapping circuits, issuing work orders, reserving ports for new connections, reporting on circuit and patch panel inventories, and other functions. However, using a new application may require some duplication of effort as the manager compares the PLM system’s output with the output of other management systems. In addition, the PLM application will require process changes to employee workflows as a new work order system is integrated.

With the middleware approach, the manager need not change anything about employee workflows. However, the value of the input is limited to what the target management system can accept. For example, if the management system doesn’t understand the network at the patch cord level, then patch cord status and locations will not be available to the network manager.
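The trade-off described above can be sketched in a few lines of code. This is a hypothetical translation layer, not any real PLM middleware API; every field name here is invented for illustration. The point is that the PLM system's port-level detail is reduced to whatever the target management system can model, and anything the NMS cannot represent is simply dropped at this boundary.

```javascript
// Hypothetical translation layer between PLM output and an NMS record.
// Field names are illustrative only; a real PLM middleware API will differ.
function toNmsRecord(plmPort) {
    return {
        id: plmPort.panelId + '/' + plmPort.portNumber,
        connected: plmPort.patchCordPresent,
        circuitType: plmPort.circuitType
        // Patch-cord-level detail (cord identity, far-end location) is lost
        // here if the target NMS has no concept of patch cords.
    }
}

var record = toNmsRecord({
    panelId: 'panel-A1',
    portNumber: 24,
    patchCordPresent: true,
    circuitType: 'Cat6'
})
console.log(record.id)   // panel-A1/24
```

If the NMS later gains a richer model, the mapping function is the single place to extend, which is one reason the middleware route is attractive when the existing tools are already working well.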

Choosing between the two, what’s right for you?

One key to deciding between the application and middleware approaches is to determine whether or not the existing work order and documentation systems are working well. Large carriers use existing or home-grown software tools to manage their networks. Frequently, these systems include work order management systems that automatically email work orders to the appropriate technicians. In smaller organizations, however, network documentation may be done manually in spreadsheets, and such manual data entry is error-prone and labor-intensive.

If a company has a robust work order management system and simply wants to add awareness of the physical network to its suite of tools, then integrating PLM middleware into an existing management system is the way to go. But for companies that struggle with work order management, using the PLM application will be well worth whatever changes must take place in employee workflows.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.



Picture of System Administrator

Intel® RealSense™ SDK

by System Administrator - Wednesday, 4 November 2015, 3:52 PM

Building Gesture Recognition Web Apps with Intel® RealSense™ SDK

by Jimmy Wei | Intel Corporation

In this article, we will show you how to build a web application that can detect various types of gestures using the Intel® RealSense™ SDK and front facing (F200) camera.

Editorial Note

This article is in the Product Showcase section for our sponsors at CodeProject. These reviews are intended to provide you with information on products and services that we consider useful and of value to developers.


Gesture recognition will give users of your application another innovative means for navigation and interface interaction. You will need basic knowledge of HTML, JavaScript*, and jQuery in order to complete this tutorial.

Hardware Requirements

  • 4th generation (or later) Intel® Core™ processor
  • 150 MB free hard disk space
  • 4 GB RAM
  • Intel® RealSense™ camera (F200)
  • Available USB3 port for the Intel RealSense camera (or dedicated connection for integrated camera)

Software Requirements

  • Microsoft Windows* 8.1 (or later)
  • A web browser such as Microsoft Internet Explorer*, Mozilla Firefox*, or Google Chrome*
  • The Intel RealSense Depth Camera Manager (DCM) for the F200, which includes the camera driver and service, and the Intel RealSense SDK. Go here to download components.
  • The Intel RealSense SDK Web Runtime. Currently, the best way to get this is to run one of the SDK’s JavaScript samples, which can be found in the SDK install directory. The default location is C:\Program Files (x86)\Intel\RSSDK\framework\JavaScript. The sample will detect that the web runtime is not installed, and prompt you to install it.


Please make sure that you complete the following steps before proceeding further.

  1. Plug your F200 camera into a USB3 port on your computer system.
  2. Install the DCM.
  3. Install the SDK.
  4. Install the Web Runtime.
  5. After installing the components, navigate to the location where you installed the SDK (we’ll use the default path):

C:\Program Files (x86)\Intel\RSSDK\framework\common\JavaScript

You should see a file called realsense.js. Please copy that file into a separate folder. We will be using it in this tutorial. For more information on deploying JavaScript applications using the SDK, click here.

Code Overview

For this tutorial, we will be using the sample code outlined below. This simple web application displays the names of gestures as they are detected by the camera. Please copy the entire code below into a new HTML file and save this file into the same folder as the realsense.js file. Alternatively, you can download the complete web application by clicking on the code sample link at the top of the article. We will go over the code in detail in the next section.

The Intel RealSense SDK relies heavily on the Promise object. If you are not familiar with JavaScript promises, please refer to this documentation for a quick overview and an API reference.
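For readers who want a concrete picture first, here is a minimal, self-contained sketch of the chaining pattern the SDK uses. It involves no SDK objects, just plain JavaScript promises.

```javascript
// Each .then() receives the value from the previous step and may return
// another promise; a single .catch() at the end handles a failure in any step.
function step(value) {
    return new Promise(function (resolve) {
        resolve(value)
    })
}

step(1).then(function (result) {
    return step(result + 1)        // result is 1 here
}).then(function (result) {
    return step(result * 10)       // result is 2 here
}).then(function (result) {
    console.log('final result: ' + result)   // final result: 20
}).catch(function (error) {
    console.log('error: ' + error)
})
```

The Start function later in this tutorial follows exactly this shape: each SDK call returns a promise, and one catch at the end of the chain collects errors from any stage.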

Refer to the Intel RealSense SDK documentation to find more detail about SDK functions referenced in this code sample. The SDK is online, as well as in the doc directory of your local SDK install.

<!DOCTYPE html>
<html>
<head>
    <title>RealSense Sample Gesture Detection App</title>
    <script type="text/javascript" src=""></script>
    <script type="text/javascript" src=""></script>
    <script type="text/javascript" src=""></script>
    <script type="text/javascript" src="realsense.js"></script>
    <script type="text/javascript">
        var sense, hand_module, hand_config
        var rs = intel.realsense

        function DetectPlatform() {
            rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
                if (info.nextStep == 'ready') {
                    Start()
                }
                else if (info.nextStep == 'unsupported') {
                    $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
                    $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
                }
                else if (info.nextStep == 'driver') {
                    $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
                }
                else if (info.nextStep == 'runtime') {
                    $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="">here</a> </b>')
                }
            }).catch(function (error) {
                $('#info-area').append('Error detected: ' + JSON.stringify(error))
            })
        }

        function Start() {
            rs.SenseManager.createInstance().then(function (instance) {
                sense = instance
                return rs.hand.HandModule.activate(sense)
            }).then(function (instance) {
                hand_module = instance
                hand_module.onFrameProcessed = onHandData
                return sense.init()
            }).then(function (result) {
                return hand_module.createActiveConfiguration()
            }).then(function (result) {
                hand_config = result
                hand_config.allAlerts = true
                hand_config.allGestures = true
                return hand_config.applyChanges()
            }).then(function (result) {
                return hand_config.release()
            }).then(function (result) {
                return sense.streamFrames()
            }).catch(function (error) {
                console.log(error)
            })
        }

        function onHandData(sender, data) {
            for (var g = 0; g < data.firedGestureData.length; g++) {
                $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
            }
        }
    </script>
</head>
<body onload="DetectPlatform()">
    <div id="info-area"></div>
    <div id="gesture-area"></div>
</body>
</html>

The screenshot below is what the app looks like when you run it and present different types of gestures to the camera.


Detecting the Intel® RealSense™ Camera on the System

Before we can use the camera for gesture detection, we need to see if our system is ready for capture. We use the detectPlatform function for this purpose. The function takes two parameters: the first is an array of runtimes that the application will use and the second is an array of cameras that the application will work with. We pass in ['hand'] as the first argument since we will be working with just the hand module and ['front'] as the second argument since we will only be using the F200 camera.

The function returns an info object with a nextStep property. Depending on the value that we get, we can determine if the camera is ready for usage. If it is, we call the Start function to begin gesture detection. Otherwise, we output an appropriate message based on the string we receive back from the platform.

If there were any errors during this process, we output them to the screen.

rs.SenseManager.detectPlatform(['hand'], ['front']).then(function (info) {
    if (info.nextStep == 'ready') {
        Start()
    }
    else if (info.nextStep == 'unsupported') {
        $('#info-area').append('<b> Platform is not supported for Intel(R) RealSense(TM) SDK: </b>')
        $('#info-area').append('<b> either you are missing the required camera, or your OS and browser are not supported </b>')
    }
    else if (info.nextStep == 'driver') {
        $('#info-area').append('<b> Please update your camera driver from your computer manufacturer </b>')
    }
    else if (info.nextStep == 'runtime') {
        $('#info-area').append('<b> Please download the latest web runtime to run this app, located <a href="">here</a> </b>')
    }
}).catch(function (error) {
    $('#info-area').append('Error detected: ' + JSON.stringify(error))
})

Setting Up the Camera for Gesture Detection

rs.SenseManager.createInstance().then(function (instance) {
    sense = instance
    return rs.hand.HandModule.activate(sense)
})

You need to follow a sequence of steps to set up the camera for gesture detection. First, create a new SenseManager instance and enable the camera to detect hand movement. The SenseManager is used to manage the camera pipeline.

To do this, we will call the createInstance function. The callback returns the instance that we just created, which we store in the sense variable for future use. We then call the activate function to enable the hand module, which we will need for gesture detection.

.then(function (instance) {
    hand_module = instance
    hand_module.onFrameProcessed = onHandData
    return sense.init()
})

Next, we need to save the instance of the hand tracking module that was returned by the activate function into the hand_module variable. We then assign our custom callback function, onHandData, to the onFrameProcessed property so that it is invoked whenever new frame data is available. Finally, we initialize the camera pipeline for processing by calling the init function.

.then(function (result) {
    return hand_module.createActiveConfiguration()
})

To configure the hand tracking module for gesture detection, you have to create an active configuration instance. This is done by calling the createActiveConfiguration function.

.then(function (result) {
    hand_config = result
    hand_config.allAlerts = true
    hand_config.allGestures = true
    return hand_config.applyChanges()
})

The createActiveConfiguration function returns the instance of the configuration, which is stored in the hand_config variable. We then set the allAlerts property to true to enable all alert notifications. The alert notifications give us additional details such as the frame number, timestamp, and the hand identifier that triggered the alert. We also set the allGestures property to true, which is needed for gesture detection. Finally, we call the applyChanges function to apply all parameter changes to the hand tracking module. This makes the current configuration active.

.then(function (result) {
    return hand_config.release()
})

We then call the release function to release the configuration.

.then(function (result) {
    return sense.streamFrames()
}).catch(function (error) {
    console.log(error)
})

Finally, the next sequence of functions sets up the camera to start streaming frames. When new frame data is available, the onHandData function will be invoked. If any errors were detected, we catch them and log all errors to the console.

The onHandData function

function onHandData(sender, data) {
    for (var g = 0; g < data.firedGestureData.length; g++) {
        $('#gesture-area').append(data.firedGestureData[g].name + '<br />')
    }
}

The onHandData callback is the main function where we check whether a gesture has been detected. Remember, this function is called whenever there is new hand data, and not all of that data is gesture-related. The function takes two parameters, but we use only the data parameter. If gesture data is available, we iterate through the firedGestureData array and get the gesture name from the name property. Finally, we output the gesture name into the gesture-area div, which displays it on the web page.

Note that the camera remains on and continues to capture gesture data until you close the web page.


In this tutorial, we used the Intel RealSense SDK to create a simple web application that uses the F200 camera for gesture detection. We learned how to detect whether a camera is available on the system and how to set up the camera for gesture recognition. You could modify this example by checking for a specific gesture type (e.g., thumbsup or thumbsdown) using if statements and then writing code to handle that specific use case.
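As the closing paragraph suggests, specific gestures can be singled out by name. A minimal sketch of that modification follows; note that the gesture-name strings 'thumbsup' and 'thumbsdown' are taken from the article's own example and should be verified against the hand module documentation for your SDK version.

```javascript
// Map a fired gesture name to an application action. The names
// 'thumbsup' and 'thumbsdown' follow the article's example; verify
// them against the SDK documentation for your version.
function classifyGesture(name) {
    if (name === 'thumbsup') return 'accept'
    if (name === 'thumbsdown') return 'reject'
    return 'ignore'
}

// Inside onHandData, each fired gesture could then be routed:
// for (var g = 0; g < data.firedGestureData.length; g++) {
//     var action = classifyGesture(data.firedGestureData[g].name)
//     $('#gesture-area').append(action + '<br />')
// }
console.log(classifyGesture('thumbsup'))   // accept
```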

About the Author

Jimmy Wei is a software engineer and has been with Intel Corporation for over 9 years.

Related Resources


  • No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Picture of System Administrator

Internet


by System Administrator - Thursday, 2 May 2013, 6:08 PM

The Internet, sometimes called simply "the Net," is a worldwide system of computer networks - a network of networks in which users at any one computer can, if they have permission, get information from any other computer (and sometimes talk directly to users at other computers). It was conceived by the Advanced Research Projects Agency (ARPA) of the U.S. government in 1969 and was first known as the ARPANet. The original aim was to create a network that would allow users of a research computer at one university to be able to "talk to" research computers at other universities. A side benefit of ARPANet's design was that, because messages could be routed or rerouted in more than one direction, the network could continue to function even if parts of it were destroyed in the event of a military attack or other disaster.

Today, the Internet is a public, cooperative, and self-sustaining facility accessible to hundreds of millions of people worldwide. Physically, the Internet uses a portion of the total resources of the currently existing public telecommunication networks. Technically, what distinguishes the Internet is its use of a set of protocols called TCP/IP (for Transmission Control Protocol/Internet Protocol). Two recent adaptations of Internet technology, the intranet and the extranet, also make use of the TCP/IP protocol.

For many Internet users, electronic mail (e-mail) has practically replaced the Postal Service for short written transactions. Electronic mail is the most widely used application on the Net. You can also carry on live "conversations" with other computer users, using Internet Relay Chat (IRC). More recently, Internet telephony hardware and software allow real-time voice conversations.

The most widely used part of the Internet is the World Wide Web (often abbreviated "WWW" or called "the Web"). Its outstanding feature is hypertext, a method of instant cross-referencing. In most Web sites, certain words or phrases appear in text of a different color than the rest; often this text is also underlined. When you select one of these words or phrases, you will be transferred to the site or page that is relevant to this word or phrase. Sometimes there are buttons, images, or portions of images that are "clickable." If you move the pointer over a spot on a Web site and the pointer changes into a hand, this indicates that you can click and be transferred to another site.

Using the Web, you have access to millions of pages of information. Web browsing is done with a Web browser, the most popular of which are Microsoft Internet Explorer and Netscape Navigator. The appearance of a particular Web site may vary slightly depending on the browser you use. Also, later versions of a particular browser are able to render more "bells and whistles" such as animation, virtual reality, sound, and music files, than earlier versions.

RELATED GLOSSARY TERMS: HTTP (Hypertext Transfer Protocol), static analysis (static code analysis), intranet, Windows Workflow Foundation (WF or WinWF), XAML (Extensible Application Markup Language), scripting language, GUI (graphical user interface), Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF), Windows File System (WinFS)

Picture of System Administrator

Internet de las cosas (IoT)

by System Administrator - Thursday, 5 January 2017, 6:25 PM

Internet de las cosas (IoT)

Publicado por: Margaret Rouse

The internet of things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

A thing, in the internet of things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile with built-in sensors to alert the driver when tire pressure is low, or any other natural or man-made object that can be assigned an IP address and given the ability to transfer data over a network.

IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), microservices and the internet. This convergence has helped tear down the silo walls between operational technology (OT) and information technology (IT), allowing the unstructured data generated by machines to be analyzed for insights that drive improvements.

Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first mentioned the internet of things in a presentation he made to Procter & Gamble in 1999. Here is how Ashton explains its potential:

"Today computers (and, therefore, the internet) are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the internet were captured and created by human beings: by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy, which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best."

IPv6's huge increase in address space is an important factor in the development of the internet of things. According to Steve Leibson, who identifies himself as an "occasional docent at the Computer History Museum," the address-space expansion means that we could "assign an IPV6 address to every atom on the surface of the Earth, and still have enough addresses left to do another 100+ Earths." In other words, humans could easily assign an IP address to every "thing" on the planet. The expected growth in the number of smart nodes, and in the amount of upstream data those nodes generate, is likely to raise new concerns about data privacy, data sovereignty and security.

Practical applications of IoT technology can already be found in many industries, including precision agriculture, building management, healthcare, energy and transportation. Electronics engineers and application developers working on IoT products and systems have numerous connectivity options at their disposal.

Although the concept was not named until 1999, the internet of things had been in development for decades. The first internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. Programmers could connect to the machine over the internet, check its status and determine whether or not a cold drink would be waiting for them, should they decide to make the trip to the machine.

Dr. John Barrett explains the internet of things in his TED talk:

Related terms


Picture of System Administrator

Internet of Things

by System Administrator - Tuesday, 30 April 2013, 2:47 PM

Part of the Cloud computing glossary.

The Internet of Things (IoT) is a scenario in which every thing has a unique identifier and the ability to communicate over the Internet or a similar wide-area network (WAN).

The technologies for an Internet of Things are already in place. Things, in this context, can be people, animals, servers, applications, shampoo bottles, cars, steering wheels, coffee machines, park benches or just about any other random item that comes to mind. Once something has a unique identifier, it can be tagged, assigned a uniform resource identifier (URI) and monitored over a network. The Internet of Things is an evolutionary outcome of the trend towards ubiquitous computing, a scenario in which processors are embedded in everyday objects.

Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first Internet appliance was a Coke machine at Carnegie Mellon University in the early 1980s. Programmers working several floors above the vending machine wrote a server program that tracked how long it had been since a storage column in the machine had been empty. The programmers could connect to the machine over the Internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine.

Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first mentioned the Internet of Things in a presentation he made to Procter & Gamble. Here’s how Ashton explains the potential of the Internet of Things:

“Today computers—and, therefore, the Internet—are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings—by typing, pressing a record button, taking a digital picture or scanning a bar code… The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world… If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best.”

IPv6’s huge increase in address space is another factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPV6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, we could easily assign an IP address to every thing that we wanted to monitor.
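The scale behind Leibson's remark is easy to check. A quick sketch using JavaScript BigInt arithmetic:

```javascript
// IPv6 addresses are 128 bits wide, versus 32 bits for IPv4.
var ipv6 = 2n ** 128n   // 340282366920938463463374607431768211456 addresses
var ipv4 = 2n ** 32n    // 4294967296 addresses

// IPv6 offers 2^96 times as many addresses as IPv4.
console.log((ipv6 / ipv4) === 2n ** 96n)   // true
```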

See also: cloud-oriented architecture (COA), resource-oriented architecture (ROA)

Contributor(s): Ivy Wigmore / Posted by: Margaret Rouse


Picture of System Administrator

Interoperabilidad / Interoperability (KW)

by System Administrator - Thursday, 2 May 2013, 6:48 PM

Un Sistema de Información es interoperable o no lo es.



Los Seguros de Salud exigen a sus Proveedores contenidos y mensajes estandarizados como condición mínima para el cobro de las prestaciones.

El primer paso para cumplir con estas exigencias es la utilización de códigos comunes (en general contenidos en el cabezal de los documentos y mensajes que se intercambian). HealthDesk© recomienda, antes de comenzar con el ingreso de parámetros, cumplir con este requisito.

It is impossible to address the horizontal scope of an information system for managing healthcare production without maintaining the highest possible level of abstraction. This means drawing a clear boundary between standard content (or content suggested by HealthDesk©) and content tied to the End User's business rules[1].

The first 90 days of implementing a system like this can be experienced either as utter chaos or as a smooth, natural evolution. The key lies in what we mentioned in one of the figures: the "Strategic Plan (Purpose-Mission-Vision)". If this plan stems from a professional analysis carried out before the implementation process, the product of clear conceptual leadership and a communication strategy that excludes no one, positive results will be visible very quickly.

You will have noticed how often we use the words "standard" and "interoperability", yet we have not formally defined them.

"Originally, in English, 'standard' meant flag, colours, banner; hence the Spanish 'estandarte'. The primary modern meaning that followed was 'that which is established by authority, custom, or general consent'. In this sense it is used as a synonym for norm: a set of rules and definitions that specify how to carry out a process or produce a product.

Physical interoperability is the ability of two or more systems or components to exchange information. In health information systems, the first step toward making a system interoperable is the ability to transfer a patient's information from one system to another. In general, this transfer happens through an adapted, customized interface.

The Diccionario de la Real Academia de la Lengua Española defines 'interfaz' as a word derived from the English term 'interface' (contact surface): '1. f. Comput. Physical and functional connection between two independent devices or systems.'

We could say that physical interoperability refers to the structure of a communication: getting the systems to communicate physically. Semantic interoperability carries the meaning of the communication; it is the equivalent of a dictionary. The recommended solution here is to use terminology standards such as SNOMED or LOINC, or standard clinical documents such as the Clinical Document Architecture (CDA). Without semantic interoperability, data can be exchanged, but there is no guarantee that the receiver can use it. Many of the standards available today provide both types of interoperability (physical and semantic).

Business interoperability: reaching 100% interoperability will require understanding data priorities and persistence across many varied settings (inpatient, outpatient, rehabilitation, home care, psychiatric care, emergency, pharmacies, dental care, etc.). This will require input from all parties in order to understand the business rules of every actor participating in the communication.

With true interoperability, that is, interoperability meeting all three requirements (physical, semantic, and business), data can be exchanged or integrated within an institution, across different healthcare institutions, shared with patients, and used for national-level surveillance systems such as those we will see in one of the last units of the course.

… interoperability requires standards agreed by multiple parties in order to exchange data, plus speaking the same language through dictionaries or medical terminologies that make it possible to use the exchanged data."

The End User's "business rules" need not be transmitted, only their outcome (for example: NOT AUTHORIZED).

"Standards exist today that address both types of interoperability. For practical purposes, we will organize them into six categories before explaining each in detail:

  • Data exchange and messaging: allow data-exchange transactions to flow consistently between systems or organizations because they contain the specifications or instructions needed for the structure, format, and elements of the data. Among the most common are HL7, for data such as patient demographics or encounters, and DICOM for images.
  • Terminology: provide specific coding for clinical concepts such as pathologies, problem lists, allergies, diagnoses, and medications, which can appear as free-text variants in paper records. Examples include SNOMED for clinical terms, LOINC for laboratory results, and ICD for diseases and causes of death.
  • Documents: indicate what type of information must be included in a document and where it can be found. A well-known standard in paper clinical records is the SOAP format (Subjective-Objective-Assessment-Plan). The CCR (Continuity of Care Record) is a communication standard for a patient's conditions, medications, history, and recommended care plan, meant to be shared among professionals of the healthcare team.
  • Conceptual: allow data to travel across systems without losing meaning and context. The HL7 RIM (Reference Information Model) provides the conceptual framework for describing clinical data and its surrounding context, explaining the "who, what, when, where, and how".
  • Applications: determine how business rules are implemented and how applications can interact. This includes single sign-on across several applications within the same environment, and standards for providing a comprehensive view of data across multiple non-integrated databases.
  • Architecture: define the processes involved in storing and distributing data …"
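As a hedged illustration of the first category, an HL7 v2 message is plain text: segments separated by carriage returns, fields separated by pipes. The sample message below is a simplified, invented sketch (not taken from the text), and the field positions shown follow HL7 v2 conventions:

```python
# Minimal sketch: splitting an HL7 v2 message into segments and fields.
# The message content here is illustrative only.
hl7_message = (
    "MSH|^~\\&|HIS|HOSPITAL_A|LIS|LAB_B|202401151030||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||12345^^^HOSPITAL_A||DOE^JOHN||19800101|M"
)

segments = {}
for line in hl7_message.split("\r"):
    fields = line.split("|")
    segments[fields[0]] = fields  # key each segment by its type (MSH, PID, ...)

# In HL7 v2 field numbering, MSH-9 is the message type and PID-5 the
# patient name; with a plain split on "|", those land at these indexes.
message_type = segments["MSH"][8]   # "ADT^A01" (admit/visit notification)
patient_name = segments["PID"][5]   # "DOE^JOHN"
print(message_type, patient_name)
```

Real integrations rely on an HL7 library rather than manual splitting, but the example shows why shared structure matters: both sides must agree on which position carries which datum before the content can mean anything.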

Alignment ("Lights"), Connection ("Camera"), Interoperability ("Action"): that is the name of the scientific paper TNG presented at the "III Congreso Iberoamericano de Informática Médica Normalizada" held in Uruguay in 2008 and organized by SUEIIDISS ( This paper highlights, as the ESSENTIAL first step toward interoperability, aligning your tables with the identifiers that will be used in the messages. In other words, you must use the same codes to identify the same objects (message types, document types, and attributes of persons, payers, healthcare providers or facilities, services, human resources, EHR layers, hospital structure, etc.). In many cases these identifiers are OIDs (Object Identifiers), which you must obtain from the corresponding Assignment Authority (ISO organizations or the State itself). In Uruguay, refer to SUEIIDISS.
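For a concrete picture of why aligned identifiers matter: in a CDA-style document header, a patient identifier is an OID root plus a local extension. The snippet and OID root below are invented placeholders (real roots are issued by an assigning authority such as ISO or, in Uruguay, SUEIIDISS):

```python
# Sketch: reading the patient identifier from a minimal CDA-like header.
# The OID root 2.16.840.1.113883.99999 is a made-up placeholder.
import xml.etree.ElementTree as ET

cda_header = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <recordTarget>
    <patientRole>
      <id root="2.16.840.1.113883.99999" extension="12345"/>
    </patientRole>
  </recordTarget>
</ClinicalDocument>"""

ns = {"hl7": "urn:hl7-org:v3"}
doc = ET.fromstring(cda_header)
patient_id = doc.find(".//hl7:patientRole/hl7:id", ns)

# Sender and receiver must both know that this root means, say,
# "patient IDs issued by hospital X"; only then is the extension usable.
print(patient_id.get("root"), patient_id.get("extension"))
```

This is exactly the alignment the paper insists on: unless every participating system maps the same OID root to the same registry, the extension "12345" is just a number.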

HL7 is a proprietary standard; to advertise that your management and/or messaging system is "HL7 compatible", it must be certified by an authorized HL7 representative.

The "profiles" through which you will communicate with other information systems can be certified at IHE International events called "Connectathons".

To close this introduction to the Organization Module, we must mention that the country where this system is used may have a "Habeas Data" or "Personal Data Protection" law. Please take the text of that law into account in your overall management strategy.


Intrapreneur (Intrapreneurship)

by System Administrator - Thursday, 23 April 2015, 7:54 PM

Intrapreneur (Intrapreneurship)


Introduction (KW)

by System Administrator - Thursday, 2 May 2013, 6:48 PM

With 30 years of experience in the "trenches" of the most sensitive kind of management, we propose a formidable opportunity to manage knowledge efficiently, changing forever the End User's perception of, and role in, the technology they use.

Today's technological stress has catalyzed the fact that the "final paradigm" is already among us.

This paradigm rests on the following facts:

  1. Content matters more than the technology itself.

  2. The End User's real savings will come from the ability to interoperate with the outside world, not from development costs.

  3. For the End User, the Windows-Linux or Intel/AMD debates are unimportant; the real choice will rest on which platform (hardware + software) can best manage their knowledge (or "added value").

  4. Much has been said about "neural networks", but no one has laid the concrete foundations for the End User (without programming knowledge) to be a full protagonist in building them (the only way they can truly exist).

  5. "Bottling up" knowledge is no longer an option with a future.

  6. There are still no "clusters" of open, reusable knowledge components (clusters of software components for developers already exist, but not for End Users).

  7. Practical reusability of knowledge is a unique opportunity to develop new products and services. First came the PC, then the Internet; now, KW. Whoever leads this initiative will be the new global reference. Why? Because this technology will change forever the End User's perception of information technology.

  8. Web-based management systems ("cloud computing", "virtualization") will make corporate IT departments as we know them gradually disappear, with End Users themselves introducing the logic (HKW/SKW) and the experience (DKW) into their own management systems.

  9. Creating "KW Communities" is the practical way to implement neuron clusters for every sector of activity: B2B, B2C, C2C, scientific, social networks, and any other community that generates knowledge and wants it to become proactive. In the biotechnology sector alone the impact is enormous (see Citations).

  10. It is an excellent way to put Corporate Social Responsibility into practice.

Given how the corporate software and hardware industries currently operate, a very large and bold leap would be needed to truly serve the needs of managing and expanding human talent. It should not be necessary for third parties to interpret the innovation and experience of someone with good ideas, "bottling" that knowledge into a given piece of software for their own benefit. Likewise, lack of access to knowledge, and to the relationships generated from it, should not limit the development capacity of a person, family, society, or country.


ISO 9000/9001

by System Administrator - Thursday, 9 May 2013, 12:01 AM

ISO 9000 designates a set of standards on quality and continuous quality management established by the International Organization for Standardization (ISO). They can be applied in any type of organization or activity oriented toward the production of goods or services. The standards cover the minimum required content as well as specific implementation guides, tools, and audit methods. ISO 9000 specifies how an organization operates, its quality standards, delivery times, and service levels. There are more than 20 elements in these standards that relate to how systems operate. Although implementation takes hard work, it offers numerous advantages for companies, including:

  • Monitoring of key processes.
  • Assurance of their effectiveness.
  • Keeping records of management, processes, and procedures.
  • Improved satisfaction of customers or users.
  • Continuous process improvement.
  • Fewer incidents in production or service delivery.

The ISO 9001 standard was developed by Technical Committee ISO/TC 176 of the International Organization for Standardization (ISO) and specifies the requirements for a good quality management system, which can be used for internal application by organizations, for certification, or for contractual purposes.




ISO 9001:2015

by System Administrator - Wednesday, 12 August 2015, 9:47 PM

What’s Really New in ISO 9001:2015? Knowledge Management

Helping an organization implement a strategic knowledge management program

by Arun Hariharan

Peter Drucker once said, "The most important, and indeed the truly unique, contribution of management in the 20th century was the fifty-fold increase in the productivity of the manual worker in manufacturing. The most important contribution management needs to make in the 21st century is similarly to increase the productivity of knowledge work and the knowledge worker."

The importance of knowledge management (KM) as an important element of business excellence or strategic quality programs is gaining recognition. Some years ago, Baldrige and the European Foundation for Quality Management (EFQM) added criteria related to KM to their models.

Now, ISO 9001:2015 has a new clause, 7.1.6, on organizational knowledge and its management. This clause has no equivalent in ISO 9001:2008. In fact, it seems to be the only clause that is completely new. The other clauses seem to have some equivalent in the earlier version, in letter or in spirit. (For a comparison of the two ISO 9001 versions, click here.)

The objective of this series of articles is to help organizations and their leaders, management representatives, and ISO practitioners who aspire to be certified to ISO 9001:2015, particularly with respect to clause 7.1.6.

The series is aimed at helping any organization implement a strategic knowledge management (KM) program relevant to their strategic objectives. Having a dual career as a quality professional and a KM professional gives me an opportunity to look at KM from a quality practitioner's point of view (and the other way around). It is with this dual perspective that I have tried to look at the new ISO 9001:2015 clause on KM.

In my recent book The Strategic Knowledge Management Handbook (ASQ Quality Press, 2015), I introduced the Strategic Knowledge Management Framework, which forms the subject of this article. The framework is based on my years of experience helping organizations with KM implementation. I believe that the framework contains what almost any type of organization needs in order to implement a strategic KM program with substantial and sustained results.

What is KM?

What are the differences between KM as a strategy and a technology-only approach?

Look at one definition of knowledge management: KM is an enabler to achieve an organization's objectives better and faster through an integrated set of initiatives, systems and behavioral interventions, aimed at promoting smooth flow and sharing of knowledge relevant to the organization, and the elimination of reinvention. KM seeks to facilitate the flow of knowledge from where it resides, to where it is required (that is, where it can be applied or used), to achieve the organization's objectives.

The differences between KM as a "strategy," as the above definition suggests, and a limited "technology only" approach are given below. A strategic KM Program has the following:

  • Senior-management involvement
  • Linked with broader organizational priorities
  • KM initiatives (such as knowledge-bases, communities of experts, and collaboration) are centered around predefined "mission-critical" areas
  • KM roles are clearly defined
  • Closed-loop processes for knowledge-sharing and replication in mission-critical areas—not left to choice or chance
  • Technology is an important enabler, but clearly only one component of a larger KM program

On the other hand, a technology-only approach to KM (which, unfortunately, some organizations take) is a narrow view that treats the implementation of some form of technology—usually an intranet/portal with some features of document management, storage, and collaboration—as the be-all-and-end-all of KM. In such an approach, KM is not linked to organizational priorities or employees' performance. Not surprisingly, many organizations that take the technology-only view end up getting lackluster results from KM.

The Strategic Knowledge Management Framework is designed to help you keep your organization's KM initiative strategic, and derive significant and sustained results from KM.

The Strategic Knowledge Management Framework

The Strategic Knowledge Management framework is given in Figure 1 and consists of the following elements:

1. The organization's knowledge management vision
2. KM strategy (how to achieve the KM vision)
3. Leadership/top management's role
4. People and roles
5. Culture/change management
6. KM processes
7. Measurement of KM results
8. Technology


Figure 1: Strategic Knowledge Management Framework

All eight elements of the framework must be in place, and work in coordination with each other, for a strategic KM program to deliver sustained results.

The sub-elements under each element are given in Table 1.

Strategic KM Framework Element | Sub-elements

1. KM vision

1. Broad direction
2. Link with broader organizational priorities
3. Helps to bring focus to, and sustain results from, KM initiatives

2. KM strategy
(How to achieve the vision)

In several companies, we found the 360-degree knowledge management model to be an effective strategy for achieving the organization's KM vision. This model gives employees single-window access to all knowledge and expertise, from inside and outside the organization, that is relevant to their domain. A detailed paper on the 360-degree KM model can be seen at

3. Leadership, top management's role, governance

1. Provide KM vision and direction.
2. Identify organizational priority areas and align KM efforts.
3. Define performance measures for KM.
4. Identify and put the right people in different KM roles.
5. Governance: create a structure and rhythm for regular review of progress and results.
6. Provide visibility and recognition for knowledge sharing and results from KM.

4. People, roles, structure

1. People-participation/mass-movement of knowledge-sharing and replication
2. Define roles such as knowledge manager, knowledge champion, subject matter expert, researcher, etc. Incorporate them into the organizational structure.

5. Culture/change management

1. Organization-wide culture of knowledge-sharing and "copying" of best practices
2. Uninhibited flow of knowledge/information
3. Visibly reward and recognize knowledge performance.
4. Key performance indicators/performance appraisal systems are realigned to reward knowledge sharing/replication.
5. Constant communication by senior leaders about expected culture/behavior
6. Lead by example (senior leaders must walk the talk).

6. KM processes

1. Standardized processes for knowledge-sharing, replication, and identification of best practices
2. Process for keeping content on online knowledge-sharing platforms, such as websites, accurate and updated. For websites offering e-commerce (including e-government), a proactive process must be in place to ensure that all links and transaction services are working.
3. Standard formats/templates for knowledge-sharing and replication
4. Measuring and publishing results of KM

7. Measurement of KM results

1. Measurement of results (lagging measures): Financial benefits (revenue or cost-savings), impact on customer satisfaction scores, reduction in customer complaints, improvement in internal/process-related measures, and so on.
2. Measurement of enablers (leading measures): Number of best-practices shared/published, number of best-practices replicated with business-results, employee engagement in KM (e.g., number of employees who shared knowledge, or replicated knowledge shared by other employees or from external sources, with results).

Monthly dashboards showing these results and their trend must be reviewed by senior leaders.

8. Technology

1. Portal to facilitate knowledge-exchange/virtual collaboration
2. Knowledge bases/repositories
3. Taxonomy, organized content, easy and quick search and retrieval
4. Organizational "yellow-pages"—virtual communities of experts (pool of talent)
5. Knowledge sharing/upload; workflow for review by experts and publishing
6. In the case of e-commerce and e-government websites providing transactional services and links—ensure links, payment gateways, etc. are always working

Table 1: Strategic Knowledge Management Framework—elements and sub-elements

My next article will provide a detailed explanation of each element of the strategic knowledge management framework.



ISO 9001:2015

by System Administrator - Tuesday, 17 November 2015, 6:27 PM

Let's talk about ISO 9001:2015


Nigel Croft, Chair of the ISO subcommittee that developed and revised the standard: "[ISO 9001:2015] is an evolutionary rather than a revolutionary process. [...] We are bringing ISO 9001 firmly into the 21st century. The earlier versions of ISO 9001 were quite prescriptive, with many requirements for documented procedures and records. In the 2000 and 2008 editions, we focused more on managing processes and less on documentation. [...] We have now gone a step further, and ISO 9001:2015 is even less prescriptive than its predecessor, focusing instead on performance. We have achieved this by combining the process approach with risk-based thinking, and employing the Plan-Do-Check-Act (PDCA) cycle at all levels in the organization."

Why has ISO 9001 been revised?

ISO management system standards are reviewed for effectiveness and sustainability roughly every five to eight years.

ISO 9001:2015 replaces the previous editions; certification bodies will have up to three years to migrate certificates to the new version.

ISO 9000, which establishes the concepts and language used throughout the ISO 9000 family of standards, has also been revised, and a new edition is available.

The standard has been revised in response to:

  • The growing importance of services.
  • Higher expectations from interested parties.
  • Better integration with other management system standards.
  • Increasingly complex supply chains.
  • Globalization.

There has also been a change in the structure of the standard. ISO 9001:2015, like all future management system standards, follows the new common structure for management system standards. This will help organizations with integrated management systems.

"Knowing that today's organizations will have several management standards in place, we have designed the 2015 version to be easily integrated with other management systems. The new version also provides a solid base for sector-quality standards (automotive, aerospace, medical industries, etc.) and takes into account the needs of regulators." Nigel Croft, Chair of the ISO subcommittee.

Summary of the main changes in ISO 9001

  • The standard emphasizes the process approach.
  • The standard calls for risk-based thinking.
  • There is greater flexibility in documentation.
  • The standard focuses more on interested parties.

This version of the standard is based on seven quality management principles:

ISO 9001:2008 is based on quality principles generally used by top management as a guide for quality improvement. They are defined in ISO 9000 and ISO 9004. These principles, however, have been modified in the 2015 version. The new version of the standard rests on seven principles:

  1. Customer focus.
  2. Leadership.
  3. Engagement of people.
  4. Process approach.
  5. Improvement (a merger of the former system and process approaches).
  6. Evidence-based decision making.
  7. Relationship management.

The first two principles, customer focus and leadership, have not changed since the 2008 version. The third principle, involvement of people, has been renamed engagement of people. The fourth principle, the process approach, stays the same. The fifth has been merged into the fourth, bringing the number of principles down to seven.

The process approach was already used in the 2008 version of ISO 9001 to develop, implement, and improve the effectiveness of a quality management system. ISO 9001:2015, however, adds a further perspective on the process approach and clarifies why it is essential to adopt it in every management process of the organization. Organizations are now required to determine every process needed for the quality management system and to maintain the documented information necessary to support the operation of those processes.

Risk-based thinking is also one of the main changes in the new version of ISO 9001. The standard no longer includes the concept of preventive action, but two other sets of requirements cover that concept and include requirements on risk management.

Flexibility of documentation is another important change in ISO 9001:2015. The terms "documents" and "records" have been replaced by "documented information". There are no specific requirements mandating documented procedures; nevertheless, processes should be documented to demonstrate conformity.

A stronger focus on interested parties is another change in ISO 9001:2015. The new version of the standard frequently raises the subject of interested parties, which in this context means stakeholders, both internal and external to the organization, with an interest in the quality management process. The standard requires organizations to focus not only on customer requirements but also on the requirements of other interested parties, such as employees, suppliers, and so on, who can affect the quality management system.

The new standard has ten clauses:

  • Scope.
  • Normative references.
  • Terms and definitions.
  • Context of the organization.
  • Leadership.
  • Planning.
  • Support.
  • Operation.
  • Performance evaluation.
  • Improvement.

ISO 9001:2015 - Just published!

by Maria Lazarte

The latest edition of ISO 9001, ISO's flagship quality management systems standard, has just been published. This concludes over three years of revision work by experts from nearly 95 participating and observing countries to bring the standard up to date with modern needs.

With over 1.1 million certificates issued worldwide, ISO 9001 helps organizations demonstrate to customers that they can offer products and services of consistently good quality. It also acts as a tool to streamline their processes and make them more efficient at what they do. Acting ISO Secretary-General Kevin McKinley explains: “ISO 9001 allows organizations to adapt to a changing world. It enhances an organization’s ability to satisfy its customers and provides a coherent foundation for growth and sustained success.”

The 2015 edition features important changes, which Nigel Croft, Chair of the ISO subcommittee that developed and revised the standard, refers to as an “evolutionary rather than a revolutionary” process. “We are just bringing ISO 9001 firmly into the 21st century. The earlier versions of ISO 9001 were quite prescriptive, with many requirements for documented procedures and records. In the 2000 and 2008 editions, we focused more on managing processes, and less on documentation.

“We have now gone a step further, and ISO 9001:2015 is even less prescriptive than its predecessor, focusing instead on performance. We have achieved this by combining the process approach with risk-based thinking, and employing the Plan-Do-Check-Act cycle at all levels in the organization.

“Knowing that today’s organizations will have several management standards in place, we have designed the 2015 version to be easily integrated with other management systems. The new version also provides a solid base for sector-quality standards (automotive, aerospace, medical industries, etc.), and takes into account the needs of regulators.”

As the much anticipated standard comes into being, Kevin McKinley concludes, “The world has changed, and this revision was needed to reflect this. Technology is driving increased expectations from customers and businesses. Barriers to trade have dropped due to lower tariffs, but also because of strategic instruments like International Standards. We are seeing a trend towards more complex global supply chains that demand integrated action. So organizations need to perform in new ways, and our quality management standards need to keep up with these expectations. I am confident that the 2015 edition of ISO 9001 can help them achieve this.”

The standard was developed by ISO/TC 176/SC 2, whose secretariat is held by BSI, ISO member for the UK. “This is a very important committee for ISO,” says Kevin, “one that has led the way in terms of global relevance, impact and utilization. I thank the experts for their hard effort.”

ISO 9001:2015 replaces previous editions and certification bodies will have up to three years to migrate certificates to the new version.

ISO 9000, which lays down the concepts and language used throughout the ISO 9000 family of standards, has also been revised and a new edition is available.

Learn all about the new ISO 9001:2015 in our five minute video:

The world’s quality management systems standard, ISO 9001, has been revised. Here, Kevin McKinley, Acting ISO Secretary-General, and Nigel Croft, Chair of the subcommittee that revised ISO 9001, tell you everything you need to know about the new edition of this landmark standard that enhances an organization’s ability to satisfy its customers and provides a coherent foundation for growth and sustained success.

