Sunday, 21 January 2018, 3:23 AM
Site: KW Foundation | Campus
Course: KW Foundation | Campus (KWSN | KW Foundation Social Network & Campus)
Glossary: Glosario KW | KW Glossary
R

#### R (PMI)

• Reengineering is the radical redesign of an organization's processes, especially its business processes. Rather than organizing a firm into functional specialties (like production, accounting, marketing, etc.) and considering the tasks that each function performs, complete processes -- from materials acquisition, to production, to marketing and distribution -- should be considered. The firm should be re-engineered into a series of processes.
• Resources are what is required to carry out a project's tasks. They can be people, equipment, facilities, funding, or anything else capable of definition (usually other than labour) required for the completion of a project activity.
• Risk is the probability that specific eventualities will occur.
• Risk management is a management specialism aiming to reduce different risks related to a preselected domain to the level accepted by society. It may refer to numerous types of threats caused by environment, technology, humans, organizations and politics.
• Risk register is a tool commonly used in project planning and organizational risk assessments.

------------------------------------------

• Claim: A request, demand, or assertion of rights made by a seller to a buyer, or vice versa, to be considered, compensated, or paid under the terms of a legally binding contract.
• Collect requirements: The process of defining and documenting the stakeholders' needs in order to meet the project objectives.
• Resource: Any tangible aid -- for example, a person, a tool, a supply item, or a facility -- used in carrying out a project.
• Historical records: Project documentation that can be used to predict trends, analyze feasibility, and highlight problem areas and difficulties that could arise in similar future projects.
• Risk register: The document in which the results of qualitative and quantitative risk analyses are recorded, along with the planned responses to those risks. A well-detailed document captures the identified risks and a series of data about them, so that they are kept in view and can be acted upon.
• Reengineering: The radical redesign of an organization's processes, especially its business processes. Rather than organizing a firm into functional specialties (for example: production, accounting, marketing, etc.) and considering the tasks that each function performs, the complete processes -- from materials acquisition through production, marketing, and distribution -- are reconsidered. This redesign involves a thorough rethinking of all processes and of the way the company has been operating.
• Logical relationships: A dependency between two project activities, or between a project activity and a milestone. These can be start-to-start, start-to-finish, finish-to-start, and finish-to-finish.
• Defect repair: A formally documented description of a defect or deficiency in a project component, with a recommendation either to repair the defect or to replace the component entirely.
• Project repository: An established location (whether physical or virtual) for the consistent and effective storage and retrieval of all project information, for efficient use by the project manager and the project team.
• Requirements: A statement of the detailed product objectives that describes the features, functions, and performance constraints to be delivered in the product.
• Reserve: A provision in the project plan to mitigate cost and/or schedule risk. It is often used with a modifier to provide more detail about the type of risk to be mitigated. The specific meaning of the modifier varies by application area.
• Contingency reserve: The amount of funds, budget, or time, beyond the estimate, used to reduce the risk of overruns against the project objectives to a level acceptable to the organization.
• Constraint: An impediment or limitation that influences the project plan.
• Colocation: Placing some or all of the project team members in the same physical space in order to improve the way the team works together.
• Kickoff meeting: A meeting in which the project's main stakeholders and participants are informed of the project's goals and objectives, how it will be organized, and other points, in order to support its planning, the assignment of responsibilities, and so on.
• Risk: An uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives.
• Secondary risk: A risk that arises as a direct result of implementing a risk response.
• Residual risk: An uncertain event or condition that remains after the risk responses have been carried out.
• Internal project management roles: The role assigned to the project manager or the project management team in relation to the project work.
• Critical path: The sequence of activities in a project network diagram that determines the earliest completion of the project. This path changes as the project progresses, depending on when activities finish; it is usually calculated for the whole project, although it can be calculated for just a part of it.
• Network path: Any continuous series of connected activities in a project network diagram.
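The critical path described in this glossary can be computed with a simple forward pass over a small activity network. The sketch below (in Python, with made-up activity names and durations) illustrates the idea:

```python
# Forward-pass critical path sketch. Each activity maps to
# (duration, list of predecessor activities). All names and
# durations here are hypothetical.
activities = {
    "A": (3, []),          # e.g. materials acquisition
    "B": (2, ["A"]),       # production setup
    "C": (4, ["A"]),       # production run
    "D": (1, ["B", "C"]),  # distribution
}

def critical_path(acts):
    finish, via = {}, {}   # earliest finish time; predecessor on the longest chain

    def ef(name):
        if name not in finish:
            dur, preds = acts[name]
            best = max(preds, key=lambda p: ef(p), default=None)
            finish[name] = dur + (ef(best) if best else 0)
            via[name] = best
        return finish[name]

    end = max(acts, key=ef)          # activity that finishes last
    path = []
    while end is not None:           # walk the longest chain backwards
        path.append(end)
        end = via[end]
    return list(reversed(path)), max(finish.values())

path, duration = critical_path(activities)
print(path, duration)  # ['A', 'C', 'D'] 8
```

Here the path A → C → D (3 + 4 + 1 = 8) determines the earliest completion; shortening B alone would not finish the project sooner.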

#### R (WEB SERVICES)

• RDF Site Summary (RSS) - an application of XML that describes news headlines or other Web content that is available for "feeding" (distribution or syndication) from an online publisher to Web users.
• Resource Description Framework (RDF) - a general framework for describing any Internet resource, such as a Web site and its content.
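To make the RSS idea concrete, a minimal RDF Site Summary (RSS 1.0) feed might look like the following sketch; the feed URL, channel title, and item are all invented for illustration:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <channel rdf:about="http://example.org/news.rdf">
    <title>Example News</title>
    <link>http://example.org/</link>
    <description>Headlines fed from an online publisher.</description>
    <items>
      <rdf:Seq>
        <rdf:li resource="http://example.org/item1"/>
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="http://example.org/item1">
    <title>First headline</title>
    <link>http://example.org/item1</link>
  </item>
</rdf:RDF>
```

A news reader polls this document and renders each `item` as a clickable headline, which is what "feeding" content to Web users amounts to in practice.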

### R programming language

Posted by: Margaret Rouse | Contributor(s): Ed Burns

#### The R programming language is an open source scripting language for predictive analytics and data visualization.

The initial version of R was released in 1995 to allow academic statisticians and others with sophisticated programming skills to perform complex statistical analysis of data and display the results in any of a multitude of visual graphics. The "R" name is derived from the first letter of the names of its two developers, Ross Ihaka and Robert Gentleman, who were associated with the University of Auckland at the time.

The R programming language includes functions that support linear modeling, non-linear modeling, classical statistics, classifications, clustering and more. It has remained popular in academic settings due to its robust features and the fact that it is free to download in source code form under the terms of the Free Software Foundation's GNU general public license. It compiles and runs on UNIX platforms and other systems including Linux, Windows and macOS.

The appeal of the R language has gradually spread out of academia into business settings, as many data analysts who trained on R in college prefer to continue using it rather than pick up a new tool with which they are inexperienced.

#### The R software environment

The R language programming environment is built around a standard command-line interface. Users leverage this to read data and load it to the workspace, specify commands and receive results. Commands can be anything from simple mathematical operators, including +, -, * and /, to more complicated functions that perform linear regressions and other advanced calculations.

Users can also write their own functions. The environment allows users to combine individual operations, such as joining separate data files into a single document, pulling out a single variable and running a regression on the resulting data set, into a single function that can be used over and over.

Looping functions are also popular in the R programming environment. These functions allow users to repeatedly perform some action, such as pulling out samples from a larger data set, as many times as the user wants to specify.
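The repeated-sampling pattern can be sketched as follows (again as a Python analogue; the data set, sample size, and repetition count are made up):

```python
import random

random.seed(7)                        # reproducible draws
data = list(range(1000))              # a hypothetical larger data set

# Repeatedly pull samples out of the larger data set, as an R user
# might do in a loop, recording a statistic for each draw.
means = []
for _ in range(100):                  # as many repetitions as desired
    sample = random.sample(data, 50)  # 50 observations, no replacement
    means.append(sum(sample) / len(sample))

print(len(means))  # 100
```

The resulting list of sample means could then feed a histogram or a bootstrap-style estimate, which is the kind of analysis such loops typically support.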

#### R language pros and cons

Many users of the R programming language like the fact that it is free to download, offers sophisticated data analytics capabilities and has an active online community of users they can turn to for support.

Because it's been around for many years and has been popular throughout its existence, the language is fairly mature. Users can download add-on packages that enhance the basic functionality of the language. These packages enable users to visualize data, connect to external databases, map data geographically and perform advanced statistical functions. There is also a popular user interface called RStudio, which simplifies coding in the R language.

The R language has been criticized for delivering slow analyses when applied to large data sets. This is because the language utilizes single-threaded processing, which means the basic open source version can only utilize one CPU at a time. By comparison, modern big data analytics thrives on parallel data processing, simultaneously leveraging dozens of CPUs across a cluster of servers to process large data volumes quickly.
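The contrast can be made concrete with a miniature map/reduce sketch (Python, hypothetical workload): the data set is partitioned into chunks so that several workers each process a slice, which is the shape of parallelism the basic single-threaded R runtime lacks. Threads stand in here for the separate CPUs a real cluster engine would use:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # each worker handles one slice of the data
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

# Four workers process the four chunks concurrently, then the
# partial results are combined -- a miniature map/reduce.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 499999500000
```

Big data frameworks apply this same partition-and-combine pattern across dozens of machines rather than four threads.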

In addition to its single-threaded processing limitations, the R programming environment is an in-memory application. All data objects are stored in a machine's RAM during a given session. This can limit the amount of data R is able to work on at one time.

#### R and big data

These limitations have restricted the applicability of the R language in big data applications. Instead of putting R to work in production, many enterprise users leverage it as an exploratory and investigative tool. Data scientists use R to run complicated analyses on sample data and then, after identifying a meaningful correlation or cluster in the data, put the finding into production through enterprise-scale tools.

Several software vendors have added support for the R programming language to their offerings, allowing R to gain a stronger footing in the modern big data realm. Vendors including IBM, Microsoft, Oracle, SAS Institute, TIBCO and Tableau, among others, include some level of integration between their analytics software and the R language. There are also R packages for popular open source big data platforms, including Hadoop and Spark.

### Rack Unit

Posted by: Margaret Rouse

A rack unit (abbreviated as U, less commonly seen as RU) is a unit of measurement applied to equipment racks and the servers, disk drives and other devices that they contain. One U is 1.75 inches (44.45 mm); the standard 19-inch rack is typically 42U high.

Rack servers and other hardware designed to be rack-mounted are manufactured in multiples of 1.75 inches and specified in multiples of rack units, usually 1U, 2U, 3U or 4U. Racks are designed to hold equipment of those sizes. The holes in the mounting flanges of racks are arranged in groups of three, and that three-hole grouping is also called a rack unit.

Rack unit values for equipment are maximum dimensions. In practice, devices are often made slightly smaller than the specified U value to allow a little space. A device specified as 2U, for example, may in reality measure 3.44 inches in height, rather than the full 3.5 inches (two multiples of 1.75).
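The arithmetic is simple enough to script; a minimal sketch (the function names are our own):

```python
U_INCHES = 1.75  # one rack unit, by definition

def u_to_inches(units):
    """Nominal height in inches of equipment occupying `units` U."""
    return units * U_INCHES

def u_to_mm(units):
    """Same conversion in millimetres (1U = 44.45 mm)."""
    return round(units * 44.45, 2)

print(u_to_inches(2))   # 3.5  -- a 2U server's nominal height
print(u_to_inches(42))  # 73.5 -- usable height of a full 42U rack
```

So a 42U rack offers 73.5 inches of nominal mounting space, to be filled by devices whose actual heights run slightly under their rated U values.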


### Artificial DNA could power future computers

A team of researchers at the University of Toyama in Japan, led by Masahiko Inouye, claim to have created the world's first stable artificial DNA molecules, made from synthesised nucleosides that resemble their natural counterparts.

DNA is made up of four basic building blocks, or bases, which code proteins used in cell functioning and development. While other researchers have developed DNA molecules with a few select artificial parts, the Japanese team put together four completely new artificial bases inside the framework of a DNA molecule, creating unusually stable, double-stranded structures resembling natural DNA.

The scientists say the artificial DNA acts like the real thing, and even forms right-handed duplexes with complementary artificial strands. They hope to one day use their discovery to create a new biological information storage system that functions outside the cell. Artificial DNA could be advantageously used instead of natural DNA due to its stability against naturally occurring enzymes and its structural diversity.

The unique chemistry of these artificial bases and DNA structures, coupled with their high stability, offers limitless possibilities for new biotechnology materials and applications, such as the creation of powerful DNA computers. These computers are constructed using DNA as software and enzymes as hardware, rather than traditional silicon-based components. By mixing DNA and enzymes in this way and monitoring the reactions, complex computer calculations can be performed.

DNA molecules are similar to computer hard drives in the way they save information about an individual's genes. However, they have the potential to perform calculations much faster than today's fastest man-made computers. This is because, unlike a traditional computer, calculations are performed simultaneously - similar to a parallel computing schematic - as numerous different DNA molecules attempt to test various possibilities at once.

In addition, unlike today's PCs, DNA computers require minimal or no external power sources as they run on internal energy produced during cellular reactions. There is a huge amount of potential for a computer that does not need to be plugged in; the implications this has for laptops and true mobility are endless.

Because of these reasons, scientists all over the world are looking for ways in which DNA may be integrated into a computer chip to create a biochip that will make standard computers faster and more energy efficient. DNA computers could potentially be the future of green IT.

Although the idea of artificial DNA and DNA computers may seem far-fetched, the concept is entirely plausible if one keeps an open mind: although DNA solutions may seem impossibly complex, few people actually understand how silicon-based computing works either. In addition, current systems are based on the binary system, and DNA computers would be similar in nature: they could leverage the pre-existing relationships between the four bases that are at the core of every DNA molecule.

However, the more sinister connotations of artificial DNA computing - even though unfounded - remain fixed in users' minds. Therefore, since the first concept of DNA computing came about in 1994, researchers have been trying to develop artificial versions of DNA. Since the components of artificial DNA that have been created by Inouye's team do not exist in natural DNA, it is nearly impossible for them to react together, eliminating any threat of mutation.

The discovery of artificial DNA by Inouye and the Japanese team could be vital to the furthering of DNA computing as it would allow researchers to build custom DNA structures, which are optimised for computing. Unfortunately, the current method used for constructing the DNA structures creates only short strands, which are not long enough to encode information.

The technology for building artificial DNA is still extremely new, however, and is only the first step (albeit a huge one) towards using DNA as an external information storage system. DNA computers will not be replacing today's standard PCs any time soon as there are still years of research to be conducted before it can be determined if this technology will be fruitful in computing. That said, as DNA computing becomes more high profile, it may be beneficial for hardware technology giants such as Apple, Dell, HP, IBM, Intel and Sun Microsystems to invest in research that emphasises artificial DNA and its potential applications.

Ultimately, DNA computers are still in their infancy, but, if successful, will be capable of storing much more data than a regular PC and would be considerably more energy efficient and smaller in size. Given these huge benefits, investors should not rule DNA computers out of their strategies purely because they seem too implausible. Those vendors that participate in this revolutionary research could be pioneers in the development of DNA microprocessors and computers, if and when the technology is found to be viable.

Ruchi Mallya is an analyst on Datamonitor's Public Sector Technology team, covering the life sciences. Her research focuses on the use of technology in pharmaceuticals and biotechnology.

### RASP helps apps protect themselves, but is it ready for the enterprise?

A new technology called runtime application self-protection is being touted as a next big thing in application security. But not everyone is singing its praises.

In the application economy, a perimeter defense is no longer a good offense. With the proliferation of mobile devices and cloud-based technologies, perimeters are all but disappearing, according to Joseph Feiman, an analyst with Gartner Inc. "The more we move from place to place with our mobile devices, the less reliable perimeter-based technology becomes," he said.

Firewalls and intrusion prevention systems, which enterprises spent an estimated $9.1 billion on last year, still serve a vital purpose. But, given the enterprise infrastructure's growing sprawl, CIOs should be thinking about security breadth as well as security depth and how to scale their strategies down to the applications themselves, even building in a strikingly human feature: self-awareness. A new tool for the application security toolbox known as runtime application self-protection (RASP) could help CIOs get there, but, according to one expert, it's no silver bullet.

#### Guarding the application

The security measures many CIOs have in place don't do much to safeguard actual applications, according to Feiman. Network firewalls, identity access management, intrusion detection or endpoint protection provide security at different levels, but none of them can see beyond the application layer. "Can you imagine a person who walks out of the house and into the city always surrounded by bodyguards because he has no muscles and no skills?" Feiman said. "That is a direct analogy with the application." Strip away features like perimeter firewalls, and the application is basically defenseless.

Defenseless applications leave enterprises vulnerable to external -- and internal -- threats. "High-profile security breaches illustrate the growing determination and sophistication of attackers," said Johann Schleier-Smith, CTO at if(we), a social and mobile technology company based in San Francisco. "They have also forced the industry to confront the limitations of traditional security measures."

Application security testing tools help detect flaws and weaknesses, but the tools aren't comprehensive, Feiman said during a Gartner Security and Risk Management Summit last summer. Static application security testing, for example, analyzes source, binary or byte code to uncover bugs, but only before the application is operational.
Dynamic application security testing, on the other hand, simulates attacks on the application while it's operational and analyzes the response, but only for Web applications that use HTTP, according to Gary McGraw, CTO of the software security consulting firm Cigital Inc. Even when taken together, these two technologies still can't see what happens inside the application while it's operational. And, according to Feiman's research report "Stop Protecting Your Apps; It's Time for Apps to Protect Themselves," published in September 2014, static and dynamic testing, whether accomplished with premises-based tools or purchased as a service, can be time-consuming and hard to scale as the enterprise app portfolio multiplies.

#### Is RASP the answer?

That's why Feiman is keeping an eye on a budding technology Gartner calls RASP, or runtime application self-protection. "It is the only technology that has complete insight into what's going on in the application," he said. RASP, which can be applied to Web and non-Web applications, doesn't affect the application design itself; instead, detection and protection features are added to the servers an application runs on. "Being a part of the virtual machine, RASP sees every instruction being executed, and it can see whether a set of instructions is an attack or not," he said. The technology works in two modes: It can be set to diagnostic mode to sound an alarm, or it can be set to self-protection mode to "stop an execution that would lead to a malicious exploit," Feiman said.

The technology is offered by a handful of vendors. Many, such as Waratek, founded in 2009, are new to the market, but CIOs will recognize at least one vendor getting into the RASP game: Hewlett-Packard. Currently, RASP technology is built for the two popular application runtimes: the Java virtual machine and the .NET Common Language Runtime. Additional implementations are expected to be rolled out as the technology matures.
While Feiman pointed to the technology's "unmatched accuracy," he did note a couple of challenges: The technology is language dependent, which means it will have to be implemented separately for the Java virtual machine versus the .NET CLR. And because RASP sits on the application server, it uses CPUs. "Emerging RASP vendors report 2% to 3% of performance overhead, and some other evidence reports 10% or more," Feiman wrote in "Runtime Application Self-Protection: Technical Capabilities," published in 2012.

#### Is it ready for primetime?

Not everyone is ready to endorse RASP. "I don't think it's ready for primetime," said Cigital's McGraw. RASP isn't a bad idea in principle, he said, "but in practice, it's only worked for one or two weak categories of bugs." The statement was echoed by if(we)'s Schleier-Smith: "What remains to be seen is whether the value RASP brings beyond Web application firewalls and other established technologies offsets the potential additional complexity," he said.

CIOs may be better off creating an inventory of applications segmented by type -- mobile, cloud-based, Web-facing -- "and choose the [security] technology stack most appropriate for the types of applications found in their portfolio," McGraw said. Even Feiman stressed that CIOs need to find a use case for the technology and consider how aggressive in general the organization is when adopting emerging technologies. For more conservative organizations, investing in RASP could still be two to five years out, he said.

To strengthen application security right now, McGraw urged CIOs to remember the power of static testing, which works on all kinds of software. And he suggested they investigate how thoroughly tools such as static and dynamic testing are being utilized by their staff. "The security people are not really testing people," he said, referring to software developers.
"So when they first applied dynamic testing to security, nobody bothered to check how much of the code was actually tested. And the answer was: Not very much." An even better strategy: Rather than place too much emphasis on RASP or SAST or DAST, application security should start with application design. "Half of software security issues are design problems and not silly little bugs," McGraw said.

Let us know what you think of the story; email Nicole Laskowski, senior news writer, or find her on Twitter @TT_Nicole. Link: http://searchcio.techtarget.com

### Real-Time Analytics

Real-time analytics is the use of, or the capacity to use, data and related resources as soon as the data enters the system. The adjective real-time refers to a level of computer responsiveness that a user senses as immediate or nearly immediate.

Technologies that support real-time analytics include:

• Processing in memory (PIM) -- a chip architecture in which the processor is integrated into a memory chip to reduce latency.
• In-database analytics -- a technology that allows data processing to be conducted within the database by building analytic logic into the database itself.
• Data warehouse appliances -- combination hardware and software products designed specifically for analytical processing. An appliance allows the purchaser to deploy a high-performance data warehouse right out of the box.
• In-memory analytics -- an approach to querying data when it resides in random access memory (RAM), as opposed to querying data that is stored on physical disks.
• Massively parallel processing (MPP) -- the coordinated processing of a program by multiple processors that work on different parts of the program, with each processor using its own operating system and memory.
#### Applications of real-time analytics

In CRM (customer relationship management), real-time analytics can provide up-to-the-minute information about an enterprise's customers and present it so that quicker and more accurate business decisions can be made -- perhaps even within the time span of a customer interaction. In a data warehouse context, real-time analytics supports unpredictable, ad hoc queries against large data sets. Another application is in scientific analysis, such as the tracking of a hurricane's path, intensity and wind field, with the intent of predicting these parameters hours or days in advance.

### Real-World Time Management

by Roy Alexander and Michael S. Dobson

Most of us dream about having a few extra hours in our day for taking care of business, relaxing, or engaging in the activities we most enjoy. But how can we make the most of our time when it seems as though there aren't enough hours in the day? This instructive guide to time management is full of tips, techniques, and commonsense advice that will make anyone more productive. In this newly updated edition of Real-World Time Management, Michael Dobson includes invaluable tips on setting priorities, tricks for staying on track, keeping a closed-door policy, avoiding interrupters, and techniques for reducing stress through time management.
Readers will also learn how to handle distractions, stop procrastinating, delegate tasks, deal with meetings, and manage time effectively while traveling. Instructive and helpful, Real-World Time Management will help all readers organize their time -- no matter how hectic their lives may seem.

#### About the Author

Roy Alexander (New York, NY) heads his own consulting firm in New York City and is particularly noted for his sales and communications consultations in energy-related fields. Michael S. Dobson (New York, NY) is a consultant and popular seminar leader in project management, communications and personal success. He is the president of his own consulting firm whose clients include Calvin Klein Cosmetics and the Department of Health and Human Services. He is the author of several books, including Managing Up (978-0-8144-7042-8). Please read the attached eBook.

### Recovery Time Objective (RTO)

Posted by: Margaret Rouse

The recovery time objective (RTO) is the maximum tolerable length of time that a computer, system, network, or application can be down after a failure or disaster occurs. The RTO is a function of the extent to which the interruption disrupts normal operations and the amount of revenue lost per unit time as a result of the disaster. These factors in turn depend on the affected equipment and application(s). An RTO is measured in seconds, minutes, hours, or days and is an important consideration in disaster recovery planning (DRP).

Numerous studies have been conducted in an attempt to determine the cost of downtime for various applications in enterprise operations. These studies indicate that the cost depends on long-term and intangible effects as well as on immediate, short-term, or tangible factors. Once the RTO for an application has been defined, administrators can decide which disaster recovery technologies are best suited to the situation.
For example, if the RTO for a given application is one hour, redundant data backup on external hard drives may be the best solution. If the RTO is five days, then tape, recordable compact disc (CD-R) or offsite storage on a remote Web server may be more practical.

### Virtual private network (VPN)

Posted by: Margaret Rouse

A virtual private network (VPN) is a technology that creates an encrypted connection over a less secure network. The benefit of using a secure VPN is that it guarantees the appropriate level of security for the connected systems when the underlying network infrastructure alone cannot provide it. The justification for using VPN access instead of a private network usually comes down to cost and feasibility: it is either not feasible to have a private network -- for a traveling sales representative, for example -- or it is too costly to do so. The most common types of VPN are remote-access VPNs and site-to-site VPNs.

A remote-access VPN uses a public telecommunications infrastructure such as the internet to give remote users secure access to their organization's network. This is especially important when employees use a public Wi-Fi hotspot or other avenues to reach the internet and connect to their corporate network. A VPN client on the remote user's computer or mobile device connects to a VPN gateway on the organization's network. The gateway typically requires the device to authenticate its identity. It then creates a network link back to the device that allows it to reach internal network resources -- for example, file servers, printers and intranets -- as if it were locally on that network.
A remote-access VPN typically relies on either IPsec or Secure Sockets Layer (SSL) to secure the connection, although SSL VPNs are often focused on supplying secure access to a single application rather than to the entire internal network. Some VPNs provide Layer 2 access to the target network; these require a tunneling protocol such as PPTP or L2TP running across the base IPsec connection.

A site-to-site VPN uses a gateway device to connect an entire network in one location to a network in another location, typically a small branch office connecting to a data center. End-node devices at the remote location do not need VPN clients because the gateway handles the connection. Most site-to-site VPNs connecting over the internet use IPsec. It is also common to use carrier MPLS clouds, rather than the public internet, as the transport for site-to-site VPNs. Here, too, it is possible to have either Layer 3 connectivity (MPLS IP VPN) or Layer 2 connectivity (Virtual Private LAN Service, or VPLS) running across the base transport.

VPNs can also be defined between specific computers, typically servers in separate data centers, when the security requirements for their exchanges exceed what the enterprise network can deliver. Increasingly, enterprises also use VPN connections in either remote-access or site-to-site mode to connect to resources in a public infrastructure-as-a-service environment. Newer hybrid access scenarios put the VPN gateway itself in the cloud, with a secure link from the cloud service provider into the internal network.

### Relentless Incrementalism

Posted by: Margaret Rouse

Relentless incrementalism is a process in which something substantial is built through the accumulation of small but incessant additions.
Relentless incrementalism is often recommended as an approach to accomplishing a daunting goal. A seemingly impossible objective may be achieved by steadily working towards it, perhaps by completing subtasks or sharing the work among multiple individuals. The essential components of relentless incrementalism are:

1. Getting started and accomplishing even small tasks or work segments regularly, and
2. Not stopping until the goal is achieved.

The concept of relentless incrementalism derives from economics and social policy, and is used in various areas of information technology and business management. Applied to a large effort like enterprise security, for example, the approach helps businesses start at a fundamental level and build on the initial efforts, decreasing their vulnerability as they do so.

Relentless incrementalism is also an effective time management approach. Because it emphasizes the importance of accomplishing even small tasks regularly, it can help prevent employees from feeling overwhelmed by large projects. Agile project management, which is an iterative approach, can be considered an implementation of relentless incrementalism.
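
The two essential components reduce to a trivial loop: complete one small task at a time, and keep going until nothing remains. A hedged Ruby sketch with invented task names:

```ruby
# Relentless incrementalism as code: take small steps (component 1)
# and do not stop until the goal is achieved (component 2).
def finish_incrementally(tasks)
  done = []
  done << tasks.shift until tasks.empty?  # one small increment per pass
  done
end

p finish_incrementally(["outline", "draft", "review", "publish"])
# → ["outline", "draft", "review", "publish"]
```

The point of the illustration is that no single step is large; the result comes entirely from never skipping an iteration.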
#### Rogue Cloud (CLOUD)

### Rogue Cloud, the unauthorized cloud

According to a recent study by Symantec, a high percentage of companies had problems over the past year with employees using unauthorized cloud applications: spaces shared in the cloud that are not directly controlled by the organization, whose information may be accessed by third parties, with the added risk of possible identity theft.

By Xavier Pérez / IT-Sitio, Quality Director at Claranet

Free access to information, and its availability and ease of use from anywhere, at any time and across different devices, is desirable for any user. Who would not want it, we might ask, in a technological landscape built on immediacy and the relentless pursuit of maximum efficiency? But what happens when the individual who wants to share playlists of favourite music with colleagues through a public cloud application is also an employee who decides to use that same space to share information belonging to the company they work for, without an appropriate level of security and without the company's approval and authorization?
The information-security gap this opens is significant, not only because of the high risk of using unmanaged spaces that are neither controlled by nor integrated into the corporate structure, but also because they lie beyond the organization's sphere of authority. This situation, also known as 'rogue cloud', usually escapes the notice of the team responsible for security, so the risk exposure it entails can persist indefinitely.

Information is one of any company's most important assets, but its ease of use can be counterproductive without proper security management that guarantees its protection and without adequate security measures to which every user should be attentive. This is where the concept of the 'rogue cloud', or unauthorized cloud, becomes especially relevant. Such unauthorized cloud spaces are, in short, information systems that are not integrated into the company's infrastructure, so the use made of them, and of the data shared there, lacks the proper control and authorization. This brings us back, once again, to treating information security management as a concern that encompasses the entire organization.
Good policies are not enough if their correct application is not ensured, nor are restrictive measures if users, in their personal sphere and taking advantage of a misguided notion of availability, end up misusing the information they handle, even involuntarily, in ways that put its integrity and/or confidentiality at risk. That is why investment by companies in training and awareness programmes for their employees, on the one hand, and in the adoption of systems and resources appropriate to and tailored to their needs, on the other, away from overly exposed public solutions, is the best way to promote their security and to foster a good corporate reputation in this area.

#### Ruby Developer's Resume

### Why Ruby Is the Crown Jewel in a Developer's Resume

Demand for Ruby on Rails talent continues at a steady pace, and developers with the skills and knowledge of this language are red-hot in a tight IT job market.

Over the last five years, demand for Ruby on Rails skills has quadrupled and is proving to be a lucrative feather in the cap of developers, according to data from PayScale, an online salary, benefits and compensation information company. The relative ratio of workers who report it as a skill critical to their role in the last year is 4.5 times higher than the ratio of workers who reported it as such five years ago, according to PayScale's data. And Ruby on Rails skills can really make a difference for developers' career satisfaction and employment outlook, says Katie Bardaro, lead economist at PayScale.

#### Ruby Makes a Difference

"When I looked at this data, when I talked to developers, one thing that stuck out for me was the number of workers who felt Ruby really made a difference in their career," says Bardaro.
"Ruby has made a significant difference for them as far as demand for their skills and their compensation; if a developer has Ruby skills, they can count on approximately $17,000 annually added to their salary, and that's not chump change," Bardaro says.


PayScale's data backs up Bardaro's assertion. The additional income a tech worker with Ruby skills receives in the second quarter of 2014 is $17,800, a significant increase over Q2 2013, when the additional income was reported as $10,200. That's much higher than for the generally skilled tech worker, who reported an increase of $4,800 in Q2 2014 for adding any other skill, according to PayScale data.

#### Ruby on Rails Use Cases Drive Demand

"Ruby's one of those languages that's easy to learn but hard to master," says Laura McGarrity, vice president of Digital Marketing Strategy at resourcing and staffing firm Mondo.


"We get a lot of opportunities for Ruby from our clients - it's probably our third-highest skill in demand after JavaScript and PHP, and those top three have remained stable over the last few years," says McGarrity.

Much of Ruby's popularity is because of the language's extensive use in building elegant, easy-to-use customer- and user-facing applications, says McGarrity, and the skillsets are in high demand for e-commerce companies, in finance and in other industries where user experience is a key metric for success.


"There just aren't enough good, skilled Ruby on Rails developers to fill these positions," she says. "Our clients in finance, in ecommerce, in marketing, they want very specific skillsets and candidates with a lot of experience - Ruby is at the forefront of the platforms they're looking to build on," she says.

#### Ante Up or Risk Losing Talent

Ruby developers know this, and can be choosy when considering job opportunities, says John Parker, CEO, Enfocus Solutions. Organizations with a demand for Ruby on Rails talent need to make sure they have adequate compensation and benefits in place to attract this talent, or they'll go elsewhere, he says.

"It's hard to attract and hire Ruby developers because the demand puts their skills at a premium," Parker says. "There are enough opportunities available that they can be picky about where they go and what the environment and compensation is like. You really have to make sure you're willing to go the distance to get them on board, because it's certain they'll have other options," he says.

#### Ruby on Rails

Ruby on Rails, sometimes known as "RoR" or just "Rails," is an open source framework for Web development in Ruby, an object-oriented programming (OOP) language similar to Perl and Python.
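
To give a flavour of the object-oriented style Ruby shares with those languages, here is a minimal, self-contained class (the names are invented for the example and have nothing to do with Rails itself):

```ruby
# Everything in Ruby is an object; a class bundles state and behaviour.
class Greeter
  def initialize(name)
    @name = name          # instance state lives in @-prefixed variables
  end

  def greet
    "Hello, #{@name}!"    # string interpolation; last expression is returned
  end
end

puts Greeter.new("Rails").greet  # → Hello, Rails!
```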

The principal difference between Ruby on Rails and other development frameworks lies in the speed and ease of use that developers working within the environment enjoy. Changes made to applications are immediately applied, avoiding the time-consuming steps normally associated with the web development cycle. According to David Geary, a Java expert, the Ruby-based framework is five to 10 times faster than comparable Java-based frameworks. In a blog posting, Geary predicted that Rails would be widely adopted in the near future.

Rails is made up of several components, beyond Ruby itself, including:

• Active Record, an object-relational mapping layer
• Action Pack, a manager of controller and view functions
• Action Mailer, a handler of email
• Action Web Services
• Prototype, an implementer of drag-and-drop and Ajax functionality
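
To give a flavour of the object-relational mapping idea behind Active Record, here is a deliberately toy, in-memory sketch. It is not the real Active Record API, which backs models with a database and generates SQL; it only mirrors the pattern of creating and querying rows through a model class:

```ruby
# Toy ORM sketch: a class-level "table" of rows plus a finder method,
# echoing Active Record's Model.create / Model.where pattern.
class Record
  @@rows = []  # stands in for a database table

  def self.create(attrs)
    @@rows << attrs
    attrs
  end

  def self.where(conditions)
    # keep only rows matching every key/value pair in the conditions
    @@rows.select { |row| conditions.all? { |k, v| row[k] == v } }
  end
end

Record.create(name: "Ada", admin: true)
Record.create(name: "Bob", admin: false)
p Record.where(admin: true)  # → the Ada row only
```

In real Active Record the equivalent `where` call would translate the conditions hash into a SQL `SELECT` against the model's table.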

Rails can run on most Web servers that support CGI. The framework also supports MySQL, PostgreSQL, SQLite, SQL Server, DB2 and Oracle. Rails is also an MVC (model, view, controller) framework where all layers are provided by Rails, as opposed to relying on other, additional frameworks to achieve full MVC support. Invented by David Heinemeier Hansson, Ruby on Rails has been developed as an open-source project, with distributions available through rubyonrails.org.

Contributor(s): Alexander B. Howard
This was last updated in April 2006
Posted by: Margaret Rouse