KW Glossary
Ontology Design
R programming language
The R programming language is an open source scripting language for predictive analytics and data visualization.
The initial version of R was released in 1995 to allow academic statisticians and others with sophisticated programming skills to perform complex statistical analysis and display the results in any of a multitude of visual graphics. The "R" name is derived from the first letter of the names of its two developers, Ross Ihaka and Robert Gentleman, who were associated with the University of Auckland at the time.
The R programming language includes functions that support linear modeling, non-linear modeling, classical statistics, classification, clustering and more. It has remained popular in academic settings due to its robust features and the fact that it is free to download in source code form under the terms of the Free Software Foundation's GNU General Public License. It compiles and runs on UNIX platforms and other systems, including Linux, Windows and macOS.
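These capabilities are available without any add-on packages. As a minimal sketch, the following session uses two data sets that ship with R (sleep and iris) to run a classical two-sample test and a k-means clustering:

    # Classical statistics: a two-sample t-test on the built-in sleep data
    t.test(extra ~ group, data = sleep)

    # Clustering: k-means with three clusters on the iris measurements
    kmeans(iris[, 1:4], centers = 3)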
The appeal of the R language has gradually spread out of academia into business settings, as many data analysts who trained on R in college prefer to continue using it rather than pick up a new tool with which they are inexperienced.
The R software environment
The R language programming environment is built around a standard command-line interface. Users employ it to read data and load it into the workspace, specify commands and receive results. Commands can be anything from simple mathematical operators, including +, -, * and /, to more complicated functions that perform linear regressions and other advanced calculations.
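A minimal console session might look like the following sketch; mtcars is a small data set bundled with R, and lm() is the base function for fitting linear models:

    > 2 + 3 * 4                               # simple arithmetic
    [1] 14
    > fit <- lm(mpg ~ wt + hp, data = mtcars) # fit a linear regression
    > summary(fit)                            # coefficients, R-squared, diagnostics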
Users can also write their own functions. The environment allows users to combine individual operations, such as joining separate data files into a single document, pulling out a single variable and running a regression on the resulting data set, into a single function that can be used over and over.
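For example, a user-defined function along these lines (the argument and column names are purely illustrative) could join two data files and fit a regression in one reusable step:

    # Merge two data frames on a shared key column, then regress
    # the column named by 'y' on the column named by 'x'
    merge_and_fit <- function(a, b, key, y, x) {
      combined <- merge(a, b, by = key)                  # join the data sets
      lm(reformulate(x, response = y), data = combined)  # run the regression
    }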
Looping functions are also popular in the R programming environment. These functions allow users to repeatedly perform some action, such as pulling samples out of a larger data set, as many times as they specify.
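The replicate() function in base R is one such looping construct. This sketch repeatedly draws 50-row samples from the built-in mtcars data set and records the mean fuel economy of each sample:

    # Draw 1,000 random samples of 50 rows and keep each sample's mean mpg
    means <- replicate(1000, {
      rows <- sample(nrow(mtcars), 50, replace = TRUE)
      mean(mtcars$mpg[rows])
    })
    summary(means)   # distribution of the resampled means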
R language pros and cons
Many users of the R programming language like the fact that it is free to download, offers sophisticated data analytics capabilities and has an active online community of users they can turn to for support.
Because it's been around for many years and has been popular throughout its existence, the language is fairly mature. Users can download add-on packages that enhance the basic functionality of the language. These packages enable users to visualize data, connect to external databases, map data geographically and perform advanced statistical functions. There is also a popular integrated development environment called RStudio, which simplifies coding in the R language.
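Installing and using such a package takes only a few lines; ggplot2, shown in this sketch, is one widely used visualization package from the CRAN repository:

    install.packages("ggplot2")                  # one-time download from CRAN
    library(ggplot2)                             # load it for this session
    ggplot(mtcars, aes(wt, mpg)) + geom_point()  # scatter plot of weight vs. mpg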
The R language has been criticized for delivering slow analyses when applied to large data sets. This is because the language utilizes single-threaded processing, which means the basic open source version can only utilize one CPU at a time. By comparison, modern big data analytics thrives on parallel data processing, simultaneously leveraging dozens of CPUs across a cluster of servers to process large data volumes quickly.
In addition to its single-threaded processing limitations, the R programming environment is an in-memory application. All data objects are stored in a machine's RAM during a given session. This can limit the amount of data R is able to work on at one time.
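One partial workaround is the parallel package that ships with base R, which can spread independent tasks across local CPU cores, as in the sketch below, although it does not change the in-memory constraint:

    library(parallel)                     # bundled with base R
    cl <- makeCluster(detectCores() - 1)  # one worker per spare core
    # run 100 independent simulations across the workers
    res <- parLapply(cl, 1:100, function(i) mean(rnorm(1e5)))
    stopCluster(cl)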
R and big data
These limitations have restricted the use of the R language in big data applications. Instead of putting R to work in production, many enterprise users employ R as an exploratory and investigative tool. Data scientists will use R to run complicated analyses on sample data and then, after identifying a meaningful correlation or cluster in the data, put the finding into production through enterprise-scale tools.
Several software vendors have added support for the R programming language to their offerings, allowing R to gain a stronger footing in the modern big data realm. Vendors including IBM, Microsoft, Oracle, SAS Institute, TIBCO and Tableau, among others, include some level of integration between their analytics software and the R language. There are also R packages for popular open source big data platforms, including Hadoop and Spark.
A rack unit (abbreviated as U, less commonly seen as RU) is a unit of measurement applied to equipment racks and the servers, disk drives and other devices that they contain. One U is 1.75 inches (44.45 mm); the standard 19-inch-wide rack is typically 42U high.
Rack servers and other hardware designed to be rack-mounted are manufactured in multiples of 1.75 inches and specified in multiples of rack units, usually 1U, 2U, 3U or 4U. Racks are designed to hold equipment of those sizes. The holes in the mounting flanges of racks are arranged in groups of three, and that three-hole grouping is also called a rack unit.
Rack unit figures are maximum dimensions. In practice, devices are often made slightly smaller than the specified U value to allow a little space. A device specified as 2U, for example, may in reality measure 3.44 inches in height rather than the full 3.5 inches (two multiples of 1.75).
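The arithmetic is straightforward; here is a short worked example (in R, as in the examples earlier in this glossary):

    u_to_inches <- function(u) u * 1.75   # 1U = 1.75 inches
    u_to_inches(2)    # 3.5  -- nominal height of a 2U server
    u_to_inches(42)   # 73.5 -- usable height of a full 42U rack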
Radical Computer Rethink (DNA)
DNA offers radical computer rethink
A team of researchers at the University of Toyama in Japan, led by Masahiko Inouye, claim to have created the world's first stable artificial DNA molecules, made from synthesised nucleosides that resemble their natural counterparts.
DNA is made up of four basic building blocks, or bases, which code for the proteins used in cell functioning and development. While other researchers have developed DNA molecules with a few select artificial parts, the Japanese team put together four completely new artificial bases inside the framework of a DNA molecule, creating unusually stable, double-stranded structures resembling natural DNA.
The scientists say the artificial DNA acts like the real thing, and even forms right-handed duplexes with complementary artificial strands. They hope to one day use their discovery to create a new biological information storage system that functions outside the cell. Artificial DNA could be advantageously used instead of natural DNA due to its stability against naturally occurring enzymes and its structural diversity.
The unique chemistry of these artificial bases and DNA structures, coupled with their high stability, offers limitless possibilities for new biotechnology materials and applications, such as the creation of powerful DNA computers. These computers are constructed using DNA as software and enzymes as hardware, rather than traditional silicon-based components. By mixing DNA and enzymes in this way and monitoring the reactions, complex computer calculations can be performed.
DNA molecules are similar to computer hard drives in the way they save information about an individual's genes. However, they have the potential to perform calculations much faster than today's fastest man-made computers. This is because, unlike a traditional computer, calculations are performed simultaneously - similar to a parallel computing schematic - as numerous different DNA molecules attempt to test various possibilities at once.
In addition, unlike today's PCs, DNA computers require minimal or no external power sources, as they run on internal energy produced during cellular reactions. There is huge potential in a computer that does not need to be plugged in; the implications for laptops and true mobility are endless.
For these reasons, scientists all over the world are looking for ways in which DNA may be integrated into a computer chip to create a biochip that will make standard computers faster and more energy efficient. DNA computers could potentially be the future of green IT.
Although the idea of artificial DNA and DNA computers may seem far-fetched, the concept is entirely plausible if one keeps an open mind: although DNA solutions may seem impossibly complex, few people actually understand how silicon-based computing works either. In addition, current systems are based on the binary system, and DNA computers would be similar in nature: they could leverage the pre-existing relationships between the four bases that are the core of every DNA molecule.
However, the more sinister connotations of DNA computing - even though unfounded - remain fixed in users' minds. This is one reason why, since the concept of DNA computing first emerged in 1994, researchers have been trying to develop artificial versions of DNA. Because the components of the artificial DNA created by Inouye's team do not exist in natural DNA, it is nearly impossible for the two to react together, eliminating any threat of mutation.
The discovery of artificial DNA by Inouye and the Japanese team could be vital to the furthering of DNA computing as it would allow researchers to build custom DNA structures, which are optimised for computing. Unfortunately, the current method used for constructing the DNA structures creates only short strands, which are not long enough to encode information.
The technology for building artificial DNA is still extremely new, however, and is only the first step (albeit a huge one) towards using DNA as an external information storage system. DNA computers will not be replacing today's standard PCs any time soon as there are still years of research to be conducted before it can be determined if this technology will be fruitful in computing. That said, as DNA computing becomes more high profile, it may be beneficial for hardware technology giants such as Apple, Dell, HP, IBM, Intel and Sun Microsystems to invest in research that emphasises artificial DNA and its potential applications.
Ultimately, DNA computers are still in their infancy, but, if successful, will be capable of storing much more data than a regular PC and would be considerably more energy efficient and smaller in size. Given these huge benefits, investors should not rule DNA computers out of their strategies purely because they seem too implausible. Those vendors that participate in this revolutionary research could be pioneers in the development of DNA microprocessors and computers, if and when the technology is found to be viable.
Ruchi Mallya is an analyst on Datamonitor's Public Sector Technology team, covering the life sciences. Her research focuses on the use of technology in pharmaceuticals and biotechnology.
RASP helps apps protect themselves, but is it ready for the enterprise?
A new technology called runtime application self-protection is being touted as a next big thing in application security. But not everyone is singing its praises.
In the application economy, a perimeter defense is no longer a good offense. With the proliferation of mobile devices and cloud-based technologies, perimeters are all but disappearing, according to Joseph Feiman, an analyst with Gartner Inc. "The more we move from place to place with our mobile devices, the less reliable perimeter-based technology becomes," he said.
Firewalls and intrusion prevention systems, which enterprises spent an estimated $9.1 billion on last year, still serve a vital purpose. But, given the enterprise infrastructure's growing sprawl, CIOs should be thinking about security breadth as well as security depth and how to scale their strategies down to the applications themselves, even building in a strikingly human feature: self-awareness.
A new tool for the application security toolbox known as runtime application self-protection (RASP) could help CIOs get there, but, according to one expert, it's no silver bullet.
Guarding the application
The security measures many CIOs have in place don't do much to safeguard actual applications, according to Feiman. Network firewalls, identity and access management, intrusion detection and endpoint protection provide security at different levels, but none of them can see beyond the application layer. "Can you imagine a person who walks out of the house and into the city always surrounded by bodyguards because he has no muscles and no skills?" Feiman said. "That is a direct analogy with the application." Strip away features like perimeter firewalls, and the application is basically defenseless.
Defenseless applications leave enterprises vulnerable to external -- and internal -- threats. "High-profile security breaches illustrate the growing determination and sophistication of attackers," said Johann Schleier-Smith, CTO at if(we), a social and mobile technology company based in San Francisco. "They have also forced the industry to confront the limitations of traditional security measures."
Application security testing tools help detect flaws and weaknesses, but the tools aren't comprehensive, Feiman said during a Gartner Security and Risk Management Summit last summer. Static application security testing, for example, analyzes source, binary or byte code to uncover bugs but only before the application is operational. Dynamic application security testing, on the other hand, simulates attacks on the application while it's operational and analyzes the response but only for Web applications that use HTTP, according to Gary McGraw, CTO of the software security consulting firm Cigital Inc.
Even when taken together, these two technologies still can't see what happens inside the application while it's operational. And, according to Feiman's research report "Stop Protecting Your Apps; It's Time for Apps to Protect Themselves," published in September 2014, static and dynamic testing, whether accomplished with premises-based tools or purchased as a service, can be time-consuming and hard to scale as the enterprise app portfolio multiplies.
Is RASP the answer?
That's why Feiman is keeping an eye on a budding technology Gartner calls RASP or runtime application self-protection. "It is the only technology that has complete insight into what's going on in the application," he said.
RASP, which can be applied to Web and non-Web applications, doesn't affect the application design itself; instead, detection and protection features are added to the servers an application runs on. "Being a part of the virtual machine, RASP sees every instruction being executed, and it can see whether a set of instructions is an attack or not," he said. The technology works in two modes: It can be set to diagnostic mode to sound an alarm; or it can be set to self-protection mode to "stop an execution that would lead to a malicious exploit," Feiman said.
The technology is offered by a handful of vendors. Many, such as Waratek, founded in 2009, are new to the market, but CIOs will recognize at least one vendor getting into the RASP game: Hewlett-Packard. Currently, RASP technology is built for two popular runtimes: the Java virtual machine and the .NET Common Language Runtime. Additional implementations are expected to be rolled out as the technology matures.
While Feiman pointed to the technology's "unmatched accuracy," he did note a couple of challenges: The technology is language-dependent, which means it has to be implemented separately for the Java virtual machine and for .NET CLR. And because RASP sits on the application server, it consumes CPU resources. "Emerging RASP vendors report 2% to 3% of performance overhead, and some other evidence reports 10% or more," Feiman wrote in "Runtime Application Self-Protection: Technical Capabilities," published in 2012.
Is it ready for primetime?
Not everyone is ready to endorse RASP. "I don't think it's ready for primetime," said Cigital's McGraw. RASP isn't a bad idea in principle, he said, "but in practice, it's only worked for one or two weak categories of bugs."
The statement was echoed by if(we)'s Schleier-Smith: "What remains to be seen is whether the value RASP brings beyond Web application firewalls and other established technologies offsets the potential additional complexity," he said.
CIOs may be better off creating an inventory of applications segmented by type -- mobile, cloud-based, Web-facing. "And choose the [security] technology stack most appropriate for the types of applications found in their portfolio," McGraw said.
Even Feiman stressed that CIOs need to find a use case for the technology and consider how aggressive in general the organization is when adopting emerging technologies. For more conservative organizations, investing in RASP could still be two to five years out, he said.
To strengthen application security right now, McGraw urged CIOs to remember the power of static testing, which works on all kinds of software. And he suggested they investigate how thoroughly tools such as static and dynamic testing are being utilized by their staff. "The security people are not really testing people," he said, referring to software developers. "So when they first applied dynamic testing to security, nobody bothered to check how much of the code was actually tested. And the answer was: Not very much."
An even better strategy: Rather than place too much emphasis on RASP or SAST or DAST, application security should start with application design. "Half of software security issues are design problems and not silly little bugs," McGraw said.
Real-time analytics is the use of, or the capacity to use, data and related resources as soon as the data enters the system. The adjective real-time refers to a level of computer responsiveness that a user senses as immediate or nearly immediate, or that enables a computer to keep up with some external process (for example, presenting visualizations of Web site activity as it constantly changes).
Technologies that support real-time analytics include in-memory computing, in-database analytics, data warehouse appliances and stream processing.
Applications of real-time analytics
In CRM (customer relationship management), real-time analytics can provide up-to-the-minute information about an enterprise's customers and present it so that quicker and more accurate business decisions can be made -- perhaps even within the time span of a customer interaction. In a data warehouse context, real-time analytics supports unpredictable, ad hoc queries against large data sets. Another application is in scientific analysis, such as tracking a hurricane's path, intensity and wind field, with the intent of predicting these parameters hours or days in advance.
Recovery Time Objective (RTO)
The recovery time objective (RTO) is the maximum tolerable length of time that a computer, system, network, or application can be down after a failure or disaster occurs.
The RTO is a function of the extent to which the interruption disrupts normal operations and the amount of revenue lost per unit time as a result of the disaster. These factors in turn depend on the affected equipment and application(s). An RTO is measured in seconds, minutes, hours, or days and is an important consideration in disaster recovery planning (DRP).
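As a simple worked example (the figures here are hypothetical), the revenue at risk over an outage is the loss rate multiplied by the downtime:

    # Hypothetical figures: revenue at risk if an outage runs to the full RTO
    revenue_lost_per_hour <- 50000
    rto_hours <- 4
    revenue_lost_per_hour * rto_hours   # 200000 at risk over the 4-hour window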
Numerous studies have been conducted in an attempt to determine the cost of downtime for various applications in enterprise operations. These studies indicate that the cost depends on long-term and intangible effects as well as on immediate, short-term, or tangible factors. Once the RTO for an application has been defined, administrators can decide which disaster recovery technologies are best suited to the situation. For example, if the RTO for a given application is one hour, redundant data backup on external hard drives may be the best solution. If the RTO is five days, then tape, recordable compact disk (CD-R) or offsite storage on a remote Web server may be more practical.
Virtual private network (VPN)
A virtual private network (VPN) is technology that creates an encrypted connection over a less secure network. The benefit of using a secure VPN is that it ensures the appropriate level of security for the connected systems when the underlying network infrastructure alone cannot provide it. The justification for using VPN access instead of a private network usually comes down to cost and feasibility: it is either not feasible to have a private network -- for a traveling sales representative, for example -- or it is too costly to do so. The most common types of VPN are remote-access VPNs and site-to-site VPNs.
A remote-access VPN uses a public telecommunications infrastructure such as the internet to provide remote users with secure access to their organization's network. This is especially important when employees use a public Wi-Fi hotspot or other avenues to reach the internet and connect to their corporate network. A VPN client on the remote user's computer or mobile device connects to a VPN gateway on the organization's network. The gateway typically requires the device to authenticate its identity. It then creates a network link back to the device that allows it to reach internal network resources -- file servers, printers and intranets, for example -- as if it were locally on that network.
A remote-access VPN usually relies on IPsec or Secure Sockets Layer (SSL) to secure the connection, although SSL VPNs are often focused on providing secure access to a single application rather than to the entire internal network. Some VPNs provide Layer 2 access to the target network; these require a tunneling protocol such as PPTP or L2TP running across the base IPsec connection.
A site-to-site VPN uses a gateway device to connect an entire network in one location to a network in another location, typically a small branch office connecting to a data center. End-node devices in the remote location do not need VPN clients because the gateway handles the connection. Most site-to-site VPNs connecting over the internet use IPsec. It is also common to use carrier MPLS clouds, rather than the public internet, as the site-to-site VPN transport. Here, too, it is possible to have either Layer 3 connectivity (MPLS IP VPN) or Layer 2 (Virtual Private LAN Service, or VPLS) running across the base transport.
VPNs can also be defined between specific computers, typically servers in separate data centers, when the security requirements for their exchanges exceed what the enterprise network can deliver. Increasingly, enterprises also use VPN connections in either remote-access or site-to-site mode to connect to resources in a public infrastructure-as-a-service environment. Newer hybrid-access scenarios put the VPN gateway itself in the cloud, with a secure link from the cloud service provider into the internal network.
Relentless incrementalism is a process in which something substantial is built through the accumulation of small but incessant additions.
Relentless incrementalism is often recommended as an approach to accomplishing a daunting goal. A seemingly impossible objective may be achieved by steadily working towards it, perhaps by completing subtasks or sharing the work among multiple individuals. The essential components of relentless incrementalism are: 1. Getting started and accomplishing even small tasks or work segments regularly, and 2. Not stopping until the goal is achieved.
The concept of relentless incrementalism derives from economics and social policy and is used in various areas of information technology and business management. Applied to a large effort like enterprise security, for example, the approach helps businesses start on a fundamental level and build on the initial efforts, decreasing their vulnerability as they do so.
Relentless incrementalism is also an effective time management approach. Because it emphasizes the importance of accomplishing even small tasks regularly, it can help prevent employees from feeling overwhelmed by large projects.
Agile project management, which is an iterative approach, can be considered an implementation of relentless incrementalism.
Ruby Developer's Resume
Why Ruby Is the Crown Jewel in a Developer's Resume
Demand for Ruby on Rails talent continues at a steady pace, and developers with skills and knowledge of this language are red-hot in a tight IT job market.
Over the last five years, demand for Ruby on Rails skills has quadrupled, and the language is proving to be a lucrative feather in the cap of developers, according to data from PayScale, an online salary, benefits and compensation information company.
The share of workers who reported it as a skill critical to their role in the past year is 4.5 times higher than the share who reported it as critical five years ago, according to PayScale's data. And Ruby on Rails skills can really make a difference for developers' career satisfaction and employment outlook, says Katie Bardaro, lead economist at PayScale.
Ruby Makes a Difference
"When I looked at this data, when I talked to developers, one thing that stuck out for me was the number of workers who felt Ruby really made a difference in their career," says Bardaro. "Ruby has made a significant difference for them as far as demand for their skills and their compensation; if a developer has Ruby skills, they can count on approximately $17,000 annually added to their salary, and that's not chump change," Bardaro says.
PayScale's data backs up Bardaro's assertion. The additional income a tech worker with Ruby skills receives in the second quarter of 2014 is $17,800, a significant increase over Q2 2013, when the additional income was reported as $10,200. That's much higher than the generally skilled tech worker, who reported an increase of $4,800 in Q2 2014 for adding any other skill, according to PayScale data.
Ruby on Rails Use Cases Drive Demand
"Ruby's one of those languages that's easy to learn but hard to master," says Laura McGarrity, vice president of Digital Marketing Strategy at resourcing and staffing firmMondo.
Much of Ruby's popularity is because of the language's extensive use in building elegant, easy-to-use customer- and user-facing applications, says McGarrity, and the skillsets are in high demand for e-commerce companies, in finance and in other industries where user experience is a key metric for success.
"There just aren't enough good, skilled Ruby on Rails developers to fill these positions," she says. "Our clients in finance, in ecommerce, in marketing, they want very specific skillsets and candidates with a lot of experience - Ruby is at the forefront of the platforms they're looking to build on," she says.
Ante Up or Risk Losing Talent
Ruby developers know this and can be choosy when considering job opportunities, says John Parker, CEO of Enfocus Solutions. Organizations with a demand for Ruby on Rails talent need to make sure they have adequate compensation and benefits in place to attract it, or candidates will go elsewhere, he says.
"It's hard to attract and hire Ruby developers because the demand puts their skills at a premium," Parker says. "There are enough opportunities available that they can be picky about where they go and what the environment and compensation is like. You really have to make sure you're willing to go the distance to get them on board, because it's certain they'll have other options," he says.
Ruby on Rails
The principal difference between Ruby on Rails and other development frameworks lies in the speed and ease of use that developers working within the environment enjoy. Changes made to applications are immediately applied, avoiding the time-consuming steps normally associated with the web development cycle. According to David Geary, a Java expert, the Ruby-based framework is five to 10 times faster than comparable Java-based frameworks. In a blog posting, Geary predicted that Rails would be widely adopted in the near future.
Rails is made up of several components beyond Ruby itself, including Active Record (the framework's object-relational mapping layer), Action Pack (which handles requests and renders views), Action Mailer and Active Support.
Rails can run on most Web servers that support CGI. The framework also supports MySQL, PostgreSQL, SQLite, SQL Server, DB2 and Oracle. Rails is also an MVC (model, view, controller) framework where all layers are provided by Rails, as opposed to relying on other, additional frameworks to achieve full MVC support. Invented by David Heinemeier Hansson, Ruby on Rails has been developed as an open source project, with distributions available through rubyonrails.org.
Contributor(s): Alexander B. Howard
This was last updated in April 2006