Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías




SLAs for the Cloud

by System Administrator - Thursday, 13 July 2017, 9:49 AM




Slicing the Big Data Analytics Stack

by System Administrator - Wednesday, 10 September 2014, 9:06 PM

Slicing the Big Data Analytics Stack

In this special report, we provide a deeper view into a series of technical tools and capabilities that are powering the next generation of big data analytics. From the pipes and platforms to the analytical interfaces and data management tools, we hope to help you develop a better ear for tuning out the big data noise. The goal is to empower you to make strong decisions in a noisy world of options, all of which seem to promise similar end results.

Please read the attached whitepaper


Software Asset Management: Pay Attention or Pay Up

by System Administrator - Friday, 12 September 2014, 1:17 AM

Software Asset Management: Pay Attention or Pay Up

There is a wide range of options for managing software assets, from in-house solutions to the cloud to managed services providers. Read this whitepaper to learn about:

  • Using SAM to inform software investments
  • Avoiding software fees and fines
  • What's the best approach to SAM?
  • Making the most of volume-based licensing
  • Hands-free SAM: How vendors can unburden IT leaders

 Please read the attached whitepaper.



Software Defined Networking

by System Administrator - Wednesday, 25 February 2015, 3:55 PM

Software Defined Networking (Source: Intel)

Software Defined Networking: In Search of Network Automation

by Frost & Sullivan

Software Defined Networking (SDN) is one of the hottest topics in the networking market, with more than US$250 million of venture capital invested in startups and more than US$1.5 billion in acquisitions related to this architectural shift. SDN represents a new paradigm capable of making networks more efficient, scalable, agile and dynamic through increased programmability and automation.

The benefits are reduced operating costs, a notable improvement in network performance, faster provisioning, and the promise of an open, standards-based architecture, which allows a wider choice of vendors for companies adopting SDN.

The main benefit for enterprises is easier management, particularly for those already moving toward a virtualized data center, which points to a search for SDN solutions to deploy in their data centers over the next few years. Also prominent among those seeking these solutions are companies that want to improve flexibility, agility and simplicity of management through cloud offerings.

Please read the attached whitepaper.


Software Development

by System Administrator - Tuesday, 30 April 2013, 2:56 PM

Terms related to software development, including definitions about programming and words and phrases about Scrum, Agile and waterfall methodologies.



Software Interactive eGuide

by System Administrator - Wednesday, 17 December 2014, 2:02 PM

Software Interactive eGuide

The way that enterprises procure and use software is changing rapidly. Many organizations have grown tired of traditional software licensing models, which are often complex and expensive, and are looking to cloud computing and Software-as-a-Service (SaaS) as viable alternatives. However, these new approaches pose their own sets of challenges.

In this eGuide, Computerworld, along with sister publications InfoWorld and IDG News Service, looks at recent trends and advancements in cloud and SaaS models. Read on to learn the best approaches for your organization.

Please read the attached eGuide


Source Code

by System Administrator - Friday, 14 April 2017, 4:40 PM

Source Code (Código Fuente)

by Wikipedia

The source code of a computer program (or software) is a set of lines of text containing the steps the computer must follow to execute that program.

A program's source code is written by a programmer in some programming language, but in this initial state it is not directly executable by the computer; it must first be translated into another language or into binary code so that it is easier for the machine to interpret (machine language or object code, which can be executed by the computer's hardware). This translation is performed by so-called compilers, assemblers, interpreters and other translation systems.

The term source code is also used to refer to the source code of other software elements, such as the source code of a web page, which is written in HTML markup or in JavaScript or other web programming languages, and which is later executed by the web browser to display the page when it is visited.

The area of computing devoted to creating programs, and therefore to creating their source code, is software engineering.


Source Code

Posted by: Margaret Rouse

Source code is the fundamental component of a computer program that is created by a programmer. It can be read and easily understood by a human being. When a programmer types a sequence of C language statements into Windows Notepad, for example, and saves the sequence as a text file, the text file is said to contain the source code. 

Source code and object code are sometimes referred to as the "before" and "after" versions of a compiled computer program. For script (noncompiled or interpreted) program languages, such as JavaScript, the terms source code and object code do not apply, since there is only one form of the code.

Programmers can use a text editor, a visual programming tool or an integrated development environment to create source code. In large program development environments, there are often management systems that help programmers separate and keep track of different states and levels of source code files. 

Licensing of source code

Source code can be proprietary or open, and licensing agreements often reflect this distinction.

When a user installs a software suite like Microsoft Office, for example, the source code is proprietary, and Microsoft only gives the customer access to the software's compiled executables and the associated library files that various executable files require to call program functions.

By comparison, when a user installs Apache OpenOffice, its open source software code can be downloaded and modified.

Typically, proprietary software vendors like Microsoft don't share source code with customers for two reasons: to protect intellectual property and to prevent the customer from making changes to source code in a way that might break the program or make it more vulnerable to attack. Proprietary software licenses often prohibit any attempt to discover or modify the source code.

Open source software, on the other hand, is purposely designed with the idea that source code should be made available because the collaborative effort of many developers working to enhance the software can, presumably, help make it more robust and secure. Users can freely take open source code under public licenses, such as the GNU General Public License.

Purposes of source code

Beyond providing the foundation for software creation, source code has other important purposes, as well. For example, skilled users who have access to source code can more easily customize software installations, if needed.

Meanwhile, other developers can use source code to create similar programs for other operating platforms -- a task that would be trickier without the coding instructions.

Access to source code also allows programmers to contribute to their community, either through sharing code for learning purposes or by recycling portions of it for other applications.

Organization of source code

Many different programs exist to create source code. Here is an example of the source code for a Hello World program in C language:

/* Hello World program */
#include <stdio.h>

int main(void)
{
    printf("Hello World");
    return 0;
}


Even a person with no background in programming can read the C source code above and understand that the goal of the program is to print the words "Hello World." In order to carry out the instructions, however, this source code must first be translated into a machine language that the computer's processor can understand; that is the job of a special translator program called a compiler -- in this case, a C compiler.

After programmers compile source code, the file that contains the resulting output is referred to as object code.

Object code consists of machine instructions encoded as ones and zeros and cannot be easily read or understood by humans. Object code can then be "linked" to create an executable file that runs to perform the specific program functions.
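The source-to-object distinction can be illustrated with Python's own compilation step, where `compile()` turns readable source text into a code object whose raw bytecode is not human-readable. This is an analogy only -- Python bytecode is interpreted rather than native object code -- and the names below are illustrative:

```python
import dis

# Source code: human-readable text that a person can write and inspect
source = 'print("Hello World")'

# "Compile" the source; the result is a code object, not readable text
code_obj = compile(source, filename="<example>", mode="exec")

# The raw instruction stream is a byte string, analogous to object code
print(code_obj.co_code)        # a bytes value, not human-readable

# A disassembler can decode the bytes back into mnemonic form
dis.dis(code_obj)

# Executing the compiled form runs the program
exec(code_obj)                 # prints: Hello World
```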

Source code management systems can help programmers better collaborate on source code development; for example, preventing one coder from inadvertently overwriting the work of another.

History of source code

Determining the historical start of source code is a subjective -- and elusive -- exercise. The first software was written in binary code in the 1940s, so depending on one's viewpoint, such programs may be the initial samples of source code.

One of the earliest examples of source code as we recognize it today was written by Tom Kilburn, an early pioneer in computer science. Kilburn created the first successful digital program held electronically in a computer's memory in 1948 (the software solved a mathematical equation).


Tom Kilburn's highest factor routine


In the 1950s and '60s, source code was often provided for free with software by the companies that created the programs. As growing computer companies expanded software's use, source code became more prolific and sought after. Before the internet age, computing magazines would often print source code in their pages, with readers needing to retype the code character for character for their own use. Later, floppy disks lowered the cost of sharing source code electronically, and then the internet eliminated these obstacles further.


Spark vs. Hadoop

by System Administrator - Monday, 13 March 2017, 11:22 PM

Spark vs. Hadoop: is the big data engine a replacement part?

by Jack Vaughan

How the relationship between Spark and Hadoop will play out is an open question. We asked IT professionals whether they see Spark more as a companion to Hadoop or as a competitor.

As the Hadoop distributed processing framework has evolved, it has come to include much more than its original core, which consisted of the Hadoop Distributed File System (HDFS) and the MapReduce programming environment. Among a series of new Hadoop ecosystem components, one technology has drawn particular attention: the Spark in-memory data processing engine. Spark is replacing MapReduce in a growing number of batch processing jobs on Hadoop clusters; its advocates claim it can run them up to 100 times faster.
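The MapReduce batch model that Spark is displacing can be sketched in plain Python (a toy illustration, not Hadoop's actual API): a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Spark executes the same logical pipeline but keeps intermediate results in memory rather than writing them to disk between stages, which is where its speed advantage comes from.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark replaces mapreduce", "spark runs in memory"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'spark': 2, 'replaces': 1, 'mapreduce': 1, 'runs': 1, 'in': 1, 'memory': 1}
```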

After the open source Apache Spark software became available last year, Hadoop distribution vendors rushed to add the technology -- soon to be updated in a version 1.6 release -- to their product portfolios. But while Spark is now often found in big data applications alongside HDFS and Hadoop's YARN resource manager, it can also be used as a standalone service. That is fueling a growing debate in data management circles over Spark vs. Hadoop.

Will Hadoop remain a starting point for Spark? To get a user's view on that question, our sister site SearchDataManagement asked attendees at Strata + Hadoop World 2015 in New York whether they see the Spark processing engine as a complement to Hadoop or as an alternative to it and to components such as YARN and MapReduce. Here is what some of them said on the subject of Spark vs. Hadoop.

Sridhar Alla, big data architect at cable television company Comcast: "Spark doesn't really store anything. Spark processing is replacing MapReduce and YARN, but the storage layer is going to be Hadoop for a long time."

Hakan Jonsson, data scientist for the LifeLog product team at Sony Mobile Communications: "It's a replacement. Spark is much faster than Hadoop. And from a productivity standpoint, you don't have to do the [analytical] modeling in a separate tool."

Brett Shriver, senior director of market regulation technology at the Financial Industry Regulatory Authority (FINRA): "There are four or five [surveillance] patterns in our portfolio that are performance-challenged, and they're headed toward Spark. Long term, who knows? It may be the way we go. The jury is still out."

Joe Hsy, director of cloud services platforms and tools for Cisco's WebEx unit: "I think Spark is going to replace a large part of what we use MapReduce for today. Over time, if Spark keeps expanding its functionality, it could replace MapReduce entirely."

William Theisinger, vice president of engineering at Yellow Pages publisher YP LLC: "You need to get to where the use of the technologies is predictable, and I wouldn't say that about Spark today. I'm still going to have to support MapReduce, too."

Charlie Crocker, business analytics program lead at software vendor Autodesk: "Whether you're using Hadoop or Spark, I think it's going to become a philosophical question. If you want to be revolutionary, you can say Hadoop is dead. But Hadoop is not dead."

Hadoop has something of a head start in deployments, and despite MapReduce's diminished stature, many MapReduce jobs that are already running are likely to keep doing just that: running. In addition, there has been a learning curve in putting Hadoop proof-of-concept applications into production, and Spark may well face a similar curve.

In a way, Spark's rise shows Hadoop's ability to expand beyond its original components. And the flood of new big data technologies is likely to continue, no matter how the Spark vs. Hadoop question is resolved.

Jack Vaughan is SearchDataManagement's news and site editor. Email him at, and follow him on Twitter: @sDataManagement.

Executive editor Craig Stedman contributed to this story.




Specification by Example (SBE)

by System Administrator - Tuesday, 30 December 2014, 3:21 PM

Specification by Example (SBE)

Posted by Margaret Rouse

Specification by example (SBE) is a user-driven contextual approach to defining software requirements. This approach encourages communication between a project's business owners and the software development team while also aligning software specifications with user acceptance testing.


SBE requires business stakeholders to provide realistic scenarios for how the software will be used and those examples are used to determine the scope of the project. This approach has two major benefits -- it encourages communication between the business owners of a project and the software development team and it helps the developers align software specifications with user acceptance testing (UAT). When done right, the specifications can be validated through automated software tests that run frequently. 

In order for SBE to succeed, it's important that the business owners provide the development team with precise examples that illustrate how slices of the system should behave. It's equally important for the development team to make sure each specification by example is testable. SBE may deliver less than optimal outcomes if examples focus on how the software works rather than on the business goals it seeks to achieve. This is where communication and collaboration become key. For example, if the business stakeholders spend too much time describing  how they would like an online form to be formatted, it is up to the SBE project manager to bring the focus of the conversation back to how the data that is entered in the form will be used to drive productivity, profitability and business growth. When SBE is implemented appropriately, it can simplify design, reduce unnecessary code in development and speed deployment by shortening or eliminating feedback loops. 
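A specification by example is typically written as a concrete scenario and then automated as a test. A minimal sketch in Python follows; the business rule, function name and figures are invented purely for illustration:

```python
# Example provided by the business owner:
#   "A customer who spends $100 or more in one order gets a 10% discount."
def order_total(subtotal: float) -> float:
    """Apply the discount rule the stakeholders specified by example."""
    if subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# The stakeholders' examples double as automated acceptance tests:
assert order_total(100) == 90.0     # at the threshold, discount applies
assert order_total(250) == 225.0    # well above the threshold
assert order_total(99.99) == 99.99  # just below the threshold, no discount
print("all specification examples pass")
```

Because the examples are executable, they can run on every build, which is what keeps the specification and the software from drifting apart.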

SBE is often used in iterative software development methodologies such as Agile, Scrum and Extreme Programming (XP). Depending upon the implementation, the examples that the business owners provide may also be referred to as executable requirements or use cases. What the team decides to call project artifacts is not important -- the only thing that matters is that the team agrees upon a common language and uses it consistently. It is equally important that documentation be created and updated throughout the project to ensure that code can be maintained or updated easily when the project is over. SBE project managers call this "living documentation." Whatever the team decides to call the project's documentation, it should serve as a way for the IT team to demonstrate additional business value when change is required.

As a concept, SBE is credited to Gojko Adzic, a software development consultant who wrote a book in 2011 entitled  "Specification by Example: How Successful Teams Deliver the Right Software."  In the real world, the concepts presented in the book may also be referred to as example-driven development (EDD) or behavior-driven development (BDD), two similar approaches that are also the subjects of books.  

See also: functional specification



Sprint (software development) definition

by System Administrator - Wednesday, 17 June 2015, 4:40 PM

Sprint (software development) definition

In product development, a sprint is a set period of time during which specific work has to be completed and made ready for review.

Each sprint begins with a planning meeting. During the meeting, the product owner (the person requesting the work) and the development team agree upon exactly what work will be accomplished during the sprint. The development team has the final say when it comes to determining how much work can realistically be accomplished during the sprint, and the product owner has the final say on what criteria need to be met for the work to be approved and accepted.

The duration of a sprint is determined by the scrum master, the team's facilitator. Once the team reaches a consensus for how many days a sprint should last, all future sprints should be the same. Traditionally, a sprint lasts 30 days.

After a sprint begins, the product owner must step back and let the team do its work. During the sprint, the team holds daily stand-up meetings to discuss progress and brainstorm solutions to challenges. The product owner may attend these meetings as an observer but is not allowed to participate unless it is to answer questions. (See pigs and chickens.) The product owner may not make requests for changes during a sprint, and only the scrum master or project manager has the power to interrupt or stop the sprint.

At the end of the sprint, the team presents its completed work to the product owner, who uses the criteria established at the sprint planning meeting to either accept or reject the work.
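The fixed, repeating timebox can be sketched in Python (a toy scheduler; the start date and 30-day length below are illustrative, and real teams often pick shorter sprints):

```python
from datetime import date, timedelta

def sprint_schedule(start: date, sprint_days: int, num_sprints: int):
    """Return (start, end) dates for a series of equal-length sprints.

    Once the team agrees on a duration, every future sprint keeps it,
    so the calendar is fully determined by the first start date.
    """
    schedule = []
    for i in range(num_sprints):
        sprint_start = start + timedelta(days=i * sprint_days)
        sprint_end = sprint_start + timedelta(days=sprint_days - 1)
        schedule.append((sprint_start, sprint_end))
    return schedule

for start_d, end_d in sprint_schedule(date(2015, 6, 1), sprint_days=30, num_sprints=3):
    print(start_d, "->", end_d)
```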


See also: agile development



SQL Server Statistics Primer

by System Administrator - Monday, 13 July 2015, 5:38 PM

SQL Server Statistics Primer

This white paper is an introductory guide for DBAs about SQL Server statistics. It covers how to use them, how to maintain them, and how they affect performance. Statistics, or "stats," are fundamental components of SQL Server performance, but vastly under-appreciated and misunderstood. They are at the core of query optimization and can have tremendous effect on query plan selection. The query optimizer uses statistics to estimate I/O costs and memory grants. Poor statistics, whether they are skewed or incorrect, can cause massive performance problems when they lead to selection of a bad plan.

Regardless, having good statistics is still no guarantee that the plan will be optimal for the query. Statistics are mostly self-maintaining, though they can require a little care and feeding when they cause poor plan selection. However, caution is warranted when deciding to do regular maintenance on statistics. Sometimes doing maintenance on stats when it is not warranted can cause more harm than good. The key is to make sure you are addressing the problem, and not just the symptom.

Now, let's get to know statistics. We will take a look at what statistics are, how to view the information they provide, how they are used, and how to maintain them.

Please read the attached eBook.


SQL-on-Hadoop tools

by System Administrator - Friday, 11 September 2015, 2:37 AM

Evaluating SQL-on-Hadoop tools? Start with the use case

by Jack Vaughan

In a Q&A, Clarity Solution Group CTO Tripp Smith says to base SQL-on-Hadoop software decisions on actual workloads. Some Hadoop tools target batch jobs, while others are intended for interactive ones.

The flowering of the Hadoop ecosystem is both a blessing and a curse for prospective users. The numerous technologies revolving around the distributed processing framework augment the functionality found in Hadoop itself. But there are so many to choose from that evaluating them and finding the right one can be difficult. That's particularly true in the emerging SQL-on-Hadoop space, where tools such as Drill, Hawq, Hive, Impala and Presto vie for attention.

To get a better view of them, SearchDataManagement recently turned to Tripp Smith, CTO at Clarity Solution Group LLC, a Chicago-based data management and analytics consultancy that works with user organizations on Hadoop deployments and other big data projects. In an interview, Smith said the path to selecting among the surge of SQL-on-Hadoop tools begins with understanding use cases.

Hadoop has been around for a while, but in terms of going mainstream, it still seems very new to a lot of people. And when they seek to tame Hadoop to gain business benefits from big data, it often turns into a multiyear effort.


Tripp Smith

Tripp Smith: I think SQL interfaces to Hadoop are helping to bridge that gap. They also enhance portability for business logic from legacy applications, both to Hadoop and to different execution engines that now run within the Hadoop platform. We saw it start with the introduction of Hive. A lot of very smart folks at Facebook introduced that to the Hadoop ecosystem, and now the concept has expanded in a lot of different directions, not the least of which are Spark SQL, Impala and Presto, the latter also [coming] out of Facebook.

What SQL is doing for Hadoop is to bring kind of a common language for the average business user working on the legacy analytics platforms, as well as to the seasoned engineers and data scientists. It's easier now to trade off information and data processing between different components when you have Agile data teams using SQL on Hadoop.
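The portability argument is that the same declarative SQL expresses the workload regardless of the engine underneath. A minimal illustration using Python's stdlib sqlite3 module (the table and data are invented; a Hive or Spark SQL engine would accept essentially the same query):

```python
import sqlite3

# An in-memory database stands in for whatever engine runs the SQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (page TEXT, user_id INTEGER)")
conn.executemany(
    "INSERT INTO clicks VALUES (?, ?)",
    [("home", 1), ("home", 2), ("pricing", 1), ("home", 1)],
)

# The analytical logic lives in the SQL, not in engine-specific code
query = """
    SELECT page, COUNT(*) AS views, COUNT(DISTINCT user_id) AS visitors
    FROM clicks
    GROUP BY page
    ORDER BY views DESC
"""
for page, views, visitors in conn.execute(query):
    print(page, views, visitors)  # home 3 2 / pricing 1 1
```

Moving this workload between SQL-on-Hadoop tools is then mostly a question of which engine's SQL grammar and performance profile fits the use case.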

By most counts, there are even more Hadoop tools than we've just talked about. What parameters do you look at when trying to evaluate products in this wide group of tools?


Smith: What you find is that the decision you make on SQL-on-Hadoop tools should be based on the use cases that you have. We look at Hadoop through the lens of what we call MESH -- that's a strategic architecture framework for 'mature, enterprise-strength Hadoop.' It looks at data management and analytical capabilities, as well as data governance capabilities and platform components.

Tool selection and approaches vary depending on the nuance of the problem you're trying to solve -- depending on whether you're looking at doing more of an extract, transform and load or to do extract, load and transform data integration, or you're looking at a real-time data integration use case, or whether you're looking at interactive queries. Each of the tools has a specialization. But that is where there's still a lot that needs to be fleshed out.

What are the steps people take as they walk through the process of choosing between these new technologies?

Smith: Most of the people we work with are not 'greenfield' -- they're into managing these tools without arbitrarily increasing their portfolio diversity. Admittedly, that may be a buzzword-full answer. But usually, they have an idea of how to judge how their workloads fit with the different SQL-on-Hadoop tools.

They will find that some of these tools have a limited type of [SQL] grammar for the things they want to do. I would throw Impala, as it first emerged, into that group. It was leading the pack around performance but maybe providing a limited subset of capabilities. Hive has been around the longest, and is relatively mature for the Hadoop ecosystem -- that is probably more focused to your data integration batch processing workload.

In each case, there is a bit of discovery required around taking your business use cases, what your infrastructure is today [and] where the new Hadoop components would fit in within the context of managing an IT portfolio. You have to have a process to introduce new components for your analytical workloads.

Jack Vaughan is SearchDataManagement's news and site editor. Email him at, and follow us on Twitter: @sDataManagement.




SSL Certificates

by System Administrator - Friday, 20 February 2015, 4:36 PM

Understanding SSL Certificates

Protecting Against Web Application Threats Using SSL

  • A guide to understanding SSL certificates, how they operate and their application. By making use of an SSL certificate on your web server, you can securely collect sensitive information online, and increase business by giving your customers confidence that their transactions are safe.
  • Businesses face an increasingly complex set of threats to their web applications, from malware and advanced persistent threats to disgruntled employees and unintentional data leaks. Although no single security measure can prevent every threat, some provide broad-based mitigation against many of them, and the use of SSL encryption with digital certificate-based authentication is one. In this shortcut guide, readers will learn how changes in the way we deliver services, the increasing use of mobile devices and the adoption of cloud computing, compounded by ever-evolving means of stealing information and compromising services, leave web applications vulnerable to attack. You will also learn how SSL encryption can protect server-to-server communications, client devices, cloud resources and other endpoints, helping prevent data loss. Readers are given a step-by-step guide to assess their current state of vulnerability, determine where SSL encryption and digital certificate-based authentication are needed, plan the rollout of SSL to web applications, and establish policies and procedures to manage the full lifecycle of SSL certificates.
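On the client side, certificate-based server authentication is what a standard TLS stack enforces by default. A minimal sketch with Python's stdlib ssl module (the hostname in the commented example is illustrative, and the call requires network access):

```python
import socket
import ssl

# A default context verifies the server's certificate chain and hostname
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

def fetch_peer_cert(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the server's validated certificate."""
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # wrap_socket raises ssl.SSLError if validation fails
            return tls.getpeercert()

# Example (needs network access):
# cert = fetch_peer_cert("example.com")
# print(cert["subject"])
```

If the certificate is expired, self-signed or issued for a different hostname, the handshake fails, which is exactly the guarantee that gives customers confidence in the transaction.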

Please read the attached whitepaper.


SSL Exploits

by System Administrator - Wednesday, 29 October 2014, 2:41 PM

Top 10 Ways to Defend Against the Latest SSL Exploits

Staying on top of the latest web exploits is a challenge for most network admins who are busy with the day-to-day management of their environment. Quickly learn the top SSL exploits that your network could be vulnerable to along with simple steps you can immediately take to protect yourself.

Please read the attached whitepaper.

Stakeholder

by System Administrator - Friday, 5 December 2014, 9:34 PM


A stakeholder is any person with a legitimate interest, or stake, in the actions of an organization. 

R. Edward Freeman and David L. Reed defined the term stakeholder in their 1983 article, Stockholders and Stakeholders: A New Perspective on Corporate Governance, as "any group or individual who can affect the achievement of an organization's objectives or who is affected by the achievement of an organization's objectives."  Traditionally, stockholders are the most important people in a company and business decisions are made to increase the value of the stock.  Freeman and Reed proposed that there are other people who are just as important and good business decisions align everyone's interests with those of the stockholders. In this context, "everyone" might include employees, suppliers, customers and business partners as well as unions, government agencies or trade associations. 

Quite literally, a stakeholder is a person who holds the prize in a contest or the money in a bet. According to Freeman and Reed, the term stakeholder was first used in business in an internal memorandum at the Stanford Research Institute in 1963 and had the more narrow meaning of "those groups without whose support the organization would cease to exist." 




STONITH (Shoot The Other Node In The Head)

by System Administrator - Friday, 30 January 2015, 5:31 PM

STONITH (Shoot The Other Node In The Head)

Posted by Margaret Rouse

STONITH (Shoot The Other Node In The Head) is a Linux service for maintaining the integrity of nodes in a high-availability (HA) cluster.

STONITH automatically powers down a node that is not working correctly. An administrator might employ STONITH if one of the nodes in a cluster can not be reached by the other node(s) in the cluster.

STONITH is traditionally implemented by hardware solutions that allow a cluster to talk to a physical server without involving the operating system (OS). Although hardware-based STONITH works well, this approach requires specific hardware to be installed in each server, which can make the nodes more expensive and result in hardware vendor lock-in.

A disk-based solution, such as split brain detection (SBD), can be easier to implement because this approach requires no specific hardware. In SBD STONITH, the nodes in the Linux cluster keep each other updated by using a Heartbeat mechanism. If something goes wrong with a node in the cluster, the injured node will terminate itself.
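The self-fencing idea, terminating yourself when your heartbeats go stale, can be sketched as a toy in Python. This is not the actual SBD implementation; the class, timings and names are invented for illustration:

```python
import time

class Node:
    """Toy cluster node that self-fences when heartbeats stop arriving."""

    def __init__(self, name: str, timeout: float = 5.0):
        self.name = name
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.fenced = False

    def heartbeat(self):
        """Record a heartbeat acknowledged by the rest of the cluster."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        """Fence (self-terminate) if no heartbeat arrived within the timeout."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.fenced = True  # a real node would power itself down here
        return self.fenced

node = Node("node1", timeout=0.1)
node.heartbeat()
print(node.check())  # False: heartbeat is fresh, node stays up
time.sleep(0.2)
print(node.check())  # True: heartbeats are stale, node fences itself
```

The point of the mechanism is the same as hardware STONITH: an unreachable node must be guaranteed dead before the cluster fails its workload over elsewhere.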




Storage Configuration Guide (I)

by System Administrator - Tuesday, 16 June 2015, 9:29 PM


Storage Configuration Guide


This document is designed to aid in the configuration and deployment of Nexsan storage solutions to meet specific performance requirements ranging from backup to virtual server infrastructures. It is intended as a guide and should not supersede advice or information provided by an Imation Systems Engineer.

Please read the attached whitepaper.


Storage Configuration Guide (II)

by System Administrator - Tuesday, 16 June 2015, 9:35 PM

Storage Configuration Guide

Learn how to meet specific storage requirements ranging from backup to virtual server infrastructures. In an ever-changing, fast-moving IT market, it is more critical than ever to design a storage system efficiently, as it underpins all elements of the IT infrastructure.

Please read the attached whitepaper.


Story Point (Story Points)

by System Administrator - Tuesday, 30 April 2013, 1:17 PM
Part of the Project management glossary.
A story point is a metric used in agile project management and development to estimate the difficulty of implementing a given story. In this context, a story is a particular business need assigned to the software development team. Story points are usually expressed on a numerical scale, such as an adaptation of the Fibonacci sequence, or on a size range from XS (extra-small) to XL (extra-large).

Elements considered in assigning a story point include the complexity of the story, the number of unknown factors and the potential effort required to implement it.
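As a small illustration of the Fibonacci-style scale mentioned above, a team might snap a raw effort guess to the next point up on the scale. The scale values and function below are a hypothetical sketch, not part of any agile standard.

```python
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]  # a typical modified Fibonacci point scale

def to_story_points(raw_estimate):
    """Round a raw effort guess up to the next value on the scale;
    the widening gaps reflect growing uncertainty in larger stories."""
    for point in FIBONACCI_SCALE:
        if raw_estimate <= point:
            return point
    return FIBONACCI_SCALE[-1]  # anything larger should probably be split into smaller stories

print(to_story_points(4))  # 5
print(to_story_points(9))  # 13
```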

This Geek and Poke cartoon illustrates how story points are assigned:

[Image: Geek and Poke, "Agile breakfast"]
Contributor(s): Ivy Wigmore / Posted by: Margaret Rouse


Strategic Asset Management

by System Administrator - Wednesday, 10 September 2014, 9:22 PM

The Path to Strategic Asset Management

Companies today look for ways to gain more control and accuracy when it comes to fixed asset data. Fixed assets can represent a significant sum on the balance sheets of many organizations. This white paper introduces best practices for integrating fixed assets management technology into your organization as part of a strategic asset management initiative. These best practices are based on lessons learned over the course of decades of successfully implementing and integrating GAAP and tax depreciation and fixed assets management solutions in both SMBs and Fortune 500 companies.

Please read the attached whitepaper.


Subnetting an IP Address (ICT/TIC)

by System Administrator - Tuesday, 2 September 2014, 5:43 PM

Subnetting an IP Address

The process of subnetting is both a mathematical process and a network design process. Mathematics drives how subnets are calculated, identified, and assigned. The network design and requirements of the organization drive how many subnets are needed and how many hosts an individual subnet needs to support. Binary basics and IPv4 address structure were covered in part one of this two-part paper. This paper focuses on the process rules and helpful hints for learning to subnet an IPv4 address. It covers the following topics:

  1. Need for subnets
  2. Process for subnetting
  3. Formulas for subnet calculation
  4. Examples for putting everything together
  5. Variable Length Subnet Mask (VLSM)
  6. Determining the subnet, usable range of host addresses, and broadcast address for a given host
  7. Helpful tables

Note: Throughout this document, the term IP address refers to an IPv4 address. This document does not include IPv6.
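The calculations in item 6 can be checked with Python's standard ipaddress module. The host address below is an arbitrary sample value, not one taken from the paper.

```python
import ipaddress

# Find the subnet, broadcast address and usable host range for a sample host.
iface = ipaddress.ip_interface("192.168.10.77/26")  # arbitrary sample host address
net = iface.network

print(net.network_address)       # subnet address: 192.168.10.64
print(net.broadcast_address)     # broadcast:      192.168.10.127
hosts = list(net.hosts())
print(hosts[0], "-", hosts[-1])  # usable range:   192.168.10.65 - 192.168.10.126
print(net.num_addresses - 2)     # usable hosts in a /26: 62
```

A /26 mask leaves 6 host bits, so each subnet holds 2^6 = 64 addresses, of which 62 are usable once the network and broadcast addresses are excluded.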

Please read the attached whitepaper.


Sustainable Computing

by System Administrator - Monday, 27 October 2014, 2:55 PM

Fiscally in favor of sustainable computing

Data center energy consumption is more than a sustainability issue -- it's a major cost to the business. When proposing green IT initiatives, start at the bottom line.

The green movement got companies interested in saving the planet, which looked promising until the world's economic systems flipped.

When first-world banking systems went into turmoil and rapid expansion in emerging economies created additional shockwaves, investing in green seemed like pouring money down a drain.

But green isn't dead. We've had some strange weather -- snow in the southern U.S., floods in the U.K., hard drought in the more fertile states of Australia -- that brings environmental conservation back to the fore.

But even to the most critical global warming naysayer, sustainable computing could still pay off. Reducing reliance on fossil fuels isn't just good for the planet; it's also good for business.

How to frame the eco data center discussion

Organizations are high energy users, in large part due to computing. Cutting data center energy consumption and investing in green power sources could reduce carbon emissions. But the data center doesn't just process predictable back-end tasks -- it supports business growth.

In times of economic constraint, digitally enabled sales and customer outreach seem far more important than the possibility that, a few hundred years from now, water will be scarce. Harsh, but essentially true -- no one has yet lost their job by putting the good of their organization ahead of the planet.

So bring economic sense and green together for sustainable computing.

Going to the executive team and asking for investment in green technologies will earn blank stares or outright laughter. Walk in with a proposal based on energy savings that demonstrates how green initiatives cut data center costs, which enables greater investment in other areas of the business and helps minimize the impact of highly variable but upwardly trending energy prices. That should get the business' interest, and free up the funding required.

The discussions have nothing to do with green initiatives, sustainability or saving the planet. However, once you have funding and implement changes, make sure that the corporate social responsibility (CSR) team is aware of the projects. They can use the results to show that your organization is green and conscientious, which is due to solid economic decisions. Green investments should be used to enhance the organization's brand as much as possible.

Green options for computing racks

Any data center that cools racks with standard computer room air conditioning (CRAC) systems wastes energy. Most run at power usage effectiveness (PUE) ratios of 2.5 or greater: for every watt of energy that goes to the servers, another 1.5 watts goes to peripheral equipment, mostly the CRAC units. In a 1 MW data center facility, the CRAC units can easily consume 500 kW of power.

Replace these CRAC designs with a low-energy free air cooling system and run the data center at higher temperatures that still fit within ASHRAE's guidelines, and the 1 MW data center can end up saving around 450 kW. That money can go straight to the bottom line or be reinvested in areas that create business value.

The investment in revamping data center cooling is actually quite low and the ROI will be rapid and ongoing. The economic benefit pairs with CSR goals -- saving 450 kW is a considerable reduction in carbon emissions.
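The PUE arithmetic above is easy to verify. The loads below are the article's own illustrative figures; the 1.2 PUE assumed for free air cooling is a plausible value, not one stated in the text.

```python
def overhead_kw(pue, it_load_kw):
    """Power consumed by cooling and other peripheral equipment:
    overhead = IT load * (PUE - 1)."""
    return it_load_kw * (pue - 1)

it_load = 1000 / 2.5  # a facility drawing 1 MW total at PUE 2.5 delivers 400 kW to servers

print(round(overhead_kw(2.5, it_load), 1))  # 600.0 kW of overhead before the retrofit
print(round(overhead_kw(1.2, it_load), 1))  # 80.0 kW with low-energy free air cooling
# The difference is on the order of the 450 kW saving the article describes.
```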

The same goes for consolidation and rationalization onto a virtualized or cloud platform. Moving from server utilization rates of 10% to 20% up to 50% or 60% means the data center only needs to power between one-fifth and one-third as many physical servers. In addition to reducing electricity use, this saves licensing, maintenance, real estate and other costs. The capital investment in a new virtualized platform can be high, but ROI is rapid. Take an incremental approach: consolidate on existing equipment and introduce a rolling upgrade program for new equipment.
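The consolidation ratio follows directly from the utilization figures. This small check uses hypothetical workload numbers chosen to match the percentages in the text.

```python
import math

def servers_needed(work_units, utilization):
    """Physical servers needed to carry a fixed workload, where the
    workload is expressed in 'fully busy server' units."""
    return math.ceil(work_units / utilization)

work = 20  # e.g. 100 servers at 20% utilization carry 20 servers' worth of work

print(servers_needed(work, 0.20))  # 100 servers before consolidation
print(servers_needed(work, 0.60))  # 34 servers after -- about one-third as many
```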

Many countries also offer tax concessions (or at least a lower overall tax burden) to organizations that demonstrate an active approach to lowering carbon emissions. The U.K. has the CRC Energy Efficiency Scheme; the EU has an emissions trading system, the EU ETS; and the U.S. has myriad programs at the state and federal level. For data centers operating under green-conscious jurisdictions, cutting carbon emissions through lower energy use provides immediate payback.

Consider the original green argument: Burning fossil fuels creates greenhouse gases that cause global climate change, resulting in more extreme weather patterns, leading to food and water availability risks and species extinctions, among other effects. By cutting down our dependence on fossil fuels, we may -- just may -- be able to slow down or stop the slide to the human race killing itself off. And we can certainly save some operating expenses along the way.

About the author:
Clive Longbottom is the co-founder and service director of IT research and analysis firm Quocirca, based in the U.K. Longbottom has more than 15 years of experience in the field. With a background in chemical engineering, he's worked on automation, control of hazardous substances, document management and knowledge management projects.


Synapse (KW)

by System Administrator - Thursday, 2 May 2013, 8:27 PM

The synapse contains a small gap separating neurons. Information flows from one neuron to another across the synapse. The synapse consists of:

  1. A presynaptic ending that contains neurotransmitters, mitochondria and other cell organelles;
  2. A postsynaptic ending that contains receptor sites for neurotransmitters;
  3. A synaptic cleft, the space between the presynaptic and postsynaptic endings.

At the synaptic terminal (the presynaptic ending), an electrical impulse will trigger the migration of vesicles (the red dots in the figure to the left) containing neurotransmitters toward the presynaptic membrane. 

The vesicle membrane will fuse with the presynaptic membrane releasing the neurotransmitters into the synaptic cleft. 

Until recently, it was thought that a neuron produced and released only one type of neurotransmitter. This was called "Dale's Law." However, there is now evidence that neurons can contain and release more than one kind of neurotransmitter.

Our synapse representation with KW tools:

[Image: Synapse representation with KW tools]

[Image: KW University Synapse]


System z mainframe

by System Administrator - Tuesday, 13 January 2015, 10:20 PM

New System z mainframe may lift IBM's cloud, mobile fortunes


by Ed Scannell

IBM looks to pivot in a new direction with a revitalized mainframe aimed at mobile and cloud markets along with a rumored major reorganization.

To revive its sagging hardware fortunes, IBM will introduce a new member of its System z series of mainframes with a major technology overhaul. It is intended to lure new users who need more muscle for applications involving cloud, analytics and, in particular, mobile.

The new z13, as it's being referred to, is designed from the ground up to more efficiently handle transaction processing, including the billions of transactions conducted by users with a wide assortment of mobile devices, a source close to the company said. Big Blue has reportedly spent five years and $1 billion developing the new system, quietly beta testing it among some 60 global accounts.

The system, to be introduced this week, features a new eight-core processor with a wider instruction pipeline, more memory and more support processors than any of its System z predecessors, improved multithreading, larger caches and a "significantly improved intelligent I/O system" that dramatically boosts the performance of transaction processing applications, according to sources close to the company.

"The whole thing is tuned for better performance, especially its souped-up intelligent I/O, where you can have dedicated channels for individual types of I/O," said one source who requested anonymity. "Essentially [IBM has] tuned this for environments focused on mobility, analytics and the cloud."

To further improve the system's capabilities for mobile transactions, IBM reportedly focused on improving security for the system, coming up with new cryptography technology, similar to that used by vendors such as Apple and major credit card companies, according to sources.

"They have implemented some new forms -- plural -- of encryption, similar to what is used in Chrome and Firefox, as well as Apple's technology for messaging," one IT industry source who works with IBM said. "[IBM], I think wisely, have adapted the security schemes here to meet users' needs, which increasingly have to do with mobility and credit card transaction processing."

Given the deal IBM signed with Apple last year to distribute the latter's mobile products to corporate users, and the emphasis IBM will put on the new mainframe's mobile transaction capabilities, synching up with Apple's security technology may be more than a coincidence.

Also not so coincidental may be the timing of this week's announcement, which comes shortly before the company reports its 2014 revenues and earnings later this month. Sales of the company's proprietary Power series of servers have stumbled badly over the past five or six quarters, mainframe sales have dipped for the past quarter or two as part of their natural sales cycle, and fourth-quarter hardware numbers figure to be down again. Company officials may be looking for some good hardware news to distract Wall Street's attention from further bad news, and the new system could be the answer.

IBM may also talk about its intent to promote the system's appeal to Millennials. With aging mainframe veterans retiring in ever larger numbers, Big Blue wants to make it clear to 20-somethings that they could have a lucrative career working in mainframe environments, as opposed to lower-end distributed environments. Company officials will reportedly talk about a new jobs board that matches younger workers to job opportunities in the mainframe area.

"[IBM] is trying to overcome this fear, the psychological barrier that Millennials have toward mainframes," according to another source familiar with the company's plans.

There have also been reports that IBM may be edging toward a major reorganization, one that would put clearer focus and emphasis on mobility, analytics, security and, of course, cloud. How the newly designed mainframe figures into this corporate realignment will be interesting to watch. The reorganization is reportedly being driven by IBM CEO Ginni Rometty, whose performance has been under close scrutiny from Wall Street and the company's corporate accounts over the past year.

It will be ironic -- or perhaps poetically just -- depending on your perspective, if IBM's hardware resurgence is led by a mainframe that makes a big splash in the mobile market.
