Glosario KW | KW Glossary


Ontology Design | Diseño de Ontologías


D


Data Center Efficiency

by System Administrator - Wednesday, 26 August 2015, 7:17 PM
 

eGuide: Data Center Efficiency

APC by Schneider Electric

Data center efficiency is one of the cornerstones of an effective IT infrastructure. Data centers that deliver energy efficiency, high availability, density, and scalability create the basis for well-run IT operations that fuel the business. With the right approach to data center solutions, organizations have the potential to significantly save on costs, reduce downtime, and allow for future growth.

In this eGuide, Computerworld, CIO, and Network World examine recent trends and issues related to data center efficiency. Read on to learn how a more efficient data center can make a difference in your organization.

Please read the attached eGuide.


Data Confabulation

by System Administrator - Tuesday, 12 May 2015, 12:30 AM
 

Data Confabulation

Posted by: Margaret Rouse

Data confabulation is a business intelligence term for the selective and possibly misleading use of data to support a decision that has already been made.

Within the volumes of big data there are often many small bits of evidence that contradict even clearly data-supported conclusions. Generally, this data noise can be recognized as such and, in the context of the full body of data, it is clearly outweighed. When data is selectively chosen from vast sources, however, a picture can be created to support a desired view, decision or argument that would not hold up under a more rigorously controlled method.

Data confabulation can be used both intentionally and unintentionally to promote the user’s viewpoint. When a decision is made before data is examined, there is a danger of falling prey to confirmation bias even when people are trying to be honest. The term confabulation comes from the field of psychology, where it refers to the tendency of humans to selectively remember, misinterpret or create memories to support a decision, belief or sentiment.
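As an illustration only (not from the original article), the short Python sketch below cherry-picks records from a made-up dataset so that a subset appears to support a claim the full data contradicts; all numbers and names are hypothetical.

```python
import random

random.seed(42)

# Hypothetical data: 1,000 daily support-ticket counts with a slight downward trend.
days = range(1000)
tickets = [200 - 0.05 * d + random.gauss(0, 20) for d in days]

# Rigorous view: compare averages of the first and second halves of the period.
first_half = sum(tickets[:500]) / 500
second_half = sum(tickets[500:]) / 500
print(f"full data: first-half avg {first_half:.1f}, second-half avg {second_half:.1f}")

# Confabulated view: keep only recent days that exceed the overall average,
# then claim "ticket volume is rising".
overall_avg = sum(tickets) / len(tickets)
cherry_picked = [t for t in tickets[500:] if t > overall_avg]
print(f"cherry-picked: {len(cherry_picked)} recent days, "
      f"avg {sum(cherry_picked) / len(cherry_picked):.1f}")
```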



Data Exhaust

by System Administrator - Tuesday, 12 May 2015, 12:26 AM
 

Data Exhaust

Posted by: Margaret Rouse

Data exhaust is the data generated as a byproduct of people’s online actions and choices.

Data exhaust consists of the various files generated by web browsers and their plug-ins, such as cookies, log files, temporary internet files and .sol files (Flash cookies). In its less hidden and more legitimate aspect, such data is useful for tracking trends and helping websites serve their user bases more effectively. Studying data exhaust can also help improve user interface and layout design. Because these files reveal the specific choices an individual has made, they are very revealing and are a highly sought source of information for marketing purposes. Websites store data about people's actions to maintain user preferences, among other purposes. Data exhaust is also used for the lucrative but privacy-compromising purposes of user tracking for research and marketing.
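As a rough illustration (not from the original article), the sketch below counts page requests from a few made-up web-server access-log lines, the kind of passive trace that makes up data exhaust; the log format and values are hypothetical.

```python
from collections import Counter

# Hypothetical access-log lines: the passive traces visitors leave behind.
log_lines = [
    '10.0.0.1 - - [12/May/2015:10:00:01] "GET /pricing HTTP/1.1" 200',
    '10.0.0.2 - - [12/May/2015:10:00:07] "GET /docs HTTP/1.1" 200',
    '10.0.0.1 - - [12/May/2015:10:01:15] "GET /pricing HTTP/1.1" 200',
]

# Count requested pages to surface a simple usage trend from the exhaust.
pages = Counter(line.split('"')[1].split()[1] for line in log_lines)
for page, hits in pages.most_common():
    print(page, hits)
```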

Data exhaust is named for the way it streams out behind the web user similarly to the way car exhaust streams out behind the motorist. An individual’s digital footprint, sometimes known as a digital dossier, is the body of data that exists as a result of actions and communications online that can in some way be traced back to them. That footprint is broken down as active and passive data traces; digital exhaust consists of the latter. In contrast with the data that people consciously create, data exhaust is unintentionally generated and people are often unaware of it.

Security and privacy software makers struggle with the conflicting goals of marketing and privacy. User software designed to protect security and privacy often disrupts online marketing and research business models. While new methods of persistently storing tracking data are always in development, software vendors constantly design new methods to remove them.

See Michelle Clark's TEDx talk about digital footprints:

Link: http://whatis.techtarget.com


Data Lake

by System Administrator - Thursday, 25 June 2015, 10:29 PM
 

 

Author: John O’Brien

It would be an understatement to say that the hype surrounding the data lake is causing confusion in the industry. Perhaps this is an inherent consequence of the data industry's need for buzzwords: it's not uncommon for a term to rise to popularity long before there is a clear definition and repeatable business value. We have seen this phenomenon many times when concepts including "big data," "data reservoir," and even the "data warehouse" first emerged in the industry. Today's newcomer to the data world vernacular—the "data lake"—is a term that has endured both the scrutiny of pundits who harp on the risk of digging a data swamp and, likewise, the vision of those who see the potential of the concept to have a profound impact on enterprise data architecture. As the data lake term begins to come off its hype cycle and face the pressures of pragmatic IT and business stakeholders, the demand for clear data lake definitions, use cases, and best practices continues to grow.

This paper aims to clarify the data lake concept by combining fundamental data and information management principles with the experiences of existing implementations to explain how current data architectures will transform into a modern data architecture. The data lake is a foundational component and common denominator of the modern data architecture, enabling and complementing specialized components such as enterprise data warehouses, discovery-oriented environments, and highly specialized analytic or operational data technologies within or external to the Hadoop ecosystem. The data lake has therefore become the metaphor for the transformation of enterprise data management, and its definition will continue to evolve according to established principles, drivers, and best practices that will quickly emerge as companies apply hindsight.

Please read the attached guide.

 


Data Profiling

by System Administrator - Tuesday, 30 December 2014, 3:24 PM
 

Data Profiling

Posted by Margaret Rouse

Data profiling, also called data archeology, is the statistical analysis and assessment of data values within a data set for consistency, uniqueness and logic.

The data profiling process cannot identify inaccurate data; it can only identify business rule violations and anomalies. The insight gained by data profiling can be used to determine how difficult it will be to use existing data for other purposes. It can also be used to provide metrics to assess data quality and help determine whether or not metadata accurately describes the source data.

Profiling tools evaluate the actual content, structure and quality of the data by exploring relationships that exist between value collections both within and across data sets. For example, by examining the frequency distribution of different values for each column in a table, an analyst can gain insight into the type and use of each column. Cross-column analysis can be used to expose embedded value dependencies, and inter-table analysis allows the analyst to discover overlapping value sets that represent foreign key relationships between entities.
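For a concrete sense of column-level profiling, here is a minimal sketch (my own illustration, not part of the definition) that computes distinct counts, null counts and frequency distributions with pandas; the sample table and its columns are hypothetical.

```python
import pandas as pd

# Hypothetical customer table to profile.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 4],               # the duplicate breaks uniqueness
    "country":     ["UY", "UY", "AR", None, "AR"],
    "signup_year": [2013, 2014, 2014, 2015, 2015],
})

for col in df.columns:
    series = df[col]
    print(f"--- {col} ---")
    print("distinct values:", series.nunique(dropna=True))
    print("null count     :", int(series.isna().sum()))
    print(series.value_counts(dropna=False).to_string())
    # A column whose distinct count equals the row count is a candidate unique key.
    if series.nunique(dropna=True) == len(df):
        print(f"{col} looks like a candidate key")
```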

See also: data modeling, data dictionary, data deduplication

Link: http://searchdatamanagement.techtarget.com


Data Silo

by System Administrator - Monday, 20 July 2015, 4:59 PM
 

Data Silo

Posted by Margaret Rouse

A data silo is a repository of fixed data that an organization does not regularly use in its day-to-day operation.

So-called siloed data cannot exchange content with other systems in the organization. The expressions "data silo" and "siloed data" arise from the inherent isolation of the information: the data in a silo remains sealed off from the rest of the organization, just as grain in a farm silo is closed off from the outside elements.

In recent years, data silos have faced increasing criticism as an impediment to productivity and a danger to data integrity. Data silos also increase the risk that current (or more recent) data will accidentally get overwritten with outdated (or less recent) data. When two or more silos exist for the same data, their contents might differ, creating confusion as to which repository represents the most legitimate or up-to-date version.

Cloud-based data, in contrast to siloed data, can continuously evolve to keep pace with the needs of an organization, its clients, its associates, and its customers. For frequently modified information, cloud backup offers a reasonable alternative to data silos, especially for small and moderate quantities of data. When stored information does not need to be accessed regularly or frequently, it can be kept in a single cloud archive rather than in multiple data silos, ensuring data integration (consistency) among all members and departments in the organization. For these reasons, many organizations have begun to move away from data silos and into cloud-based backup and archiving solutions.


Link: http://searchcloudapplications.techtarget.com

 


Database-as-a-Service (DBaaS)

by System Administrator - Monday, 16 February 2015, 3:42 PM
 

Why Database-as-a-Service (DBaaS)?

IBM Cloudant manages, scales and supports your fast-growing data needs 24x7, so you can stay focused on new development and growing your business.

Fully managed, instantly provisioned, and highly available

In a large organization, it can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility. Cloudant DBaaS enables instant provisioning of your data layer, so that you can begin new development whenever you need. Unlike do-it-yourself (DIY) databases, DBaaS solutions like Cloudant provide specific levels of data-layer performance and uptime. The managed DBaaS capability can help reduce the risk of service delivery failure for you and your projects.

Build more. Grow more

With a fully managed NoSQL database service, you do not have to worry about the time, cost and complexity associated with database administration, architecture and hardware. Now you can stay focused on developing new apps and growing your business to new heights.

Who uses DBaaS?

Companies of all sizes, from startups to mega-users, use Cloudant to manage data for large or fast-growing web and mobile apps in ecommerce, online education, gaming, financial services, and other industries. Cloudant is best suited for applications that need a database to handle a massively concurrent mix of low-latency reads and writes. Its data replication and synchronization technology also enables continuous data availability, as well as offline app usage for mobile or remote users.

As a JSON document store, Cloudant is ideal for managing multi-structured or unstructured data. Advanced indexing makes it easy to enrich applications with location-based (geospatial) services, full-text search, and near real-time analytics.
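Cloudant exposes a CouchDB-compatible HTTP API, so a minimal sketch of storing and reading back a JSON document over HTTP might look like the following. This is an assumption-based illustration rather than material from the attached whitepaper; the account URL, database name and credentials are placeholders.

```python
import requests

# Placeholders: substitute your own account URL, database name and credentials.
BASE = "https://ACCOUNT.cloudant.com"
DB = "products"
AUTH = ("USERNAME", "PASSWORD")

# Create the database (a 412 response means it already exists).
requests.put(f"{BASE}/{DB}", auth=AUTH)

# Store a JSON document; CouchDB-style APIs address documents by _id.
doc = {"_id": "sku-001", "name": "kettle", "price": 25.0, "tags": ["kitchen"]}
resp = requests.put(f"{BASE}/{DB}/{doc['_id']}", json=doc, auth=AUTH)
print(resp.status_code, resp.json())

# Read the document back.
print(requests.get(f"{BASE}/{DB}/{doc['_id']}", auth=AUTH).json())
```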

Please read the attached whitepaper.


Decoding DNA: New Twists and Turns (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:19 PM
 

The Scientist takes a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.

The structure of DNA was solved on February 28, 1953 by James D. Watson and Francis H. Crick, who recognized at once the potential of DNA's double helical structure for storing genetic information — the blueprint of life. For 60 years, this exciting discovery has inspired scientists to decipher the molecule's manifold secrets and resulted in a steady stream of innovative advances in genetics and genomics.

Honoring our editorial mission, The Scientist will take a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.

What's Next in Next-Generation Sequencing?


Original Broadcast Date: Tuesday March 5, 2013

The advent of next-generation sequencing is considered one of the most transformative technological advances, doubling the amount of sequence data almost every five months and driving a precipitous drop in the cost of sequencing a piece of DNA. The first webinar will track the evolution of next-generation sequencing and explore what the future holds in terms of the technology and its applications.

Panelists:

George Church is a professor of genetics at Harvard Medical School and Director of the Personal Genome Project, which provides the world's only open-access information on human genomic, environmental and trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing, and barcoding. These led to the first commercial genome sequence (the pathogen Helicobacter pylori) in 1994. His innovations in "next generation" genome sequencing and synthesis and cell/tissue engineering resulted in 12 companies spanning fields including medical genomics (Knome, Alacris, AbVitro, GoodStart, Pathogenica) and synthetic biology (LS9, Joule, Gen9, WarpDrive), as well as new privacy, biosafety, and biosecurity policies. He is director of the NIH Centers of Excellence in Genomic Science. His honors include election to the NAS and NAE and the Franklin Bower Laureate for Achievement in Science.

George Weinstock is currently a professor of genetics and of molecular microbiology at Washington University in Saint Louis. He was previously codirector of the Human Genome Sequencing Center at Baylor College of Medicine in Houston, Texas where he was also a professor of molecular and human genetics. Dr. Weinstock received his BS degree from the University of Michigan (Biophysics, 1970) and his PhD from the Massachusetts Institute of Technology (Microbiology, 1977).

Joel Dudley is an assistant professor of genetics and genomic sciences and Director of Biomedical Informatics at Mount Sinai School of Medicine in New York City. His current research is focused on solving key problems in genomic and systems medicine through the development and application of translational and biomedical informatics methodologies. Dudley's published research covers topics in bioinformatics, genomic medicine, personal and clinical genomics, as well as drug and biomarker discovery. His recent work with coauthors, describing a novel systems-based approach to computational drug repositioning, was featured in the Wall Street Journal and earned designation as the NHGRI Director's Genome Advance of the Month. He is also coauthor (with Konrad Karczewski) of the forthcoming book, Exploring Personal Genomics. Dudley received a BS in microbiology from Arizona State University and an MS and PhD in biomedical informatics from Stanford University School of Medicine.

Unraveling the Secrets of the Epigenome

Original Broadcast Date: Thursday April 18, 2013

This second webinar in The Scientist's Decoding DNA series will cover the Secrets of the Epigenome, discussing what is currently known about DNA methylation, histone modifications, and chromatin remodeling and how this knowledge can translate to useful therapies.

Panelists:

Stephen Baylin is a professor of medicine and of oncology at the Johns Hopkins University School of Medicine, where he is also Chief of the Cancer Biology Division of the Oncology Center and Associate Director for Research of The Sidney Kimmel Comprehensive Cancer Center. Together with Peter Jones of the University of Southern California, Baylin also leads the Epigenetic Therapy Stand Up To Cancer Team (SU2C). He and his colleagues have fostered the concept that DNA hypermethylation of gene promoters, with its associated transcriptional silencing, can serve as an alternative to mutations for producing loss of tumor-suppressor gene function. Baylin earned both his BS and MD degrees from Duke University, where he completed his internship and first-year residency in internal medicine. He then spent 2 years at the National Heart and Lung Institute of the National Institutes of Health. In 1971, he joined the departments of oncology and medicine at the Johns Hopkins University School of Medicine, an affiliation that still continues.

Victoria Richon heads the Drug Discovery and Preclinical Development Global Oncology Division at Sanofi. Richon joined Sanofi in November 2012 from Epizyme, where she served as vice president of biological sciences beginning in 2008. At Epizyme she was responsible for the strategy and execution of drug discovery and development efforts that ranged from target identification through candidate selection and clinical development, including biomarker strategy and execution. Richon received her BA in chemistry from the University of Vermont and her PhD in biochemistry from the University of Nebraska. She completed her postdoctoral research at Memorial Sloan-Kettering Cancer Center.

Paolo Sassone-Corsi is Donald Bren Professor of Biological Chemistry and Director of the Center for Epigenetics and Metabolism at the University of California, Irvine, School of Medicine. Sassone-Corsi is a molecular and cell biologist who has pioneered the links between cell-signaling pathways and the control of gene expression. His research on transcriptional regulation has elucidated a remarkable variety of molecular mechanisms relevant to the fields of endocrinology, neuroscience, metabolism, and cancer. He received his PhD from the University of Naples and completed his postdoctoral research at CNRS, in Strasbourg, France.

The Impact of Personalized Medicine


Original Broadcast Date: Tuesday May 7, 2013

After the human genome was sequenced, Personalized Medicine became an end goal, driving both academia and the pharma/biotech industry to find and target cellular pathways and drug therapies that are unique to an individual patient. The final webinar in the series will help us better understand The Impact of Personalized Medicine, what we can expect to gain and where we stand to lose.

Panelists:

Jay M. ("Marty") Tenenbaum is founder and chairman of Cancer Commons. Tenenbaum’s background brings a unique perspective of a world-renowned Internet commerce pioneer and visionary. He was founder and CEO of Enterprise Integration Technologies, the first company to conduct a commercial Internet transaction. Tenenbaum joined Commerce One in January 1999, when it acquired Veo Systems. As chief scientist, he was instrumental in shaping the company's business and technology strategies for the Global Trading Web. Tenenbaum holds BS and MS degrees in electrical engineering from MIT, and a PhD from Stanford University.

Amy P. Abernethy, a palliative care physician and hematologist/oncologist, directs both the Center for Learning Health Care (CLHC) in the Duke Clinical Research Institute, and the Duke Cancer Care Research Program (DCCRP) in the Duke Cancer Institute. An internationally recognized expert in health-services research, cancer informatics, and delivery of patient-centered cancer care, she directs a prolific research program (CLHC/DCCRP) which conducts patient-centered clinical trials, analyses, and policy studies. Abernethy received her MD from Duke University School of Medicine.

Geoffrey S. Ginsburg is the Director of Genomic Medicine at the Duke Institute for Genome Sciences & Policy. He is also the Executive Director of the Center for Personalized Medicine at Duke Medicine and a professor of medicine and pathology at Duke University Medical Center. His work spans oncology, infectious diseases, cardiovascular disease, and metabolic disorders. His research is addressing the challenges of translating genomic information into medical practice using new and innovative paradigms, and the integration of personalized medicine into health care. Ginsburg received his MD and PhD in biophysics from Boston University and completed an internal medicine residency at Beth Israel Hospital in Boston, Massachusetts.

Abhijit “Ron” Mazumder obtained his BA from Johns Hopkins University, his PhD from the University of Maryland, and his MBA from Lehigh University. He worked for Gen-Probe, Axys Pharmaceuticals, and Motorola, developing genomics technologies. Mazumder joined Johnson & Johnson in 2003, where he led feasibility research for molecular diagnostics programs and managed technology and biomarker partnerships. In 2008, he joined Merck as a senior director and Biomarker Leader. Mazumder rejoined Johnson & Johnson in 2010 and is accountable for all aspects of the development of companion diagnostics needed to support the therapeutic pipeline, including selection of platforms and partners, oversight of diagnostic development, support of regulatory submissions, and design of clinical trials for validation of predictive biomarkers.

Link: http://www.the-scientist.com//?articles.view/articleNo/33846/title/Decoding-DNA--New-Twists-and-Turns/


Delivering Data Warehousing as a Cloud Service

by System Administrator - Wednesday, 8 July 2015, 9:27 PM
 

Delivering Data Warehousing as a Cloud Service

The current data revolution has made it imperative to provide more people with access to data-driven insights faster than ever before. That's not news. But in spite of that, current technology often seems to exist to make it as hard as possible to get access to data.

That's certainly the case for conventional data warehouse solutions, which are so complex and inflexible that they require their own teams of specialists to plan, deploy, manage, and tune them. By the time the specialists have finished, it's nearly impossible for the actual users to figure out how to get access to the data they need.

Newer 'big data' solutions do not get rid of those problems. They require new skills and often new tools as well, making them dependent on hard-to-find operations and data science experts.

Please read the attached whitepaper.


Designing and Building an Open ITOA Architecture

by System Administrator - Tuesday, 16 June 2015, 10:51 PM
 

Designing and Building an Open ITOA Architecture

This white paper provides a roadmap for designing and building an open IT Operations Analytics (ITOA) architecture. You will learn about a new IT data taxonomy defined by the four data sources of IT visibility: wire, machine, agent, and synthetic data sets. After weighing the role of each IT data source for your organization, you can learn how to combine them in an open ITOA architecture that avoids vendor lock-in, scales out cost-effectively, and unlocks new and unanticipated IT and business insights.

Please read the attached whitepaper.

Designing For DevOps

by System Administrator - Monday, 7 August 2017, 2:16 PM
 

Designing For DevOps

Sponsored by Stackify

DevOps first started as a movement around 2008 and it has grown rapidly over the last several years. In our Designing for DevOps guide, we share...

Please read the attached whitepaper...


Desktop as a Service (DaaS)

by System Administrator - Wednesday, 11 November 2015, 6:29 PM
 

Desktop as a Service (DaaS)

Posted by Margaret Rouse

Desktop as a Service (DaaS) is a cloud service in which the back-end of a virtual desktop infrastructure (VDI) is hosted by a cloud service provider.

DaaS has a multi-tenancy architecture and the service is purchased on a subscription basis. In the DaaS delivery model, the service provider manages the back-end responsibilities of data storage, backup, security and upgrades. Typically, the customer's personal data is copied to and from the virtual desktop during logon/logoff, and access to the desktop is device, location and network independent. While the provider handles all the back-end infrastructure costs and maintenance, customers usually manage their own desktop images, applications and security, unless those desktop management services are part of the subscription.

Desktop as a Service is a good alternative for small or mid-size businesses (SMBs) that want to give their end users the advantages a virtual desktop infrastructure offers, but find deploying a VDI in-house cost-prohibitive in terms of budget and staffing.


Link: http://searchvirtualdesktop.techtarget.com


Desktop Virtualization Security

by System Administrator - Tuesday, 16 September 2014, 12:43 AM
 

Top 10 reasons to strengthen information security with desktop virtualization

Regain control and reduce risk without sacrificing business productivity and growth

  • New ways of working call for new ways of managing risk. Mobility, flexwork, bring-your-own-device (BYOD) and increased collaboration across organizations have changed the risk profile and undermined existing IT architectures. The challenge is to allow people the flexibility they need for optimal business productivity while ensuring the security and compliance required by the enterprise.
  • Both IT and the business are demanding more of their networks. But networks designed to simply forward packets don't have the capability or the intelligence to understand these high-level, application-related demands. Networks need to change, as does the way IT thinks about them and manages them. In this white paper, see how enterprises can accommodate today's needs while laying the groundwork for supporting tomorrow's more-advanced software-defined networks.

Please read the attached whitepapers.


Development testing for C# Applications

by System Administrator - Wednesday, 26 August 2015, 3:16 PM
 

Development testing for C# Applications

Static analysis shouldn't be about finding loads of coding style or standards issues. It should be focused on finding the most critical defects. Although traditional byte-code analysis solutions such as FxCop are useful, they can miss critical, crash-causing defects and produce a large set of coding style issues, which can slow down the development team. Learn how the Coverity Development Testing Platform can help you:

  • Find and fix resource leaks, concurrency problems and null references within Visual Studio
  • Eliminate defects such as inconsistent indentation and copy-paste errors that can only be found by understanding the programmer's intent through source code analysis
  • Understand the impact of change to better prioritize and focus your automated testing efforts

Please read the attached whitepaper.

 


DevOps

by System Administrator - Wednesday, 15 February 2017, 7:07 PM
 

DevOps

How to utilize it in your IT workspace

by TechTarget

Please read the attached whitepaper.

 


DevOps (PMI)

by System Administrator - Monday, 29 December 2014, 5:45 PM
 

Defining DevOps: better to explain what it is not

by Jennifer Lent

Much has been written about what DevOps is: a way for developers and operations managers to collaborate; a set of best practices for managing applications in the cloud; an Agile idea built on continuous integration that enables frequent code releases.

According to Wikipedia: "DevOps is a compound of development and operations, referring to a software development methodology centered on communication, collaboration and integration between software developers and IT operations professionals. DevOps is a response to the interdependence of software development and IT operations. Its goal is to help an organization produce software products and services quickly. Companies with very frequent releases may require DevOps expertise. Flickr developed a DevOps system to meet a business requirement of ten deployments per day. Such systems are known as continuous deployment or continuous delivery, and are often associated with lean startup methodologies. Working groups, professional associations and blogs have used the term since 2009."

The definition of DevOps covers all of these things and more. But given that the term has acquired buzzword status, it may be more interesting to ask not what DevOps is, but what it is not. In this article, SearchSoftwareQuality asked several software professionals exactly that. Here is what they said.

1. DevOps is not a job title.

Postings on job sites suggest otherwise, but DevOps is not a job title, said Agile consultant Scott Ambler. "DevOps manager? I don't know what that is." DevOps should not be a job role, he said. "DevOps is about developers understanding the reality of operations, and about the operations team understanding what development involves." DevOps, the concept, is an important aspect of software development and delivery, Ambler said. "But the DevOps job title is a symptom of organizations hiring [DevOps managers] without understanding what DevOps really is. They don't get it yet."

Ambler's position on DevOps runs counter to conventional wisdom. DevOps appeared on SearchCIO.com's list of the 10 job titles you are most likely to encounter.

2. DevOps is not a category of software tool.

DevOps is not about tools but about culture, said Patrick Debois in a presentation titled "DevOps: nonsense, tools and other smart things" at the GOTO Conference. Debois, who coined the term "DevOps" and founded the conference known as DevOpsDays, said tools play an important role in supporting the DevOps approach to delivering and managing software, but DevOps is not about the tools themselves.

Ambler said the notion that there are "tools that do DevOps" reflects the current reality: DevOps, the buzzword, is still climbing toward the peak of the hype curve. "Every tool is a DevOps tool," he added, and while software vendors keep pushing their visions of DevOps, "much of the discussion is naive."

3. DevOps is not about solving an IT problem.

Despite its many meanings, DevOps is widely understood as a way of solving an IT problem: letting development and operations collaborate on software delivery. But that is not its ultimate goal, said Damon Edwards, managing partner of the IT consultancy DTO Solutions in Redwood City, California. "The point of DevOps is to enable your business to react to market forces as quickly, efficiently and reliably as possible. Without the business, there is no other reason for us to be talking about DevOps problems, much less spending time solving them," Edwards wrote on his blog.

Kevin Parker, a SearchSoftwareQuality expert, said the new challenge facing DevOps managers is all the attention the topic now gets from the business. "What used to be an arcane task of elaborate coordination and project management is now part diplomacy, part protection, and a good deal of innovation."

4. DevOps is not synonymous with continuous integration.

DevOps originated in Agile as a way to support the Agile practice of more frequent code releases. But DevOps is more than that, Ambler said. "The fact that you practice continuous integration does not mean you are doing DevOps." He sees operations managers as key stakeholders that Agile teams need to work with in order to release software.

 

5. DevOps is not... going away.

Despite the misconceptions surrounding it, DevOps is here to stay and remains important to successful software delivery. "Whether we call it DevOps or not, change and release management is undergoing an [exponential] expansion in importance," Parker said. There is real substance behind DevOps, added Ovum analyst Michael Azoff. "Of course there is hype around DevOps. We are still in the first phase. It is where Agile was a couple of years ago."

Please read the attached whitepaper: "Top tips for DevOps testing: Achieve continuous delivery"

More news and tutorials:

Link: http://searchdatacenter.techtarget.com

 


DevSecOps

by System Administrator - Friday, 27 February 2015, 11:23 AM
 

 

Gartner: DevOps is good; DevSecOps is better

by Nicole Laskowski

Make way for DevSecOps. According to Gartner analyst David Cearley, CIOs need to add security professionals to their DevOps teams.

DevOps, or the blending of an enterprise's applications development and systems operations teams, has become a trendy IT topic. The new operating model is often employed in conjunction with Agile software development methods and leverages the scalability of cloud computing -- all in the interest of making companies more nimble and competitive. But, according to one expert, the approach as it is typically practiced today doesn't go far enough.

David Cearley, an analyst at Gartner Inc., believes today's CIOs need to revise DevOps to include security. He calls it DevSecOps. "It's development, it's security, it's operations operating as a dynamic force to create solutions," he said.

Investing in firewalls and perimeter defense isn't bad per se, Cearley said. But with high profile breaches at Target, Home Depot and Sony that left these organizations (among others) with black eyes, it's clear that simply guarding the borders is not enough. By adding security to a DevOps program, CIOs and their teams will be forced to think about security in a more granular way -- at the start of the software development process, rather than as an afterthought.

 


Adding security to DevOps, in classic IT language, turns out to be a people and process problem more than a technology problem. For many organizations, these teams work in separate closets "that don't even have a common wall between them," Cearley said. Still, getting everyone in the same room will be easier than getting everyone on the same page. Luckily, most enterprises have a person uniquely suited to break down cultural barriers and demand that security become a DevOps best practice, Cearley argued: the CIO.

"The CIO is the only one [who] is in a position to do something about this because the security team reports to him, the operations team reports to him, the applications team reports to him and the architecture team reports to him," he said. "The CIO is the leader; the CIO has to direct his team to say, 'If you don't work together, go get another job somewhere else.'"

DevSecOps manifesto

1. CIO-driven

2. Collaboration of unlike teams

3. Focus on risk, not security

Source: David Cearley, Gartner Inc.

Confronting the teams' "biases and preconceived notions" of how this work should be done will be one of the CIO's biggest challenges, Cearley said. "The CIO is asking them to rethink that." One suggestion? Rather than accepting separate reports on application development, operations and security, CIOs should reinforce the importance of collaboration by demanding a "unified approach for how we're going to be able to develop, secure, operate and manage the services we're delivering to our users," he said.

Cearley also recommended that CIOs direct the conversation away from security toward risk, which can help IT better integrate the business perspective into the process. "If you start with security, the focus becomes what tools are needed to get the ultimate security. I'm sorry, but that's the wrong focus," Cearley said. "You have to start with risk." By keeping the focus on risk, CIOs will help the business understand how IT can contribute to breaking into a new market or experimenting with a new type of analytics -- as well as how IT can minimize the potential dangers of doing so.


Link: http://searchcio.techtarget.com

 


Dissemination (KW)

by System Administrator - Thursday, 2 May 2013, 5:13 PM
 

Before going into the details of this topic, it is important that no dissemination technique lead, at the outset, with specific products, acronyms or other technical aspects that do not contribute to a practical view of the matter. When presenting to potential Users, speak about the general benefits. Then, for each sector, adapt the type and depth of those benefits to the language and "idiosyncrasy" of each audience.

Each "KW Compatible" product or service has its own form of delivery and distribution. These depend on:

1. The sector being targeted.

2. The level of the HKW/DKW components involved.

3. The type of product or service, which in its simplest form can be of three kinds:

a. Reusable Content for all technology platforms (social networks, cloud computing, web 3.0, virtualization, regulations, direct targeting, logistics, distribution, inventory, quality indicators, bibliography, catalogs, business rules, authorizations, medical records, research, etc.).

b. KIP transactions (through a KIP server). "How many KIPs have you received from your distributors this month?"

c. Hardware (mobile telephony, industrial automation, handhelds, the digital home).

In general, manufacturers and distributors act on the demand of their customers. They know that taking the initiative means walking on "the knife's edge". That is why it is so important to first gain acceptance in the scientific and university community, where real functional components can be built and the big question can be answered successfully: "does this really work?"

On these topics the European Community is leading important initiatives, such as eEurope. In each of its member countries, the State offers significant subsidies for this type of initiative.

The "leadership" argument is treated with great caution by potential investors in this type of technology. The KW option acts as an incubator for sector-specific ventures.

Another important argument already discussed is "do not compete with software developers". It is essential to present KW integration into these companies' products as true "life insurance". With KW compatibility their software will have a flatter obsolescence curve, and the threat of open source will become relative. Remember that the GNU/Linux project promotes the openness and reusability of programming components, not of content created by End Users who do not know how to program or are not interested in programming.

Each of the project's major components has its own dissemination strategy.


Digital Marketing Plan

by System Administrator - Thursday, 17 September 2015, 7:00 PM
 

 

by Juan Carlos Muñoz | Marketing Manager, Interactive & CRM at Volvo Car España | Professor at ICEMD


Distributed Computing

by System Administrator - Monday, 10 August 2015, 10:13 PM
 

Distributed Computing

Posted by: Margaret Rouse

Distributed computing is a model in which components of a software system are shared among multiple computers to improve efficiency and performance. 

According to the narrowest of definitions, distributed computing is limited to programs with components shared among computers within a limited geographic area. Broader definitions include shared tasks as well as program components. In the broadest sense of the term, distributed computing just means that something is shared among multiple systems which may also be in different locations. 

In the enterprise, distributed computing has often meant putting various steps in business processes at the most efficient places in a network of computers. For example, in the typical distribution using the 3-tier model, user interface processing is performed in the PC at the user's location, business processing is done in a remote computer, and database access and processing is conducted in another computer that provides centralized access for many business processes. Typically, this kind of distributed computing uses the client/server communications model.
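As a toy sketch of the client/server idea described above (my own illustration, not the DCE standard or any specific product), the following Python snippet farms independent work items out from a coordinating process to a pool of workers; in a real distributed deployment the workers would run on separate machines rather than in local processes.

```python
from multiprocessing import Pool

def business_step(order_id: int) -> str:
    """Stand-in for business processing that would run on a remote tier."""
    return f"order {order_id} processed"

if __name__ == "__main__":
    # The worker pool stands in for remote servers; the main process is the client tier.
    with Pool(processes=4) as pool:
        results = pool.map(business_step, range(10))
    for line in results:
        print(line)
```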

The Distributed Computing Environment (DCE) is a widely-used industry standard that supports this kind of distributed computing. On the Internet, third-party service providers now offer some generalized services that fit into this model.

Grid computing is a computing model involving a distributed architecture of large numbers of computers connected to solve a complex problem. In the grid computing model, servers or personal computers run independent tasks and are loosely linked by the Internet or low-speed networks. Individual participants may allow some of their computer's processing time to be put at the service of a large problem. The largest grid computing project is SETI@home, in which individual computer owners volunteer some of their multitasking processing cycles (while concurrently still using their computer) to the Search for Extraterrestrial Intelligence (SETI) project. This computer-intensive problem uses thousands of PCs to download and search radio telescope data.

There is a great deal of disagreement over the difference between distributed computing and grid computing. According to some, grid computing is just one type of distributed computing. The SETI project, for example, characterizes the model it’s based on as distributed computing. Similarly, cloud computing, which simply involves hosted services made available to users from a remote location, may be considered a type of distributed computing, depending on who you ask.

One of the first uses of grid computing was the breaking of a cryptographic code by a group that is now known as distributed.net. That group also describes its model as distributed computing.


Link: http://whatis.techtarget.com


DKW© (KW)

by System Administrator - Thursday, 2 May 2013, 5:14 PM
 

DKW© Component

2003 Definition

1. Introduction - Traditional Content

In general, we speak of "content" when we refer to the development of a specific topic, presented in some medium or format for dissemination.

A well-organized library (using the MicroIsis software, for example) has all of its volumes accessible through several indexes, such as ISBN, Title, Authors, Publisher, Country, Language, Year of Publication, Keywords (descriptors grouped in standards), etc.

Moreover, even if a large part of the volumes were digitized and a contextual or keyword search engine were available, we would still speak of "traditional content". This is because it is the End User who must generate the relationships between these pieces of content, as well as the context of their applicability. For a physician, for example, this can be the difference in the quality of a diagnosis, treatment or prescription. For a lawyer, it can be the difference between winning or losing a case. Every activity gains precision with these tools.

Let us return for a moment to the example of the library material. With the proper authorizations, printed material can be photocopied, while digital material can be copied and inserted into a document, or loaded into a multimedia database. These last two cases are the current way of reusing content in a relatively automated fashion. The "intelligence" of the final content produced depends on the time and ability of the End User, since it is the User who provides all the connections.

The web is flooded with content in a disordered and anarchic way. "Thematic portals" offer "human knowledge" in different formats (XML, ASCII text, ODF, PHP, HTML, RTF, PDF, DOC, PPS/PPT, XLS, etc.), sometimes classified by keywords that do not even follow standard bibliographic rules or international codings.

[Figure: Descriptor catalog. Example of the ICD-10 (CIE10) international coding standard for medicine (source: TNG Consultores)]

For example, if a gynecologist wants to find bibliographic material on "morning-after contraceptives", they can use powerful medical libraries such as MedLine, go to the portal of a particular pharmaceutical laboratory, or simply use a search engine such as those provided by Google, Yahoo or Altavista. What are the possible problems when retrieving this information?

1. They receive hundreds or thousands of links to information of good, bad or mediocre quality. This is quite frustrating for the End User, who confirms again and again that the web is powerful and enormous... but often impractical. One of the goals of this project (DKW components) is that "the web comes to the User and not the other way around, with a single click or voice command" (Phase 2).

2. The format of the information obtained may include text, images and even audio and video, but the User must retrieve and classify it manually by object type.

3. If the goal is simply to write a thesis, the User will select the content they consider interesting.

But what about the rest of the collected material?

4. Suppose the User has saved the pages of interest to disk. How will they be retrieved easily? A mid-level User (like the vast majority of self-taught Users) will have saved the pages into thematic folders. An advanced User, however, will use a multimedia document database that, through links, generates automatic catalogs for quick retrieval by keyword (descriptors), author, title, text fragment, etc.

Up to this point we have talked about "Traditional Content" in a sense compatible with bibliography.

2. Definition

What is a DKW?

[Figure: how a DKW component is generated]

In the figure above we see the concept of how a DKW (Data Knowledge Component) can be generated: we need a "container" and data.

Definition

[Figure: DKW definition]

The XML language lets us transmit "formats" and "data" with semantics for their interpretation and validation (CDA in medicine is one of many DTD examples), and native XML databases already exist. The technology is ready for DKWs.

A "DKW cluster" is a storage system for thematic DKW components, controlled by a KIP server. Besides concrete data, it can transmit the HKW/SKW knowledge involved, or a reference/link to where that knowledge resides.

[Figure: KW network]

In a cooking recipe for the digital home, the recipe's DKW component must specify the ingredients, the preparation steps and the parameters for the "drivers" of the household appliances that will be used. All of this data is executed through the home KIP server.
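As an illustration only, a recipe DKW packet along these lines could be assembled as an XML document; the element names (dkw, knowledge, data, driver, param) and the kip:// reference are hypothetical, since the source does not define a concrete DKW schema. A sketch using Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical recipe DKW packet: concrete data plus appliance parameters.
dkw = ET.Element("dkw", type="recipe")
knowledge = ET.SubElement(dkw, "knowledge")
ET.SubElement(knowledge, "ref").text = "kip://community.example/hkw/cooking"  # link instead of a full copy

data = ET.SubElement(dkw, "data")
ingredients = ET.SubElement(data, "ingredients")
for name, qty in [("rice", "200 g"), ("water", "400 ml")]:
    ET.SubElement(ingredients, "item", quantity=qty).text = name

# Parameters for the appliance "drivers" executed by the home KIP server.
driver = ET.SubElement(data, "driver", device="rice-cooker")
ET.SubElement(driver, "param", name="temperature").text = "100"
ET.SubElement(driver, "param", name="minutes").text = "18"

print(ET.tostring(dkw, encoding="unicode"))
```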

Any type of community (open or closed) will be able to create and store its DKW components in clusters controlled by KIP servers.

3. Construction

The steps for building DKW components can be as follows:

1. Choose the "knowledge" to use. This means loading into the End User's framework the HKW/SKW component that corresponds to the data to be entered.

2. "Alignment". This means using the sector's coding standards and/or the data in the appropriate format. Some of this data will be OIDs (universal object identifiers).

3. Specify whether the DKW will embed the HKW/SKW "knowledge" or simply include a reference/link to it.

4. Save the DKW. This DKW can remain in the local database or be loaded into the community KIP's cluster.

5. Optionally, run a "synapse" to retrieve and analyze knowledge related to the newly created DKW. This may generate one or more SKW components, which may or may not be meaningful.

Today there are many companies in the market that are highly specialized in generating thematic, sector-specific and general multimedia catalogs for the media. These companies have what is called a "Content Department", which generally performs two basic functions:

A. It creates the content (future DKW components) of the products and services offered, in a format the market accepts.

B. It helps "align" this content with the standards suggested by the value chain to which it belongs. This is essential to achieve "connection" and the next step: interoperability.

Communities, social networks, companies or independent professionals may need these services to connect with their value chain. Today these content construction and alignment services bill far more than the network traffic they use.

DKW technology can change this scenario completely.

4. Dissemination Strategy

DKW and the Content Departments: A DKW component packages specific data and, optionally, the related HKW and/or SKW logic (for example, an XML text with data and semantics for its interpretation and validation). A DKW is therefore data and information about products, services, people, processes, logistics, bibliography, news, social networks and/or community.

  • IMPORTANT: DKW technology does not contradict or try to change sector-specific or international standards, or standards developed by the User (this means that the format of the data inside the body of a DKW packet can follow any specification).
  • The role of the Content Department is as important as, or more important than, the technology involved. It is necessary to highlight how easily the User's existing data and information about products and services can be converted, data that has required an enormous cost over time. The User's existing IT investment must be respected at all times.
  • HKW/SKW/DKW/KIP technology increases the productivity of what the User already has in their company or community, representing a major change in the performance of the IT infrastructure they use. It brings them up to date with what they always perceived as the noise of the "information age".
  • The various sector Content Departments will run concrete campaigns for Users, helping them create HKW and DKW components. This policy should be applied to "hinge" Users (social networks, pharmaceutical laboratories, large retail chains, telecoms and manufacturers with large sales networks). If a large buyer adopts KW technology, its entire value chain will follow.
  • An important strategy is to create, within the local "KW Foundation", sector committees to "democratize" the contribution of each actor (individual in a social network, manufacturer, importer, distributor, retailer, software developer, representative of the sector standard, bank delegates, consumer representatives and the State). This committee will "validate" the technology naturally and efficiently. In this way there is consistency between the "KW philosophy" and the actions taken to implement it (the resulting knowledge clusters will have double certification: "KW Compatible" and that of the committee of the sector in question).

The market's current content providers will be able to distribute their products in a more granular and efficient way. The market that opens up to them is larger and more ambitious than the one they have today. The proactivity of their DKW components with software applications creates a new business model/unit for them and, above all, they will be able to prevent piracy with far more reliable means than today's.


DNA Machines (DNA)

by System Administrator - Monday, 1 July 2013, 12:53 PM
 

DNA Machines Inch Forward

Researchers are using DNA to compute, power, and sense.

By Sabrina Richards | March 5, 2013

Advances in nanotechnology are paving the way for a variety of “intelligent” nano-devices, from those that seek out and kill cancer cells to microscopic robots that build designer drugs. In the push to create such nano-sized devices, researchers have come to rely on DNA. With just a few bases, DNA may not have the complexity of amino acid-based proteins, but some scientists find this minimalism appealing.

“The rules that govern DNA’s interactions are simple and easy to control,” explained Andrew Turberfield, a nanoscientist at the University of Oxford. “A pairs with T, and C pairs with G, and that’s basically it.” The limited options make DNA-based nanomachines more straightforward to design than protein-based alternatives, he noted, yet they could serve many of the same functions. Indeed, the last decade has seen the development of a dizzying array of DNA-based nanomachines, including DNA walkers, computers, and biosensors.

Furthermore, like protein-based machines, the new technologies rely on the same building blocks that cells use. As such, DNA machines “piggyback on natural cellular processes and work happily with the cell,” said Timothy Lu, a synthetic biologist at the Massachusetts Institute of Technology (MIT), allowing nanoscientists to “think about addressing issues related to human disease.”

Walk the line

One of the major advancements of DNA nanotechnology is the development of DNA nanomotors—miniscule devices that can move on their own. Such autonomously moving devices could potentially be programmed to carry drugs directly to target tissues, or serve as tiny factories by building products like designer drugs or even other nanomachines.

DNA-based nanomachines rely on single-stranded DNA’s natural tendency to bind strands with complementary sequences, setting up tracks of DNA to serve as toeholds for the single-stranded feet of DNA walkers. In 2009, Nadrian Seeman’s team at New York University built a tiny DNA walker comprised of two legs that moved like an inch worm along a 49-nanometer-long DNA path. 

But to direct drugs or assemble useful products, researchers need DNA nanomachines to do more than move blindly forward. In 2010, Seeman created a DNA walker that served as a “nanoscale assembly line” to construct different products. In this system, a six-armed DNA walker shaped like a starfish somersaulted along a DNA track, passing three DNA way stations that each provided a different type of gold particle. The researchers could change the cargo stations’ conformations to bring the gold particles within the robot’s reach, allowing them to get picked up, or to move them farther away so that the robot would simply pass them by.

“It’s analogous to the chassis of a car going down an assembly line,” explained Seeman. The walker “could pick up nothing, any one of three different cargos, two of three different, or all three cargos,” he said—a total of 8 different products.
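The count of eight follows from three independent pick-up choices: each of the three gold-particle types is either collected or skipped, giving 2^3 = 8 possible combinations. A quick enumeration, purely for illustration (the cargo names are placeholders, not terms from the study):

```python
# Each of the three cargo stations is independently reachable or not,
# so the walker can assemble 2**3 = 8 distinct products.
from itertools import product

cargos = ("gold-A", "gold-B", "gold-C")   # hypothetical labels for the three particle types
products = [
    tuple(c for c, picked in zip(cargos, choice) if picked)
    for choice in product([False, True], repeat=3)
]
print(len(products))   # 8: from the empty product up to all three cargos
```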

And last year, Oxford’s Turberfield added another capability to the DNA walker tool box: navigating divergent paths. Turberfield and his colleagues created a DNA nanomotor that could be programmed to choose one of four destinations via a branching DNA track. The track itself could be programmed to guide the nanomotor, and in the most sophisticated version of the system, Turberfield’s nanomachine carried its own path-determining instructions.

Next up, Turberfield hopes to make the process “faster and simpler” so that the nanomotor can be harnessed to build a biomolecule. “The idea we’re pursuing is as it takes a step, it couples that step to a chemical reaction,” he explained. This would enable a DNA nanomotor to string together a polymer, perhaps as a method to “build” drugs for medical purposes, he added.

DNA-based biosensing

DNA’s flexibility and simplicity have also been harnessed to create an easily regenerated biosensor. Chemist Weihong Tan at the University of Florida realized that DNA could be used to create a sensor capable of easily switching from its “on” state back to its “off” state. As proof of principle, Tan and his team designed biosensor switches by attaching dye-conjugated silver beads to DNA strands and studding the strands onto a gold surface. In the “off” state, the switches are pushed upright by extra DNA strands that fold around them, holding the silver beads away from the gold surface. These extra “off”-holding strands are designed to bind to the target molecule—in this case ATP—such that adding the target to the system coaxes the supporting strands away from the DNA switches. This allows the switch to fold over, bringing the silver bead within a few nanometers of the gold surface and creating a “hotspot” for Raman spectroscopy—the switch’s “on” state.

Previous work on creating biosensors based on Raman spectroscopy, which measures the shift in energy from a laser beam after it’s scattered by individual molecules, created irreversible hotspots. But Tan can wash away the ATP and add more supporting strands to easily ready his sensor for another round of detection, making it a re-usable technology.

Though his sensor is in its early stages, Tan envisions designing biosensors for medical applications like cancer biomarker detection. By using detection strands that bind directly to a specific cancer biomarker, biosensors based on Tan’s strategy would be able to sensitively detect signs of cancer without need for prior labeling with radionuclides or fluorescent dyes, he noted.

Computing with DNA

Yet another potential use for DNA is in data storage and computing, and researchers have recently demonstrated the molecule’s ability to store and transmit information. Researchers at Harvard University recently packed an impressive density of information into DNA—more than 5 petabits (a petabit is 1,000 terabits) of data per cubic millimeter of DNA—and other scientists are hoping to take advantage of DNA’s ability to encode instructions for turning genes on and off to create entire DNA-based computers.

Although it’s unlikely that DNA-based computing will ever be as lightning fast as the silicon-based chips in our laptops and smartphones, DNA “allows us to bring computation to other realms where silicon-based computing will not perform,” said MIT’s Lu—such as living cells.

In his latest project, published last month (February 10) in Nature Biotechnology, Lu and his colleagues used Escherichia coli cells to design cell-based logic circuits that “remember” what functions they’ve performed by permanently altering DNA sequences. The system relies on DNA recombinases that can flip the direction of transcriptional promoters or terminators placed in front of a green fluorescent protein (GFP) gene. Flipping a backward-facing promoter can turn on GFP expression, for example, as can inverting a forward-facing terminator. In contrast, inverting a forward-facing promoter or a backward-facing terminator can block GFP expression. By using target sequences unique to two different DNA recombinases, Lu could control which promoters or terminators were flipped. By switching the number and direction of promoters and terminators, as well as changing which recombinase target sequences flanked each genetic element, Lu and his team induced the bacterial cells to perform basic logic functions, such as AND and OR.
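To make the gate logic concrete, here is a minimal sketch that models each genetic element's orientation as a boolean and treats GFP as expressed only when a promoter faces the gene and no forward-facing terminator blocks it. The AND and OR layouts shown are illustrative assumptions for clarity, not the exact constructs Lu's team built:

```python
# Illustrative model of recombinase-based logic; the element layouts are
# assumptions for clarity, not the published constructs.

def gfp_expressed(promoter_forward: bool, terminator_forward: bool) -> bool:
    """GFP is transcribed only if a promoter points toward the gene
    and no forward-facing terminator sits between them."""
    return promoter_forward and not terminator_forward

def and_gate(recombinase_a: bool, recombinase_b: bool) -> bool:
    # Assumed AND layout: the promoter starts backward (input A must flip it
    # forward) and the terminator starts forward (input B must flip it away).
    promoter_forward = recombinase_a
    terminator_forward = not recombinase_b
    return gfp_expressed(promoter_forward, terminator_forward)

def or_gate(recombinase_a: bool, recombinase_b: bool) -> bool:
    # Assumed OR layout: either recombinase alone can flip a promoter forward.
    return gfp_expressed(recombinase_a or recombinase_b, terminator_forward=False)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"A={a}, B={b} -> AND: {and_gate(a, b)}, OR: {or_gate(a, b)}")
```

Because the recombinases flip the DNA permanently, the "output" of each gate persists even after the inputs disappear, which is what gives the circuit its memory.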

Importantly, because the recombinases permanently alter the bacteria’s DNA sequence, the cells “remember” the logic functions they’ve completed—even after the inputs are long gone and 90 cell divisions have passed. Lu already envisions medical applications relying on such a system. For example, he speculated that bacterial cells could be programmed to signal the existence of tiny intestinal bleeds that may indicate intestinal cancer by expressing a dye in response to bloody stool. Such a diagnostic tool could be designed in the form of a probiotic pill, he said, replacing more invasive procedures.

Applications based on these studies are still years away from the bedside or the commercial market, but researchers are optimistic. “[It’s] increasingly possible to build more sophisticated things on a nanometer scale,” said Turberfield. “We’re at very early stages, but we’re feeling our way.”

Picture of System Administrator

DNA Storage (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:06 PM
 

DNA storage is the process of encoding and decoding binary data onto and from synthesized strands of DNA (deoxyribonucleic acid). In nature, DNA molecules contain genetic blueprints for living cells and organisms.

To store a binary digital file as DNA, the individual bits (binary digits) are converted from 1 and 0 to the letters A, C, G, and T. These letters represent the four main compounds in DNA: adenine, cytosine, guanine, and thymine. The physical storage medium is a synthesized DNA molecule containing these four compounds in a sequence corresponding to the order of the bits in the digital file. To recover the data, the sequence of A, C, G, and T in the DNA molecule is decoded back into the original sequence of bits 1 and 0.
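As an illustration of the basic idea only, a direct two-bits-per-base mapping could look like the sketch below. The particular pairing of bit pairs to bases is an arbitrary assumption; real schemes add indexing, error correction, and rules that avoid long runs of the same base.

```python
# Minimal sketch of a naive 2-bits-per-base mapping; real DNA storage
# schemes add redundancy and homopolymer avoidance on top of this.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    assert len(bits) % 2 == 0, "pad the bitstream to an even length first"
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(bases: str) -> str:
    return "".join(BASE_TO_BITS[b] for b in bases)

data = "0110100001101001"        # the ASCII bits for "hi"
strand = encode(data)            # -> "CGGACGGC"
assert decode(strand) == data
print(strand)
```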

Researchers at the European Molecular Biology Laboratory (EMBL) have encoded audio, image, and text files into a synthesized DNA molecule about the size of a dust grain, and then successfully read the information from the DNA to recover the files, claiming 99.99 percent accuracy.

An obvious advantage of DNA storage, should it ever become practical for everyday use, would be its ability to store massive quantities of data in media having small physical volume. Dr. Sriram Kosuri, a scientist at Harvard, believes that all the digital information currently existing in the world could reside in four grams of synthesized DNA.

A less obvious, but perhaps more significant, advantage of DNA storage is its longevity. Because DNA molecules can survive for thousands of years, a digital archive encoded in this form could be recovered by people for many generations to come. This longevity might resolve the troubling prospect of our digital age being lost to history because of the relative impermanence of optical, magnetic, and electronic media.

The principal disadvantages of DNA storage for practical use today are its slow encoding speed and high cost. The speed issue limits the technology's promise for archiving purposes in the near term, although eventually the speed may improve to the point where DNA storage can function effectively for general backup applications and perhaps even primary storage. As for the cost, Dr. Nick Goldman of the EMBL suggests that by the mid-2020s, expenses could come down to the point where the technology becomes commercially viable on a large scale.

This was last updated in April 2013

Contributor(s): Stan Gibilisco

Posted by: Margaret Rouse
 
 
Picture of System Administrator

DNA-based Data Storage (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:26 PM
 


DNA-based Data Storage Here to Stay

The second example of storing digital data in DNA affirms its potential as a long-term storage medium.

Researchers have done it again—encoding 5.2 million bits of digital data in strings of DNA and demonstrating the feasibility of using DNA as a long-term, data-dense storage medium for massive amounts of information. In the new study released today (January 23) in Nature, researchers encoded one color photograph, 26 seconds of Martin Luther King Jr.’s “I Have a Dream” speech, and all 154 of Shakespeare’s known sonnets into DNA.

Though it’s not the first example of storing digital data in DNA, “it’s important to celebrate the emergence of a field,” said George Church, the Harvard University synthetic biologist whose own group published a similar demonstration of DNA-based data storage last year in Science.  The new study, he said, “is moving things forward.”

Scientists have long recognized DNA’s potential as a long-term storage medium. “DNA is a very, very dense piece of information storage,” explained study author Ewan Birney of the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) in the UK. “It’s very light, it’s very small.” Under the correct storage conditions—dry, dark and cold—DNA easily withstands degradation, he said.

Advances in synthesizing defined strings of DNA, and sequencing them to extract information, have finally made DNA-based information storage a real possibility. Last summer, Church’s group published the first demonstration of DNA’s storage capability, encoding the digital version of Church’s book Regenesis, which included 11 JPEG images, into DNA, using Gs and Cs to represent 1s of the binary code, and As and Ts to represent 0s.

Now, Birney and his colleagues are looking to reduce the error associated with DNA storage. When a strand of DNA has a run of identical bases, it’s difficult for next-generation sequencing technology to correctly read the sequence. Church’s work, for example, produced 10 errors out of 5.2 million bits. To prevent these types of errors, Birney and his EMBL-EBI collaborator Nick Goldman first converted each byte—a string of eight 0s and 1s—into a string of 5 or 6 base-3 digits (“trits”) of 0s, 1s, and 2s. Then, when converting these trits into the A, G, T and C bases of DNA, the researchers avoided repeating bases by using a code that took the preceding base into account when determining which base would represent the next digit.
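The homopolymer-avoiding step can be sketched as a rotating code: each trit selects one of the three bases that differ from the previously written base, so no base ever appears twice in a row. The table below is a simplified illustration of that principle, not the exact code used by Goldman and Birney:

```python
# Sketch of a rotating ternary-to-DNA code that never repeats a base.
# The specific ordering is an assumption; only the principle matches the paper.
BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    strand = []
    for t in trits:                                  # each t is 0, 1 or 2
        choices = [b for b in BASES if b != prev]    # the three bases that differ from the last one
        prev = choices[t]
        strand.append(prev)
    return "".join(strand)

def dna_to_trits(strand, prev="A"):
    trits = []
    for base in strand:
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(base))
        prev = base
    return trits

trits = [0, 2, 2, 1, 0, 1]
dna = trits_to_dna(trits)                            # -> "CTGCAG"
assert dna_to_trits(dna) == trits
assert all(a != b for a, b in zip(dna, dna[1:]))     # no two identical bases in a row
print(dna)
```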

The synthesizing process also introduces error, placing a wrong base for every 500 correct ones. To reduce this type of error, the researchers synthesized overlapping stretches of 117 nucleotides (nt), each of which overlapped with preceding and following strands, such that all data points were encoded four times. This effectively eliminated reading error because the likelihood that all four strings have identical synthesis errors is negligible, explained Birney.
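A minimal sketch of the fourfold-coverage idea: if fragments of a fixed length are taken at a step of one quarter of that length, every interior position of the long strand lands in four different fragments. The lengths below are illustrative; the actual 117-nt segments also carried indexing information.

```python
# Sketch of 4x-coverage fragmentation: fixed-length windows taken at a
# quarter-length step so each interior base appears in four fragments.
def fragment(strand: str, length: int = 100, step: int = 25):
    return [strand[i:i + length] for i in range(0, len(strand) - length + 1, step)]

strand = "ACGT" * 100                     # a 400-base stand-in for the encoded data
fragments = fragment(strand)

# Count how many fragments cover position 200 (an interior base).
covered = sum(1 for i in range(len(fragments)) if i * 25 <= 200 < i * 25 + 100)
print(len(fragments), covered)            # 13 fragments; interior bases covered 4 times
```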

Agilent Technologies in California synthesized more than 1 million copies of each 117-nt stretch of DNA, stored them as dried powder, and shipped it at room temperature from the United States to Germany via the UK. There, researchers took an aliquot of the sample, sequenced it using next-generation sequencing technology, and reconstructed the files.

Birney and Goldman envision DNA replacing other long-term archival methods, such as magnetic tape drives. Unlike other data storage systems, which are vulnerable to technological obsolescence, “methods for writing and reading DNA are going to be around for a long, long time,” said molecular biologist Thomas Bentin of the University of Copenhagen. Bentin, who was not involved in the research, compared DNA information storage to the fleeting heyday of the floppy disk—introduced only a few decades ago and already close to unreadable.  And though synthesizing and decoding DNA is currently still expensive, it is cheap to store. So for data that are intended to be stored for hundreds or even thousands of years, Goldman and Birney reckon that DNA could actually be cheaper than tape.

Additionally, there’s great potential to scale up from the 739 kilobytes encoded in the current study. The researchers calculate that 1 gram of DNA could hold more than 2 million megabytes of information, though encoding information on this scale will involve reducing the synthesis error rate even further, said bioengineer Mihri Ozkan at the University of California, Riverside, who did not participate in the research.

Despite the challenges that lie ahead, however, the current advance is “definitely worth attention,” synthetic biologist Drew Endy at Stanford University, who was not involved in the research, wrote in an email to The Scientist. “It should develop into a new option for archival data storage, wherein DNA is not thought of as a biological molecule, but as a straightforward non-living data storage tape.”

N. Goldman et al., “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” Nature, doi: 10.1038/nature.11875, 2013.

Picture of System Administrator

Dual personality or dual persona (mobile device management)

by System Administrator - Friday, 5 September 2014, 9:33 PM
 

Mobile device management: Dual personality or dual persona

Posted by: Margaret Rouse

Dual persona, in a mobile management context, is the provisioning and maintenance of two separate and independent end-user environments on a single mobile device. Typically, the first environment is personal and the second is for work.

The goal of dual-persona mobile application management (MAM) is to give an organization a way to keep corporate applications and their associated data separate and protected on an employee's personal mobile device. To protect end-user privacy, the IT department can only see and manage the assets in the business environment. If an employee leaves the company, the business environment can be wiped, leaving the employee's personal data and applications intact. Dual-persona technology addresses one of the challenges of the bring-your-own-device trend: how to exercise IT control over employees' personal devices without violating their right to privacy.

One approach to dual persona is mobile virtualization, which uses a hypervisor to divide hardware resources between two operating systems on the same device. VMware Horizon Mobile for Android is an example of a mobile virtualization product.

Another approach requires the employee to download and install a mobile client that creates an isolated work-only environment, called a container. The container, which launches like any other mobile app, lets administrators set security policies for the work side of the device that do not interfere with the personal side. Such security policies include password protection, remote wipe and rules specifying at which times of day the work container can be accessed. AT&T Toggle is an example of this dual-persona approach.
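As a rough illustration of what such a container policy might contain (the field names below are hypothetical and do not correspond to any particular MDM or MAM product's API), it could be expressed as a simple structure like this:

```python
# Hypothetical work-container policy; field names are illustrative only.
from datetime import time

work_container_policy = {
    "passcode_required": True,
    "min_passcode_length": 6,
    "remote_wipe_enabled": True,                   # wipes only the work container, not the device
    "access_window": (time(7, 0), time(20, 0)),    # hours when the container may be opened
    "copy_paste_to_personal_side": False,
}

def container_accessible(now: time, policy: dict) -> bool:
    start, end = policy["access_window"]
    return start <= now <= end

print(container_accessible(time(12, 30), work_container_policy))  # True
print(container_accessible(time(23, 0), work_container_policy))   # False
```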

Besides protecting corporate data, one of the biggest concerns with dual-persona mobile device management (MDM) is usability. As the line between people's personal and professional lives blurs, it is unclear whether mobile device users will be willing to switch back and forth between two mobile environments.

RELATED GLOSSARY TERMS: IT transformation, contextual computing, CRM (customer relationship management), femtocell, bring your own applications (BYOA)

Picture of System Administrator

Documentation (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:04 AM
 

CONCEPTS RELATED TO DOCUMENTATION

Information: Data that has meaning.

Document: Information and its supporting medium.

Specification: A document that states requirements.

Quality manual: A document that specifies an organization's quality management system.

Quality plan: A document that specifies which procedures and associated resources must be applied, who must apply them, and when they must be applied to a specific project, product or contract.

Record: A document that presents results obtained or provides evidence of activities performed.

Picture of System Administrator

DOS

by System Administrator - Sunday, 31 March 2013, 7:10 PM
 

DOS is a family of operating systems for the PC.

The name stands for Disk Operating System. It was originally created for computers in the IBM PC family, which used the Intel 8086 and 8088 processors (16-bit and 8-bit, respectively), and it was the first popular operating system for that platform. It provided a text-mode (alphanumeric) command-line interface with its own command interpreter: COMMAND.COM.

Probably the best-known variant is the MS-DOS® family from Microsoft®. It shipped with a large share of IBM PC-compatible computers, especially those built on Intel processors, as a standalone or native operating system. MS-DOS reached version 6.22 (well into the 1990s), frequently bundled with a version of the 16-bit MS-Windows graphical shell (3.1x).

In the native, NT-based versions of Windows® (Windows NT, 2000, 2003, XP, Vista, 7, 8), MS-DOS disappears as an operating system in its own right (the base environment from which the machine booted and ran its basic processes before loading the Windows graphical shell or operating environment). In those versions, all that remains of it is a simple command interpreter, the Command Prompt, run as an application via cmd.exe from within the graphical environment (now elevated to the status of operating system).

This is not the case in the non-native versions of Windows, which are based on MS-DOS and load on top of it. From Windows 1.0x through 3.11 (16-bit), MS-Windows was conceived as a simple graphical shell, complementary to the command interpreter from which it was launched. Starting with the redesigned, more powerful 32-bit versions based on Windows 95 and 98, MS-DOS began to be deliberately hidden by the Windows graphical environment during boot, which by default launched automatically. This captured the average user's attention and cast the old operating system in a more dependent, secondary role, to the point where many users forgot about it or never knew it was there. It was gradually abandoned by software and hardware developers, starting with Microsoft itself. Even so, in those 32-bit versions Windows did not run autonomously as an operating system: several of the system's primary functions, such as booting, still rested on the modest DOS framework of system modules and files such as IO.SYS, DRVSPACE.BIN, EMM386.EXE and HIMEM.SYS.

Several versions of DOS exist. The best known is Microsoft's MS-DOS (hence the MS initials). Others include IBM's PC-DOS and Digital Research's DR-DOS, which later passed to Novell (Novell DOS 7.0), then to Caldera and finally to DeviceLogics. More recently there is FreeDOS, which is free-licensed and open source; in its Linux/UNIX version it can also act as a DOS emulator on those systems.

Picture of System Administrator

DRaaS

by System Administrator - Monday, 6 July 2015, 8:38 PM
 

7 Critical Questions to Demystify DRaaS

This whitepaper is not a sermon on Disaster Recovery and why you need it. You don’t need a lesson in the perils of disasters or a theoretical “business case” that proves unpredictable events can damage your data and cost you thousands of dollars. In fact, if you were not already aware of the need for Disaster Recovery, you probably would not be reading this document.
 
Please read the attached whitepaper.
Picture of System Administrator

DRaaS (CLOUD)

by System Administrator - Wednesday, 5 June 2013, 6:51 PM
 

Hardware failures and human error are still among the main drivers of IT costs for companies. For that reason, having a disaster recovery solution is never a bad idea. Besides safeguarding all the important corporate information, it makes it possible to put an effective security strategy in place. In short, DRaaS solutions become a practically vital tool for any organization, regardless of size, because, like everything else, technology can fail at the least opportune moment.

Picture of System Administrator

DRaaS: Disaster Recovery as a Service

by System Administrator - Wednesday, 7 June 2017, 7:09 PM
 

Disaster Recovery as a Service (DRaaS)

Posted by: Margaret Rouse | Contributors: Kim Hefner and Stan Gibilisco

Disaster recovery as a service (DRaaS) is the replication and hosting of physical or virtual servers by a third party to provide failover in the event of a man-made or natural catastrophe.

Typically, DRaaS requirements and expectations are documented in a service-level agreement (SLA) and the third-party vendor provides failover to a cloud computing environment, either through a contract or on a pay-per-use basis. In the event of an actual disaster, an off-site vendor is less likely than the enterprise itself to suffer the direct and immediate effects, which allows the provider to implement the disaster recovery plan even in the event of the worst-case scenario: a total or near-total shutdown of the affected enterprise.

How to pick a DRaaS provider

If you determine DRaaS is the right approach to disaster recovery planning for your organization, there are some important questions to consider, according to analyst George Crump:

  • What percentage of customers can the service provider support concurrently during a regional disaster such as a hurricane?
  • What DR resources are available for recovery?
  • How does the provider manage, track and update these resources?
  • What happens if the provider cannot deliver DR services?
  • What are the rules for declaring a disaster?
  • Is it first-come, first-served until resources are maxed out?
  • What happens to customers who cannot be serviced?
  • How will users access internal applications?
  • Will virtual private networks be managed or rerouted?
  • How does a virtual desktop infrastructure affect user access and who manages it during disaster recovery?
  • How will customers, partners and users access outward-facing applications?
  • Will domain name system nodes be updated for outward or customer-facing applications?
  • How do you ensure administrators and users receive access to servers/applications?
  • What are the procedures for failback?
  • What professional services, skills and/or experiences are available from the service provider to facilitate disaster recovery and how much do they cost?
  • How much help can be expected in a DR event?
  • What are the DRaaS provider's testing processes?
  • Can customers perform their own testing?
  • How long can a customer run in the service provider's data center after a disaster is declared?
  • What are the costs associated with the various disaster recovery as a service options?
  • Are they a la carte, bundled or priced upfront?
  • Is there a mix of upfront and recovery event costs?

Some examples of disaster-recovery-as-a-service providers in the market include Acronis, Amazon Web Services, Axcient, Bluelock, Databarracks, EVault, IBM, iland, Infrascale, Net3 Technology, Peak 10, Quorum, RapidScale, Sungard Availability Services (AS), Unitrends, Verizon Communications, VMware, Windstream Communications and Zerto.

 

DRaaS advantages and disadvantages

With disaster recovery as a service, the time to return applications to production is reduced because data does not need to be restored over the internet. DRaaS can be especially useful for small and medium-sized businesses that lack the necessary expertise to provision, configure and test an effective disaster recovery plan. Using DRaaS also means the organization doesn't have to invest in -- and maintain -- its own off-site DR environment.

The biggest disadvantage to DRaaS is that the business must trust its service provider to implement the plan in the event of a disaster and meet the defined recovery time and recovery point objectives. Additional drawbacks include possible performance issues with applications running in the cloud and potential migration issues when returning applications to a customer's on-premises data center.

DRaaS vs. backup as a service (BaaS)

DRaaS fails over processing to the cloud so an organization can continue to operate during a disaster. The failover notice can be automated or manual. The DRaaS operation remains in effect until IT can repair the on-premises environment and issue a failback order.

In backup as a service, an organization decides which files it will back up to a BaaS provider's storage systems. The customer organization is also responsible for setting up its RPO and RTO service levels, as well as its backup windows. A BaaS provider is only responsible for data consistency and restoring backed up copies of data.
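As a hedged illustration of what setting RPO and RTO service levels means in practice (the thresholds and function below are hypothetical, not part of any provider's API), a simple RPO compliance check might look like this:

```python
# Hypothetical RPO compliance check; names and thresholds are illustrative only.
from datetime import datetime, timedelta

RPO = timedelta(hours=4)   # maximum tolerable data loss agreed with the provider
RTO = timedelta(hours=2)   # maximum tolerable time to restore service

def rpo_met(last_successful_backup: datetime, now: datetime, rpo: timedelta = RPO) -> bool:
    """True if the newest backup is recent enough that a failure right now
    would lose no more data than the agreed RPO allows."""
    return now - last_successful_backup <= rpo

now = datetime(2017, 6, 7, 19, 0)
print(rpo_met(datetime(2017, 6, 7, 16, 30), now))  # True: backup is 2.5 hours old
print(rpo_met(datetime(2017, 6, 7, 9, 0), now))    # False: backup is 10 hours old
```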

Continue Reading About disaster recovery as a service (DRaaS)

Link: http://searchdisasterrecovery.techtarget.com


Page: (Previous)   1  2  3  4  5  6  7  8  9  10  ...  22  (Next)
  ALL