Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías


by System Administrator - Saturday, 1 June 2013, 3:11 PM
  • DIME (Direct Internet Message Encapsulation) - a specification that defines a format for attaching files to Simple Object Access Protocol (SOAP) messages between application programs over the Internet. DIME is similar to but somewhat simpler than the Internet's MIME protocol.
  • Document Object Model (DOM) - a programming interface from the W3C that lets a programmer create and modify HTML pages and XML documents as full-fledged program objects.
  • DSML (Directory Services Markup Language) - an XML application that enables different computer network directory formats to be expressed in a common format and shared by different directory systems.
  • DXL (Domino Extensible Language) - a specific version of Extensible Markup Language (XML) for Lotus Domino data.
  • DirXML - Novell's directory interchange software that uses XML to keep different directories synchronized.
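
As a sketch of the DOM idea above, Python's standard library exposes a W3C-style DOM through `xml.dom.minidom`; the document content here is invented for illustration:

```python
from xml.dom.minidom import parseString

# Parse a small XML document into a DOM tree of program objects.
doc = parseString("<catalog><item>DIME</item></catalog>")

# Read an existing node through the DOM interface.
item = doc.getElementsByTagName("item")[0]
print(item.firstChild.data)  # DIME

# Create and attach a new node, modifying the document in place.
new_item = doc.createElement("item")
new_item.appendChild(doc.createTextNode("DSML"))
doc.documentElement.appendChild(new_item)

print(len(doc.getElementsByTagName("item")))  # 2
```

The same interface (`getElementsByTagName`, `createElement`, `appendChild`) is what browsers expose to JavaScript for HTML pages.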
by System Administrator - Saturday, 1 June 2013, 3:13 PM
  • employee self-service (ESS) - an increasingly prevalent trend in human resources management that allows an employee to handle many job-related tasks (such as applications for reimbursement, updates to personal information, and access to company information) that otherwise would have fallen to management or administrative staff.
  • ebXML (Electronic Business XML) - a project to use the Extensible Markup Language (XML) to standardize the secure exchange of business data, perhaps in time replacing Electronic Data Interchange (EDI).
  • EDI (Electronic Data Interchange) - a well-established standard (ANSI X12) format for exchanging business data.
by System Administrator - Saturday, 1 June 2013, 3:14 PM
  • infomediary - a Web site that provides specialized information on behalf of producers of goods and services and their potential customers.
by System Administrator - Saturday, 1 June 2013, 3:15 PM
  • Java Message Service - an application program interface from Sun Microsystems that supports the formal communication known as messaging between computers in a network.
  • Java Database Connectivity (JDBC) - an application program interface for connecting programs written in Java to the data in popular databases.
  • JNDI (Java Naming and Directory Interface) - enables Java platform-based applications to access multiple naming and directory services in a distributed network.
by System Administrator - Saturday, 1 June 2013, 3:16 PM
  • message-driven processing - an approach used within the client/server computing model in which a client (for example, your Web browser) sends a service request in the form of a specially-formatted message to a program that acts as a request broker, handling messages from many clients intended for many different server applications.
  • messaging - the exchange of messages (specially-formatted data describing events, requests, and replies) to a messaging server, which acts as a message exchange program for client programs. There are two major messaging server models: the point-to-point model and the publish/subscribe model.
  • MQSeries - a family of IBM message-oriented middleware products for exchanging messages between programs across a network.
  • MathML - an application of XML (Extensible Markup Language) designed to facilitate the use of mathematical expressions in Web pages.
  • middleware - any programming that serves to "glue together" or mediate between two separate and often already existing programs. Middleware components often use messaging to communicate.
  • MySQL - an open source relational database management system (RDBMS) that uses Structured Query Language (SQL), the most popular language for adding, accessing, and processing data in a database.
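
The publish/subscribe messaging model mentioned above can be sketched in a few lines of Python; the `Broker` class and topic names are invented for illustration, and a real messaging server (such as MQSeries) adds queuing, persistence, and delivery guarantees on top of this basic routing idea:

```python
from collections import defaultdict

class Broker:
    """Toy publish/subscribe message broker (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A client registers interest in a topic.
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every client subscribed to the topic.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"event": "created", "id": 42})
print(received)  # [{'event': 'created', 'id': 42}]
```

In the point-to-point model, by contrast, each message would be consumed by exactly one receiver rather than delivered to all subscribers.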
by System Administrator - Saturday, 1 June 2013, 3:17 PM

NewsML - a standard way to describe news information content so that it can be distributed and reused widely on Web sites and other media.

by System Administrator - Saturday, 1 June 2013, 3:18 PM
  • OASIS (Organization for the Advancement of Structured Information Standards) - a nonprofit, international consortium whose goal is to promote the adoption of product-independent standards for information formats such as Standard Generalized Markup Language (SGML), Extensible Markup Language (XML), and Hypertext Markup Language (HTML).
  • on-demand computing - an increasingly popular enterprise model in which computing resources are made available to the user as needed. The resources may be maintained within the user's enterprise, or made available by a service provider.
  • Open Profiling Standard (OPS) - a proposed standard for how Web users can control the personal information they share with Web sites. OPS has a dual purpose: (1) to allow Web sites to personalize their pages for the individual user and (2) to allow users to control how much personal information they want to share with a Web site.
by System Administrator - Saturday, 1 June 2013, 3:19 PM
  • PMML (Predictive Modeling Markup Language) - an XML-based language that enables the definition and sharing of predictive models between applications. A predictive model is a statistical model that is designed to predict the likelihood of target occurrences given established variables or factors.
  • Portal Markup Language - an application of XML used to create a portal Web site.
by System Administrator - Saturday, 1 June 2013, 3:20 PM
  • RDF Site Summary (RSS) - an application of XML that describes news or other Web content that is available for "feeding" (distribution or syndication) from an online publisher to Web users.
  • Resource Description Framework (RDF) - a general framework for how to describe any Internet resource such as a Web site and its content.
by System Administrator - Saturday, 1 June 2013, 3:21 PM
  • SOAP (Simple Object Access Protocol) - a way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of operating system (such as Linux) by using the Web's Hypertext Transfer Protocol (HTTP) and XML as the mechanisms for information exchange.
  • SQL (Structured Query Language) - the most popular language for adding, accessing, and processing data in a database.
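
A minimal sketch of what a SOAP message looks like, built with Python's standard `xml.etree.ElementTree`; the `GetPrice` operation and its payload are hypothetical, not part of any real service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# A SOAP message is an XML envelope whose body carries the request.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetPrice")       # hypothetical operation
ET.SubElement(request, "Item").text = "widget"  # hypothetical payload

# The serialized envelope is what travels over HTTP between the programs.
print(ET.tostring(envelope, encoding="unicode"))
```

Because the envelope is plain XML carried over HTTP, neither side needs to know what operating system or language the other is using.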
by System Administrator - Saturday, 1 June 2013, 3:22 PM
  • Topic Map Query Language (TMQL) - an XML-based extension of SQL that was developed for use in meeting the specialized data access requirements of Topic Maps (TMs).
by System Administrator - Saturday, 1 June 2013, 3:23 PM
  • UDDI (Universal Description, Discovery, and Integration) - an XML-based registry for businesses worldwide to list themselves on the Internet.
by System Administrator - Saturday, 1 June 2013, 3:24 PM
  • Visual J# - a set of programming tools that allow developers to use the Java programming language to write applications that will run on Microsoft's .NET runtime platform.
by System Administrator - Saturday, 1 June 2013, 3:25 PM
  • XML (Extensible Markup Language) - a flexible way to create common information formats and share both the format and the data on the World Wide Web, intranets, and elsewhere.
  • XML Core Services (formerly MSXML) - an application for processing Extensible Stylesheet Language Transformation (XSLT) in an XML file. Based on Microsoft's Component Object Model (COM), XML Core Services is essentially an application programming interface (API) to an XML parser and the XPath processor. The parser organizes the XML data into a tree structure for processing, and the processor converts the XML to HTML for display.
  • XPath - a language that describes a way to locate and process items in XML documents by using an addressing syntax based on a path through the document's logical structure or hierarchy.
  • XSL (Extensible Stylesheet Language) - a language for creating a style sheet that describes how data sent over the Web using the Extensible Markup Language (XML) is to be presented to the user.
  • XSL Transformations (XSLT) - a standard way to describe how to transform (change) the structure of one XML document into an XML document with a different structure.
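
The XPath idea of locating items by a path through the document's structure can be sketched with Python's `xml.etree.ElementTree`, which implements a limited subset of the XPath syntax; the document and book titles here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# An invented document; XPath addresses nodes by position in this tree.
doc = ET.fromstring(
    "<library>"
    "<book year='1999'><title>XML Basics</title></book>"
    "<book year='2006'><title>XSLT in Practice</title></book>"
    "</library>"
)

# A path through the logical structure selects every matching node.
titles = [t.text for t in doc.findall("./book/title")]
print(titles)  # ['XML Basics', 'XSLT in Practice']

# Predicates narrow the selection, here by attribute value.
recent = doc.find("./book[@year='2006']/title")
print(recent.text)  # XSLT in Practice
```

Full XPath engines (as used by XSLT processors) add functions, axes, and richer predicates beyond this subset.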
by System Administrator - Saturday, 1 June 2013, 3:28 PM

Also see Webification.

1) To Webify is to convert some information content from its original format into a format capable of being displayed on the World Wide Web. Some conversion examples are:

  • Postscript source file or ASCII text to a Hypertext Markup Language (HTML) file
  • A Microsoft Word document to HTML (sometimes referred to as "DOC to HTML"). More recent versions of Microsoft Word include this capability.
  • Hard-copy print publication pages into files in the Portable Document Format (PDF) for viewing on the Web with Adobe's downloadable Acrobat viewer
  • A Lotus Notes database to HTML files
  • An image in a scanned-in or other format to a Web-ready image, either a GIF or a JPEG file
  • A speech or interview into a file in the RealAudio format for playing as streaming sound on the Web
  • A video tape recording into a streaming video file

Using the File Transfer Protocol (FTP) from the Web browser, text pages (with files in the ASCII TXT format) can also be "Webified" for display by Web users. Many Internet Request for Comments (RFC) documents are available on the Web in text format. The only Webification these files need is to be made available in a directory accessible to the FTP server.

2) Webify is the name of a program that makes a structured tree of HTML files and JPEG or GIF images from Postscript files.


Ruby on Rails

by System Administrator - Saturday, 1 June 2013, 3:30 PM

Ruby on Rails, sometimes known as "RoR" or just "Rails," is an open source framework for Web development in Ruby, an object-oriented programming (OOP) language similar to Perl and Python.

The principal difference between Ruby on Rails and other frameworks for development lies in the speed and ease of use that developers working within the environment enjoy. Changes made to applications are immediately applied, avoiding the time-consuming steps normally associated with the web development cycle. According to David Geary, a Java expert, the Ruby-based framework is five to 10 times faster than comparable Java-based frameworks. In a blog posting, Geary predicted that Rails would be widely adopted in the near future.

Rails is made up of several components, beyond Ruby itself, including:

  • Active record, an object-relational mapping layer
  • Action pack, a manager of controller and view functions
  • Action mailer, a handler of email
  • Action web services
  • Prototype, an implementer of drag and drop and Ajax functionality

Rails can run on most Web servers that support CGI. The framework also supports MySQL, PostgreSQL, SQLite, SQL Server, DB2, and Oracle. Rails is also an MVC (model-view-controller) framework in which all layers are provided by Rails itself, rather than relying on additional frameworks to achieve full MVC support. Invented by David Heinemeier Hansson, Ruby on Rails has been developed as an open-source project, with distributions available through RubyGems.

RELATED GLOSSARY TERMS: search engine, cyberprise, namespace, Webification, killer app, service-component architecture (SCA), Webify, Project Tango, Personal Web Server (PWS), MQSeries

Contributor(s): Alexander B. Howard
This was last updated in April 2006
Posted by: Margaret Rouse

Message-Driven Processing

by System Administrator - Saturday, 1 June 2013, 3:31 PM

Message-driven processing is an approach used within the client/server computing model in which a client (for example, your Web browser) sends a service request in the form of a specially-formatted message to a program that acts as a request broker, handling messages from many clients intended for many different server applications. A message may contain the name of the service (application) wanted and possibly a requested priority or time of forwarding. The request broker manages a queue of requests (and possibly replies) and screens the details of different kinds of clients and servers from each other. Both client and server need only understand the messaging interface. Message-driven processing is often used in distributed computing in a geographically-dispersed network and as a way to screen new client applications from having to interact directly with legacy server applications. Special software that provides message-driven processing is known as middleware.
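
The request-broker flow described above can be sketched with Python's standard `queue` module; the service names and message format are invented for illustration, and real middleware such as MQSeries adds prioritization, persistence, and reply routing:

```python
import queue

# Hypothetical server applications the broker knows how to reach.
services = {
    "billing":  lambda req: f"billing handled {req}",
    "shipping": lambda req: f"shipping handled {req}",
}

requests = queue.Queue()  # the broker's queue of client messages

# Clients enqueue specially formatted messages naming the wanted service;
# they never talk to the server applications directly.
requests.put({"service": "billing",  "payload": "invoice 17"})
requests.put({"service": "shipping", "payload": "order 42"})

# The broker drains the queue and routes each message to its server app.
replies = []
while not requests.empty():
    msg = requests.get()
    replies.append(services[msg["service"]](msg["payload"]))

print(replies)
```

Because clients and servers share only the message format, either side can be replaced without the other noticing, which is the screening role the broker plays between new clients and legacy applications.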

In IBM's MQSeries middleware messaging product, its MDp (for "message-driven processor") provides an example. MDp is an intermediary layer between clients and a legacy system of applications, and serves as a request broker between clients and applications. The client formulates a request; MDp (which retains information about which applications and databases are to be invoked and where they reside) then breaks the request down into work units and sends these out to the appropriate server applications and databases. After executing the tasks, the back-end processes return the results to MDp, which in turn formulates replies to return to either the requesting client or some other target destination.

Contributor(s): Jonathan Caforio
This was last updated in September 2005
Posted by: Margaret Rouse

User Interface (UI)

by System Administrator - Saturday, 1 June 2013, 3:34 PM

Also see human-computer interaction.

In information technology, the user interface (UI) is everything designed into an information device with which a human being may interact -- including display screen, keyboard, mouse, light pen, the appearance of a desktop, illuminated characters, help messages, and how an application program or a Web site invites interaction and responds to it. In early computers, there was very little user interface except for a few buttons at an operator's console. The user interface was largely in the form of punched card input and report output.

Later, a user was provided the ability to interact with a computer online and the user interface was a nearly blank display screen with a command line, a keyboard, and a set of commands and computer responses that were exchanged. This command line interface led to one in which menus (lists of choices written in text) predominated. And, finally, the graphical user interface (GUI) arrived, originating mainly in Xerox's Palo Alto Research Center, adopted and enhanced by Apple Computer, and finally effectively standardized by Microsoft in its Windows operating systems.

The user interface can arguably include the total "user experience," which may include the aesthetic appearance of the device, response time, and the content that is presented to the user within the context of the user interface.

RELATED GLOSSARY TERMS: search engine, cyberprise, namespace, Webification, killer app, service-component architecture (SCA), Webify, Project Tango, Personal Web Server (PWS), MQSeries

Contributor(s): Mike Dang
This was last updated in April 2005
Posted by: Margaret Rouse
by System Administrator - Saturday, 1 June 2013, 3:35 PM

In the context of the World Wide Web, a gravesite is either:

1) A Web site that has been abandoned or forgotten by its originators that is nevertheless still accessible on a server. (There is a vast but untold number of these.) A synonym is ghost site.

2) A Web site that, in the eyes of marketers, has failed to get sufficient traffic to be interesting to advertisers or other revenue providers, possibly by not finding an audience niche or building an audience community, or by failing to find a distribution partner such as America Online, Yahoo, or Netscape.

RELATED GLOSSARY TERMS: Webify, MQSeries, Ruby on Rails (RoR or Rails), message-driven processing, Internet time, content, user interface (UI), Object Management Group (OMG), software, go bosh (Go Big or Stay Home)

This was last updated in April 2005
Posted by: Margaret Rouse
by System Administrator - Saturday, 1 June 2013, 3:52 PM

Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.

You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.

Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

In 2005, virtualization software was adopted faster than anyone imagined, including the experts. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization, and server virtualization.

  • Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
  • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
  • Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

See also: virtual machine, virtual memory

RELATED GLOSSARY TERMS: host virtual machine (host VM), guest virtual machine (guest VM), paravirtualization, hypervisor

This was last updated in December 2010
Posted by: Margaret Rouse
by System Administrator - Wednesday, 5 June 2013, 6:49 PM

For any organization, good device management is essential. Trends such as Bring Your Own Device (BYOD) and IT consumerization are also helping these services gain momentum. In this context, DaaS solutions make it possible to manage PC services securely and at reduced cost. It is worth remembering that this system offers portability, so it can be administered from any location and on any device.

In short, DaaS can cut operational costs, since hardware administration is reduced and management is decentralized. The company will also be more mobile and faster at managing devices and, as a result, will become more productive.

by System Administrator - Wednesday, 5 June 2013, 6:50 PM

MaaS solutions make it possible to deploy physical servers that coordinate with the services and applications the company uses. In this way, IT administrators can scale services and products, streamline processes, and manage resources more easily. These solutions could even be the first choice for those who want to start working in the cloud.

by System Administrator - Wednesday, 5 June 2013, 6:51 PM

Hardware failures and human error are still among companies' main problems where IT costs are concerned. For this reason, having a disaster recovery solution is never a bad idea. Besides safeguarding all important corporate information, it makes it possible to lay out an effective security strategy. In short, DRaaS solutions become a practically vital tool for any organization, regardless of size, because technology, like everything else, can fail at the least opportune moment.


Rogue Cloud (CLOUD)

by System Administrator - Wednesday, 5 June 2013, 6:58 PM

Rogue Cloud: the unauthorized cloud

According to a recent Symantec study, a high percentage of companies had problems over the past year with employees using unauthorized cloud applications: spaces shared in the cloud that are not directly controlled by the organization and whose information may be accessed by third parties, with the added risk of possible identity theft.

By Xavier Pérez / IT-Sitio

Quality Director at Claranet

Free access to information, with its accessibility and ease of use from anywhere, at any time, and across different devices, is a desirable situation for any user. Who wouldn't want it? we might ask, in a technological landscape built on immediacy and the relentless pursuit of maximum efficiency.

But what happens when that individual user who wants to share playlists of favorite music with colleagues through an application in a public cloud is also an employee of an organization, and decides to use that same space to share information belonging to the company he or she works for, without an appropriate level of security and without the company's approval and authorization?

The security gap this opens is significant, not only because of the high risk of using unmanaged spaces that are neither controlled by nor integrated into the corporate infrastructure, but also because they fall outside the organization's sphere of authority. This situation, also known as a 'rogue cloud', usually escapes the notice of the team in charge of security, so the resulting risk exposure can persist indefinitely.

Information is one of any company's most important assets, but its ease of use can be counterproductive without proper security management that guarantees its protection, and without adequate security measures to which every user should be attentive. This is where the concept of the 'rogue cloud', or unauthorized cloud, becomes especially relevant.

These are, in short, information systems that are not integrated into the company's infrastructure; consequently, the use made of them, and of the data shared there, lacks proper control and authorization.

This leads us once again to treat information security management as a matter that encompasses the entire organization. Good policies are not enough if their correct application is not ensured, nor are restrictive measures if users, in their personal sphere, taking advantage of a misguided notion of availability, end up misusing the information they handle, even involuntarily, ultimately putting its integrity and/or confidentiality at risk.

Hence, investing in employee training and awareness programs on the one hand, and in adopting systems and resources suited to the company's needs on the other, away from overexposed public solutions, is the best way to promote security and to build a good business reputation in this area.


DNA Storage (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:06 PM

DNA storage is the process of encoding and decoding binary data onto and from synthesized strands of DNA (deoxyribonucleic acid). In nature, DNA molecules contain genetic blueprints for living cells and organisms.

To store a binary digital file as DNA, the individual bits (binary digits) are converted from 1 and 0 to the letters A, C, G, and T. These letters represent the four main compounds in DNA: adenine, cytosine, guanine, and thymine. The physical storage medium is a synthesized DNA molecule containing these four compounds in a sequence corresponding to the order of the bits in the digital file. To recover the data, the sequence of A, C, G, and T representing the DNA molecule is decoded back into the original sequence of bits 1 and 0.
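
The bit-to-base conversion can be sketched as follows. This simple two-bits-per-base mapping is illustrative only; published encoding schemes (such as the EMBL team's) use more elaborate codes, for example to avoid long runs of the same base, which are error-prone to synthesize and read:

```python
# Illustrative mapping: each pair of bits becomes one DNA base.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {v: k for k, v in TO_BASE.items()}

def encode(bits: str) -> str:
    """Convert a binary string (even length) into a base sequence."""
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(bases: str) -> str:
    """Recover the original binary string from the base sequence."""
    return "".join(TO_BITS[b] for b in bases)

data = "0100100001101001"  # the ASCII bits for "Hi"
strand = encode(data)
print(strand)              # CAGACGGC
assert decode(strand) == data
```

The round trip (encode, synthesize, sequence, decode) is lossless as long as the synthesized molecule is read back correctly, which is why real schemes also add redundancy and error correction.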

Researchers at the European Molecular Biology Laboratory (EMBL) have encoded audio, image, and text files into a synthesized DNA molecule about the size of a dust grain, and then successfully read the information from the DNA to recover the files, claiming 99.99 percent accuracy.

An obvious advantage of DNA storage, should it ever become practical for everyday use, would be its ability to store massive quantities of data in media having small physical volume. Dr. Sriram Kosuri, a scientist at Harvard, believes that all the digital information currently existing in the world could reside in four grams of synthesized DNA.

A less obvious, but perhaps more significant, advantage of DNA storage is its longevity. Because DNA molecules can survive for thousands of years, a digital archive encoded in this form could be recovered by people for many generations to come. This longevity might resolve the troubling prospect of our digital age being lost to history because of the relative impermanence of optical, magnetic, and electronic media.

The principal disadvantages of DNA storage for practical use today are its slow encoding speed and high cost. The speed issue limits the technology's promise for archiving purposes in the near term, although eventually the speed may improve to the point where DNA storage can function effectively for general backup applications and perhaps even primary storage. As for the cost, Dr. Nick Goldman of the EMBL suggests that by the mid-2020s, expenses could come down to the point where the technology becomes commercially viable on a large scale.

This was last updated in April 2013

Contributor(s): Stan Gibilisco

Posted by: Margaret Rouse

Radical Computer Rethink (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:08 PM
DNA offers radical computer rethink
A team of researchers at the University of Toyama in Japan, led by Masahiko Inouye, claim to have created the world's first stable artificial DNA molecules, made from synthesised nucleosides that resemble their natural counterparts.
DNA is made up of four basic building blocks, or bases, which code proteins used in cell functioning and development. While other researchers have developed DNA molecules with a few select artificial parts, the Japanese team put together four completely new artificial bases inside the framework of a DNA molecule, creating unusually stable, double-stranded structures resembling natural DNA.
The scientists say the artificial DNA acts like the real thing, and even forms right-handed duplexes with complementary artificial strands. They hope to one day use their discovery to create a new biological information storage system that functions outside the cell. Artificial DNA could be advantageously used instead of natural DNA due to its stability against naturally occurring enzymes and its structural diversity.
The unique chemistry of these artificial bases and DNA structures, coupled with their high stability, offers limitless possibilities for new biotechnology materials and applications, such as the creation of powerful DNA computers. These computers are constructed using DNA as software and enzymes as hardware, rather than traditional silicon-based components. By mixing DNA and enzymes in this way and monitoring the reactions, complex computer calculations can be performed.
DNA molecules are similar to computer hard drives in the way they save information about an individual's genes. However, they have the potential to perform calculations much faster than today's fastest man-made computers. This is because, unlike a traditional computer, calculations are performed simultaneously - similar to a parallel computing schematic - as numerous different DNA molecules attempt to test various possibilities at once.

In addition, unlike today's PCs, DNA computers require minimal or no external power sources, as they run on internal energy produced during cellular reactions. There is a huge amount of potential for a computer that does not need to be plugged in; the implications this has for laptops and true mobility are endless.

Because of these reasons, scientists all over the world are looking for ways in which DNA may be integrated into a computer chip to create a biochip that will make standard computers faster and more energy efficient. DNA computers could potentially be the future of green IT.

Although the idea of artificial DNA and DNA computers may seem far-fetched, the concept is entirely plausible if one keeps an open mind: although DNA solutions may seem impossibly complex, there are few people who actually understand how silicon-based computing works. In addition, current systems are based on the binary system, and DNA computers would be similar in nature: they could leverage the pre-existing relationships between the four bases that are the core of every DNA molecule.

However, the more sinister connotations of artificial DNA computing - even though unfounded - remain fixed in users' minds. Therefore, since the first concept of DNA computing came about in 1994, researchers have been trying to develop artificial versions of DNA. Since the components of artificial DNA that have been created by Inouye's team do not exist in natural DNA, it is nearly impossible for them to react together, eliminating any threat of mutation.

The discovery of artificial DNA by Inouye and the Japanese team could be vital to the furthering of DNA computing as it would allow researchers to build custom DNA structures, which are optimised for computing. Unfortunately, the current method used for constructing the DNA structures creates only short strands, which are not long enough to encode information.

The technology for building artificial DNA is still extremely new, however, and is only the first step (albeit a huge one) towards using DNA as an external information storage system. DNA computers will not be replacing today's standard PCs any time soon as there are still years of research to be conducted before it can be determined if this technology will be fruitful in computing. That said, as DNA computing becomes more high profile, it may be beneficial for hardware technology giants such as Apple, Dell, HP, IBM, Intel and Sun Microsystems to invest in research that emphasises artificial DNA and its potential applications.

Ultimately, DNA computers are still in their infancy, but, if successful, will be capable of storing much more data than a regular PC and would be considerably more energy efficient and smaller in size. Given these huge benefits, investors should not rule DNA computers out of their strategies purely because they seem too implausible. Those vendors that participate in this revolutionary research could be pioneers in the development of DNA microprocessors and computers, if and when the technology is found to be viable.

Ruchi Mallya is an analyst on Datamonitor's Public Sector Technology team, covering the life sciences. Her research focuses on the use of technology in pharmaceuticals and biotechnology.


Decoding DNA: New Twists and Turns (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:19 PM

The Scientist takes a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.

The structure of DNA was solved on February 28, 1953 by James D. Watson and Francis H. Crick, who recognized at once the potential of DNA's double helical structure for storing genetic information — the blueprint of life. For 60 years, this exciting discovery has inspired scientists to decipher the molecule's manifold secrets and resulted in a steady stream of innovative advances in genetics and genomics.

Honoring our editorial mission, The Scientist will take a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.

What's Next in Next-Generation Sequencing?

Original Broadcast Date: Tuesday March 5, 2013

The advent of next-generation sequencing is widely considered the field's most transformative technological advance, doubling the volume of sequence data roughly every five months and driving a precipitous drop in the cost of sequencing a piece of DNA. The first webinar will track the evolution of next-generation sequencing and explore what the future holds in terms of the technology and its applications.


George Church is a professor of genetics at Harvard Medical School and Director of the Personal Genome Project, which provides the world's only open-access information on human genomic, environmental, and trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing, and barcoding. These led to the first commercial genome sequence (of the pathogen Helicobacter pylori) in 1994. His innovations in "next generation" genome sequencing and synthesis and cell/tissue engineering resulted in 12 companies spanning fields including medical genomics (Knome, Alacris, AbVitro, GoodStart, Pathogenica) and synthetic biology (LS9, Joule, Gen9, WarpDrive), as well as new privacy, biosafety, and biosecurity policies. He is director of the NIH Centers of Excellence in Genomic Science. His honors include election to the NAS and NAE and the Franklin Bower Laureate for Achievement in Science.

George Weinstock is currently a professor of genetics and of molecular microbiology at Washington University in Saint Louis. He was previously codirector of the Human Genome Sequencing Center at Baylor College of Medicine in Houston, Texas where he was also a professor of molecular and human genetics. Dr. Weinstock received his BS degree from the University of Michigan (Biophysics, 1970) and his PhD from the Massachusetts Institute of Technology (Microbiology, 1977).

Joel Dudley is an assistant professor of genetics and genomic sciences and Director of Biomedical Informatics at Mount Sinai School of Medicine in New York City. His current research is focused on solving key problems in genomic and systems medicine through the development and application of translational and biomedical informatics methodologies. Dudley's published research covers topics in bioinformatics, genomic medicine, personal and clinical genomics, as well as drug and biomarker discovery. His recent work with coauthors, describing a novel systems-based approach for computational drug repositioning, was featured in the Wall Street Journal and earned designation as the NHGRI Director's Genome Advance of the Month. He is also coauthor (with Konrad Karczewski) of the forthcoming book, Exploring Personal Genomics. Dudley received a BS in microbiology from Arizona State University and an MS and PhD in biomedical informatics from Stanford University School of Medicine.

Unraveling the Secrets of the Epigenome

Original Broadcast Date: Thursday April 18, 2013

This second webinar in The Scientist's Decoding DNA series will cover the Secrets of the Epigenome, discussing what is currently known about DNA methylation, histone modifications, and chromatin remodeling and how this knowledge can translate to useful therapies.


Stephen Baylin is a professor of medicine and of oncology at the Johns Hopkins University School of Medicine, where he is also Chief of the Cancer Biology Division of the Oncology Center and Associate Director for Research of The Sidney Kimmel Comprehensive Cancer Center. Together with Peter Jones of the University of Southern California, Baylin also leads the Epigenetic Therapy Stand Up To Cancer Team (SU2C). He and his colleagues have fostered the concept that DNA hypermethylation of gene promoters, with its associated transcriptional silencing, can serve as an alternative to mutations for producing loss of tumor-suppressor gene function. Baylin earned both his BS and MD degrees from Duke University, where he completed his internship and first-year residency in internal medicine. He then spent 2 years at the National Heart and Lung Institute of the National Institutes of Health. In 1971, he joined the departments of oncology and medicine at the Johns Hopkins University School of Medicine, an affiliation that still continues.

Victoria Richon heads the Drug Discovery and Preclinical Development Global Oncology Division at Sanofi. Richon joined Sanofi in November 2012 from Epizyme, where she served as vice president of biological sciences beginning in 2008. At Epizyme she was responsible for the strategy and execution of drug discovery and development efforts that ranged from target identification through candidate selection and clinical development, including biomarker strategy and execution. Richon received her BA in chemistry from the University of Vermont and her PhD in biochemistry from the University of Nebraska. She completed her postdoctoral research at Memorial Sloan-Kettering Cancer Center.

Paolo Sassone-Corsi is Donald Bren Professor of Biological Chemistry and Director of the Center for Epigenetics and Metabolism at the University of California, Irvine, School of Medicine. Sassone-Corsi is a molecular and cell biologist who has pioneered the links between cell-signaling pathways and the control of gene expression. His research on transcriptional regulation has elucidated a remarkable variety of molecular mechanisms relevant to the fields of endocrinology, neuroscience, metabolism, and cancer. He received his PhD from the University of Naples and completed his postdoctoral research at CNRS, in Strasbourg, France.

The Impact of Personalized Medicine

Original Broadcast Date: Tuesday May 7, 2013

After the human genome was sequenced, personalized medicine became an end goal, driving both academia and the pharma/biotech industry to find and target cellular pathways and drug therapies that are unique to an individual patient. The final webinar in the series will help us better understand the impact of personalized medicine: what we can expect to gain and where we stand to lose.


Jay M. ("Marty") Tenenbaum is founder and chairman of Cancer Commons. Tenenbaum’s background brings a unique perspective of a world-renowned Internet commerce pioneer and visionary. He was founder and CEO of Enterprise Integration Technologies, the first company to conduct a commercial Internet transaction. Tenenbaum joined Commerce One in January 1999, when it acquired Veo Systems. As chief scientist, he was instrumental in shaping the company's business and technology strategies for the Global Trading Web. Tenenbaum holds BS and MS degrees in electrical engineering from MIT, and a PhD from Stanford University.

Amy P. Abernethy, a palliative care physician and hematologist/oncologist, directs both the Center for Learning Health Care (CLHC) in the Duke Clinical Research Institute, and the Duke Cancer Care Research Program (DCCRP) in the Duke Cancer Institute. An internationally recognized expert in health-services research, cancer informatics, and delivery of patient-centered cancer care, she directs a prolific research program (CLHC/DCCRP) which conducts patient-centered clinical trials, analyses, and policy studies. Abernethy received her MD from Duke University School of Medicine.

Geoffrey S. Ginsburg is the Director of Genomic Medicine at the Duke Institute for Genome Sciences & Policy. He is also the Executive Director of the Center for Personalized Medicine at Duke Medicine and a professor of medicine and pathology at Duke University Medical Center. His work spans oncology, infectious diseases, cardiovascular disease, and metabolic disorders. His research is addressing the challenges for translating genomic information into medical practice using new and innovative paradigms, and the integration of personalized medicine into health care. Ginsburg received his MD and PhD in biophysics from Boston University and completed an internal medicine residency at Beth Israel Hospital in Boston, Massachusetts.

Abhijit “Ron” Mazumder obtained his BA from Johns Hopkins University, his PhD from the University of Maryland, and his MBA from Lehigh University. He worked for Gen-Probe, Axys Pharmaceuticals, and Motorola, developing genomics technologies. Mazumder joined Johnson & Johnson in 2003, where he led feasibility research for molecular diagnostics programs and managed technology and biomarker partnerships. In 2008, he joined Merck as a senior director and Biomarker Leader. Mazumder rejoined Johnson & Johnson in 2010 and is accountable for all aspects of the development of companion diagnostics needed to support the therapeutic pipeline, including selection of platforms and partners, oversight of diagnostic development, support of regulatory submissions, and design of clinical trials for validation of predictive biomarkers.



Human Genome Project (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:20 PM

The Human Genome Project is a global, long-term research effort to identify the estimated 30,000 genes in human DNA (deoxyribonucleic acid) and to figure out the sequences of the chemical bases that make up human DNA. Findings are being collected in databases that researchers share. In addition to its scientific objectives, the Project also aims to address ethical, legal, and social issues (which the Project refers to as "ELSI"). The Project will also make use of results from the genetic research done on other animals, such as the fruit fly and the laboratory mouse. Research findings are expected to provide a dramatically greater understanding of how life works and specifically how we might better diagnose and treat human disorders. Besides giving us insights into human DNA, findings about nonhuman DNA may offer new ways to control our environment.

A genome is the sum of all the DNA in an organism. The DNA includes genes, each of which carries some information for making certain proteins, which in turn determine physical appearance, certain behavioral characteristics, how well the organism combats specific diseases, and other characteristics. There are four chemical bases in a genome. These bases are abbreviated as A, T, C, and G. The particular order of these chemical bases as they are repeated millions and even billions of time is what makes species different and each organism unique. The human genome has 3 billion pairs of bases.
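A quick back-of-envelope check of the figures above: with four possible bases, each base position can carry at most 2 bits of information, which bounds the raw information content of 3 billion base pairs.

```python
# Sanity-check the genome's raw information capacity:
# four bases -> at most log2(4) = 2 bits per base position.
bases = 3_000_000_000          # base pairs in the human genome
bits_per_base = 2              # theoretical maximum per position
megabytes = bases * bits_per_base / 8 / 1_000_000
print(megabytes)  # -> 750.0 (roughly 750 MB of raw sequence data)
```

This is only an upper bound on raw sequence data; it says nothing about how much of that information is biologically meaningful.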

Some databases that collect findings are already in existence. The plan is for all databases to be publicly available by the end of 2003. The organization of these databases and the algorithms for making use of the data are the subject of new graduate study programs and a new science called bioinformatics. A biochip is being developed that is expected to accelerate research by encapsulating known DNA sequences that can act as "test tubes" for trial substances that can then be analyzed for similarities.

This was last updated in September 2005

Contributor(s): Kevin

Posted by: Margaret Rouse

DNA-based Data Storage (DNA)

by System Administrator - Wednesday, 26 June 2013, 10:26 PM

DNA-based Data Storage Here to Stay

The second example of storing digital data in DNA affirms its potential as a long-term storage medium.

Researchers have done it again—encoding 5.2 million bits of digital data in strings of DNA and demonstrating the feasibility of using DNA as a long-term, data-dense storage medium for massive amounts of information. In the new study released today (January 23) in Nature, researchers encoded one color photograph, 26 seconds of Martin Luther King Jr.’s “I Have a Dream” speech, and all 154 of Shakespeare’s known sonnets into DNA.

Though it’s not the first example of storing digital data in DNA, “it’s important to celebrate the emergence of a field,” said George Church, the Harvard University synthetic biologist whose own group published a similar demonstration of DNA-based data storage last year in Science.  The new study, he said, “is moving things forward.”

Scientists have long recognized DNA’s potential as a long-term storage medium. “DNA is a very, very dense piece of information storage,” explained study author Ewan Birney of the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) in the UK. “It’s very light, it’s very small.” Under the correct storage conditions—dry, dark and cold—DNA easily withstands degradation, he said.

Advances in synthesizing defined strings of DNA, and sequencing them to extract information, have finally made DNA-based information storage a real possibility. Last summer, Church's group published the first demonstration of DNA's storage capability, encoding the digital version of Church's book, Regenesis, which included 11 JPEG images, into DNA, using Gs and Cs to represent 1s of the binary code, and As and Ts to represent 0s.

Now, Birney and his colleagues are looking to reduce the error associated with DNA storage. When a strand of DNA has a run of identical bases, it's difficult for next-generation sequencing technology to correctly read the sequence. Church's work, for example, produced 10 errors out of 5.2 million bits. To prevent these types of errors, Birney and his EMBL-EBI collaborator Nick Goldman first converted each byte—a string of eight 0s and 1s—into five or six "trits," base-3 digits that can each be 0, 1, or 2. Then, when converting these trits into the A, G, T and C bases of DNA, the researchers avoided repeating bases by using a code that took the preceding base into account when determining which base would represent the next digit.
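The repeat-avoiding conversion can be sketched in a few lines. The mapping below is illustrative (the published code uses a fixed rotation table of the same kind), but it shows the key property: each ternary digit selects one of the three bases that differ from the previous base, so no base can ever appear twice in a row.

```python
# Map ternary digits (trits) to DNA so that adjacent bases never repeat:
# at each step, the three legal choices are the bases that differ from
# the previously emitted base, and the trit picks among them.

BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    out = []
    for t in trits:
        choices = [b for b in BASES if b != prev]  # three legal bases
        prev = choices[t]                          # trit selects one
        out.append(prev)
    return "".join(out)

dna = trits_to_dna([0, 1, 2, 0, 1])
print(dna)  # -> "CGTAG", with no two adjacent bases identical
```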

The synthesizing process also introduces error, placing a wrong base for every 500 correct ones. To reduce this type of error, the researchers synthesized overlapping stretches of 117 nucleotides (nt), each of which overlapped with preceding and following strands, such that all data points were encoded four times. This effectively eliminated reading error because the likelihood that all four strings have identical synthesis errors is negligible, explained Birney.
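The fourfold-overlap layout can be made concrete with a toy fragmenting scheme. The lengths below are illustrative, not the study's 117-nt fragments; the point is only that a stride of one quarter of the fragment length gives every interior position fourfold coverage.

```python
# Toy sketch of overlapping fragments: take fixed-length windows at a
# stride of one quarter of the window length, so each interior symbol
# is covered by four different fragments.

def overlapping_fragments(data: str, frag_len: int = 8):
    stride = frag_len // 4
    return [data[i:i + frag_len]
            for i in range(0, len(data) - frag_len + 1, stride)]

frags = overlapping_fragments("ACGTACGTACGTACGT", frag_len=8)
print(len(frags))                 # 5 fragments for a 16-symbol payload
coverage = sum(s <= 8 < s + 8     # how many fragments contain
               for s in range(0, 9, 2))  # interior position 8?
print(coverage)                   # -> 4
```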

Agilent Technologies in California synthesized more than 1 million copies of each 117-nt stretch of DNA, stored them as dried powder, and shipped it at room temperature from the United States to Germany via the UK. There, researchers took an aliquot of the sample, sequenced it using next-generation sequencing technology, and reconstructed the files.

Birney and Goldman envision DNA replacing other long-term archival methods, such as magnetic tape drives. Unlike other data storage systems, which are vulnerable to technological obsolescence, “methods for writing and reading DNA are going to be around for a long, long time,” said molecular biologist Thomas Bentin of the University of Copenhagen. Bentin, who was not involved in the research, compared DNA information storage to the fleeting heyday of the floppy disk—introduced only a few decades ago and already close to unreadable.  And though synthesizing and decoding DNA is currently still expensive, it is cheap to store. So for data that are intended to be stored for hundreds or even thousands of years, Goldman and Birney reckon that DNA could actually be cheaper than tape.

Additionally, there’s great potential to scale up from the 739 kilobytes encoded in the current study. The researchers calculate that 1 gram of DNA could hold more than 2 million megabytes of information, though encoding information on this scale will involve reducing the synthesis error rate even further, said bioengineer Mihri Ozkan at the University of California, Riverside, who did not participate in the research.

Despite the challenges that lie ahead, however, the current advance is “definitely worth attention,” synthetic biologist Drew Endy at Stanford University, who was not involved in the research, wrote in an email to The Scientist. “It should develop into a new option for archival data storage, wherein DNA is not thought of as a biological molecule, but as a straightforward non-living data storage tape.”

N. Goldman et al., "Towards practical, high-capacity, low-maintenance information storage in synthesized DNA," Nature, doi:10.1038/nature11875, 2013.


DNA Machines (DNA)

by System Administrator - Monday, 1 July 2013, 12:53 PM

DNA Machines Inch Forward

Researchers are using DNA to compute, power, and sense.

By Sabrina Richards | March 5, 2013

Advances in nanotechnology are paving the way for a variety of “intelligent” nano-devices, from those that seek out and kill cancer cells to microscopic robots that build designer drugs. In the push to create such nano-sized devices, researchers have come to rely on DNA. With just a few bases, DNA may not have the complexity of amino acid-based proteins, but some scientists find this minimalism appealing.

“The rules that govern DNA’s interactions are simple and easy to control,” explained Andrew Turberfield, a nanoscientist at the University of Oxford. “A pairs with T, and C pairs with G, and that’s basically it.” The limited options make DNA-based nanomachines more straightforward to design than protein-based alternatives, he noted, yet they could serve many of the same functions. Indeed, the last decade has seen the development of a dizzying array of DNA-based nanomachines, including DNA walkers, computers, and biosensors.
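The pairing rule Turberfield describes is simple enough to state as code. The sketch below just encodes Watson-Crick complementarity: a single strand binds the strand whose sequence is its reverse complement.

```python
# Watson-Crick pairing: A<->T and C<->G. A strand hybridizes with the
# reverse complement of its sequence, which is the whole design rule
# behind DNA tracks, toeholds, and walker feet.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(strand))

print(reverse_complement("ATTGC"))  # -> "GCAAT"
```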

Furthermore, like protein-based machines, the new technologies rely on the same building blocks that cells use. As such, DNA machines “piggyback on natural cellular processes and work happily with the cell,” said Timothy Lu, a synthetic biologist at the Massachusetts Institute of Technology (MIT), allowing nanoscientists to “think about addressing issues related to human disease.”

Walk the line

One of the major advancements of DNA nanotechnology is the development of DNA nanomotors—miniscule devices that can move on their own. Such autonomously moving devices could potentially be programmed to carry drugs directly to target tissues, or serve as tiny factories by building products like designer drugs or even other nanomachines.

DNA-based nanomachines rely on single-stranded DNA's natural tendency to bind strands with complementary sequences, setting up tracks of DNA to serve as toeholds for the single-stranded feet of DNA walkers. In 2009, Nadrian Seeman's team at New York University built a tiny DNA walker comprising two legs that moved like an inchworm along a 49-nanometer-long DNA path.

But to direct drugs or assemble useful products, researchers need DNA nanomachines to do more than move blindly forward. In 2010, Seeman created a DNA walker that served as a "nanoscale assembly line" to construct different products. In this system, a six-armed DNA walker shaped like a starfish somersaulted along a DNA track, passing three DNA way stations that each provided a different type of gold particle. The researchers could change the cargo stations' conformations to bring the gold particles within the robot's reach, allowing them to get picked up, or to move them farther away so that the robot would simply pass them by.

"It's analogous to the chassis of a car going down an assembly line," explained Seeman. The walker "could pick up nothing, any one of the three different cargos, any two of them, or all three," he said—a total of 8 different products.
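The count of eight follows directly from the three stations being engaged or bypassed independently: the products are the subsets of the cargo set. The particle names below are illustrative, not from the paper.

```python
# Each of the three cargo stations is independently engaged or bypassed,
# so the possible products are exactly the subsets of the cargo set:
# 2 ** 3 = 8, matching Seeman's count.
from itertools import combinations

cargos = ["gold_A", "gold_B", "gold_C"]  # illustrative cargo names
products = [set(c) for r in range(len(cargos) + 1)
            for c in combinations(cargos, r)]
print(len(products))  # -> 8
```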

And last year, Oxford’s Turberfield added another capability to the DNA walker tool box: navigating divergent paths. Turberfield and his colleagues created a DNA nanomotor that could be programmed to choose one of four destinations via a branching DNA track. The track itself could be programmed to guide the nanomotor, and in the most sophisticated version of the system, Turberfield’s nanomachine carried its own path-determining instructions.

Next up, Turberfield hopes to make the process “faster and simpler” so that the nanomotor can be harnessed to build a biomolecule. “The idea we’re pursuing is as it takes a step, it couples that step to a chemical reaction,” he explained. This would enable a DNA nanomotor to string together a polymer, perhaps as a method to “build” drugs for medical purposes, he added.

DNA-based biosensing

DNA's flexibility and simplicity have also been harnessed to create an easily regenerated biosensor. Chemist Weihong Tan at the University of Florida realized that DNA could be used to create a sensor capable of easily switching from its "on" state back to its "off" state. As proof of principle, Tan and his team designed biosensor switches by attaching dye-conjugated silver beads to DNA strands and studding the strands onto a gold surface. In the "off" state, the switches are pushed upright by extra DNA strands that fold around them, holding the silver beads away from the gold surface. These extra "off"-holding strands are designed to bind to the target molecule—in this case ATP—such that adding the target to the system coaxes the supporting strands away from the DNA switches. This allows the switch to fold over, bringing the silver bead within a few nanometers of the gold surface and creating a "hotspot" for Raman spectroscopy—the switch's "on" state.

Previous work on creating biosensors based on Raman spectroscopy, which measures the shift in energy from a laser beam after it’s scattered by individual molecules, created irreversible hotspots. But Tan can wash away the ATP and add more supporting strands to easily ready his sensor for another round of detection, making it a re-usable technology.

Though his sensor is in its early stages, Tan envisions designing biosensors for medical applications like cancer biomarker detection. By using detection strands that bind directly to a specific cancer biomarker, biosensors based on Tan’s strategy would be able to sensitively detect signs of cancer without need for prior labeling with radionuclides or fluorescent dyes, he noted.

Computing with DNA

Yet another potential use for DNA is in data storage and computing, and researchers have recently demonstrated the molecule's ability to store and transmit information. Researchers at Harvard University recently packed an impressive density of information into DNA—more than 5 petabits (a petabit is 1,000 terabits) of data per cubic millimeter of DNA—and other scientists are hoping to take advantage of DNA's ability to encode instructions for turning genes on and off to create entire DNA-based computers.

Although it’s unlikely that DNA-based computing will ever be as lightning fast as the silicon-based chips in our laptops and smartphones, DNA “allows us to bring computation to other realms where silicon-based computing will not perform,” said MIT’s Lu—such as living cells.

In his latest project, published last month (February 10) in Nature Biotechnology, Lu and his colleagues used Escherichia coli cells to design cell-based logic circuits that “remember” what functions they’ve performed by permanently altering DNA sequences. The system relies on DNA recombinases that can flip the direction of transcriptional promoters or terminators placed in front of a green fluorescent protein (GFP) gene. Flipping a backward-facing promoter can turn on GFP expression, for example, as can inverting a forward-facing terminator. In contrast, inverting a forward-facing promoter or a backward-facing terminator can block GFP expression. By using target sequences unique to two different DNA recombinases, Lu could control which promoters or terminators were flipped. By switching the number and direction of promoters and terminators, as well as changing which recombinase target sequences flanked each genetic element, Lu and his team induced the bacterial cells to perform basic logic functions, such as AND and OR.
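A toy simulation can make the flip-based logic concrete. The layout below (one promoter and one terminator upstream of GFP, each flipped by one recombinase input) is an illustrative sketch of the principle, not the published circuit design.

```python
# Toy model of recombinase logic: elements upstream of GFP are
# (kind, forward) pairs. GFP is expressed only if a forward-facing
# promoter fires and no forward-facing terminator sits between it
# and the gene.

def gfp_on(elements):
    promoter_on = False
    for kind, forward in elements:
        if kind == "promoter" and forward:
            promoter_on = True
        elif kind == "terminator" and forward and promoter_on:
            return False  # transcription stopped before reaching GFP
    return promoter_on

def flip(element):
    kind, forward = element
    return (kind, not forward)

# AND gate: the promoter starts reversed and the terminator starts
# forward; input A flips the promoter, input B flips the terminator,
# so GFP turns on only when both inputs have acted.
def and_gate(a, b):
    promoter = ("promoter", False)
    terminator = ("terminator", True)
    if a:
        promoter = flip(promoter)
    if b:
        terminator = flip(terminator)
    return gfp_on([promoter, terminator])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))  # True only for (1, 1)
```

Because each flip permanently rewrites the element's orientation, the "memory" in the real system comes for free: the DNA itself records which inputs have occurred.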

Importantly, because the recombinases permanently alter the bacteria’s DNA sequence, the cells “remember” the logic functions they’ve completed—even after the inputs are long gone and 90 cell divisions have passed. Lu already envisions medical applications relying on such a system. For example, he speculated that bacterial cells could be programmed to signal the existence of tiny intestinal bleeds that may indicate intestinal cancer by expressing a dye in response to bloody stool. Such a diagnostic tool could be designed in the form of a probiotic pill, he said, replacing more invasive procedures.

Applications based on these studies are still years away from the bedside or the commercial market, but researchers are optimistic. “[It’s] increasingly possible to build more sophisticated things on a nanometer scale,” said Turberfield. “We’re at very early stages, but we’re feeling our way.”
