Glosario KW | KW Glossary
Ontology Design | Diseño de Ontologías
NewsML - a standard way to describe news information content so that it can be distributed and reused widely on Web sites and other media.
Also see Webification.
1) To Webify is to convert some information content from its original format into a format capable of being displayed on the World Wide Web. Some conversion examples are:
Using the File Transfer Protocol (FTP) from the Web browser, text pages (with files in the ASCII TXT format) can also be "Webified" for display by Web users. Many Internet Request for Comments (RFC) documents are available on the Web in the text format. The only Webification these files need is to make them available in a directory accessible to the FTP server.
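The conversion step itself can be tiny. As an illustrative sketch (the function name webify_text is hypothetical and not from any real tool), plain ASCII text can be wrapped in minimal HTML for browser display:

```python
import html

def webify_text(text: str, title: str = "Webified document") -> str:
    """Wrap plain ASCII text in minimal HTML so a Web browser can
    display it. Illustrative sketch only: real converters also handle
    character encodings, hyperlinks, and images."""
    body = html.escape(text)  # escape <, >, & so the text renders literally
    return (f"<html><head><title>{html.escape(title)}</title></head>"
            f"<body><pre>{body}</pre></body></html>")

page = webify_text("See RFC 2616 <section 3> for details.", title="RFC notes")
print(page)
```

Wrapping the text in a pre element preserves the original line breaks and spacing, which is usually what you want for plain-text documents such as RFCs.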
2) Webify is the name of a program that makes a structured tree of HTML files and JPEG or GIF images from Postscript files.
Ruby on Rails
The principal difference between Ruby on Rails and other frameworks for development lies in the speed and ease of use that developers working within the environment enjoy. Changes made to applications are immediately applied, avoiding the time-consuming steps normally associated with the web development cycle. According to David Geary, a Java expert, the Ruby-based framework is five to 10 times faster than comparable Java-based frameworks. In a blog posting, Geary predicted that Rails would be widely adopted in the near future.
Rails is made up of several components, beyond Ruby itself, including:
Rails can run on most Web servers that support CGI. The framework also supports MySQL, PostgreSQL, SQLite, SQL Server, DB2 and Oracle. Rails is also an MVC (model, view, controller) framework where all layers are provided by Rails, as opposed to relying on other, additional frameworks to achieve full MVC support. Invented by David Heinemeier Hansson, Ruby on Rails has been developed as an open-source project, with distributions available through rubyonrails.org.
Contributor(s): Alexander B. Howard
This was last updated in April 2006
Message-driven processing is an approach used within the client/server computing model in which a client (for example, your Web browser) sends a service request in the form of a specially-formatted message to a program that acts as a request broker, handling messages from many clients intended for many different server applications. A message may contain the name of the service (application) wanted and possibly a requested priority or time of forwarding. The request broker manages a queue of requests (and possibly replies) and screens the details of different kinds of clients and servers from each other. Both client and server need only understand the messaging interface. Message-driven processing is often used in distributed computing in a geographically-dispersed network and as a way to screen new client applications from having to interact directly with legacy server applications. Special software that provides message-driven processing is known as middleware.
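The request-broker pattern described above can be sketched in a few lines. This is a toy, single-process illustration; the class and method names are hypothetical and not drawn from any real middleware product:

```python
import queue

# Toy sketch of a message-driven request broker (single process;
# names here are hypothetical, not from any real middleware).
class RequestBroker:
    def __init__(self):
        self._pending = queue.PriorityQueue()   # lower number = higher priority
        self._services = {}                     # service name -> handler

    def register(self, service, handler):
        """A server application announces the service it provides."""
        self._services[service] = handler

    def submit(self, service, payload, priority=10):
        """A client sends a message naming the service it wants,
        with an optional requested priority."""
        self._pending.put((priority, service, payload))

    def dispatch_one(self):
        """Route the highest-priority queued message to its server."""
        priority, service, payload = self._pending.get()
        return self._services[service](payload)

broker = RequestBroker()
broker.register("echo", lambda msg: msg.upper())
broker.submit("echo", "hello", priority=1)
print(broker.dispatch_one())    # prints "HELLO"
```

Note how the client and server never reference each other directly: both interact only with the broker's messaging interface, which is the point of the pattern.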
In IBM's MQSeries middleware messaging product, its MDp (for "message-driven processor") provides an example. MDp is an intermediary layer between clients and a legacy system of applications, and serves as a request broker between clients and applications. The client formulates a request; MDp (which retains information about which applications and databases are to be invoked and where they reside) then breaks the request down into work units and sends these out to the appropriate server applications and databases. After executing the tasks, the back-end processes return the results to MDp, which in turn formulates replies to return to either the requesting client or some other target destination.
Contributor(s): Jonathan Caforio
This was last updated in September 2005
User Interface (UI)
Also see human-computer interaction.
In information technology, the user interface (UI) is everything designed into an information device with which a human being may interact -- including display screen, keyboard, mouse, light pen, the appearance of a desktop, illuminated characters, help messages, and how an application program or a Web site invites interaction and responds to it. In early computers, there was very little user interface except for a few buttons at an operator's console. The user interface was largely in the form of punched card input and report output.
Later, a user was provided the ability to interact with a computer online and the user interface was a nearly blank display screen with a command line, a keyboard, and a set of commands and computer responses that were exchanged. This command line interface led to one in which menus (lists of choices written in text) predominated. And, finally, the graphical user interface (GUI) arrived, originating mainly in Xerox's Palo Alto Research Center, adopted and enhanced by Apple Computer, and finally effectively standardized by Microsoft in its Windows operating systems.
The user interface can arguably include the total "user experience," which may include the aesthetic appearance of the device, response time, and the content that is presented to the user within the context of the user interface.
Contributor(s): Mike Dang
This was last updated in April 2005
In the context of the World Wide Web, a gravesite is either:
2) A Web site that, in the eyes of marketers, has failed to get sufficient traffic to be interesting to advertisers or other revenue providers, possibly by not finding an audience niche or building an audience community, or by failing to find a distribution partner such as America Online, Yahoo, or Netscape.
This was last updated in April 2005
You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.
Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.
In 2005, virtualization software was adopted faster than anyone imagined, including the experts. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization, and server virtualization.
Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.
This was last updated in December 2010
DNA Storage (DNA)
DNA storage is the process of encoding and decoding binary data onto and from synthesized strands of DNA (deoxyribonucleic acid). In nature, DNA molecules contain genetic blueprints for living cells and organisms.
To store a binary digital file as DNA, the individual bits (binary digits) are converted from 1 and 0 to the letters A, C, G, and T. These letters represent the four main compounds in DNA: adenine, cytosine, guanine, and thymine. The physical storage medium is a synthesized DNA molecule containing these four compounds in a sequence corresponding to the order of the bits in the digital file. To recover the data, the sequence A, C, G, and T representing the DNA molecule is decoded back into the original sequence of bits 1 and 0.
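The bit-to-base conversion can be illustrated in a few lines. This is a deliberately minimal two-bits-per-base mapping, an assumption for demonstration only; published schemes use more elaborate codes to control synthesis and sequencing errors:

```python
# Minimal two-bits-per-base mapping (illustrative assumption, not any
# published scheme): each pair of bits selects one of the four bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Convert an even-length bit string to a DNA base sequence."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> str:
    """Recover the original bit string from the base sequence."""
    return "".join(BASE_TO_BITS[base] for base in seq)

original = "0100100001101001"       # the ASCII bits of "Hi"
strand = encode(original)
print(strand)                       # prints "CAGACGGC"
assert decode(strand) == original   # round trip recovers the data
```

Because each base carries two bits, a sequence of n bases stores 2n bits, which is the source of DNA's enormous theoretical data density.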
Researchers at the European Molecular Biology Laboratory (EMBL) have encoded audio, image, and text files into a synthesized DNA molecule about the size of a dust grain, and then successfully read the information from the DNA to recover the files, claiming 99.99 percent accuracy.
An obvious advantage of DNA storage, should it ever become practical for everyday use, would be its ability to store massive quantities of data in media having small physical volume. Dr. Sriram Kosuri, a scientist at Harvard, believes that all the digital information currently existing in the world could reside in four grams of synthesized DNA.
A less obvious, but perhaps more significant, advantage of DNA storage is its longevity. Because DNA molecules can survive for thousands of years, a digital archive encoded in this form could be recovered by people for many generations to come. This longevity might resolve the troubling prospect of our digital age being lost to history because of the relative impermanence of optical, magnetic, and electronic media.
The principal disadvantages of DNA storage for practical use today are its slow encoding speed and high cost. The speed issue limits the technology's promise for archiving purposes in the near term, although eventually the speed may improve to the point where DNA storage can function effectively for general backup applications and perhaps even primary storage. As for the cost, Dr. Nick Goldman of the EMBL suggests that by the mid-2020s, expenses could come down to the point where the technology becomes commercially viable on a large scale.
This was last updated in April 2013
Contributor(s): Stan Gibilisco
Radical Computer Rethink (DNA)
DNA offers radical computer rethink
A team of researchers at the University of Toyama in Japan, led by Masahiko Inouye, claim to have created the world's first stable artificial DNA molecules, made from synthesised nucleosides that resemble their natural counterparts.
DNA is made up of four basic building blocks, or bases, which code proteins used in cell functioning and development. While other researchers have developed DNA molecules with a few select artificial parts, the Japanese team put together four completely new artificial bases inside the framework of a DNA molecule, creating unusually stable, double-stranded structures resembling natural DNA.
The scientists say the artificial DNA acts like the real thing, and even forms right-handed duplexes with complementary artificial strands. They hope to one day use their discovery to create a new biological information storage system that functions outside the cell. Artificial DNA could be advantageously used instead of natural DNA due to its stability against naturally occurring enzymes and its structural diversity.
The unique chemistry of these artificial bases and DNA structures, coupled with their high stability, offers limitless possibilities for new biotechnology materials and applications, such as the creation of powerful DNA computers. These computers are constructed using DNA as software and enzymes as hardware, rather than traditional silicon-based components. By mixing DNA and enzymes in this way and monitoring the reactions, complex computer calculations can be performed.
DNA molecules are similar to computer hard drives in the way they save information about an individual's genes. However, they have the potential to perform calculations much faster than today's fastest man-made computers. This is because, unlike a traditional computer, calculations are performed simultaneously - similar to a parallel computing schematic - as numerous different DNA molecules attempt to test various possibilities at once.
In addition, unlike today's PCs, DNA computers require minimal or no external power sources, as they run on internal energy produced during cellular reactions. There is huge potential for a computer that does not need to be plugged in; the implications for laptops and true mobility are endless.
For these reasons, scientists all over the world are looking for ways in which DNA may be integrated into a computer chip to create a biochip that will make standard computers faster and more energy efficient. DNA computers could potentially be the future of green IT.
Although the idea of artificial DNA and DNA computers may seem far fetched, the concept is entirely plausible if one keeps an open mind: although DNA solutions may seem impossibly complex, there are few people who actually understand how silicon-based computing works. In addition, current systems are based on the binary system, and DNA computers would be similar in nature: they could leverage the pre-existing relationships between the four bases that are the core of every DNA molecule.
However, the more sinister connotations of artificial DNA computing - though unfounded - remain fixed in users' minds. Partly for this reason, ever since the first concept of DNA computing emerged in 1994, researchers have been trying to develop artificial versions of DNA. Since the components of the artificial DNA created by Inouye's team do not exist in natural DNA, it is nearly impossible for them to react together, eliminating any threat of mutation.
The discovery of artificial DNA by Inouye and the Japanese team could be vital to the furthering of DNA computing as it would allow researchers to build custom DNA structures, which are optimised for computing. Unfortunately, the current method used for constructing the DNA structures creates only short strands, which are not long enough to encode information.
The technology for building artificial DNA is still extremely new, however, and is only the first step (albeit a huge one) towards using DNA as an external information storage system. DNA computers will not be replacing today's standard PCs any time soon as there are still years of research to be conducted before it can be determined if this technology will be fruitful in computing. That said, as DNA computing becomes more high profile, it may be beneficial for hardware technology giants such as Apple, Dell, HP, IBM, Intel and Sun Microsystems to invest in research that emphasises artificial DNA and its potential applications.
Ultimately, DNA computers are still in their infancy, but, if successful, will be capable of storing much more data than a regular PC and would be considerably more energy efficient and smaller in size. Given these huge benefits, investors should not rule DNA computers out of their strategies purely because they seem too implausible. Those vendors that participate in this revolutionary research could be pioneers in the development of DNA microprocessors and computers, if and when the technology is found to be viable.
Ruchi Mallya is an analyst on Datamonitor's Public Sector Technology team, covering the life sciences. Her research focuses on the use of technology in pharmaceuticals and biotechnology.
Decoding DNA: New Twists and Turns (DNA)
The Scientist takes a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.
The structure of DNA was solved on February 28, 1953 by James D. Watson and Francis H. Crick, who recognized at once the potential of DNA's double helical structure for storing genetic information — the blueprint of life. For 60 years, this exciting discovery has inspired scientists to decipher the molecule's manifold secrets and resulted in a steady stream of innovative advances in genetics and genomics.
What's Next in Next-Generation Sequencing?
The advent of next-generation sequencing is widely considered the field's most transformative technological advance, doubling the amount of sequence data almost every 5 months and driving a precipitous drop in the cost of sequencing a piece of DNA. The first webinar will track the evolution of next-generation sequencing and explore what the future holds in terms of the technology and its applications.
George Church is a professor of genetics at Harvard Medical School, and Director of the Personal Genome Project, providing the world's only open-access information on human genomic, environmental and trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing, and barcoding. These led to the first commercial genome sequence (pathogen, Helicobacter pylori) in 1994. His innovations in "next generation" genome sequencing and synthesis and cell/tissue engineering resulted in 12 companies spanning fields including medical genomics (Knome, Alacris, AbVitro, GoodStart, Pathogenica) and synthetic biology (LS9, Joule, Gen9, WarpDrive) as well as new privacy, biosafety, and biosecurity policies. He is director of the NIH Centers of Excellence in Genomic Science. His honors include election to NAS & NAE and Franklin Bower Laureate for Achievement in Science.
George Weinstock is currently a professor of genetics and of molecular microbiology at Washington University in Saint Louis. He was previously codirector of the Human Genome Sequencing Center at Baylor College of Medicine in Houston, Texas where he was also a professor of molecular and human genetics. Dr. Weinstock received his BS degree from the University of Michigan (Biophysics, 1970) and his PhD from the Massachusetts Institute of Technology (Microbiology, 1977).
Joel Dudley is an assistant professor of genetics and genomic sciences and Director of Biomedical Informatics at Mount Sinai School of Medicine in New York City. His current research is focused on solving key problems in genomic and systems medicine through the development and application of translational and biomedical informatics methodologies. Dudley's published research covers topics in bioinformatics, genomic medicine, personal and clinical genomics, as well as drug and biomarker discovery. His recent work with coauthors describing a novel systems-based approach for computational drug repositioning, was featured in the Wall Street Journal, and earned designation as the NHGRI Director's Genome Advance of the Month. He is also coauthor (with Konrad Karczewski) of the forthcoming book, Exploring Personal Genomics. Dudley received a BS in microbiology from Arizona State University and an MS and PhD in biomedical informatics from Stanford University School of Medicine.
Unraveling the Secrets of the Epigenome
Original Broadcast Date: Thursday April 18, 2013
This second webinar in The Scientist's Decoding DNA series will cover the Secrets of the Epigenome, discussing what is currently known about DNA methylation, histone modifications, and chromatin remodeling and how this knowledge can translate to useful therapies.
Stephen Baylin is a professor of medicine and of oncology at the Johns Hopkins University School of Medicine, where he is also Chief of the Cancer Biology Division of the Oncology Center and Associate Director for Research of The Sidney Kimmel Comprehensive Cancer Center. Together with Peter Jones of the University of Southern California, Baylin also leads the Epigenetic Therapy Stand up to Cancer Team (SU2C). He and his colleagues have fostered the concept that DNA hypermethylation of gene promoters, with its associated transcriptional silencing, can serve as alternatives to mutations for producing loss of tumor-suppressor gene function. Baylin earned both his BS and MD degrees from Duke University, where he completed his internship and first-year residency in internal medicine. He then spent 2 years at the National Heart and Lung Institute of the National Institutes of Health. In 1971, he joined the departments of oncology and medicine at the Johns Hopkins University School of Medicine, an affiliation that still continues.
Victoria Richon heads the Drug Discovery and Preclinical Development Global Oncology Division at Sanofi. Richon joined Sanofi in November 2012 from Epizyme, where she served as vice president of biological sciences beginning in 2008. At Epizyme she was responsible for the strategy and execution of drug discovery and development efforts that ranged from target identification through candidate selection and clinical development, including biomarker strategy and execution. Richon received her BA in chemistry from the University of Vermont and her PhD in biochemistry from the University of Nebraska. She completed her postdoctoral research at Memorial Sloan-Kettering Cancer Center.
Paolo Sassone-Corsi is Donald Bren Professor of Biological Chemistry and Director of the Center for Epigenetics and Metabolism at the University of California, Irvine, School of Medicine. Sassone-Corsi is a molecular and cell biologist who has pioneered the links between cell-signaling pathways and the control of gene expression. His research on transcriptional regulation has elucidated a remarkable variety of molecular mechanisms relevant to the fields of endocrinology, neuroscience, metabolism, and cancer. He received his PhD from the University of Naples and completed his postdoctoral research at CNRS, in Strasbourg, France.
The Impact of Personalized Medicine
After the human genome was sequenced, Personalized Medicine became an end goal, driving both academia and the pharma/biotech industry to find and target cellular pathways and drug therapies that are unique to an individual patient. The final webinar in the series will help us better understand The Impact of Personalized Medicine, what we can expect to gain and where we stand to lose.
Jay M. ("Marty") Tenenbaum is founder and chairman of Cancer Commons. Tenenbaum’s background brings a unique perspective of a world-renowned Internet commerce pioneer and visionary. He was founder and CEO of Enterprise Integration Technologies, the first company to conduct a commercial Internet transaction. Tenenbaum joined Commerce One in January 1999, when it acquired Veo Systems. As chief scientist, he was instrumental in shaping the company's business and technology strategies for the Global Trading Web. Tenenbaum holds BS and MS degrees in electrical engineering from MIT, and a PhD from Stanford University.
Amy P. Abernethy, a palliative care physician and hematologist/oncologist, directs both the Center for Learning Health Care (CLHC) in the Duke Clinical Research Institute, and the Duke Cancer Care Research Program (DCCRP) in the Duke Cancer Institute. An internationally recognized expert in health-services research, cancer informatics, and delivery of patient-centered cancer care, she directs a prolific research program (CLHC/DCCRP) which conducts patient-centered clinical trials, analyses, and policy studies. Abernethy received her MD from Duke University School of Medicine.
Geoffrey S. Ginsburg, is the Director of Genomic Medicine at the Duke Institute for Genome Sciences & Policy. He is also the Executive Director of the Center for Personalized Medicine at Duke Medicine and a professor of medicine and pathology at Duke University Medical Center. His work spans oncology, infectious diseases, cardiovascular disease, and metabolic disorders. His research is addressing the challenges for translating genomic information into medical practice using new and innovative paradigms, and the integration of personalized medicine into health care. Ginsburg received his MD and PhD in biophysics from Boston University and completed an internal medicine residency at Beth Israel Hospital in Boston, Massachusetts.
Abhijit “Ron” Mazumder obtained his BA from Johns Hopkins University, his PhD from the University of Maryland, and his MBA from Lehigh University. He worked for Gen-Probe, Axys Pharmaceuticals, and Motorola, developing genomics technologies. Mazumder joined Johnson & Johnson in 2003, where he led feasibility research for molecular diagnostics programs and managed technology and biomarker partnerships. In 2008, he joined Merck as a senior director and Biomarker Leader. Mazumder rejoined Johnson & Johnson in 2010 and is accountable for all aspects of the development of companion diagnostics needed to support the therapeutic pipeline, including selection of platforms and partners, oversight of diagnostic development, support of regulatory submissions, and design of clinical trials for validation of predictive biomarkers.
Human Genome Project (DNA)
The Human Genome Project is a global, long-term research effort to identify the estimated 30,000 genes in human DNA (deoxyribonucleic acid) and to figure out the sequences of the chemical bases that make up human DNA. Findings are being collected in databases that researchers share. In addition to its scientific objectives, the Project also aims to address ethical, legal, and social issues (which the Project refers to as "ELSI"). The Project will also make use of results from the genetic research done on other animals, such as the fruit fly and the laboratory mouse. Research findings are expected to provide a dramatically greater understanding of how life works and specifically how we might better diagnose and treat human disorders. Besides giving us insights into human DNA, findings about nonhuman DNA may offer new ways to control our environment.
A genome is the sum of all the DNA in an organism. The DNA includes genes, each of which carries some information for making certain proteins, which in turn determine physical appearance, certain behavioral characteristics, how well the organism combats specific diseases, and other characteristics. There are four chemical bases in a genome. These bases are abbreviated as A, T, C, and G. The particular order of these chemical bases as they are repeated millions and even billions of time is what makes species different and each organism unique. The human genome has 3 billion pairs of bases.
Some databases that collect findings are already in existence. The plan is for all databases to be publicly available by the end of 2003. The organization of these databases and the algorithms for making use of the data are the subject of new graduate study programs and a new science called bioinformatics. A biochip is being developed that is expected to accelerate research by encapsulating known DNA sequences that can act as "test tubes" for trial substances that can then be analyzed for similarities.
This was last updated in September 2005
DNA-based Data Storage (DNA)
DNA-based Data Storage Here to Stay
The second example of storing digital data in DNA affirms its potential as a long-term storage medium.
Researchers have done it again—encoding 5.2 million bits of digital data in strings of DNA and demonstrating the feasibility of using DNA as a long-term, data-dense storage medium for massive amounts of information. In the new study released today (January 23) in Nature, researchers encoded one color photograph, 26 seconds of Martin Luther King Jr.’s “I Have a Dream” speech, and all 154 of Shakespeare’s known sonnets into DNA.
Though it’s not the first example of storing digital data in DNA, “it’s important to celebrate the emergence of a field,” said George Church, the Harvard University synthetic biologist whose own group published a similar demonstration of DNA-based data storage last year in Science. The new study, he said, “is moving things forward.”
Scientists have long recognized DNA’s potential as a long-term storage medium. “DNA is a very, very dense piece of information storage,” explained study author Ewan Birney of the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) in the UK. “It’s very light, it’s very small.” Under the correct storage conditions—dry, dark and cold—DNA easily withstands degradation, he said.
Advances in synthesizing defined strings of DNA, and sequencing them to extract information, have finally made DNA-based information storage a real possibility. Last summer, Church's group published the first demonstration of DNA's storage capability, encoding the digital version of Church's book Regenesis, which included 11 JPEG images, into DNA, using Gs and Cs to represent 1s of the binary code, and As and Ts to represent 0s.
Now, Birney and his colleagues are looking to reduce the error associated with DNA storage. When a strand of DNA has a run of identical bases, it’s difficult for next-generation sequencing technology to correctly read the sequence. Church’s work, for example, produced 10 errors out of 5.2 million bits. To prevent these types of errors, Birney and his EMBL-EBI collaborator Nick Goldman first converted each byte—a string of eight 0s and 1s—into a single “trit” made up of 5 or 6 digits of 0s, 1s, and 2s. Then, when converting these trits into the A, G, T and C bases of DNA, the researchers avoided repeating bases by using a code that took the preceding base into account when determining which base would represent the next digit.
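The repeat-avoiding conversion can be sketched as follows. This mirrors the idea just described rather than the exact published code, and the particular trit-to-base assignment is an assumption:

```python
BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    """Map base-3 digits (trits) to DNA bases such that no base is
    ever repeated: each trit selects one of the three bases that
    differ from the previously written base."""
    out = []
    for t in trits:
        choices = [b for b in BASES if b != prev]   # always three options
        prev = choices[t]
        out.append(prev)
    return "".join(out)

seq = trits_to_dna([0, 0, 1, 2, 0])
print(seq)                                          # prints "CAGTA"
# by construction, no two adjacent bases are identical
assert all(a != b for a, b in zip(seq, seq[1:]))
```

Because the previous base is always excluded from the three candidates, even a long run of identical trits (e.g. 2, 2, 2, ...) produces an alternating base sequence, which is exactly the homopolymer-run problem the encoding is designed to avoid.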
The synthesizing process also introduces error, placing a wrong base for every 500 correct ones. To reduce this type of error, the researchers synthesized overlapping stretches of 117 nucleotides (nt), each of which overlapped with preceding and following strands, such that all data points were encoded four times. This effectively eliminated reading error because the likelihood that all four strings have identical synthesis errors is negligible, explained Birney.
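The fourfold coverage follows from simple windowing arithmetic: with window length L and step L/4, every interior position falls inside four windows. A sketch under simplified parameters (the real strands were 117 nt and also carried indexing information):

```python
def overlapping_segments(seq, length=100, step=25):
    """Split a long sequence into overlapping fixed-length windows.
    With step = length / 4, every position away from the ends is
    covered by four different segments."""
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, step)]

data = "ACGT" * 50                       # a 200-base toy sequence
segments = overlapping_segments(data)
print(len(segments))                     # prints 5
# position 100 is covered by exactly four of the five segments
covering = [i for i in range(len(segments)) if i * 25 <= 100 < i * 25 + 100]
print(len(covering))                     # prints 4
```

A synthesis error in one segment can then be outvoted by the three other copies of the same position when the data are reconstructed.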
Agilent Technologies in California synthesized more than 1 million copies of each 117-nt stretch of DNA, stored them as dried powder, and shipped them at room temperature from the United States to Germany via the UK. There, researchers took an aliquot of the sample, sequenced it using next-generation sequencing technology, and reconstructed the files.
Birney and Goldman envision DNA replacing other long-term archival methods, such as magnetic tape drives. Unlike other data storage systems, which are vulnerable to technological obsolescence, “methods for writing and reading DNA are going to be around for a long, long time,” said molecular biologist Thomas Bentin of the University of Copenhagen. Bentin, who was not involved in the research, compared DNA information storage to the fleeting heyday of the floppy disk—introduced only a few decades ago and already close to unreadable. And though synthesizing and decoding DNA is currently still expensive, it is cheap to store. So for data that are intended to be stored for hundreds or even thousands of years, Goldman and Birney reckon that DNA could actually be cheaper than tape.
Additionally, there’s great potential to scale up from the 739 kilobytes encoded in the current study. The researchers calculate that 1 gram of DNA could hold more than 2 million megabytes of information, though encoding information on this scale will involve reducing the synthesis error rate even further, said bioengineer Mihri Ozkan at the University of California, Riverside, who did not participate in the research.
Despite the challenges that lie ahead, however, the current advance is “definitely worth attention,” synthetic biologist Drew Endy at Stanford University, who was not involved in the research, wrote in an email to The Scientist. “It should develop into a new option for archival data storage, wherein DNA is not thought of as a biological molecule, but as a straightforward non-living data storage tape.”
N. Goldman et al., “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” Nature, doi: 10.1038/nature.11875, 2013.
DNA Machines (DNA)
DNA Machines Inch Forward
Researchers are using DNA to compute, power, and sense.
March 5, 2013|
Advances in nanotechnology are paving the way for a variety of “intelligent” nano-devices, from those that seek out and kill cancer cells to microscopic robots that build designer drugs. In the push to create such nano-sized devices, researchers have come to rely on DNA. With just a few bases, DNA may not have the complexity of amino acid-based proteins, but some scientists find this minimalism appealing.
“The rules that govern DNA’s interactions are simple and easy to control,” explained Andrew Turberfield, a nanoscientist at the University of Oxford. “A pairs with T, and C pairs with G, and that’s basically it.” The limited options make DNA-based nanomachines more straightforward to design than protein-based alternatives, he noted, yet they could serve many of the same functions. Indeed, the last decade has seen the development of a dizzying array of DNA-based nanomachines, including DNA walkers, computers, and biosensors.
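The pairing rules Turberfield describes are simple enough to capture in a few lines. A minimal Python sketch (illustrative only; the function names are my own, not from any nanotechnology toolkit):

```python
# Watson-Crick pairing: A binds T, C binds G. Two strands hybridize when one
# matches the other's reverse complement (strands run antiparallel).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that pairs with `strand`, read 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(strand))

def binds(strand: str, target: str) -> bool:
    """True if the two strands are perfectly complementary."""
    return target == reverse_complement(strand)

print(reverse_complement("ATCG"))  # CGAT
```

This predictability is exactly what makes DNA tracks and toeholds designable: a walker's single-stranded foot will land only where the track presents its reverse complement.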
Furthermore, like protein-based machines, the new technologies rely on the same building blocks that cells use. As such, DNA machines “piggyback on natural cellular processes and work happily with the cell,” said Timothy Lu, a synthetic biologist at the Massachusetts Institute of Technology (MIT), allowing nanoscientists to “think about addressing issues related to human disease.”
Walk the line
One of the major advancements of DNA nanotechnology is the development of DNA nanomotors—miniscule devices that can move on their own. Such autonomously moving devices could potentially be programmed to carry drugs directly to target tissues, or serve as tiny factories by building products like designer drugs or even other nanomachines.
DNA-based nanomachines rely on single-stranded DNA’s natural tendency to bind strands with complementary sequences, setting up tracks of DNA to serve as toeholds for the single-stranded feet of DNA walkers. In 2009, Nadrian Seeman’s team at New York University built a tiny DNA walker with two legs that moved like an inchworm along a 49-nanometer-long DNA path.
But to direct drugs or assemble useful products, researchers need DNA nanomachines to do more than move blindly forward. In 2010, Seeman created a DNA walker that served as a “nanoscale assembly line” to construct different products. In this system, a six-armed DNA walker shaped like a starfish somersaulted along a DNA track, passing three DNA way stations that each provided a different type of gold particle. The researchers could change the cargo stations’ conformations to bring the gold particles within the robot’s reach, allowing them to get picked up, or to move them farther away so that the robot would simply pass them by.
“It’s analogous to the chassis of a car going down an assembly line,” explained Seeman. The walker “could pick up nothing, any one of the three different cargos, two of the three, or all three cargos,” he said—a total of eight different products.
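Seeman’s count is straightforward combinatorics: with each of the three stations independently set to hand over its cargo or withhold it, the walker can emerge carrying any subset of the three particles, and there are 2³ = 8 such subsets. A quick sketch (the cargo names are placeholders, not terms from the study):

```python
from itertools import product

# Placeholder names for the three gold-particle cargos described in the article.
cargos = ["gold_A", "gold_B", "gold_C"]

# Each way station is either positioned to hand over its cargo (True) or to
# let the walker pass (False); the possible loads are all subsets of the cargos.
products = [
    [c for c, handed_over in zip(cargos, choices) if handed_over]
    for choices in product([False, True], repeat=3)
]

print(len(products))  # 8: from the empty chassis up to all three cargos
```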
And last year, Oxford’s Turberfield added another capability to the DNA walker tool box: navigating divergent paths. Turberfield and his colleagues created a DNA nanomotor that could be programmed to choose one of four destinations via a branching DNA track. The track itself could be programmed to guide the nanomotor, and in the most sophisticated version of the system, Turberfield’s nanomachine carried its own path-determining instructions.
Next up, Turberfield hopes to make the process “faster and simpler” so that the nanomotor can be harnessed to build a biomolecule. “The idea we’re pursuing is that as it takes a step, it couples that step to a chemical reaction,” he explained. This would enable a DNA nanomotor to string together a polymer, perhaps as a method to “build” drugs for medical purposes, he added.
DNA’s flexibility and simplicity have also been harnessed to create an easily regenerated biosensor. Chemist Weihong Tan at the University of Florida realized that DNA could be used to create a sensor capable of easily switching from its “on” state back to its “off” state. As proof of principle, Tan and his team designed biosensor switches by attaching dye-conjugated silver beads to DNA strands and studding the strands onto a gold surface. In the “off” state, the switches are pushed upright by extra DNA strands that fold around them, holding the silver beads away from the gold surface. These extra “off”-holding strands are designed to bind to the target molecule—in this case ATP—such that adding the target to the system coaxes the supporting strands away from the DNA switches. This allows the switch to fold over, bringing the silver bead within a few nanometers of the gold surface and creating a “hotspot” for Raman spectroscopy—the switch’s “on” state.
Previous work on creating biosensors based on Raman spectroscopy, which measures the shift in energy from a laser beam after it’s scattered by individual molecules, created irreversible hotspots. But Tan can wash away the ATP and add more supporting strands to easily ready his sensor for another round of detection, making it a reusable technology.
Though his sensor is in its early stages, Tan envisions designing biosensors for medical applications like cancer biomarker detection. By using detection strands that bind directly to a specific cancer biomarker, biosensors based on Tan’s strategy would be able to sensitively detect signs of cancer without the need for prior labeling with radionuclides or fluorescent dyes, he noted.
Computing with DNA
Yet another potential use for DNA is in data storage and computing, and researchers have recently demonstrated the molecule’s ability to store and transmit information. Researchers at Harvard University recently packed an impressive density of information into DNA—more than 5 petabits (1 petabit is 1,000 terabits) of data per cubic millimeter of DNA—and other scientists are hoping to take advantage of DNA’s ability to encode instructions for turning genes on and off to create entire DNA-based computers.
Although it’s unlikely that DNA-based computing will ever be as lightning fast as the silicon-based chips in our laptops and smartphones, DNA “allows us to bring computation to other realms where silicon-based computing will not perform,” said MIT’s Lu—such as living cells.
In his latest project, published last month (February 10) in Nature Biotechnology, Lu and his colleagues used Escherichia coli cells to design cell-based logic circuits that “remember” what functions they’ve performed by permanently altering DNA sequences. The system relies on DNA recombinases that can flip the direction of transcriptional promoters or terminators placed in front of a green fluorescent protein (GFP) gene. Flipping a backward-facing promoter can turn on GFP expression, for example, as can inverting a forward-facing terminator. In contrast, inverting a forward-facing promoter or a backward-facing terminator can block GFP expression. By using target sequences unique to two different DNA recombinases, Lu could control which promoters or terminators were flipped. By switching the number and direction of promoters and terminators, as well as changing which recombinase target sequences flanked each genetic element, Lu and his team induced the bacterial cells to perform basic logic functions, such as AND and OR.
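The flipping logic described above can be sketched as a toy simulation. This is an illustration of the general idea only, not the paper’s actual circuit designs; the element ordering, function names, and the particular AND-gate wiring below are assumptions made for the example:

```python
# Toy model: genetic elements upstream of the GFP gene are (kind, forward)
# pairs listed 5' to 3'. Transcription reaches GFP only if a forward-facing
# promoter fires and no forward-facing terminator blocks it before the gene.
def expresses_gfp(elements):
    transcribing = False
    for kind, forward in elements:
        if kind == "promoter" and forward:
            transcribing = True
        elif kind == "terminator" and forward and transcribing:
            transcribing = False
    return transcribing

def flip(elements, index):
    """A recombinase inverts the element between its target sites."""
    elements = list(elements)
    kind, forward = elements[index]
    elements[index] = (kind, not forward)
    return elements

# One way to wire an AND gate: start with a backward-facing promoter and a
# forward-facing terminator. Input A's recombinase flips the promoter, input
# B's flips the terminator; GFP appears only when both have acted.
def and_gate(a, b):
    circuit = [("promoter", False), ("terminator", True)]
    if a:
        circuit = flip(circuit, 0)
    if b:
        circuit = flip(circuit, 1)
    return expresses_gfp(circuit)
```

Because each flip permanently rewrites an element’s orientation in the DNA itself, the circuit’s output reflects every input it has ever received, which is the “memory” behavior the article describes.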
Importantly, because the recombinases permanently alter the bacteria’s DNA sequence, the cells “remember” the logic functions they’ve completed—even after the inputs are long gone and 90 cell divisions have passed. Lu already envisions medical applications relying on such a system. For example, he speculated that bacterial cells could be programmed to signal the existence of tiny intestinal bleeds that may indicate intestinal cancer by expressing a dye in response to bloody stool. Such a diagnostic tool could be designed in the form of a probiotic pill, he said, replacing more invasive procedures.
Applications based on these studies are still years away from the bedside or the commercial market, but researchers are optimistic. “[It’s] increasingly possible to build more sophisticated things on a nanometer scale,” said Turberfield. “We’re at very early stages, but we’re feeling our way.”
Tags: synthetic biology, nanotechnology, DNA walker, DNA nanotechnology, DNA nanomachine, DNA computer, biosensor