KW Glossary
Copyleft is the idea and the specific stipulation when distributing software that the user will be able to copy it freely, examine and modify the source code, and redistribute the software to others (free or priced) as long as the redistributed software is also passed along with the copyleft stipulation. The term was originated by Richard Stallman and the Free Software Foundation. Copyleft favors the software user's rights and convenience over the commercial interests of the software makers. It also reflects the belief that freer redistribution and modification of software would encourage users to make improvements to it. ("Free software" is not the same as freeware, which is usually distributed with copyright restrictions.)
Stallman and his adherents do not object to the price or profit aspects of creation and redistribution of software - only to the current restrictions placed on who can use how many copies of the software and how and whether the software can be modified and redistributed.
The de facto collaboration that developed and refined Unix and other collegially developed programs led Richard Stallman to the idea of "free" software and copyleft. In 1983, Stallman launched a "free software" project that would both demonstrate the concept and provide value to users. The project was called GNU, a Unix-like operating system. GNU and its various components are currently available and are distributed with copyleft stipulations. Built from GNU components, the popular Linux system is also issued with a copyleft.
Copyright is the ownership of an intellectual property within the limits prescribed by a particular nation's laws or by international law. In the United States, for example, copyright law provides that the owner of a property has the exclusive right to print, distribute, and copy the work, and anyone else must obtain permission to reuse the work in these ways. Copyright is provided automatically to the author of any original work covered by the law as soon as the work is created. The author does not have to formally register the work, although registration makes the copyright more visible. (See Circular 66, "Copyright Registration for Online Works," from the U.S. Copyright Office.) Copyright extends to unpublished as well as published works. U.S. law extends copyright for 70 years beyond the life of the author. For reviews and certain other purposes, the "fair use" of a work, typically a quotation or paragraph, is allowed without permission of the author.
The Free Software Foundation fosters a new concept called copyleft in which anyone can freely reuse a work as long as they in turn do not try to restrict others from using their reuse.
EditPros, an editing and marketing communications firm, has allowed us to reprint below an article about copyright as it applies to the Internet.
Are You Violating Copyright on the Internet?
The Internet, inarguably one of the most remarkable developments in international communication and information access, is fast becoming a lair of copyright abuse. The notion of freedom of information and the ease of posting, copying and distributing messages on the Internet may have created a false impression that text and graphic materials on World Wide Web sites, postings in "usenet" news groups, and messages distributed through e-mail lists and other electronic channels are exempt from copyright statutes.
In the United States, copyright is a protection provided under Title 17 of the U.S. Code, articulated in the 1976 Copyright Act. Copyright of a creative work extends 70 years beyond the lifespan of its author or designer. Works afforded copyright protection include literature, journalistic reports, musical compositions, theatrical scripts, choreography, artistic matter, architectural designs, motion pictures, computer software, multimedia digital creations, and audio and video recordings. Copyright protection encompasses Web page textual content, graphics and design elements, as well as postings on discussion groups. Canada's Intellectual and Industrial Property Law, Great Britain's Copyright, Designs and Patents Act 1988, and legislation in other countries signatory to the international Berne Convention copyright principles provide similar protections.
Generally speaking, facts may not be copyrighted; but content related to presentation, organization and conclusions derived from facts certainly can be. Never assume that anything is in the "public domain" without a statement to that effect. Here are some copyright issues important to companies, organizations and individuals.
Handling of External Links
Even though links are addresses and are not subject to copyright regulations, problems can arise in their presentation. If your Web site is composed using frames, and linked sites appear as a window within your frame set, you may be creating the deceptive impression that the content of the linked site is yours. Use HTML coding to ensure that linked external sites appear in their own window, clearly distinct from your site. Incidentally, you may wish to disavow responsibility for the content of sites to which you provide links.
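A minimal sketch of the kind of markup this calls for; the URL and link text are placeholders:

```html
<!-- target="_blank" opens the external site in its own window rather
     than inside your frameset; rel="noopener" keeps the new page from
     scripting against yours. URL and link text are placeholders. -->
<a href="https://www.example.com/" target="_blank" rel="noopener">
  Example external resource (opens in a new window)
</a>
```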
Work for Hire
While copyright ordinarily belongs to the author, copyright ownership of works for hire belongs to the employer. The U.S. Copyright Act of 1976 provides two definitions of a work for hire: (1) a work prepared by an employee within the scope of his or her employment; or (2) a work specially ordered or commissioned for use as a contribution to a collective work, as a part of a motion picture or other audiovisual work, as a translation, as a supplementary work, as a compilation, as an instructional text, as a test, as answer material for a test, or as an atlas, if the parties expressly agree in a written instrument signed by them that the work shall be considered a work made for hire. U.S. Copyright Office documentation further states, "Copyright in each separate contribution to a periodical or other collective work is distinct from copyright in the collective work as a whole and vests initially with the author of the contribution."
Just as making bootleg tapes of recorded music and photocopying books are illegal activities, printing and distributing contents of Web pages or discussion group postings may constitute copyright infringement. And companies may be liable for such activities conducted by their employees using company computing or photocopying equipment. However, the law does not necessarily prohibit downloading files or excerpting and quoting materials. The doctrine of fair use preserves your right to reproduce works or portions of works for certain purposes, notably education, analysis and criticism, parody, research and journalistic reporting. The amount of the work excerpted and the implications of your use on the marketability or value of the works are considerations in determining fair use. Works that are not fixed in a tangible form, such as extemporaneous speeches, do not qualify for copyright protection. Titles of works, and improvisational musical or choreographic compositions that have not been annotated, likewise cannot be copyrighted. Names of musical groups, slogans and short phrases may gain protection as trademarks when registered through the U.S. Patent & Trademark Office.
Protecting Your Own Works
Although copyright automatically applies to any creative work you produce, you can strengthen your legal copyright protection by registering works with the U.S. Copyright Office. Doing so establishes an official record of your copyright, and must be done before filing an infringement civil lawsuit in Federal district court. Registration costs $20. For information, visit the Copyright Office Web site or call (202) 707-3000; TTY is (202) 707-6737.
If you appoint an independent Web developer to create and maintain your Web site, make sure through written agreement that you retain the copyright to your Web content.
Place a copyright notice on each of your Web pages and other published materials. Spell out the word "Copyright" or use the encircled "c" symbol, along with the year of publication and your name, as shown in this example:
Copyright 1998 EditPros marketing communications
If you're concerned about copyright protection in other nations, add: "All rights reserved."
How to Stay Legal
If you'd like to share the contents of an interesting Web page with your company employees, describe the page and tell them the URL address of the Web site so they can look for themselves. And if the latest edition of a business newspaper contains an article you'd like to distribute to your 12 board members, either ask the publication for permission to make copies, or buy a dozen copies of the newspaper. Retention of value through sales of that newspaper, after all, is what copyright law is intended to protect.
The United States Copyright Office Web site contains an explanation of American copyright basics and a list of frequently asked questions, as well as the complete text of the United States Copyright Act of 1976. Topics include copyright ownership and transfer, copyright notice, and copyright infringement and remedies. The site is maintained by the U.S. Library of Congress.
Most of the material in this definition/topic was reprinted from an EditPros newsletter with their permission. EditPros is a writing, editing, and publishing management firm in Davis, California with their own Web site.
Cowboy coding describes an undisciplined approach to software development that allows individual programmers to make up their own rules.
Cowboy coding is programming lingo for an approach to software development that gives programmers almost complete control over the development process. In this context, cowboy is a synonym for maverick -- an independent rebel who makes his own rules.
An organization might permit cowboy coding because there are not enough resources to commit to the design phase or a project deadline is looming. Sometimes cowboy coding is permitted because of a misguided attempt to stimulate innovation or because communication channels fail and there is little or no business stakeholder involvement or managerial oversight. An individual developer or small team might be given only a minimal description of requirements and no guidance regarding how these objectives should be achieved. They are free to select frameworks, coding languages, libraries, technologies and other build tools as they see fit.
The cowboy approach to coding typically focuses on quick fixes and getting a working product into production as quickly as possible. There is no documentation or formal process for quality assurance testing, as required by continuous integration and other Agile software development methodologies. Instead of producing lean, well-written code, cowboy code often has errors that cause failures upon deployment or make it difficult to maintain over time. Integrating the various components of the code may also be a challenge since with cowboy coding there are no agreed-upon best practices to provide continuity.
Creating and Testing Your IT Recovery Plan
Regular tests of your IT disaster recovery plan can mean the difference between a temporary inconvenience and going out of business.
Testing at least once per month is important to maintain engineering best practices, to comply with stringent standards for data protection and recovery, and to gain confidence and peace of mind. In the midst of disaster is not the time to determine the flaws in your backup and recovery system. Backup alone is useless without the ability to efficiently recover, and technologists know all too well that the only path from “ought to work” to “known to work” is through testing.
A recent study found that only 16 percent of companies test their disaster recovery plan each month, with over half testing just once or twice per year, if ever. Adding to the concern, almost one-third of tests resulted in failure.
The reasons cited for infrequent testing include the usual litany of tight budgets, disruption to employees and customers, interruption of sales and revenue, and of course the scarcity of time. The survey covered mostly large enterprises; according to its findings, the challenges are even greater for smaller firms.
Yet new systems have arrived that allow daily automated testing of full recovery, putting such assurances in reach of every business. Backup without rapid recovery and testing will soon be as obsolete as buildings without sprinklers or cars without seatbelts.
Please read the attached whitepaper.
Not Creating a Disaster Recovery Plan Could Cost You Everything
Disaster recovery planning is a very large topic, and backing up and recovering your data is just one part of it. For a real-life example of what I mean, consider a recent posting on Reddit. The poster, a system admin, got a ticket saying that the power was out in their office in Kiev and that the UPS battery was down to 13%. In response, the technician at the office simply shut down the gear. The next day they saw a news report that the building housing their Kiev office was no longer functional; fire and collapsed floors had completely devastated it. The system admin ended the post by asking: how good is your disaster recovery plan, and have you tested it?
When you start planning out your disaster recovery plan, you need to think about seemingly unrealistic disasters along with the normal types of crisis scenarios. If you already have a disaster recovery plan in place, does it take into account what happens if the office is completely destroyed or is inaccessible? How about multiple points of connectivity? When was the last time that your disaster recovery plan was actually tested?
When to Test Your Disaster Recovery Plan
It is a good practice to update and test your disaster recovery plan whenever large changes are made. What happens when you have everything set the way you want it and nothing huge has changed? My suggestion is to treat it like your smoke detector: twice a year, when the time changes and you change the batteries in your smoke detectors, test your entire disaster recovery plan. Testing that plan should include asking yourself "what if" questions: What happens if Bob, the main system admin, goes missing or is hit by the proverbial bus that hunts down system admins? What happens if the building catches fire and everything inside is gone? What happens if the cloud service you rely on for production, backup or disaster recovery suddenly closes its doors? All of these scenarios, along with many others, need to be accounted for in order to recover from a disaster and keep your business running.
Not Making Time for Disaster Recovery Could Cost You
One of the hardest things to do is to make the time to create or test your disaster recovery plan. Not having enough time is the most common excuse for skipping both, and the issue almost always comes down to priorities. When creating or testing your disaster recovery plan sits too low on your priority list, it simply never gets done.
One of the best ways to push up the priority of disaster recovery is simply to think about how much each minute, hour, day, and week of downtime will cost the company. For instance, say an hour of downtime on the company website costs the company $3,000 in lost e-commerce revenue. Multiply that over hours or even days and you're talking about huge potential losses that could have been avoided. That does not even factor in the potential revenue lost from new customers who may never consider your company after failing to read about your company and products, or the negative effect on the company's image. The costs, even in this small-scale disaster scenario, add up quickly.
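A back-of-the-envelope sketch of that multiplication, using the article's $3,000-per-hour example; the outage durations are illustrative only:

```python
# Rough downtime-cost estimate. The $3,000/hour rate comes from the
# article's example; the durations below are illustrative assumptions.
HOURLY_COST = 3000  # lost e-commerce revenue per hour of downtime

for label, hours in [("1 hour", 1), ("1 day", 24), ("3 days", 72), ("1 week", 168)]:
    print(f"{label:>7}: ${HOURLY_COST * hours:,}")

# 1 hour: $3,000   1 day: $72,000   3 days: $216,000   1 week: $504,000
```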
The reality is, if you think data loss won't happen to your company, think again. 74% of companies have experienced data loss at the workplace, and 32% take several days to recover from it. The scary truth is that 16% of companies that experience data loss never recover. Thinking in terms of the potential cost to the company should help you prioritize disaster recovery planning and testing, and justify the costs of planning, infrastructure, and testing alike.
I think Benjamin Franklin said it best when he stated “If you fail to plan, you plan to fail.” When it comes to disaster recovery, failing to have a plan is a sure-fire way to set the company up for failure in the event of a disaster, and it could cost the company everything.
Creative Commons (COPYRIGHT)
Creative Commons is a nonprofit organization that offers copyright licenses for digital work.
No registration is necessary to use the Creative Commons licenses. Instead, content creators select which of the organization's six licenses best meets their goals, then tag their work so that others know under which terms and conditions the work is released. Users can search the CreativeCommons.org website for creative works such as music, videos, academic writing, code or images to use commercially or to modify, adapt or build upon.
The six categories of licenses offered are:
Attribution (CC BY)
Attribution-ShareAlike (CC BY-SA)
Attribution-NoDerivs (CC BY-ND)
Attribution-NonCommercial (CC BY-NC)
Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)
Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
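In practice, tagging a work usually means embedding license markup like the snippet the CreativeCommons.org chooser generates. A minimal sketch, using the Attribution license as one example of the six:

```html
<!-- rel="license" makes the license machine-readable; the visible text
     gives the human-readable notice. CC BY 4.0 is just one choice. -->
<p>
  This work is licensed under a
  <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Creative
  Commons Attribution 4.0 International License</a>.
</p>
```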
Please read the attached handbook.
CROWDSOURCING FOR ENTERPRISE IT
10 KEY QUESTIONS (AND ANSWERS) ON CROWDSOURCING FOR ENTERPRISE IT
A starting guide for augmenting technical teams with crowdsourced design, development and data science talent
A crowdsourcing platform is essentially an open marketplace for technical talent. The requirements, timelines, and economics behind crowdsourced projects are critical to successful outcomes. Different crowdsourcing communities offer an equally varied range of payments for open innovation challenges. Crowdsourcing is meritocratic: contributions are rewarded based on value. However, the cost-efficiencies of a crowdsourced model reside in the model's direct access to talent, not in the compensated value for that talent. Fair market value is expected for any work output. The major cost difference between legacy sourcing models and a crowdsourcing model is (1) the ability to directly tap into technical expertise, and (2) that costs are NOT based around time or effort.
Please read the attached whitepaper.
Customer Service Model
Mastering the Modern Customer Service Model
by Wheelhouse Enterprises
Perfecting your in-house customer service system was never easy until now. The cloud has made customer service tools readily available and revolutionized how they are implemented. Our newest white paper details the tools necessary for a modern, up-to-date customer service operation. Whether you're looking for specific tools for your contact center or CRM, we have you covered.
Please read the attached whitepaper.
D (DATA CENTER)
D (OPEN SOURCE)
D (WEB SERVICES)
D - Popular Montevidean Vocabulary
MONTEVIDEANS IN A FEW WORDS
Good management of its devices is essential for any organization. Trends such as bring your own device (BYOD) and IT consumerization are also helping these services gain momentum. In this context, DaaS solutions make it possible to manage PC services securely and at reduced cost. It is worth remembering that this system offers portability, so it can be administered from anywhere and from any device.
Dan Bricklin: Spreadsheet Inventor on a Life in Computing
Spreadsheet Inventor Dan Bricklin on a Life in Computing
Posted by Martin Veitch
If you use spreadsheets — and today the number of users that do so must be in the hundreds of millions — then every time you open a new workbook, edit a cell or calculate a formula, you can thank Dan Bricklin’s legacy. Bricklin, an MIT graduate and Harvard MBA, developed VisiCalc with Bob Frankston back in 1979. The program not only gave rise to many of the elements of modern spreadsheet programs, selling over a million copies along the way, but, after its 1981 port, also helped the IBM Personal Computer become one of the most important new products of the 20th century.
Recently I spoke to Bricklin by phone about VisiCalc, its legacy, the rise of the PC generation and what’s happened since.
First, I asked him to sketch a picture of financial management as it was when he wrote the code for what would become VisiCalc.
“For hundreds of years, financial stuff was done on pen and paper and frequently on columns and rows with labels around them. [In the 1970s] we’d be doing it on paper or typing up things. That’s how you kept your books. When they talked about book-keeping, it was exactly that: there were pages in books. Our first name for VisiCalc was Calcu-ledger because that helped explain what we were doing: providing calculations for general ledger.”
Although the spreadsheet made his name, Bricklin had been largely concentrating on another software category that was to change the way the world worked.
“My background was in word processing but, back then, computerised printing of letters was mostly used in things like fundraising where you’d print one letter hundreds of times. The idea of using that equipment for a plain old letter by a typist… they were just too expensive. The idea of a screen-based word processor was a new thing when I was working in the Seventies at Digital Equipment Corporation (DEC) but I had been exposed to systems like [drawing program] Sketchpad which were interactive programs that had started to become popular in the Sixties and Seventies in research institutions. Computers were becoming important in newspapers so a reporter could type something in and see what it would look like but [for the majority of people] the idea of using a computer to do these things was new.”
When Bricklin prototyped VisiCalc, he showed it to his Harvard professor who told him that his competition was calculating on the back of an envelope; if VisiCalc wasn’t faster, people would never use it. That notion helped make Bricklin a pioneer in another way: delivering a user experience (even before the term had been coined) that was intuitive so a new computer user would understand the new electronic tools. So, VisiCalc looked like a ledger book. Similarly, in word processing, manual tools like scissors and glue became ‘cut’ and ‘paste’ features. Add in extra automation capabilities such as having words automatically wrap around lines of text and you had something that was revolutionary, in the days before even Wang and WordStar automated office tasks.
But at the time, computers were rare, pricey and lacking a standard.
“A person being able to buy a computer in a Radio Shack store was a new thing in the Seventies. The only connection most people had to a computer was using automated teller machines. Timesharing with a terminal where you all shared this remote computer was being developed in the Sixties and started to become popular in the Seventies. People were starting to do financial forecasting but that would cost thousands of dollars a month, plus you’d need terminals. For sizeable companies doing MRP [manufacturing resource planning] that was reasonable, but it would cost $5,000 to $10,000 each for a word processing system of letter quality.”
That pioneer environment explains why Bricklin had no great expectations for commercial success with VisiCalc but he was driven by an idea.
“I came from the word processing world and in this What-You-See-Is-What-You-Get world I’d seen a mouse and was familiar with interactive systems. I think I’d seen an Alto [early Xerox computer], played Space War [a game for the DEC PDP-1], seen Sketchpad, knew APL and Basic. The idea of having what we did with words and numbers on paper but with computation seemed pretty obvious; if there was a ‘eureka’ moment, that was it.
“But I was in word processing and did word processing take off like crazy? No. Was it on every desk? No. Today, people hardly know how to write [in longhand] but in those days the idea that computers would be cheap enough... We knew what should be but we also knew from hindsight that acceptance was very slow. I had seen the mouse in the 1970s, it was invented before that and didn’t come into acceptance until the 80s. So although we had something we saw was wonderful, we had no expectations.”
With the benefit of hindsight though, green shoots and signs were discernible.
“There were people making money out of software. [Mainframe database maker] Cullinane was the first pre-packaged software company to go public so we knew it was possible. But on the PC there were no [commercial] models. Almost nobody knew who Bill Gates was and he was maybe making a few million dollars a year.”
Also, the economics of the day were very different as an Apple II “fully loaded” with daisywheel printer and screen cost about $5,000, the equivalent of about $18,000 today.
This was also a time of scepticism about personal computing, with the leading IT suppliers considering it a fad for hobbyists rather than a big opportunity to sell to business users. This attitude was underlined when, Bricklin says, he considered putting VisiCalc on DEC’s PDP-11 minicomputer before deciding on the Apple II.
“I was thinking about it but the sales person wasn’t very aggressive. It was classical Innovator’s Dilemma. [DEC CEO] Ken Olsen saw PCs as wheelbarrows when he was selling pickup trucks.”
That sort of attitude was unlikely to change Bricklin’s desire to set up his own company with Frankston rather than market his idea to the computing giants of the time.
“I wanted to start a business and be an entrepreneur,” he recalls. “I had taken a few classes at Harvard; there weren’t many in those days but I took those that were on offer.”
Although VisiCalc is sometimes presented as a smash hit that immediately launched the IBM PC, that notion is wrong on three points. VisiCalc was released on the Apple II in 1979, there were other ports before it was made available on the IBM PC, and the initial reaction from the wider world was lukewarm.
“When it first came out, almost nobody but a few people in the computer press wrote about it. There was a humorous article about the National Computer Conference [scene of the VisiCalc launch] in the New York Times where the VisiCalc name was considered funny and the author was making fun of all the computer terms. It then appeared in an announcement about my wedding in the Fall and my father-in-law was able to put some wording in about me being the creator of VisiCalc…
“We had ‘serious volume’ of 1,000 units per month for the first year. That’s nothing, that’s how many copies of a program are downloaded onto iPads every day, or every minute.”
But by comparison to other business software for the personal computer, VisiCalc was a success and the most clued-in sales people at resellers used it to show what personal computers could do.
“They knew that by demonstrating VisiCalc they could sell a PC… it was a killer app. People at HP got it too. One of my classmates at business school worked there and was making a small desktop computer that ran Basic and his boss put VisiCalc on that. Tandy started advertising about VisiCalc and sales started doing a lot better. By the time the IBM PC came out it was understood that it was a good thing and people in the business press started to say ‘can it run VisiCalc?’”
If the laurels go to Bricklin and Frankston for creating the modern spreadsheet from the confluence of the rise of microcomputers, business interest and new software development languages, it was another program and company that cashed in on the full flowering of those trends.
“When Lotus 1-2-3 came out [in 1983], the moon and stars were aligned for [Lotus founder and Bricklin’s friend] Mitch Kapor [to be successful] just as they had been for me to create VisiCalc,” says Bricklin, who adds that he knew the better program when he saw it.
Turn another corner and things could have been different though. Microsoft’s dominance of PC software could have been even greater had it been smarter with its Multiplan product, Bricklin believes.
Had Bricklin been more aggressive and the laws of the day been different, he could have pursued Lotus through the courts for the many features that arguably were derived from VisiCalc.
“The law in the US was that you couldn’t patent software and the chances were one in ten you could try to sneak it through and call it a system,” he recalls.
In truth, Bricklin would make an unlikely litigant and says he never considered such a path. He is proud, rather, that his legacy still looms large, even if he didn’t make the millions that others did. The tech investor Ben Rosen called VisiCalc “the software tail that wags (and sells) the personal computer dog” and there’s no doubt that it played a big part in what happened later to our digitising universe.
While some pioneers sulk and criticise others that followed them and were successful, Bricklin appears to have no trace of bitterness. He remains a staunch fan of Microsoft and Excel, a product that remains a cash machine and still bears the stamp of VisiCalc, 35 years on.
“Doing VisiCalc, I had to come up with the essence of what you need in 32K of RAM and our notion of what was important was correct, it turned out,” he says.
But the power and richness of Excel are remarkable, he says, rejecting the notion that the Redmond company is guilty of creating bloatware.
“Microsoft came from engineers building things: programmers, programmers, programmers — and the hearts and minds of programmers mattered a lot to them. People want to customise things, make it right for what their problem is. It’s the difference between being a carpenter and being an architect — one size does not fit all.
“Microsoft built systems that could be customised, so users could replace that part themselves, and it listened to a lot of people and provided what they wanted, all the bells and whistles. People say you end up with bloatware and only 10% of the features get used by any user but that 10% is different for a lot of users. Apple went for a smaller number of people and that’s OK because there’s Microsoft for the rest. [Microsoft] had business practices that people didn’t like but is that different than other companies in other industries? Not necessarily.
“As a child of the Sixties I think of Bob Dylan: ‘the loser now will be later to win’. It goes in cycles. The founder of Intel [Andrew Grove] said it: Only the Paranoid Survive and you only have so much time [at the top].”
If Bricklin was before his time with VisiCalc, he was also early onto new trends in user input, creating pen-operated applications at Slate Corporation in the early 1990s and in 2009, a Note Taker app for the “magical” iPad he so admires.
“I decided I wanted to get into that [iOS] world because there were times when I wanted to get something down and if I write 5 and it’s a bit off, that’s OK. But if I did that on a keyboard and it’s a 6, that’s no good for a telephone number. I got to learn what it’s like, that world of app stores and so on, and I did all the support so I got to see what people needed.”
That took him to the latest stage of his journey, as CTO for a company specialising in using HTML5 to make software multiplatform.
“I saw businesses were going to replace clipboards with tablets and that’s why I decided to join Alpha Software. I couldn’t do everything on my own because there’s so much at the back end you need to do but I wanted to innovate at the front end. Being able to customise is something that’s exciting to watch but it takes time. You’d think that in companies with billions of dollars, why would people be carrying around procedure manuals instead of on a digital reader? But they do. [Automating cross-platform capabilities] is extremely important in business. You’d think that most companies would have started taking great advantage of custom mobile opportunities internally, but most haven’t gotten there yet.”
Bricklin remains awestruck by changes he has seen in a lifetime of computing that has made him a sort of smarter Forrest Gump or Zelig for the binary age — a person who was around at some of the biggest zeitgeist moments in computing history.
“When I was working in word processing in my early twenties, I was doing programming for a person who worked for Jay Forrester and, the year I was born as it turned out, Forrester showed the Whirlwind computer on TV. That was the first time the general public got to see a computer in action, in this video from 1951 [YouTube clip]. My boss stayed up for days making sure it was ready for demo and you can see him there in the background [starting at 4.01]. Those were computers that were the size of big rooms and I was working with him on something you could fit in a desk. And now it’s in the pocket and on a watch soon. This is a progression I’ve seen my whole life and it’s a joy each time.”
Bricklin seems content to be recognised as a founding father of the segment, rather than a Rockefeller.
“If you look at the old basketball players, they didn’t make as much either,” he says, philosophically. “But we wanted to bring computing to more people and we did that.”
Bricklin delights in the fact that science fiction has become reality and that naysayers have been disproved. It gives him pleasure to think that those who mocked the personal computer as a place to store recipes now Google their ingredients to automatically generate recipes.
“In 2001: A Space Odyssey they’re using a tablet that looks just like an iPad and it’s this magical device. The crystal ball of fiction is now real. In The Wizard of Oz they had this remote thing; the witch could see things at a distance, control things at a distance. This is something you can now buy in a store: a drone-controlling iPad. You can communicate with other people in real time and you can control it with a wave of your hands.”
He ponders the rise of the PC and the changes it wrought as people were freed to create, compose, calculate and pay.
“God!” he exclaims, stretching the syllable in wonder, his voice rising to a crescendo. “It was so exciting to see the thing you believe in succeed and to be accepted. My daughter as a youngster once said, ‘Daddy, did they teach you spreadsheets at school?’ and then, after a few seconds corrected herself. ‘Wait a minute…’ That’s really cool to see people use things that we thought should be used. To be vindicated, that was pretty cool.”
Martin Veitch is Editorial Director at IDG Connect
Data center design standards bodies
Words to go: Data center design standards bodies
Need a handy reference sheet of the various data center standards organizations? Keep this list by your desk as a reference.
Several organizations produce data center design standards, best practices and guidelines. This glossary lets you keep track of which body produces which standards, and what each acronym means.
Data Center Efficiency
eGuide: Data Center Efficiency
APC by Schneider Electric
Data center efficiency is one of the cornerstones of an effective IT infrastructure. Data centers that deliver energy efficiency, high availability, density, and scalability create the basis for well-run IT operations that fuel the business. With the right approach to data center solutions, organizations have the potential to significantly save on costs, reduce downtime, and allow for future growth.
In this eGuide, Computerworld, CIO, and Network World examine recent trends and issues related to data center efficiency. Read on to learn how a more efficient data center can make a difference in your organization.
Please read the attached eGuide.
Data confabulation is the selective use of big data to support a desired view, decision or argument. Within the volumes of big data there are often many small bits of evidence that contradict even clearly data-supported conclusions. Generally, this data noise can be seen for what it is and, in the context of the body of data, it is clearly outweighed. When data is selectively chosen from vast sources, however, a picture can often be created to support a desired view, decision or argument that would not be supported by a more rigorously controlled method.
Data confabulation can be used both intentionally and unintentionally to promote the user’s viewpoint. When a decision is made before data is examined, there is a danger of falling prey to confirmation bias even when people are trying to be honest. The term confabulation comes from the field of psychology, where it refers to the tendency of humans to selectively remember, misinterpret or create memories to support a decision, belief or sentiment.
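A toy sketch of how selectively chosen data manufactures support for a conclusion; all numbers here are synthetic and purely illustrative:

```python
# The full data set is centered on zero ("no effect"), but cherry-picking
# only the favorable observations makes an effect appear. Synthetic data.
import random

random.seed(42)
observations = [random.gauss(0, 1) for _ in range(10_000)]

full_mean = sum(observations) / len(observations)

# Keep only the observations that favor the desired conclusion.
favorable = [x for x in observations if x > 1.5]
cherry_picked_mean = sum(favorable) / len(favorable)

print(f"mean of all data:       {full_mean:+.3f}")          # ~0: no effect
print(f"mean of selected slice: {cherry_picked_mean:+.3f}")  # looks large
```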
Data exhaust is the data generated as a byproduct of people’s online actions and choices.
Data exhaust consists of the various files generated by web browsers and their plug-ins, such as cookies, log files, temporary internet files and .sol files (Flash cookies). In its less hidden and more legitimate aspect, such data is useful for tracking trends and helping websites serve their user bases more effectively. Studying data exhaust can also help improve user interface and layout design. Because these files reveal the specific choices an individual has made, they are very revealing and are a highly sought source of information for marketing purposes. Websites store data about people’s actions to maintain user preferences, among other purposes. Data exhaust is also used for the lucrative but privacy-compromising purpose of user tracking for research and marketing.
Data exhaust is named for the way it streams out behind the web user, much as car exhaust streams out behind the motorist. An individual’s digital footprint, sometimes known as a digital dossier, is the body of data that exists as a result of actions and communications online that can in some way be traced back to them. That footprint is broken down into active and passive data traces; data exhaust consists of the latter. In contrast with the data that people consciously create, data exhaust is unintentionally generated, and people are often unaware of it.
Security and privacy software makers struggle with the conflicting goals of marketing and privacy. User software designed to protect security and privacy often disrupts online marketing and research business models. While new methods of persistently storing tracking data are always in development, software vendors constantly design new methods to remove them.
See Michelle Clark's TEDx talk about digital footprints.
Author: John O’Brien
It would be an understatement to say that the hype surrounding the data lake is causing confusion in the industry. Perhaps this is an inherent consequence of the data industry's need for buzzwords: it's not uncommon for a term to rise to popularity long before there is a clear definition and repeatable business value. We have seen this phenomenon many times, when concepts including "big data," "data reservoir," and even the "data warehouse" first emerged in the industry. Today's newcomer to the data world vernacular, the "data lake," is a term that has endured both the scrutiny of pundits who harp on the risk of digging a data swamp and, likewise, the vision of those who see the concept's potential to have a profound impact on enterprise data architecture. As the data lake term begins to come off its hype cycle and face the pressures of pragmatic IT and business stakeholders, the demand for clear data lake definitions, use cases, and best practices continues to grow.
This paper aims to clarify the data lake concept by combining fundamental data and information management principles with the experiences of existing implementations to explain how current data architectures will transform into a modern data architecture. The data lake is a foundational component and common denominator of the modern data architecture, enabling and complementing specialized components such as enterprise data warehouses, discovery-oriented environments, and highly specialized analytic or operational data technologies within or external to the Hadoop ecosystem. The data lake has therefore become a metaphor for the transformation of enterprise data management, and its definition will continue to evolve according to established principles, drivers, and best practices as hindsight is applied at companies.
Please read the attached guide.
The data profiling process cannot identify inaccurate data; it can only identify business rule violations and anomalies. The insight gained by data profiling can be used to determine how difficult it will be to use existing data for other purposes. It can also be used to provide metrics to assess data quality and help determine whether or not metadata accurately describes the source data.
Profiling tools evaluate the actual content, structure and quality of the data by exploring relationships that exist between value collections both within and across data sets. For example, by examining the frequency distribution of different values for each column in a table, an analyst can gain insight into the type and use of each column. Cross-column analysis can be used to expose embedded value dependencies, and inter-table analysis allows the analyst to discover overlapping value sets that represent foreign key relationships between entities.
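A minimal sketch of two of those steps, per-column frequency profiling and a necessary-condition check for foreign-key candidates; the tables and column names below are invented for illustration:

```python
# Toy column profiler: frequency distribution per column, plus a
# cross-table containment check (a necessary condition for a foreign key).
from collections import Counter

def profile_column(rows, col):
    """Row count, distinct count, null count, and top values for one column."""
    values = [r[col] for r in rows]
    freq = Counter(values)
    return {
        "rows": len(values),
        "distinct": len(freq),
        "nulls": sum(1 for v in values if v is None),
        "top_values": freq.most_common(3),
    }

def foreign_key_candidate(child_rows, child_col, parent_rows, parent_col):
    """True if every child value appears in the parent column."""
    parent_values = {r[parent_col] for r in parent_rows}
    return all(r[child_col] in parent_values for r in child_rows)

orders = [{"id": 1, "cust": "A"}, {"id": 2, "cust": "B"}, {"id": 3, "cust": "A"}]
customers = [{"cust": "A"}, {"cust": "B"}, {"cust": "C"}]

print(profile_column(orders, "cust"))
print(foreign_key_candidate(orders, "cust", customers, "cust"))  # True
```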
A data silo is a repository of fixed data that an organization does not regularly use in its day-to-day operation. So-called siloed data cannot exchange content with other systems in the organization. The expressions "data silo" and "siloed data" arise from the inherent isolation of the information. The data in a silo remains sealed off from the rest of the organization, like grain in a farm silo is closed off from the outside elements.
In recent years, data silos have faced increasing criticism as an impediment to productivity and a danger to data integrity. Data silos increase the risk that current data will accidentally be overwritten with outdated data. When two or more silos exist for the same data, their contents might differ, creating confusion as to which repository represents the most legitimate or up-to-date version.
Cloud-based data, in contrast to siloed data, can continuously evolve to keep pace with the needs of an organization, its clients, its associates, and its customers. For frequently modified information, cloud backup offers a reasonable alternative to data silos, especially for small and moderate quantities of data. When stored information does not need to be accessed regularly or frequently, it can be kept in a single cloud archive rather than in multiple data silos, ensuring data integration (consistency) among all members and departments in the organization. For these reasons, many organizations have begun to move away from data silos and into cloud-based backup and archiving solutions.
Continue Reading About data silo
Why Database-as-a-Service (DBaaS)?
IBM Cloudant manages, scales and supports your fast-growing data needs 24x7, so you can stay focused on new development and growing your business.
Fully managed, instantly provisioned, and highly available
In a large organization, it can take several weeks for a DBMS instance to be provisioned for a new development project, which limits innovation and agility. Cloudant DBaaS enables instant provisioning of your data layer, so that you can begin new development whenever you need. Unlike Do-It-Yourself (DIY) databases, DBaaS solutions like Cloudant provide specific levels of data layer performance and uptime. The managed DBaaS capability can help reduce the risk of service delivery failure for you and your projects.
Build more. Grow more
With a fully managed NoSQL database service, you do not have to worry about the time, cost and complexity associated with database administration, architecture and hardware. Now you can stay focused on developing new apps and growing your business to new heights.
Who uses DBaaS?
Companies of all sizes, from startups to mega-users, use Cloudant to manage data for large or fast-growing web and mobile apps in e-commerce, online education, gaming, financial services, and other industries. Cloudant is best suited for applications that need a database to handle a massively concurrent mix of low-latency reads and writes. Its data replication and synchronization technology also enables continuous data availability, as well as offline app usage for mobile or remote users.
As a JSON document store, Cloudant is ideal for managing multi-structured or unstructured data. Advanced indexing makes it easy to enrich applications with location-based (geospatial) services, full-text search, and near real-time analytics.
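A rough sketch of what storing and querying a JSON document looks like over Cloudant's CouchDB-compatible HTTP API; the account URL, credentials, database name, and document fields are placeholders, not a definitive client:

```python
# Hedged sketch of Cloudant's CouchDB-style HTTP API via requests.
import requests

BASE = "https://ACCOUNT.cloudant.com"   # placeholder account URL
AUTH = ("API_KEY", "API_PASSWORD")      # placeholder credentials
DB = "products"                         # placeholder database name

# Create the database (a 412 response just means it already exists).
requests.put(f"{BASE}/{DB}", auth=AUTH)

# Insert a JSON document; the server assigns an _id if none is given.
doc = {"name": "widget", "price": 9.99, "tags": ["hardware", "sale"]}
print(requests.post(f"{BASE}/{DB}", json=doc, auth=AUTH).json())
# e.g. {"ok": True, "id": "...", "rev": "..."}

# Query by field value with Cloudant Query (the /_find endpoint).
query = {"selector": {"name": "widget"}, "limit": 10}
print(requests.post(f"{BASE}/{DB}/_find", json=query, auth=AUTH).json()["docs"])
```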
Please read the attached whitepaper.
Decoding DNA: New Twists and Turns (DNA)
The Scientist takes a bold look at what the future holds for DNA research, bringing together senior investigators and key leaders in the field of genetics and genomics in this 3-part webinar series.
The structure of DNA was solved on February 28, 1953 by James D. Watson and Francis H. Crick, who recognized at once the potential of DNA's double helical structure for storing genetic information — the blueprint of life. For 60 years, this exciting discovery has inspired scientists to decipher the molecule's manifold secrets and resulted in a steady stream of innovative advances in genetics and genomics.
What's Next in Next-Generation Sequencing?
The advent of next-generation sequencing is considered the most transformative technological advance in the field, resulting in the doubling of sequence data almost every five months and a precipitous drop in the cost of sequencing a piece of DNA. The first webinar will track the evolution of next-generation sequencing and explore what the future holds in terms of the technology and its applications.
George Church is a professor of genetics at Harvard Medical School and Director of the Personal Genome Project, providing the world's only open-access information on human genomic, environmental and trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing, and barcoding. These led to the first commercial genome sequence (of the pathogen Helicobacter pylori) in 1994. His innovations in "next generation" genome sequencing and synthesis and cell/tissue engineering resulted in 12 companies spanning fields including medical genomics (Knome, Alacris, AbVitro, GoodStart, Pathogenica) and synthetic biology (LS9, Joule, Gen9, WarpDrive) as well as new privacy, biosafety, and biosecurity policies. He is director of the NIH Centers of Excellence in Genomic Science. His honors include election to the NAS and NAE and Franklin Bower Laureate for Achievement in Science.
George Weinstock is currently a professor of genetics and of molecular microbiology at Washington University in Saint Louis. He was previously codirector of the Human Genome Sequencing Center at Baylor College of Medicine in Houston, Texas where he was also a professor of molecular and human genetics. Dr. Weinstock received his BS degree from the University of Michigan (Biophysics, 1970) and his PhD from the Massachusetts Institute of Technology (Microbiology, 1977).
Joel Dudley is an assistant professor of genetics and genomic sciences and Director of Biomedical Informatics at Mount Sinai School of Medicine in New York City. His current research is focused on solving key problems in genomic and systems medicine through the development and application of translational and biomedical informatics methodologies. Dudley's published research covers topics in bioinformatics, genomic medicine, personal and clinical genomics, and drug and biomarker discovery. His recent work with coauthors, describing a novel systems-based approach to computational drug repositioning, was featured in the Wall Street Journal and earned designation as the NHGRI Director's Genome Advance of the Month. He is also coauthor (with Konrad Karczewski) of the forthcoming book, Exploring Personal Genomics. Dudley received a BS in microbiology from Arizona State University and an MS and PhD in biomedical informatics from Stanford University School of Medicine.
Unraveling the Secrets of the Epigenome
Original Broadcast Date: Thursday April 18, 2013
This second webinar in The Scientist's Decoding DNA series will cover the Secrets of the Epigenome, discussing what is currently known about DNA methylation, histone modifications, and chromatin remodeling and how this knowledge can translate to useful therapies.
Stephen Baylin is a professor of medicine and of oncology at the Johns Hopkins University School of Medicine, where he is also Chief of the Cancer Biology Division of the Oncology Center and Associate Director for Research of The Sidney Kimmel Comprehensive Cancer Center. Together with Peter Jones of the University of Southern California, Baylin also leads the Epigenetic Therapy Stand Up To Cancer Team (SU2C). He and his colleagues have fostered the concept that DNA hypermethylation of gene promoters, with its associated transcriptional silencing, can serve as alternatives to mutations for producing loss of tumor-suppressor gene function. Baylin earned both his BS and MD degrees from Duke University, where he completed his internship and first-year residency in internal medicine. He then spent 2 years at the National Heart and Lung Institute of the National Institutes of Health. In 1971, he joined the departments of oncology and medicine at the Johns Hopkins University School of Medicine, an affiliation that still continues.
Victoria Richon heads the Drug Discovery and Preclinical Development Global Oncology Division at Sanofi. Richon joined Sanofi in November 2012 from Epizyme, where she served as vice president of biological sciences beginning in 2008. At Epizyme she was responsible for the strategy and execution of drug discovery and development efforts that ranged from target identification through candidate selection and clinical development, including biomarker strategy and execution. Richon received her BA in chemistry from the University of Vermont and her PhD in biochemistry from the University of Nebraska. She completed her postdoctoral research at Memorial Sloan-Kettering Cancer Center.
Paolo Sassone-Corsi is Donald Bren Professor of Biological Chemistry and Director of the Center for Epigenetics and Metabolism at the University of California, Irvine, School of Medicine. Sassone-Corsi is a molecular and cell biologist who has pioneered the links between cell-signaling pathways and the control of gene expression. His research on transcriptional regulation has elucidated a remarkable variety of molecular mechanisms relevant to the fields of endocrinology, neuroscience, metabolism, and cancer. He received his PhD from the University of Naples and completed his postdoctoral research at CNRS, in Strasbourg, France.
The Impact of Personalized Medicine
After the human genome was sequenced, Personalized Medicine became an end goal, driving both academia and the pharma/biotech industry to find and target cellular pathways and drug therapies that are unique to an individual patient. The final webinar in the series will help us better understand The Impact of Personalized Medicine, what we can expect to gain and where we stand to lose.
Jay M. ("Marty") Tenenbaum is founder and chairman of Cancer Commons. Tenenbaum’s background brings a unique perspective of a world-renowned Internet commerce pioneer and visionary. He was founder and CEO of Enterprise Integration Technologies, the first company to conduct a commercial Internet transaction. Tenenbaum joined Commerce One in January 1999, when it acquired Veo Systems. As chief scientist, he was instrumental in shaping the company's business and technology strategies for the Global Trading Web. Tenenbaum holds BS and MS degrees in electrical engineering from MIT, and a PhD from Stanford University.
Amy P. Abernethy, a palliative care physician and hematologist/oncologist, directs both the Center for Learning Health Care (CLHC) in the Duke Clinical Research Institute, and the Duke Cancer Care Research Program (DCCRP) in the Duke Cancer Institute. An internationally recognized expert in health-services research, cancer informatics, and delivery of patient-centered cancer care, she directs a prolific research program (CLHC/DCCRP) which conducts patient-centered clinical trials, analyses, and policy studies. Abernethy received her MD from Duke University School of Medicine.
Geoffrey S. Ginsburg is the Director of Genomic Medicine at the Duke Institute for Genome Sciences & Policy. He is also the Executive Director of the Center for Personalized Medicine at Duke Medicine and a professor of medicine and pathology at Duke University Medical Center. His work spans oncology, infectious diseases, cardiovascular disease, and metabolic disorders. His research is addressing the challenges of translating genomic information into medical practice using new and innovative paradigms, and the integration of personalized medicine into health care. Ginsburg received his MD and PhD in biophysics from Boston University and completed an internal medicine residency at Beth Israel Hospital in Boston, Massachusetts.
Abhijit “Ron” Mazumder obtained his BA from Johns Hopkins University, his PhD from the University of Maryland, and his MBA from Lehigh University. He worked for Gen-Probe, Axys Pharmaceuticals, and Motorola, developing genomics technologies. Mazumder joined Johnson & Johnson in 2003, where he led feasibility research for molecular diagnostics programs and managed technology and biomarker partnerships. In 2008, he joined Merck as a senior director and Biomarker Leader. Mazumder rejoined Johnson & Johnson in 2010 and is accountable for all aspects of the development of companion diagnostics needed to support the therapeutic pipeline, including selection of platforms and partners, oversight of diagnostic development, support of regulatory submissions, and design of clinical trials for validation of predictive biomarkers.