KW Glossary


Ontology Design


Blockchain

by System Administrator - Friday, 28 April 2017, 3:48 PM
 

How blockchain works: An infographic explanation

by Emily McLaughlin

Understanding how a blockchain works is the first step toward taking advantage of the technology. Learn how a blockchain unit of value moves from party A to party B.

A blockchain is a type of distributed ledger that uses encryption to store permanent, tamper-proof records of transaction data. The data is stored across a peer-to-peer network, which uses a "consensus" principle to validate each transaction.
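As a rough illustration of the tamper-evident chaining just described (a toy sketch in Python, not any real blockchain implementation), each block can store the hash of its predecessor, so altering an earlier record invalidates every block that follows it:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form (sort_keys makes it deterministic).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain):
    # Every block must reference the hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "A pays B 10")
add_block(chain, "B pays C 4")
assert is_valid(chain)

chain[0]["data"] = "A pays B 1000"   # tamper with history
assert not is_valid(chain)           # the chain no longer validates
```

In a real system the ledger is replicated across many nodes, so a tampering party would also have to win the consensus process, not just recompute hashes locally.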

One of the main benefits of a blockchain system is its promise to eliminate, or greatly reduce, friction and costs in a wide variety of applications, chiefly financial services, because it removes the need for a central authority (for example, a clearing house) to execute and validate transactions.

Blockchain technology underlies cryptocurrencies, notably bitcoin and Ethereum, and is being explored as a foundational technology for a number of other systems of record, such as mobile payments, property registries and smart contracts.

How blockchain works

To gain a solid understanding of how to use blockchain in an enterprise setting, CIOs must first understand how a unit of value in a transaction moves from party A to party B. This infographic details how blockchain works from the initiation of the transaction, through verification, to delivery.

How to implement blockchain

While blockchain is expected to be adopted first in financial services, it has potential across a wide range of vertical industries; for example, the Office of the National Coordinator for Health Information Technology and NIST recently examined proposals for 70 different blockchain use cases in healthcare. But no matter the industry, for companies that see potential benefits in blockchain, whether in cost savings or greater efficiency in existing processes, or in revenue opportunities from a new line of business, there is a rigorous standard implementation process to follow. In our step-by-step guide, Jeff Garzik, co-founder of blockchain software and services company Bloq, recommends that CIOs plan a blockchain implementation in four stages:

  • Stage 1: Identify a use case and map out a technology plan. Choosing suitable use cases is critical.
  • Stage 2: Build a proof of concept.
  • Stage 3: Conduct a field trial involving a limited production run with customer-facing data, then perform further testing with more customer-facing products and larger data volumes.
  • Stage 4: Roll out a full-volume production deployment.

The social impact of blockchain technology

Experts predict that the list of blockchain use cases, and the technology's impact on society, will keep growing. According to Don Tapscott, author, consultant and CEO of The Tapscott Group, blockchain's promise to change how wealth is created around the world is one of the most significant social impacts to watch.

At the DC Blockchain Summit in Washington, D.C., Tapscott also suggested that blockchain will:

  • Enable people in the developing world who currently lack bank accounts to participate in the digital economy.
  • Protect rights to property records.
  • Help create a sharing economy based on genuine exchange.
  • Improve the process of sending money to family members in other countries through electronic remittances.
  • Help consumers monetize data, including their own.
  • Reduce the cost of doing business.
  • Hold government officials accountable through smart contracts.

In the graphic below, U.S. Representative David Schweikert (of Arizona); Bart Chilton, former chairman of the U.S. Commodity Futures Trading Commission; Carl Lehmann, research director at 451 Research; and David Furlonger, a Gartner analyst, are quoted this year on the impact of blockchain.

 

Digging even deeper

If you are getting up to speed on blockchain, here is a glossary of terms:

  • Bitcoin: A digital currency that is not backed by any country's central bank or government; traded for goods or services with vendors that accept bitcoins as payment.
  • Bitcoin mining: The act of processing transactions in the digital currency system; records of current bitcoin transactions, known as blocks, are added to the record of past transactions, known as the blockchain.
  • Cryptocurrency: A subset of digital currencies; cryptocurrencies have no physical representation and use encryption to secure the processes involved in carrying out transactions.
  • Digital wallet: A software application, typically for a smartphone, that serves as an electronic version of a physical wallet.
  • Distributed ledger: A database in which portions of the database are stored in multiple physical locations and processing is distributed among multiple database nodes; blockchain systems are referred to as distributed ledgers.
  • Ethereum: A public, blockchain-based distributed computing platform with smart contract functionality; it helps execute peer-to-peer contracts using a cryptocurrency called ether.
  • Hash/hashing: The transformation of a string of characters into a usually shorter, fixed-length value or key that represents the original string (similar to creating a bitly link).
  • Remittance: A sum of money sent, especially by mail or electronic transfer, in payment for goods or services or as a gift.
  • Smart contract: A computer program that directly controls the transfer of digital currencies or assets between parties under certain conditions; smart contracts are stored on the blockchain.
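The hashing entry above can be made concrete with Python's standard hashlib module (a minimal sketch): inputs of any length map to a fixed-length digest, and even a tiny change to the input produces a completely different value.

```python
import hashlib

def digest(text):
    # SHA-256 always yields a 64-hex-character (256-bit) value.
    return hashlib.sha256(text.encode()).hexdigest()

short = digest("hi")
long_ = digest("hi" * 10_000)

# Fixed length regardless of input size.
assert len(short) == len(long_) == 64

# A one-character change produces an unrelated digest.
assert digest("blockchain") != digest("Blockchain")
```

This one-way, collision-resistant mapping is what lets a block "fingerprint" its predecessor, and it is why tampering with old data is detectable.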

Dig deeper


Blockchain Technology

by System Administrator - Thursday, 23 March 2017, 12:39 PM
 

Why it's disruptive: Blockchain promises to make firms' back-end operations more efficient and cheaper. Eventually, it could replace companies altogether.

Executive's guide to implementing blockchain technology

By Laura Shin

The technology behind bitcoin is one of the internet's most promising new developments. Here's how businesses can use it to streamline operations and create new opportunities.

Blockchains are one of the most important technologies to emerge in recent years, with many experts believing they will change our world in the next two decades as much as the internet has over the last two.

Although it is early in its development, firms pursuing blockchain technology include IBM, Microsoft, Walmart, JPMorgan Chase, Nasdaq, Foxconn, Visa, and shipping giant Maersk. Venture capitalists have so far poured $1.5 billion into the space, with storied firms such as Andreessen Horowitz, Kleiner Perkins Caufield & Byers, and Khosla Ventures making bets on startups.

A blockchain is a golden record of the truth that creates trust among multiple parties.

 


Bob Metcalfe: Ethernet Inventor Still Rings the Changes

by System Administrator - Friday, 29 August 2014, 8:27 PM
 

 

Ethernet Inventor Bob Metcalfe Still Rings the Changes

 Posted by Martin Veitch

“It’s a great story,” says Bob Metcalfe, speaking down the line from his summer home in Maine, when I ask him if his family has British roots. The man who gave the world Ethernet has a bunch of great stories and, like the stand-up comedian he’s thinking of becoming (more of which later), he can improvise on seemingly any topic.

“We won the battle of Agincourt. We fought with the longbow that had a greater range and higher firing rate than the traditional bow and arrow. Four hundred Metcalfes slaughtered thousands of French. We were from Yorkshire but we blew our money and went to New York.”

He adds that the two-finger salute used by Brits to denote contempt for the recipient comes from the same page in history. The French cut off the fingers of captured archers and the English would show them two fingers to show their digits remained intact. In a clarification email he adds that he considers himself a Viking-American: “Marauding is my game.” It’s the sort of zig-zag way his thought processes go: a brilliant mind but restless in its computations.

I’m trying to get a psychological angle on what made Metcalfe because he’s an unusual character. The self-confidence, way with words and forays into venture capitalism might be classic Silicon Valley shtick, but who else decides spells in publishing and academia might be smart career moves after changing the world through computer networking?

His father was an aerospace test technician who never graduated college and Metcalfe has said in previous interviews that he didn’t get on well with Harvard where his dissertation was initially rejected in 1972, hinting there was a class divide.

“I still contend Harvard doesn’t like engineers much. They prefer the liberal arts. Even when they finally built an engineering school they had to call it the School of Engineering and Applied Sciences,” he says, spitting out the last few words.

That rejection (he finally received his PhD a year later) might have served to give him a thicker skin. He went to the renowned Xerox PARC research facility where his major achievement was the invention of Ethernet, the networking protocol that is the highway of the modern, hyper-connected world. Incidentally, he rebuts the notion of Xerox as a company unable to translate inventions (the graphical user interface, computer mouse and laser printing, for example) into real money. Instead, he says Xerox built a powerful printing business and spies some shifting of responsibilities.

“Usually [ex-Xerox] people say they failed but we worked there.”

He will always be associated with Ethernet but he generously shares credit with many others, even if he was the leading force.

“People tend to think it happened in a day, and it’s a myth I promulgated, but it’s been a 40-year effort. There was punctuated equilibrium. Slow and steady progress punctuated with some sort of breakthrough. [To be referred to as ‘the Father of Ethernet’ and such like] it’s a little bit cringing and I bend over backwards to include as many people as possible.”

However, it wasn’t the invention of Ethernet that brought him wealth but rather the ability to sell local area networking at 3Com, the company he co-founded in 1979.

“I had to learn sales quickly,” he says, and it was the years of long trips across North America, and later the world, that made 3Com a powerful force.

Another myth is that he was ousted from 3Com in some sort of “bloody boardroom battle”, he says.

“3Com’s board of directors twice decided that I shouldn’t be CEO. The board of directors did their job. Both times they chose somebody else and both times their judgment was vindicated.” He only left because he didn’t think it right to have a former CEO contender second-guessing the CEO.

Always quote-worthy, his digs at 3Com CEO Eric Benhamou weren’t based on animosity, he says.

“I think the world of Benhamou. I made a crack that he was successful despite not being very charismatic. To me it was a revelation that a person lacking charisma could be so successful. He still lacks charisma!”

I express surprise that his next move wasn’t to build another company but into computer-sector publishing, at IDG [this site is part of the IDG group] where he became a publisher, columnist and, later, a board member.

“[InfoWorld magazine editor-in-chief] Stewart Alsop asked me if I wanted to be his boss. Next thing, [the late IDG CEO] Pat McGovern called and invited me to visit corporate [in Framingham, Massachusetts] and San Mateo where InfoWorld was. I insisted on the title of CEO and publisher. Pat said, ‘You don’t want that: publishers sell ads to media buyers’, but it was the opportunity to learn a whole new business and hang out with my peeps. [Oracle CEO] Larry Ellison actually signed off insertion orders and laboured over the copy.”

Those were go-go days for tech publishers and Metcalfe says it didn’t feel like a slower or more conservative environment than tech itself.

“A printing press is much more high-tech than a personal computer. Then the web hit and I was at the heart of it. I watched as one publisher after another either succeeded or failed.”

Metcalfe made headlines himself after predicting the collapse of the internet in a column published in InfoWorld. I’d always suspected this stemmed from controversialist tendencies designed to cook up debate, and Metcalfe concurs.

“I’d go much further and say it was a monumental publicity stunt,” he says. It was designed to court publicity for an imminent book, Internet Collapses and Other InfoWorld Punditry (“you can still buy it for $1 on Amazon”).

“People had made fun of [IBM founder] Tom Watson saying there would only be 11 computers in the world and Bill Gates saying you only needed 640K of RAM, and in that vein they made fun of me. It was a self-denying prophecy.”

Ever game, Metcalfe literally ate his words after whizzing them into an edible soupy sludge. Later he predicted the failure of wireless networks.

“In 1993, wireless was in one of those bubbles: the modems were bigger than PCs. I went too far in one of my columns and said it would never catch on… never say never.”

But, he says, the success of wireless only increases demand for Ethernet and back-haul networks. “LTE stands for ‘Leads To Ethernet’,” he quips.

In his writing, he was also among the first to take aim at Microsoft, criticising its business practices and foreshadowing its later conviction as a monopolist abusing its market power. Although some traced his criticisms back to a falling out over licensing, Metcalfe says there was nothing personal.

“It wasn’t Bill Gates; it was the twenty-something petty monopolists at Microsoft. [What I wrote] cost me my relationship with Bill Gates.”

He says he remains an admirer of Gates but recalls being in a room with Microsoft’s PR agency rep at the time of the brouhaha.

“She said how disappointed Bill Gates was. Disappointed! As if it was my job not to disappoint Bill Gates…”

However, the tensions between having been a tech industry star turned media all-rounder were becoming apparent.

“The unusual thing was that I’d crossed over to the dark side. It was confusing to people. I’d attack companies in my columns and then try to sell them ad pages.”

A conflict of interests, surely?

“It was a separation of church and state that took place entirely in my head,” he concedes with characteristic drollery.

“Before I continue I’d like to insist that I was right about Microsoft,” Metcalfe says with mock pomposity. “They were eventually convicted.”

To be just, Metcalfe also coined the term “extranet” and may have done the same for “ping”, as well as giving us Metcalfe’s Law, which states that the value of a network is proportional to the square of the number of connected devices.
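Metcalfe's Law follows from counting possible connections: among n devices there are n(n-1)/2 distinct pairs, which grows roughly as n². A quick sketch:

```python
def metcalfe_pairs(n):
    # Number of possible pairwise connections among n devices.
    return n * (n - 1) // 2

# Doubling the devices roughly quadruples the potential connections.
assert metcalfe_pairs(10) == 45
assert metcalfe_pairs(20) == 190   # ~4.2x the value for 10 devices
```

This quadratic growth is the usual argument for why networks become disproportionately more valuable as they grow.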

Returning to Microsoft, I ask him whether the US and the wider world is getting better at handling abuses of power in technology.

“We got better at it when we took down IBM and AT&T in the 1980s,” he says. “I think we’re getting worse now. The US has a bad government now and anti-trust has become anti-business.”

“Cronyism” in DC lets the powerful slip away, he says, but then the Europeans don’t get away scot-free either. He considers the recent “right to be forgotten” law relating to Google: “What a stupid thing that is.”

Regrets? He appears to have fewer than Sinatra although he beats himself up for not getting IBM to admit defeat on Token Ring, leaving the road open for a two-decade battle with Ethernet.

“IBM gave me two shots to convince them. My contention is that I hadn’t learned to sell yet. I wouldn’t have used the word ‘collision’ [to describe Ethernet traffic handling] and that was a mistake. That related to blood, breaking glass, like a car crash.”

He should have used the “mot juste”, he says, citing his recent discovery of the French term for an appropriate word.

He adds that today’s networking king of the hill Cisco “wouldn’t exist if I were a better person” although he admires the company and its CEO, John Chambers.

His current mission is helping beautiful Austin “become a better Silicon Valley”, and he is enjoying his work to that end at the University of Texas. He says he has lived his life in 10-year cycles: engineer/scientist (Ethernet/Xerox); entrepreneur/executive (3Com); publisher and pundit (IDG); venture capitalist; and now Professor of Innovation (University of Texas).

In seven years’ time he might, he says, create a startup, picking up where he left off decades ago. Then again he might become a stand-up comic, he says, as if the two options were a ‘blue socks or red socks’ choice.

He could do the standup patter as he has something of the classic-period Steve Martin in his bearing, dryness, self-mocking and capacity for surprise. Say you were plumping for the former career move though, I ask.

“It’s a way off but if I were starting a company today it would be in computational biology. I know a bit about computation and I have a sense biology is about where computing was in 1980. All the trial and error is starting to give way to science and engineering.”

On the economy he is pessimistic and positive at the same time.

“It’s a bubble and it’s going to burst pretty soon but I like bubbles: they’re tools of innovation. There’s the debt bubble too. Everyone’s in debt, including the US to the tune of $17 trillion.”

I ask if he ever considered a career in politics but he says his contribution is limited to tweeting.

And with that our time is up. Metcalfe says he is getting ready to return to Texas after having the summer off and mentions that he was once a visiting professor in “the real Cambridge” in England.

“I loved it but in the end I was getting stir crazy and needed a change.”

I bet.

Martin Veitch is Editorial Director at IDG Connect

Link: http://www.idgconnect.com/abstract/8642/ethernet-inventor-bob-metcalfe-still-rings-changes


BPM in the Cloud

by System Administrator - Friday, 26 June 2015, 7:06 PM
 

Guide: BPM in the Cloud

BPM software and cloud computing make a fine pair, but is a move to the cloud the right fit for your organization? Uncover an expert list of considerations you should start with first.

Please read the attached guide.

 


Branch Office Recovery

by System Administrator - Wednesday, 10 September 2014, 9:12 PM
 

Eliminating the Challenge of Branch Office Recovery 

Nobody can afford to lose data. But managing the backup and recovery of data and services in far-flung locations can present many logistical and technology challenges that add complexity, expense, and risk. A new branch converged infrastructure approach allows IT to project virtual servers and data to the edge, providing for local access and performance while data is actually stored in centralized data centers. IT can now protect data centrally and restore branch operations in a matter of minutes versus days.

Please read the attached whitepaper


Bring Your Own Cloud (BYOC)

by System Administrator - Monday, 16 March 2015, 10:12 PM
 

Bring Your Own Cloud (BYOC)

Posted by Margaret Rouse

BYOC is a movement whereby employees and departments use their cloud computing service of choice in the workplace. Allowing employees to use a public cloud storage service to share very large files may be more cost-effective than rolling out a shared storage system internally.

BYOC (bring your own cloud) is the trend towards allowing employees to use the cloud service of their choice in the workplace.

In a small or mid-size business, allowing employees to use a public cloud storage service like Dropbox to share very large files may be more cost-effective than rolling out a shared storage system internally. Problems can occur, however, when employees fail to notify anyone when they use such services. The use of any shadow IT can pose security and compliance concerns in the workplace, and BYOC in particular can prevent business owners from knowing exactly where their company’s information is being stored, who has access to it and what it’s being used for.

To prevent BYOC from becoming a problem, businesses should implement policies that strictly define what personal cloud services can be used for work-related tasks (if any) and who needs to be notified when a personal cloud service is used.

Building BI Dashboards: What to Do—and Not Do

by System Administrator - Monday, 19 January 2015, 1:38 PM
 

Building BI Dashboards: What to Do—and Not Do

BY ALAN R. EARLS

Business intelligence dashboards make it easier for corporate executives and other business users to understand and analyze data. But there are right ways and wrong ways to design them.

Please read the attached PDF


Business Drivers (BUSINESS)

by System Administrator - Wednesday, 3 September 2014, 7:14 PM
 

Cloud economics subject to business drivers, customer perception

by: Kristen Lee

What are the financial benefits of using the cloud? Don't expect any hard-and-fast formulas. Cloud economics turn out to be a local affair, dependent on a company's business drivers and constraints -- and the ability of CIOs to understand them.

At Health Management Systems Inc., "data is our life blood," said CIO Cynthia Nustad. The Irving, Texas-based Health Management Systems (HMS) analyzes petabytes of data for large healthcare programs to determine whether payments were made to the correct payee and for the right amount. Nustad, who joined HMS as CIO in February 2011, doesn't handle just a lot of data but a lot of highly sensitive data. So, when it comes to calculating the cost benefits of using the cloud for crunching data, the expense of transporting large data sets to the cloud is just one factor she weighs. Data security, of course, is another -- both real and perceived.

"It's always perception that we're battling, right?" Nustad said. "If a client perceives for any reason that there's less security, it's not worth the hassle to try to dissuade them, because it's always going to be a 'gotcha' if something does go bump in the night, God forbid."

Cloud-based business applications, however, are another story. "It's pretty easy to get a Salesforce, Silkroad, a Red Carpet … that are tuned to what the business team needs," she said. Indeed, HMS' use of SaaS predates her tenure, Nustad said, noting that these apps are now mature enough to either meet or beat any on-premises solutions she could come up with -- and they save her maintenance costs. "They are easy to get up and running, the value proposition is there and they fill a particular business need -- a win-win all the way around."

The potential cost-savings of cloud computing have long been touted as an obvious benefit of using this relatively new platform. And, to be sure, examples abound of companies that have saved millions of dollars in labor costs and upfront capital investment by migrating IT operations to the cloud. Even cloud security -- a cause of concern for many CIOs, not just those trading in super-sensitive data -- is gaining traction. Increasing numbers of companies are realizing that cloud-based security providers offer solutions that are not only cheaper but also better than what they could build and manage in-house.

 

Cynthia Nustad

But as Nustad made clear, any discussion of the economics of cloud is complicated. Hard-and-fast formulas for comparing the cost of cloud services versus in-house delivery of those services are difficult to come by, because for starters, the business models of cloud providers are often not transparent to customers. In addition, many CIOs, for reasons not always in their control, don't fully understand their own costs for providing IT services. Cultural factors also get in the way of calculating the economics of cloud, according to analysts and consultants who cover this field.

"A lot of IT departments are defensive about the use of cloud," said Forrester Research analyst James Staten. "They're worried that if the company starts using more cloud, they'll use less of the data center."

In those instances, the political overlay brings "bias into the analysis" of cloud economics, Staten said, with the result that internal IT staff may claim they're cheaper "when in reality they are not."

Perhaps the biggest reason for the lack of solid financial comparisons, however, is that the business's main motivator for using the cloud is usually not to save money, said David Linthicum, senior vice president at Cloud Technology Partners, a Boston software and services provider specializing in cloud migration services.

"The ability for the company to move into new markets, to acquire companies, to kind of change and shift its core processes around … that typically is where cloud pays off," Linthicum said. "So, even if you may not have direct or very obvious operational cost savings, the cloud may still be for you."

Forrester Research's Staten agrees. "It's pretty much across the board and universal that they use the cloud for agility first and foremost," he said, referring to business priorities. It's only later, after some of those benefits have been realized, that the question of cost savings comes up, and even that push for cost savings, he added, "is usually driven by the IT department ... [and] not usually driven by the business."

Nuanced approach to cloud economics

These complex and, at times, competing business needs often result in CIOs adopting a highly nuanced cloud strategy. While HMS, for example, relies on SaaS for some of its back-end business applications, the analytics it uses to weed out fraud, waste and abuse in healthcare payments, for example, is proprietary and deployed in-house.

"I think if you don't look at cloud and you don't look at the economics of cloud, they'll find another CIO who will."
Pat Smith, CIO

To crunch the data, Nustad said, her team mainly uses a combination of open source and vendor tools (from Teradata and Microstrategy), and the IBM DB2 mainframe software "is still, quite frankly, a cost-effective technology" for the task. Plus, she added, "the bandwidth doesn't exist" to move the data back and forth to the cloud.

"If I have data that I can't easily get at that's in a cloud app or on cloud infrastructure, then I've just disabled my business," she said.

Nustad's not the only one with a cloud economics strategy that is not just a matter of dollars and cents.

Pat Smith, CIO at Our Kids of Miami-Dade Monroe Inc., a not-for-profit serving abused and neglected children, said that she looks at cloud for "availability and reliability that would cost us a lot to duplicate."

 

Pat Smith

She too, however, has tweaked her cloud strategy to meet her company's needs. Smith plans to deploy Microsoft Office 365, and although this cloud service offers an archiving solution, she has decided to put the money into an on-premises archiving solution.

"We feel more comfortable," she said, keeping the archives on-premises. "We have a lot of e-discovery requirements like many organizations, so that's a non-negotiable item for us… We feel like we have more control over it."

Cloud-first economics

But for some CIOs, parsing cloud economics is a moot exercise.

"It's never been about economics, it's always been about the benefits," said Jonathan Reichental, CIO for the city of Palo Alto. "I am solely focused on functionality and quality and those kinds of higher-value items."

Reichental is working on setting up a business registry for the California city, so that when people set up a business in Palo Alto, the registry has all its information: address, what the business does, revenue, number of staff, etc.

 

Jonathan Reichental

Ten years ago the city would have found a vendor and then built an infrastructure, he said. "The only conversation we're having today is who can provide this in the cloud and what's the user experience like," he said.

One thing is true for all CIOs: Sorting out the benefits of cloud services is a top priority. Our Kids' Smith thinks that what's happening with the cloud today is similar to what happened 10 years ago when CIOs needed to be looking at which services should be provided in-house and what services should be outsourced.

"I think cloud's in the same sphere right now," Smith said. "I think if you don't look at cloud and you don't look at the economics of cloud, they'll find another CIO who will."

Go to part two of this feature to read expert advice for getting the most out of your cloud services. Steps required for sound cloud economics include: analyzing business "value drivers," nailing the contract, using cloud monitoring tools and, when in doubt, calling up your CIO peers.

Let us know what you think about the story; email Kristen Lee, features writer, or find her on Twitter @Kristen_Lee_34.

Link: http://searchcio.techtarget.com


Business Information

by System Administrator - Monday, 16 February 2015, 10:40 PM
 

Launching big data initiatives? Be choosy about the data

Thanks to open source technologies like Hadoop and lower data storage costs, more organizations are able to store multi-structured data sets from any number of internal and external sources. That's a good thing, because valuable insight could lurk in all that info. But how do organizations know what to keep and what to get rid of? It's a problem that the February issue of Business Information aims to solve.

In the cover story, SearchBusinessAnalytics reporter Ed Burns talks to businesses that have learned just what to tease from their data. Take marketing analytics services provider RichRelevance, which runs an online recommendation engine for major retailers such as Target and Kohl's. The company has two petabytes of customer and product data in its systems, and the amount keeps growing. To sift through it for shopping suggestions, RichRelevance looks at just four factors: browsing history, demographic data, the products available on a retailer's website and special promotions currently being offered. That way, it keeps its head above the rising tide of data.

And finding themselves surrounded by a sea of data, businesses are discovering it's increasingly important to know how to swim. Many turn to the waters of the data lake, hoping to cash in on the benefits the Hadoop-based data repository promises. But the data lake may not be as tranquil as it sounds, reporter Stephanie Neil writes. Data governance challenges abound, and changes in workplace culture will most likely be required to make it work.

The issue also features a brand-new column. It's insight from a CIO for CIOs. Or would-be CIOs. The inaugural installment, by Celso Mello of Canadian home heating and cooling company Reliance Home Comfort, dishes up advice for those wishing to climb the corporate ladder to the C-level.

The issue also puts the spotlight on an IT manager at a Boston nonprofit who used the skills inherited from her political family to usher in a human capital management system upgrade. It also captures some of the wants and needs of BI professionals who attended TechTarget's 2014 BI Leadership Summit last December, and takes a look at the origins and prospects of open source data processing engine Apache Spark. The issue closes with a few words by Craig Stedman, executive editor of SearchDataManagement and SearchBusinessAnalytics, on the hard work needed to put in place an effective business intelligence process.

Please read the attached whitepaper.


C (BUSINESS)

by System Administrator - Thursday, 2 May 2013, 9:36 PM
 

Blockchain (Cadena de Bloques)

by System Administrator - Thursday, 23 March 2017, 12:48 PM
 

Formation of a blockchain. The main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside the main chain.

Blockchain

Source: Wikipedia

A blockchain, also known by the initials BC (from the English blockchain)[1][2][3][4][5], is a distributed database made up of chains of blocks designed to prevent modification of data once it has been published, using trusted timestamping and a link to the previous block.[6] This makes it especially suitable for incrementally storing time-ordered data with no possibility of modification or revision. The approach has several aspects:

  • Data storage: achieved by replicating the blockchain's information across the network's nodes.
  • Data transmission: achieved through a peer-to-peer network.
  • Data confirmation: achieved through a consensus process among the participating nodes. The most widely used type of algorithm is proof of work, in which new entries are validated through an open, competitive and transparent process called mining.
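The proof-of-work validation named above can be sketched as a brute-force search for a nonce whose hash meets a difficulty target. A toy Python illustration (real networks use far higher difficulty and binary targets, not this simplified hex-prefix check):

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` leading zero hex digits. Toy proof-of-work only."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Anyone can cheaply verify the work by hashing once:
nonce = mine("block payload")
proof = hashlib.sha256(f"block payload{nonce}".encode()).hexdigest()
assert proof.startswith("00")
```

The asymmetry shown here (expensive to find, cheap to verify) is what makes the open, competitive validation process workable.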

The blockchain concept was first applied in 2009 as part of Bitcoin.

The data stored in a blockchain are usually transactions (e.g. financial ones), so it is common to refer to the data as transactions. However, they do not have to be. The records can really be regarded as atomic changes to the system's state. For example, a blockchain can be used to timestamp documents and secure them against tampering.[7]
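The tamper-evident, timestamped chain described above can be sketched in a few lines of Python. This is purely illustrative (the field names and `make_block`/`valid` helpers are invented for this example; real blockchains add consensus and networking):

```python
import hashlib
import time

def make_block(prev_hash: str, payload: str) -> dict:
    """Create a block that links to its predecessor via prev_hash."""
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        f"{block['timestamp']}{payload}{prev_hash}".encode()
    ).hexdigest()
    return block

def valid(chain: list) -> bool:
    """A chain is valid if every stored hash matches the recomputed one and
    every block's prev_hash matches the previous block's hash."""
    for i, b in enumerate(chain):
        recomputed = hashlib.sha256(
            f"{b['timestamp']}{b['payload']}{b['prev_hash']}".encode()
        ).hexdigest()
        if b["hash"] != recomputed:
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, "genesis")
chain = [genesis, make_block(genesis["hash"], "document fingerprint: abc123")]
assert valid(chain)
chain[1]["payload"] = "tampered"   # any later edit breaks the hash link
assert not valid(chain)
```

Because each block commits to its predecessor's hash, altering any published record invalidates every block after it, which is the property that makes document timestamping work.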

 

Applications

The blockchain concept is used in the following fields:

  • In cryptocurrencies, the blockchain serves as an unmodifiable public notary for the entire transaction system, preventing a coin from being spent twice. It is used, for example, in Bitcoin, Ethereum, Dogecoin and Litecoin, each with its own particularities[8].
  • In name-registration databases, the blockchain provides a notarized name registry, so that a name can only be used to identify the object that has actually registered it. It is an alternative to the traditional DNS system. Namecoin is one example.
  • As a distributed notary for different kinds of transactions, making them more secure, cheaper and traceable. For example, it is used for payment systems, bank transactions (hindering money laundering), remittances and loans.
  • As the basis of decentralized platforms that support the creation of peer-to-peer smart contract agreements. The goal of these platforms is to let a network of peers administer their own user-created smart contracts. A contract is first written as code and uploaded to the blockchain in a transaction. Once on the blockchain, the contract has an address through which it can be interacted with. Ethereum and Eris are examples of such platforms.
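The upload-then-call flow in the last bullet can be sketched with a toy registry. Everything here is invented for illustration (the `ToyContractChain` class and its code-hash address scheme are hypothetical; real platforms such as Ethereum execute bytecode in a virtual machine):

```python
import hashlib

class ToyContractChain:
    """Toy model of a smart-contract platform: uploading code in a
    'transaction' assigns it an address; calls are dispatched by address."""

    def __init__(self):
        self.contracts = {}

    def upload(self, code) -> str:
        # Derive the contract's address from a hash of its compiled code.
        address = hashlib.sha256(code.__code__.co_code).hexdigest()[:16]
        self.contracts[address] = code
        return address

    def call(self, address, *args):
        # Interact with a deployed contract through its address.
        return self.contracts[address](*args)

chain = ToyContractChain()
addr = chain.upload(lambda a, b: a + b)   # a "contract" that adds two values
assert chain.call(addr, 2, 3) == 5
```

The key idea the sketch captures is that once uploaded, a contract is referenced only by its address, not by who deployed it.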

Classification

Blockchains can be classified by who can access the data stored in them[7]:

  • Public blockchain: one in which there are no restrictions either on reading the blockchain's data (which may be encrypted) or on submitting transactions for inclusion in the blockchain.
  • Private blockchain: one in which both access to the blockchain's data and the submission of transactions for inclusion are limited to a predefined list of entities.

Both types should be regarded as extremes; intermediate cases are possible.

Blockchains can also be classified by who is permitted to generate blocks[7]:

  • Permissionless blockchain: one in which there are no restrictions on which entities can process transactions and create blocks. Blockchains of this type need native tokens to give users an incentive to maintain the system. Examples of native tokens are the new bitcoins earned by building a block and the fees on transactions. The reward paid for creating new blocks is a good measure of a permissionless blockchain's security.
  • Permissioned blockchain: one in which transactions are processed by a predefined list of parties with known identities. Such chains therefore generally do not need the native tokens that permissionless chains use to incentivize transaction processors, and it is typical for them to use a consensus protocol such as proof of stake.

Public blockchains can be permissionless (e.g. Bitcoin) or permissioned (e.g. federated sidechains)[9]. Private blockchains must be permissioned[9]. Permissioned blockchains need not be private, since there are several distinct ways of accessing a blockchain's data, for example[7]:

  • Reading the blockchain's transactions, possibly with some restrictions (for example, a user may have access only to the transactions in which they are directly involved).
  • Proposing new transactions for inclusion in the blockchain.
  • Creating new blocks of transactions and appending them to the blockchain.

While the third form of access is restricted to a limited set of entities in permissioned blockchains, it is not obvious that the remaining forms of access should be restricted as well. For example, a blockchain for financial institutions would be permissioned but could[7]:

  • Grant (possibly limited) read access to transactions and block headers to its clients, to provide a transparent and reliable technological means of assuring the safety of clients' deposits.
  • Grant full read access to regulators to guarantee the necessary level of compliance.
  • Provide every entity with access to the blockchain's data with an exhaustive, rigorous description of the protocol, including explanations of all possible interactions with the blockchain's data.

 

Sidechain

A sidechain is a blockchain that validates data from another blockchain, called the main chain. Its main purpose is to provide new functionality, possibly still in a testing period, while relying on the trust offered by the main blockchain[10][11]. Sidechains work much as traditional currencies did under the gold standard[12].

Lisk is one example of a blockchain that uses sidechains[13]. Because of Bitcoin's popularity and the enormous strength of its network in providing trust through its proof-of-work consensus algorithm, there is interest in using it as a main blockchain and building pegged sidechains that rely on it. A pegged sidechain is a sidechain whose assets can be imported from and returned to the other chain. Such chains can be achieved in the following ways[11]:

  • Federated peg. A federated sidechain is one in which consensus is reached when a certain number of parties agree (semi-centralized trust), so certain entities must be trusted. Liquid, the closed-source sidechain proposed by Blockstream, is of this type[14].
  • SPV peg, where SPV stands for Simplified Payment Verification. It uses SPV proofs. Essentially, an SPV proof consists of a list of block headers demonstrating proof of work, plus a cryptographic proof that an output was created in one of the blocks on the list. This lets verifiers check that a certain amount of work has gone into producing the output. Such a proof can be invalidated by another proof demonstrating the existence of a chain with more work that does not include the block which created the output. No trust in third parties is required, making this the ideal form. Achieving it on Bitcoin would require modifying the protocol, and consensus for such a change is hard to reach, so a federated peg is used with Bitcoin as a temporary measure.
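The cryptographic half of an SPV proof boils down to a Merkle-branch check: showing that a transaction hashes up to the Merkle root committed in a block header. A minimal sketch in Python, assuming single SHA-256 and a simple sibling-path encoding (Bitcoin itself uses double SHA-256 and its own serialization):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root, duplicating the last node on odd-sized levels."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Return the sibling path proving leaves[index] is in the tree."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))  # (sibling, leaf-on-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf: bytes, path: list, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, leaf_is_left in path:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
assert verify(b"tx2", merkle_proof(txs, 2), root)
```

A full SPV proof pairs such a branch with the chain of block headers, so a verifier needs neither the full block nor trust in the prover.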

References

  • An Integrated Reward and Reputation Mechanism for MCS Preserving Users' Privacy. Cristian Tanas, Sergi Delgado-Segura, Jordi Herrera-Joancomartí. February 4, 2016. Data Privacy Management, and Security Assurance. 2016. pp. 83-99
  1. Economist Staff (2015-10-31). "Blockchains: The great chain of being sure about things". The Economist. Retrieved June 18, 2016. "[Subtitle] The technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the crypto currency."
  2. Morris, David Z. (2016-05-15). "Leaderless, Blockchain-Based Venture Capital Fund Raises $100 Million, And Counting". Fortune. Retrieved 2016-05-23.
  3. Popper, Nathan (2016-05-21). "A Venture Fund With Plenty of Virtual Capital, but No Capitalist". New York Times. Retrieved 2016-05-23.
  4. Brito, Jerry & Castillo, Andrea (2013). "Bitcoin: A Primer for Policymakers". Fairfax, VA: Mercatus Center, George Mason University. Retrieved October 22, 2013.
  5. Trottier, Leo (2016-06-18). "original-bitcoin" (self-published code collection). GitHub. Retrieved 2016-06-18. "This is a historical repository of Satoshi Nakamoto's original bit coin sourcecode."
  6. "Blockchain". Investopedia. Retrieved March 19, 2016. "Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system."
  7. Public versus Private Blockchains, Parts 1 and 2. BitFury Group in collaboration with Jeff Garzik. October 2015
  8. "Particularidades Desarrollo Blockchain". Retrieved March 7, 2017.
  9. Digital Assets on Public Blockchains. BitFury Group. March 2016
  10. La revolución de la tecnología de las cadenas de bloques y su impacto en los sectores económicos. Ismael Santiago Moreno, Professor of Finance, Universidad de Sevilla. October 2016
  11. Enabling Blockchain Innovations with Pegged Sidechains. Adam Back et al. 2014
  12. Cadenas laterales: el gran salto adelante. Majamalu, April 11, 2014, in Economía, Opinión
  13. Lisk libera la primera criptomoneda modular con cadenas laterales. Bitcoin PR Buzz. May 2016
  14. Liquid Recap and FAQ. Johnny Dilley. November 2015

External links

Link: https://es.wikipedia.org


Quality (QUALITY)

by System Administrator - Thursday, 9 May 2013, 12:49 AM
 

CONCEPTS RELATING TO QUALITY

Requirement: Need or expectation that is stated, generally implied or obligatory.

Grade: Category or rank given to different quality requirements for products, processes or systems having the same functional use.

Quality: Degree to which a set of inherent characteristics fulfils requirements.

Capability: Ability of an organization, system or process to realize a product that fulfils the requirements for that product.

Customer satisfaction: Customer's perception of the degree to which their requirements have been fulfilled.


Characteristics (QUALITY)

by System Administrator - Thursday, 9 May 2013, 12:58 AM
 

CONCEPTS RELATING TO CHARACTERISTICS

Characteristic: Distinguishing feature.

Dependability: Collective term used to describe availability performance and the factors that influence it: reliability performance, maintainability performance and maintenance support performance.

Traceability: Ability to trace the history, application or location of that which is under consideration.

Quality characteristic: Inherent characteristic of a product, process or system related to a requirement.


CISO

by System Administrator - Monday, 13 February 2017, 9:39 PM
 

CISO (chief information security officer)

Posted by: Margaret Rouse | Contributor(s): Emily McLaughlin, Taina Teravainen

The CISO (chief information security officer) is a senior-level executive responsible for developing and implementing an information security program, which includes procedures and policies designed to protect enterprise communications, systems and assets from both internal and external threats. The CISO may also work alongside the chief information officer to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.

The chief information security officer may also be referred to as the chief security architect, the security manager, the corporate security officer or the information security manager, depending on the company's structure and existing titles. While the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the chief security officer (CSO).

CISO role and responsibilities

Instead of waiting for a data breach or security incident, the CISO is tasked with anticipating new threats and actively working to prevent them from occurring. The CISO must work with other executives across different departments to ensure that security systems are working smoothly to reduce the organization's operational risks in the face of a security attack. 

The chief information security officer's duties may include conducting employee security awareness training, developing secure business and communication practices, identifying security objectives and metrics, choosing and purchasing security products from vendors, ensuring that the company is in regulatory compliance with the rules for relevant bodies, and enforcing adherence to security practices.

Other duties and responsibilities CISOs perform include ensuring the company's data privacy is secure, managing the Computer Security Incident Response Team and conducting electronic discovery and digital forensic investigations.

CISO qualifications and certifications

A CISO is typically an individual who is able to effectively lead and manage employees and who has a strong understanding of information technology and security, but who can also communicate complicated security concepts to technical and nontechnical employees. CISOs should have experience with risk management and auditing.

Many companies require CISOs to have advanced degrees in business, computer science or engineering, and to have extensive professional working experience in information technology. CISOs also typically have relevant certifications such as Certified Information Systems Auditor and Certified Information Security Manager, issued by ISACA, as well as Certified Information Systems Security Professional, offered by (ISC)2.

CISO salary

According to the U.S. Bureau of Labor Statistics, computer and information systems managers, including CISOs, earned a median annual salary of $131,600 as of May 2015. According to Salary.com, the annual median CISO salary is $197,362. CISO salaries appear to be increasing steadily, according to research from IT staffing firms. In 2016, IT staffing firm SilverBull reported the median CISO salary had reached $224,000. 

Continue Reading About CISO (chief information security officer)

Link: http://searchsecurity.techtarget.com

Related Terms

 


Cloud IoT and IT Security

by System Administrator - Thursday, 2 July 2015, 7:57 PM
 

Cloud IoT and IT Security

More organizations are deploying Internet of Things devices and platforms to improve efficiency, enhance customer service, open up new business opportunities and reap other benefits. But the IoT can expose enterprises to new security threats, with every connected object becoming a potential entry point for attackers.

This eBook will discuss:

  • What to expect from IoT security standardization efforts;
  • Whether current generation systems, like mobile device management software, will help;
  • How to approach networking to keep corporate systems secure; and 
  • How to make sure the cloud components of your IoT implementations are secure.

Please read the attached ebook.               


Cloud Mechanics: Delivering Performance in Shared Environments

by System Administrator - Monday, 22 December 2014, 9:17 PM
 

Cloud Mechanics: Delivering Performance in Shared Environments

By: VMTurbo

Expedient Data Centers, a leader in Managed and Data Center Services with locations from Cleveland to Memphis to Boston, unpacks the mechanics of how it consistently meets Service Level Agreements for its customers. This whitepaper explores how service providers use VMTurbo to provide consistent performance across all workloads, as well as the three roles a responsible managed service provider (MSP) takes in order to accomplish that directive.

Please read the attached whitepaper.

 


Cloud Orchestrator

by System Administrator - Thursday, 29 October 2015, 8:23 PM
 

Cloud Orchestrator

Posted by Margaret Rouse

A cloud orchestrator is software that manages the interconnections and interactions among cloud-based and on-premises business units. Cloud orchestrator products use workflows to connect various automated processes and associated resources. The products usually include a management portal.

To orchestrate something is to arrange various components so they achieve a desired result. In an IT context, this involves combining tasks into workflows so the provisioning and management of various IT components and their associated resources can be automated. This endeavor is more complex in a cloud environment because it involves interconnecting processes running across heterogeneous systems in multiple locations. 

Cloud orchestration products can simplify the intercomponent communication and connections to users and other apps and ensure that links are correctly configured and maintained. Such products usually include a Web-based portal so that orchestration can be managed through a single pane of glass.

When evaluating cloud orchestration products, it is recommended that administrators first map the workflows of the applications involved. This step will help the administrator visualize how complicated the internal workflow for the application is and how often information flows outside the set of app components. This, in turn, can help the administrator decide which type of orchestration product will help automate workflow best and meet business requirements in the most cost-effective manner.  

In the cloud, processes and transactions also have to cross multiple organizations, systems and firewalls.

The goal of cloud orchestration is to, insofar as is possible, automate the configuration, coordination and management of software and software interactions in such an environment. The process involves automating workflows required for service delivery. Tasks involved include managing server runtimes, directing the flow of processes among applications and dealing with exceptions to typical workflows.
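The workflow automation described here can be sketched as a minimal step runner. The `Workflow` class below is a hypothetical illustration, not any vendor's API; real orchestrators add dependency graphs, retries and rollback:

```python
from typing import Callable

class Workflow:
    """Minimal orchestration workflow: named steps run in order, each
    receiving a shared context; a failed step stops the workflow
    (the 'dealing with exceptions' part of orchestration)."""

    def __init__(self):
        self.steps = []  # list of (name, callable) pairs

    def step(self, name: str):
        """Decorator that registers a function as a named workflow step."""
        def register(fn: Callable):
            self.steps.append((name, fn))
            return fn
        return register

    def run(self, context: dict) -> dict:
        for name, fn in self.steps:
            try:
                fn(context)
                context.setdefault("log", []).append(f"{name}: ok")
            except Exception as exc:
                context.setdefault("log", []).append(f"{name}: failed ({exc})")
                break  # stop on the first exception to the typical workflow
        return context

wf = Workflow()

@wf.step("provision-vm")
def provision(ctx):
    ctx["vm"] = "vm-01"          # stand-in for a provisioning API call

@wf.step("configure-network")
def configure(ctx):
    ctx["network"] = f"{ctx['vm']}-net"

result = wf.run({})
```

Chaining steps through a shared context is the same pattern orchestration products apply across heterogeneous systems, just without the portal, connectors and distributed execution.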

Vendors of cloud orchestration products include Eucalyptus, Flexiant, IBM, Microsoft, VMware and V3 Systems.

The term “orchestration” originally comes from the study of music, where it refers to the arrangement and coordination of instruments for a given piece.

Continue Reading About cloud orchestrator

Related Terms

Dig Deeper on Cloud data integration and application integration

Link: http://searchcloudapplications.techtarget.com


Cloud vs. on-premises

by System Administrator - Tuesday, 2 May 2017, 10:26 PM
 

Cloud vs. on-premises: Finding the right balance

By Sandra Gittlen

The process of figuring out which apps work in the cloud vs. on-premises doesn't yield the same results for everyone.

Greg Downer, senior IT director at Oshkosh Corp., a manufacturer of specialty heavy vehicles in Oshkosh, Wisc., wishes he could tip the balance of on-premises vs. cloud more in the direction of the cloud, which currently accounts for only about 20% of his application footprint. However, as a contractor for the Department of Defense, his company is beholden to strict data requirements, including where data is stored.

"Cloud offerings have helped us deploy faster and reduce our data center infrastructure, but the main reason we don't do more in the cloud is because of strict DoD contract requirements for specific types of data," he says.

In Computerworld's Tech Forecast 2017 survey of 196 IT managers and leaders, 79% of respondents said they have a cloud project underway or planned, and 58% of those using some type of cloud-based system gave their efforts an A or B in terms of delivering business value.

Downer counts himself among IT leaders bullish on the cloud and its potential for positive results. "While we don't have a written cloud-first statement, when we do make new investments we look at what the cloud can offer," he says.

Oshkosh has moved some of its back-office systems, including those supporting human resources, legal and IT, to the cloud. He says most of the cloud migration has been from legacy systems to software as a service (SaaS). For instance, the organization uses ServiceNow's SaaS for IT and will soon use it for facilities management.

According to the Forecast report, a third of respondents plan to increase spending on SaaS in the next 12 months.

Cordell Schachter, CTO of New York City's Department of Transportation, says he allies with the 22% of survey respondents who plan to increase investments in a hybrid cloud computing environment. The more non-critical applications he moves out of the city's six-year-old data center, the more room he'll have to support innovative new projects such as the Connected Vehicle Pilot Deployment Program, a joint effort with the U.S. Department of Transportation's Intelligent Transportation Systems Joint Program Office.

The Connected Vehicle project, in the second year of a five-year pilot, aims to use dedicated short-range communication coupled with a network of in-vehicle and roadway sensors to automatically notify drivers of connected vehicles of traffic issues. "If there is an incident ahead of you, your car will either start braking on its own or you'll get a warning light saying there's a problem up ahead so you can avoid a crash," Schachter says. The program's intent is to reduce the more than 30,000 vehicle fatalities that occur in the U.S. each year.

Supporting that communication network and the data it generates will require more than the internal data center, though. Schachter says the effort will draw on a hybrid of on-premises and cloud-based applications and infrastructure. He expects to tap a combination of platform as a service, infrastructure as a service, and SaaS to get to the best of breed for each element of the program.

"We can use the scale of cloud providers and their expertise to do things we wouldn't be able to do internally," he says, adding that all providers must meet NYC DOT's expectations of "safer, faster, smarter and cheaper."

Apps saved for on-premises

In fact, Schachter has walled off only a few areas that aren't candidates for the cloud -- such as emergency services and email. "NYC DOT is one of the most sued entities in New York City, and we constantly need to search our corpus of emails. We have a shown a net positive by keeping that application on-premises to satisfy Freedom of Information Law requests as well as litigation," he says.

The City of Los Angeles also has its share of applications that are too critical to go into the cloud, according to Ted Ross, CIO and general manager of the city's Information Technology Agency. For instance, supervisory control and data acquisition (SCADA), 911 Dispatch, undercover police operations, traffic control and wastewater management are the types of data sets that will remain on-premises for the foreseeable future.

"The impact of an abuse is so high that we wouldn't consider these applications in our first round of cloud migrations. As you can imagine, it's critical that a hacker not gain access to release sewage into the ocean water or try to turn all streetlights green at the same time," he says.

The cloud does serve as an emergency backup to the $108 million state-of-the-art emergency operations center. "If anything happens to the physical facility, our software, mapping and other capabilities can quickly spin up in the cloud," he says, adding that Amazon Web Services and Microsoft Azure provide many compelling use cases.

The city, with more than 1,000 virtual servers on-premises, considers the cloud a cost-effective godsend. "We very much embrace the cloud because it provides an opportunity to lower costs, makes us more flexible and agile, offers off-site disaster recovery, empowers IT personnel, and provides a better user experience," he says.


As an early adopter of Google's Gmail in 2010, Ross appreciates the value of the cloud, so much so that in 2014, the city made cloud a primary business model, starting with SaaS, which he calls "a gateway drug to other cloud services."

Eventually, the city ventured into infrastructure as a service, including using "a lot of Amazon Web Services," which Ross describes as more invasive than SaaS and more in need of collaboration between the service provider and the network team. "You have to be prepared to have a shared security model and to take the necessary steps to enact it," he says. Cloud computing also requires additional network bandwidth to reduce latency and maximize performance, he adds.

Other reasons for saying no to the cloud

As much as Ross is a cloud promoter, he says he fully understands the 21% of respondents to Computerworld's Forecast survey who say they have no plans to move to the cloud. "I get worried when users simply want to spin up anything anywhere and are only concerned about functionality, not connectivity and security."

Ron Heinz, founder and managing director of venture capital firm Signal Peak Ventures, says there will always be a market for on-premises applications and infrastructure. For instance, one portfolio client that develops software for accountants found that 40% of its market don't want to move their workflow to the cloud.

Heinz attributes the hesitation to more mature accounting professionals and those with security concerns. "Everybody automatically assumes there is a huge migration to the cloud. But there will always be a segment that will never go the cloud as long as you have strong virtual private networks and strong remote access with encrypted channels," he says.

Greg Collins, founder and principal analyst at analyst firm Exact Ventures, has found clients usually stick with on-premises when they are still depreciating their servers and other gear. "They have the attitude 'if it ain't broke, don't fix it,'" he says.

Still, he also believes the cloud is still in the early days and will only grow as the installed base of on-premises equipment hits end of life.

Performance gains

"We have seen a significant shift in the last couple of years in the interest for public cloud," says Matthew L. Taylor, managing director of consulting firm Accenture Strategy. Accenture, a company of more than 394,000 employees, has most of its own applications hosted in the public cloud.

Many of his clients are not moving as fast. "I wouldn't say the majority of our clients' application loads are in the public cloud today; that's still the opportunity," he says.

Of the clients that have moved to the cloud, very few have gone back to on-premises. "If they did, it wasn't because the cloud-based capabilities were not ready; it was because the company wasn't ready and hadn't thought the migration, application or value case through," Taylor says, adding that others who floundered did so because they couldn't figure out how to wean off their legacy infrastructure and run it in tandem with the cloud.

Most of his clients have been surprised to find that lower service costs have not been the biggest benefit of the cloud. "In the end, savings don't come from technology tools, they come from operational shifts and performance gains," he says.

For instance, a bank in Australia that he wouldn't name moved a critical application to the cloud but had two other applications on-premises, causing performance problems. The performance problems arose because the cloud app relied heavily on the on-premises applications, so performance was slowed as they tried to communicate with one another. Once the bank moved all three applications to the cloud, it found the applications had never performed better, and downtime and maintenance improved.

Kas Naderi, senior vice president of Atlanticus Holdings Corp., a specialty finance company focused on underserved consumers in the U.S., U.K., Guam and Saipan, had a similar experience when the company "lifted and shifted" its entire application portfolio to the cloud. "Every one of our applications performed as good or better than in our data center, which had hardware that was ten years old," he says.

In 2014, the company took all existing applications and ran them "as is" in the cloud environment. Atlanticus relied on consulting firm DISYS to not only validate Atlanticus' migration approach, but also to help staff a 24-hour, "follow the sun" implementation. "They enabled us to accelerate our timeline," he says. In addition, DISYS, an Amazon Web Services partner, lent its expertise to explain what would and wouldn't work in Amazon's cloud.

Atlanticus deployed a federated cloud topology distributed among Amazon Web Services, Microsoft Azure, Zadara cloud storage, InContact Automatic Call Distribution, and Vonage phone system, with applications sitting where they operate best -- such as Microsoft Active Directory on Azure. The company front-ends Amazon Web Services with a private cloud that handles security tasks including intrusion detection/prevention and packet inspection. "There is an absolute need for private cloud services to encapsulate a level of security and control that might not be available in the public cloud," Naderi says.

In its next phase of cloud migration, Atlanticus will assess whether its legacy applications have SaaS or other cloud-based alternatives that perform even better. Having moved everything to the cloud "as is," including legacy systems, the company will now look for stronger replacements for those legacy apps.

Oshkosh ran a similar exercise and found that cloud-based SharePoint outperformed on-premises SharePoint and improved functionality. For instance, the company has been able to create a space where external suppliers can interact with internal employees, safely exchanging critical information. "That was challenging for on-premises," Downer says.

He adds: "We also are using various CRM cloud applications within some segments, and have started to meet niche business requirements on the shop floor with cloud solutions."

Staffing the cloud

As organizations move to the cloud, they sometimes harbor the misconception that migration means they need fewer IT staff. These IT leaders say that's not the case. Instead, they've gotten more value out of their skilled workforce by retraining them to handle the demands of cloud services.

Greg Downer, senior IT director at specialty vehicle manufacturer Oshkosh Corp.: "We retrained our legacy people, which went well. For instance, we trained our BMC Remedy administrators on the ServiceNow SaaS. We're not just using 10% to 20% of a large on-premises investment, but getting the full value of the platform subscription we are paying for."

Kas Naderi, senior vice president of technology, specialty finance company Atlanticus Holdings Corp.: "Our staff used to be extended beyond the normal 40-hour week, handling ad-hoc requests, emergencies, upgrades, security, etc. We were blessed to have a very flexible and high-IQ staff and were happy to shift their day-to-day responsibilities away from upkeep and maintenance to leadership of how to best leverage these cloud-based platforms for better quality of service. We have become a lot more religious on operating system upgrades and security postures and a lot more strategic on documentation and predictability of services. We went from racking and stacking and maintaining the data center to a business purpose."

Ted Ross, general manager of information technology and CIO, city of Los Angeles: "Moving to the cloud requires a sizeable skills change, but it's also a force multiplier that lets fewer hands do a lot more. We're not a start-up; we're a legacy enterprise. Our data center had a particular set of processes and its own ecosystem and business model. We want to continue that professionalism, but make the pivot to innovative infrastructure. We still have to be smart about data, making sure it's encrypted at rest, and working through controls. The cloud expands our ecosystem considerably, but of course we still don't want to allow critical information into the hands of the wrong people." -- Sandra Gittlen


Link: http://www.computerworld.com


Cloud-Based Disaster Recovery on AWS

by System Administrator - Monday, 5 January 2015, 8:38 PM
 

Best Practices: Cloud-Based Disaster Recovery on AWS

This book explains Cloud-based Disaster Recovery in comparison to traditional DR, explains its benefits, discusses preparation tips, and provides an example of a globally recognized, highly successful Cloud DR deployment.

Please read the attached PDF

Using AWS for Disaster Recovery

by Jeff Barr

Disaster recovery (DR) is one of the most important use cases we hear about from our customers. Having your own DR site in the cloud, ready and on standby, without having to pay for the hardware, power, bandwidth, cooling, space and system administration, and being able to quickly launch resources in the cloud when you really need them (when disaster strikes in your data center), makes the AWS cloud a perfect solution for DR. You can quickly recover from a disaster and ensure business continuity for your applications while keeping your costs down.

Disaster recovery is about preparing for and recovering from a disaster. Any event that has a negative impact on your business continuity or finances could be termed a disaster. This could be hardware or software failure, a network outage, a power outage, physical damage to a building from fire or flooding, human error, or some other significant event.

In that regard, we are very excited to release the Using AWS for Disaster Recovery whitepaper. The paper highlights various AWS features and services that you can leverage for your DR processes and shows different architectural approaches for recovering from a disaster. Depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) – two commonly used industry terms when building your DR strategy – you have the flexibility to choose the right approach for your budget. The approaches range from simple backup and restore from the cloud to a full-scale multi-site solution deployed on-premises and in AWS, with data replication and mirroring.
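The RTO/RPO-driven choice the paper describes can be sketched as picking the cheapest approach that still meets both objectives. A minimal sketch, assuming illustrative hour thresholds for each tier (the tier names echo common DR patterns, but the numbers are made up for illustration, not AWS guidance):

```python
# Toy decision helper: tier names follow common DR patterns; the
# achievable-RTO/RPO thresholds are assumptions, not AWS figures.

def choose_dr_approach(rto_hours: float, rpo_hours: float) -> str:
    """Return the cheapest DR approach meeting both recovery objectives."""
    # (approach, best achievable RTO, best achievable RPO), cheapest first
    tiers = [
        ("backup and restore", 24.0, 24.0),
        ("pilot light", 12.0, 4.0),
        ("warm standby", 4.0, 1.0),
        ("multi-site active/active", 0.25, 0.1),
    ]
    for name, best_rto, best_rpo in tiers:
        # A tier qualifies if the best it can achieve is within the target.
        if best_rto <= rto_hours and best_rpo <= rpo_hours:
            return name
    raise ValueError("Targets tighter than any modeled tier")

print(choose_dr_approach(6, 2))    # → warm standby
print(choose_dr_approach(48, 48))  # → backup and restore
```

Tightening either objective pushes the choice toward the more expensive tiers; relaxing both lets simple backup and restore win on cost.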

The paper further provides recommendations on how you can improve your DR plan and leverage the full potential of AWS for your Disaster Recovery processes. 

The AWS cloud not only makes it cost-effective to do DR in the cloud, but also makes it easy, secure and reliable. With APIs and the right automation in place, you can fire up your DR solution, test whether it really works (every month, if you like) and be prepared ahead of time. You can reduce your recovery times by quickly provisioning pre-configured resources (AMIs) when you need them, or by cutting over to an already provisioned DR site (and then scaling gradually as needed). You can bake the necessary security best practices into an AWS CloudFormation template and provision the resources in an Amazon Virtual Private Cloud (VPC). All at a fraction of the cost of conventional DR.

Link: https://aws.amazon.com

AWS Architecture Blog

 


Cloudlet

by System Administrator - Tuesday, 14 February 2017, 11:31 AM
 

Cloudlet

Posted by: Margaret Rouse | Contributor(s): Kathleen Casey

A cloudlet is a small-scale data center or cluster of computers designed to quickly provide cloud computing services to mobile devices, such as smartphones, tablets and wearable devices, within close geographical proximity.

The goal of a cloudlet is to improve the response time of applications running on mobile devices by using low-latency, high-bandwidth wireless connectivity and by hosting cloud computing resources, such as virtual machines, physically closer to the mobile devices accessing them. This is intended to eliminate the wide area network (WAN) latency delays that can occur in traditional cloud computing models.

The cloudlet was specifically designed to support interactive and resource-intensive mobile applications, such as those for speech recognition, language processing, machine learning and virtual reality.
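The latency rationale above can be shown with a toy offload decision: measure round-trip times to nearby cloudlets and the distant cloud, then pick the fastest. The endpoint names and RTT figures below are hypothetical; a real client would measure them rather than hard-code them:

```python
# Sketch of a mobile client's offload choice. Nothing here is a real
# cloudlet API; the dict simply stands in for measured round-trip times.

def pick_offload_target(rtt_ms: dict, fallback: str = "public-cloud") -> str:
    """Return the endpoint with the lowest measured round-trip time."""
    if not rtt_ms:
        return fallback  # no cloudlet discovered; use the WAN path
    return min(rtt_ms, key=rtt_ms.get)

measured = {
    "cloudlet-campus": 4.0,   # one wireless hop from the device
    "cloudlet-cafe": 9.5,
    "public-cloud": 62.0,     # WAN round trip to a distant data center
}
print(pick_offload_target(measured))  # → cloudlet-campus
```

Under these made-up numbers the nearby cloudlet wins by an order of magnitude, which is exactly the gap an interactive, resource-intensive app is trying to close.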

 

Key differences between a cloudlet and a public cloud data center

A cloudlet is considered a form of cloud computing because it delivers hosted services to users over a network. However, a cloudlet differs from a public cloud data center, such as those operated by public cloud providers like Amazon Web Services, in a number of ways.

First, a cloudlet is self-managed by the businesses or users that employ it, while a public cloud data center is managed full-time by a cloud provider. Second, a cloudlet predominantly uses a local area network (LAN) for connectivity, versus the public internet. Third, a cloudlet serves fewer, more localized users than a major public cloud service. Finally, a cloudlet contains only "soft state" copies of data, such as a cache copy, or code that is stored elsewhere.

The cloudlet prototype

A prototype implementation of a cloudlet was originally developed by Carnegie Mellon University as a research project, starting in 2009. The term cloudlet was coined by computer scientists Mahadev Satyanarayanan, Victor Bahl, Ramón Cáceres and Nigel Davies.


Link: http://searchcloudcomputing.techtarget.com


Common Vulnerabilities and Exposures (CVE)

by System Administrator - Thursday, 30 April 2015, 11:07 PM
 

Common Vulnerabilities and Exposures (CVE)


Compliance Audit

by System Administrator - Saturday, 14 March 2015, 1:53 PM
 

Compliance Audit

Posted by Margaret Rouse

A compliance audit is a comprehensive review of an organization's adherence to regulatory guidelines. Independent accounting, security or IT consultants evaluate the strength and thoroughness of compliance preparations. Auditors review security policies, user access controls and risk management procedures over the course of a compliance audit.

What, precisely, is examined in a compliance audit will vary depending on whether an organization is a public or private company, what kind of data it handles, and whether it transmits or stores sensitive financial data. For instance, SOX requirements mean that any electronic communication must be backed up and secured with reasonable disaster recovery infrastructure. Healthcare providers that store or transmit e-health records, like personal health information, are subject to HIPAA requirements. Financial services companies that transmit credit card data are subject to PCI DSS requirements. In each case, the organization must be able to demonstrate compliance by producing an audit trail, often generated by data from event log management software.

Compliance auditors will generally ask CIOs, CTOs and IT administrators a series of pointed questions over the course of an audit. These may include what users were added and when, who has left the company, whether user IDs were revoked and which IT administrators have access to critical systems. IT administrators prepare for compliance audits using event log managers and robust change management software that allow tracking and documentation of authentication and controls in IT systems. The growing category of GRC (governance, risk management and compliance) software enables CIOs to quickly show auditors (and CEOs) that the organization is in compliance and will not be subject to costly fines or sanctions.
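One of the auditors' questions above, whether departing employees' user IDs were actually revoked, can be answered from an event trail. A minimal sketch, assuming a made-up log format rather than any particular event log manager's schema:

```python
# Hypothetical event records: each entry names a user and an action.
# Real audit trails would come from event log management software.

def unrevoked_departures(events: list) -> list:
    """List user IDs of departed users whose access was never revoked."""
    active = set()
    for e in events:
        if e["action"] == "add_user":
            active.add(e["user"])
        elif e["action"] == "revoke_user":
            active.discard(e["user"])
    departed = {e["user"] for e in events if e["action"] == "departure"}
    return sorted(departed & active)

log = [
    {"user": "alice", "action": "add_user"},
    {"user": "bob", "action": "add_user"},
    {"user": "alice", "action": "departure"},
    {"user": "alice", "action": "revoke_user"},
    {"user": "bob", "action": "departure"},
]
print(unrevoked_departures(log))  # → ['bob']
```

An empty result is what an auditor wants to see; any name in the list is a finding.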

 


Link: http://searchcompliance.techtarget.com


Conformity (QUALITY)

by System Administrator - Thursday, 9 May 2013, 1:02 AM
 

CONCEPTS RELATING TO CONFORMITY

Defect: Non-fulfilment of a requirement related to an intended or specified use.

Nonconformity: Non-fulfilment of a requirement.

Conformity: Fulfilment of a requirement.

Release: Authorization to proceed to the next stage of a process.

Preventive action: Action taken to eliminate the cause of a potential nonconformity or other potentially undesirable situation.

Corrective action: Action taken to eliminate the cause of a detected nonconformity or other undesirable situation.

Correction: Action taken to eliminate a detected nonconformity.

Rework: Action taken on a nonconforming product to make it conform to the requirements.

Repair: Action taken on a nonconforming product to make it acceptable for its intended use.

Regrading: Change of the grade of a nonconforming product so that it conforms to requirements different from the initial ones.

Scrapping: Action taken on a nonconforming product to preclude its originally intended use.

Concession: Authorization to use or release a product that does not conform to the specified requirements.

Deviation permit: Authorization to depart from the originally specified requirements of a product prior to its realization.


Connection Broker

by System Administrator - Monday, 9 March 2015, 3:07 AM
 

5 ways a Connection Broker Simplifies Hosted Environments

With all the moving parts to think about when moving resources into the data center, a connection broker might be the last thing on your mind.

Waiting until you've designed the rest of your data center to consider the connection broker can be detrimental to the overall usability of your system. 

This is why we've created our new eBook, which outlines five scenarios where including a connection broker into your design from the get-go can future-proof and improve your hosted desktop solution.  

Download our new eBook and learn about:

  • Supporting mixed virtual and physical environments
  • Migrating between virtualization and hosted desktop solutions
  • Supporting a wide range of users and use cases
  • And more!

Please read the attached whitepaper


Converged Infrastructure

by System Administrator - Monday, 30 November 2015, 5:20 PM
 

Achieve Your IT Vision With Converged Infrastructure

Whether you've already deployed a converged system or have future deployment plans, you can maximize that investment with automation. This paper outlines 4 steps to reduce your IT complexity with converged infrastructure so your team gains the freedom to innovate and drive bottom-line results.

 

Please read the attached whitepaper.


Converged Infrastructures Deliver the Full Value of Virtualization

by System Administrator - Sunday, 27 December 2015, 3:27 PM
 

Converged Infrastructures Deliver the Full Value of Virtualization

By Ravi Chalaka | Hitachi Data Systems

Satisfied with your virtualization efforts?
You could be.

How does an organization modernize IT and get more out of infrastructure resources? That’s a question many CIOs ask themselves. With hundreds or even thousands of physical hardware resources, increasing complexity and massive data growth, you need new, reliable ways to deliver IT services in an on-demand, flexible and scalable fashion. You also must address requests for faster delivery of business services, competition for resources and trade-offs between IT agility and vendor lock-in.

Please read the attached whitepaper.


CRM Handbook

by System Administrator - Thursday, 9 July 2015, 4:31 PM
 

 

Please read the attached handbook.


Crowdsourced Testing

by System Administrator - Wednesday, 29 March 2017, 9:40 PM
 

The solution to speedy mobile app delivery? It's crowdsourced testing

Crowdsourced Testing is a web platform that connects companies specializing in software and website development with an international network of quality assurance professionals (testers) who can test their products to find flaws and report them quickly and expeditiously so they can be fixed. The clients are the companies that pay for the service, and the users are the testers in charge of the improvements. Crowdsourced Testing's testers are independent workers operating from home, all with prior experience in quality assurance for software products.

The solution to speedy mobile app delivery? It's crowdsourced testing

 

Sometimes you just need a lot of users playing with your app to find out how it's really working. Enter crowdsourced testing. It's the latest strategy to speed up your mobile dev.

At a time when the pressure to develop, test and release mobile apps quickly has never been more intense, the idea of crowdsourced testing is growing in popularity. The concept is simple: a crowdsourced testing company can offer thousands of testers in different locations around the world, equipped with a wide swath of devices, and by literally throwing a "crowd" at the problem, testing that might take weeks with a small internal team can be done over a weekend, said Peter Blair, vice president of marketing at Applause. And it's an idea that has apparently caught hold. According to data from market research firm Gartner Group, there were 30 crowdsourced testing companies offering fully vetted (qualified) testers at the end of last year, up from just 20 companies in 2015.

Priyanka Halder, director of quality assurance at HomeMe, is no stranger to crowdsourced testing. She participated in a number of "bug battles" at uTest, a software testing community that also offers crowdsourced testing opportunities. So when she joined the small startup HomeMe, she immediately began thinking about a crowdsourced testing solution.

"We're a pretty small company and we needed a larger number of people looking at our app and on a tight budget," she said. "This is the perfect model for us because we can't afford a big team on our site."

"People just do things that no system, no automation and no engineer could ever predict they'd do."

Peter Blair, vice president of marketing, Applause

With crowdsourced testing it is all about the big team. Blair said Applause has over 250,000 fully vetted testers, most of whom are QA professionals with full-time jobs who do this on the side. These testers are located around the world, and are paired with "pretty much every mobile device you can think of," he said. So a crowdsourced customer wouldn't have to worry about having access to every single version of an Android phone, which Blair said is a huge selling point.

But the biggest issue, he said, is that companies are hungry to see how real users actually interface with their products. "People just do things that no system, no automation and no engineer could ever predict they'd do," he explained. "Customers who've used us just to augment their teams many times end up staying on because they like seeing the results of our exploratory testing," he said, and they can't get that information easily any other way.

Halder said she looked at a number of crowdsourced testing options before settling on Applause. The biggest plus for her was how easy it was to get the testing feedback and how mature the company's process was. "It can be a nightmare to coordinate how to get the information back from the testers. This ended up being a way for us to get more people actually using our app for less money and get all the feedback we need."

 


Link: http://searchsoftwarequality.techtarget.com

 

 

 


CROWDSOURCING FOR ENTERPRISE IT

by System Administrator - Tuesday, 14 July 2015, 6:45 PM
 

10 KEY QUESTIONS (AND ANSWERS) ON CROWDSOURCING FOR ENTERPRISE IT

A starting guide for augmenting technical teams with crowdsourced design, development and data science talent

A crowdsourcing platform is essentially an open marketplace for technical talent. The requirements, timelines and economics behind crowdsourced projects are critical to successful outcomes. Different crowdsourcing communities offer an equally varied range of payments for open innovation challenges. Crowdsourcing is meritocratic: contributions are rewarded based on value. However, the cost-efficiencies of a crowdsourced model reside in the model's direct access to talent, not in the compensated value for that talent; fair market value is expected for any work output. The major cost differences between legacy sourcing models and a crowdsourcing model are (1) the ability to directly tap into technical expertise, and (2) that costs are NOT based on time or effort.

Please read the attached whitepaper.


Customer Journey Map

by System Administrator - Thursday, 21 September 2017, 7:00 PM
 

Customer journey map

