KW Glossary
Ontology Design
C (DATA CENTER)
C (OPEN SOURCE)
C (WEB SERVICES)
CABLING: A KEY ELEMENT OF THE DATA CENTER OF THE FUTURE
The data center of the future will depend on every aspect of the network, including the often-overlooked cabling plant, to operate at an optimal level while occupying a minimum of valuable space. Administrators and owners need to review how they can use the physical layer to support future growth or, conversely, how the physical cabling layer can impede that growth. Read on to learn why cabling matters to the data center of the future.
Please read the attached whitepaper.
Cadena de Bloques (Blockchain)
Formation of a blockchain. The main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside the main chain.
A blockchain, also known as a BC (from the English term "blockchain"), is a distributed database formed of chains of blocks designed to prevent modification of the data once published, using trusted timestamping and a link to the previous block. This makes it especially suitable for storing ever-growing, time-ordered data without any possibility of modification or revision. This approach has several aspects:
The blockchain concept was first applied in 2009 as part of Bitcoin.
The data stored in a blockchain are usually transactions (e.g., financial ones), which is why the data are often simply called transactions. However, they need not be; what is really recorded can be regarded as atomic changes to the state of the system. For example, a blockchain can be used to timestamp documents and secure them against tampering.
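The tamper-evidence property described above can be sketched in code. The following is an illustrative hash-linked chain, not Bitcoin's actual implementation (it omits mining and consensus entirely): each block's hash covers its timestamp, its data and the previous block's hash, so rewriting any published block breaks verification of everything after it.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class HashChain {

    record Block(long timestamp, String data, String prevHash, String hash) {}

    // SHA-256 of a string, hex-encoded.
    static String sha256(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Append a block whose hash covers its timestamp, its data and the hash
    // of the previous block -- the link that makes the chain tamper-evident.
    static Block append(List<Block> chain, long timestamp, String data) throws Exception {
        String prev = chain.isEmpty() ? "0" : chain.get(chain.size() - 1).hash();
        Block b = new Block(timestamp, data, prev, sha256(timestamp + data + prev));
        chain.add(b);
        return b;
    }

    // Recompute every hash and link; any edited block breaks verification.
    static boolean verify(List<Block> chain) throws Exception {
        String prev = "0";
        for (Block b : chain) {
            if (!b.prevHash().equals(prev)
                    || !b.hash().equals(sha256(b.timestamp() + b.data() + b.prevHash()))) {
                return false;
            }
            prev = b.hash();
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        List<Block> chain = new ArrayList<>();
        append(chain, 1L, "genesis");
        append(chain, 2L, "pay Alice 10");
        System.out.println(verify(chain)); // true
        // Rewriting history invalidates the stored hash of block 0.
        chain.set(0, new Block(1L, "pay Mallory 10", "0", chain.get(0).hash()));
        System.out.println(verify(chain)); // false
    }
}
```

A real blockchain adds a consensus rule (such as proof of work) on top of this linking, so that no single party decides which chain of valid blocks is the authoritative one.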
The blockchain concept is used in the following fields:
Blockchains can be classified according to access to the data stored in them:
Both types should be regarded as extreme cases; intermediate cases are possible.
Blockchains can also be classified according to the permissions required to generate blocks in them:
Public blockchains can be permissionless (e.g., Bitcoin) or permissioned (e.g., federated sidechains). Private blockchains must be permissioned. Permissioned blockchains need not be private, however, since there are different ways of accessing a blockchain's data, for example:
While the third form of access in a permissioned blockchain is restricted to a limited set of entities, it is not obvious that the other forms of access should be restricted too. For example, a blockchain for financial institutions would be permissioned, but it could:
A sidechain (cadena lateral) is a blockchain that validates data from another blockchain, called the main chain. Its main use is to provide new functionality, possibly still in a testing phase, while relying on the trust offered by the main blockchain. Sidechains work in a way similar to how traditional currencies once operated under the gold standard.
An example of a blockchain that uses sidechains is Lisk. Because of Bitcoin's popularity and the enormous strength its network derives from its proof-of-work consensus algorithm, there is interest in using it as a main blockchain and building pegged sidechains that rely on it. A pegged sidechain is a sidechain whose assets can be imported from and exported to the other chain. Chains of this kind can be achieved in the following ways:
QUALITY-RELATED CONCEPTS
Requirement: Need or expectation that is stated, generally implied or obligatory.
Grade: Category or rank given to different quality requirements for products, processes or systems having the same functional use.
Quality: Degree to which a set of inherent characteristics fulfils requirements.
Capability: Ability of an organization, system or process to realize a product that fulfils the requirements for that product.
Customer satisfaction: Customer's perception of the degree to which the customer's requirements have been fulfilled.
Knowledge management is a driver of change. More than that, it is "the guarantee of change, of adaptability/scalability, of dynamism and of the creation of new products and services."
The facts indicate, then, that the future of software will depend on its capacity to manage the End User's knowledge, not on locking that knowledge away in a closed development process (whether open source or proprietary). Underscoring this point are two telling facts: the advance of Linux, and the collapse of the initial Internet model, built on "portals" of "dead" content whose financial return simply never materialized. Who has survived? Those who bill (Amazon, MedLine) or those tied to software companies and/or subsidized projects (leaving aside sites maintained purely for strategic reasons).
What do we have in hand? Our project addresses both the new technological requirements and the reasonable expectations of those who will invest in this idea and want a return in the medium term.
International consulting firms agree on three major paradigms that clearly justify investment in KW technology:
We add a fourth paradigm, one that is already part of the strategies of several multinationals:
Experience has confirmed again and again that End Users are most satisfied when they acquire a solid tool that can capture their know-how simply and effectively, and that can survive technological obsolescence as abrupt as today's.
It is only a matter of time. In Latin America, the boom in refurbished/recertified equipment is putting functionality ahead of raw power; in practice, the extra power of new equipment goes unused.
CONCEPTS RELATING TO CHARACTERISTICS
Characteristic: Distinguishing feature.
Dependability: Collective term used to describe availability performance and its influencing factors: reliability performance, maintainability performance and maintenance support performance.
Traceability: Ability to trace the history, application or location of that which is under consideration.
Quality characteristic: Inherent characteristic of a product, process or system related to a requirement.
Posted by: Margaret Rouse
A case study, in a business context, is a report of an organization's implementation of something, such as a practice, a product, a system or a service.
The case study can be thought of as a real-world test of how the implementation works, and how well it works. If documentation is sufficiently comprehensive, a case study should yield valuable information about the costs and benefits, both financial and otherwise. The organization itself and external parties can use case studies to gain more information about the specific implementation that can also help guide decisions about similar projects.
An organization might deploy, for example, a new business intelligence (BI) platform. A case study of its implementation would involve an exploration of each stage of the implementation, lessons learned along the way and the ultimate effects.
A case study is often compiled by an external party, perhaps for publication. Less formally, a case study may just be a content item, such as a brief article, discussing an organization's business implementation of something or a decision of some sort and one or more significant results.
Column database management system (CDBMS) definition
Posted by Margaret Rouse
There are different types of CDBMS offerings, with the common defining feature being that data is stored by column (or column families) instead of as rows.
In a relational database, data access is always by row. Changing the focus from the row to the column can improve performance when large amounts of data are aggregated in a few columns. Generally speaking, a row-oriented focus is preferable for online transaction processing (OLTP) systems and a column-oriented focus is preferable for online analytical processing (OLAP) systems. Column stores are not useful for systems with wildly varying queries or for supporting ACID transactions.
CDBMSes are well-suited for use cases in which writes are uncommon and applications need to access a few columns of many rows at once. For example, column databases are well-suited for data marts that query large amounts of data aggregated for a small number of columns. The CDBMS can speed up analytical queries of the data in the data mart because it can focus on just the columns that need to be read and doesn't need to read through thousands of rows.
Column stores are also useful when data has an expiration date, because it is possible to set up a column so it will expire automatically after a certain date. Because the data stored in a column is typically similar, and the columns are stored adjacent to one another, some CDBMSes can compress data and help storage capacity to be used more efficiently.
The concept of a column store is not new, and variations of the idea have been implemented as part of relational databases in the past. NoSQL and relational column stores both focus on the column as the unit of storage, but NoSQL column stores permit columns to differ across column family rows, which is not permitted in a relational database.
A CDBMS may also be called a columnar database management system, column-oriented database management system or column store database management system.
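The aggregation advantage described above can be sketched in a few lines. This toy comparison is illustrative only (the table, field names and data are hypothetical): summing one field over a row layout touches every whole record, while a column layout scans a single contiguous array.

```java
import java.util.List;

public class ColumnVsRow {

    // Row-oriented: each record stores all of its fields together.
    record Sale(String region, String product, double amount) {}

    // Aggregating one field over a row store walks every whole row.
    static double sumAmountsRowStore(List<Sale> rows) {
        double total = 0;
        for (Sale s : rows) {
            total += s.amount(); // the rest of the row is loaded but unused
        }
        return total;
    }

    // Column-oriented: the same field lives in one contiguous array,
    // so an aggregate scans only the values it actually needs.
    static double sumAmountsColumnStore(double[] amountColumn) {
        double total = 0;
        for (double a : amountColumn) {
            total += a;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Sale> rows = List.of(
                new Sale("EU", "widget", 10.0),
                new Sale("US", "widget", 20.0),
                new Sale("US", "gadget", 30.0));
        double[] amounts = {10.0, 20.0, 30.0};
        System.out.println(sumAmountsRowStore(rows));       // 60.0
        System.out.println(sumAmountsColumnStore(amounts)); // 60.0
    }
}
```

On disk the same effect is larger still: a column store reads only the pages holding the queried column, while a row store must read pages containing every field of every row.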
CHANGES FOR JAVA: LAMBDAS IN JAVA 8
by Jason Tee
EMBRACING LAMBDA AND JAVA 8? HOW THE JAVA LANDSCAPE WILL CHANGE IN 2014
While the latest changes introduced into the Java language are more iterative than transformative, the sheer size of the Java world means there's so much going on that it really can be tough to keep up. Fortunately, TheServerSide had a chance to check in recently with a number of high-profile Java experts, including Kirk Pepperdine and Adam Bien, to see what they like about the current state of Java, what their concerns are for the future, and what short-term challenges and achievements the Java community can expect throughout 2014. Here are a few of the insights they shared...
Please read the attached eGuide.
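As a concrete illustration of the lambda syntax introduced in Java 8, here is a small before-and-after sketch (the names and data are illustrative): the same comparator written first as a pre-Java 8 anonymous inner class, then as a lambda expression, followed by a stream pipeline that composes lambdas and a method reference.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {

    static String demo() {
        List<String> names = new ArrayList<>(Arrays.asList("Pepperdine", "Bien", "Tee"));

        // Before Java 8: a comparator as an anonymous inner class.
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8: the same comparator as a lambda expression.
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        // Lambdas also compose with the Java 8 Streams API.
        return names.stream()
                .filter(n -> n.length() > 3)  // drop short names
                .map(String::toUpperCase)     // method reference
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // BIEN, PEPPERDINE
    }
}
```

The lambda carries the same behavior as the anonymous class with far less ceremony, which is why so much Java 8 discussion centers on it.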
CISO (chief information security officer)
Posted by: Margaret Rouse | Contributor(s): Emily McLaughlin, Taina Teravainen
The CISO (chief information security officer) is a senior-level executive responsible for developing and implementing an information security program, which includes procedures and policies designed to protect enterprise communications, systems and assets from both internal and external threats. The CISO may also work alongside the chief information officer to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.
The chief information security officer may also be referred to as the chief security architect, the security manager, the corporate security officer or the information security manager, depending on the company's structure and existing titles. While the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the chief security officer (CSO).
CISO role and responsibilities
Instead of waiting for a data breach or security incident, the CISO is tasked with anticipating new threats and actively working to prevent them from occurring. The CISO must work with other executives across different departments to ensure that security systems are working smoothly to reduce the organization's operational risks in the face of a security attack.
The chief information security officer's duties may include conducting employee security awareness training, developing secure business and communication practices, identifying security objectives and metrics, choosing and purchasing security products from vendors, ensuring that the company is in compliance with the rules of relevant regulatory bodies, and enforcing adherence to security practices.
Other duties and responsibilities CISOs perform include ensuring the company's data privacy is secure, managing the Computer Security Incident Response Team and conducting electronic discovery and digital forensic investigations.
CISO qualifications and certifications
A CISO is typically an individual who is able to effectively lead and manage employees and who has a strong understanding of information technology and security, but who can also communicate complicated security concepts to technical and nontechnical employees. CISOs should have experience with risk management and auditing.
Many companies require CISOs to have advanced degrees in business, computer science or engineering, and to have extensive professional working experience in information technology. CISOs also typically have relevant certifications such as Certified Information Systems Auditor and Certified Information Security Manager, issued by ISACA, as well as Certified Information Systems Security Professional, offered by (ISC)2.
According to the U.S. Bureau of Labor Statistics, computer and information systems managers, including CISOs, earned a median annual salary of $131,600 as of May 2015. According to Salary.com, the annual median CISO salary is $197,362. CISO salaries appear to be increasing steadily, according to research from IT staffing firms. In 2016, IT staffing firm SilverBull reported the median CISO salary had reached $224,000.
CISSP: Certified Information Systems Security Professional
Posted by Margaret Rouse
The Certified Information Systems Security Professional (CISSP) is an information security certification that was developed by the International Information Systems Security Certification Consortium, also known as (ISC)².
The Certified Information Systems Security Professional (CISSP) exam is designed to ensure that someone handling computer security for a company or client has mastered a standardized body of knowledge. The six-hour exam, which asks 250 questions, certifies security professionals in ten different areas:
The exam is designed for professionals with a minimum of 3-5 years of experience.
Client-server architecture
This architecture consists basically of a client that makes requests to another program (the server), which responds to them. Although the idea can be applied to programs running on a single computer, it is more advantageous in a multiuser operating system distributed across a network of computers. Processing capacity is shared between clients and servers. The organizational advantages are very important: centralizing information management and separating responsibilities simplifies and clarifies the design of the system.
The separation between client and server is a logical one; the server does not necessarily run on a single machine, nor is it necessarily a single program. Specific types of servers include web servers, file servers, mail servers, and so on. While their purposes vary from service to service, the basic architecture remains the same. A very common arrangement is the multi-tier system, in which the server is decomposed into different programs that can be run on different computers, thereby increasing the degree of distribution of the system. The client-server architecture replaces the monolithic architecture, in which there is no distribution at either the physical or the logical level.
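The request/response pattern described above can be sketched with plain TCP sockets. This is a minimal illustration, not a production server: the one-line echo "protocol" is invented for the example, and the server handles a single connection.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {

    // Server side: accept one connection, read one request line, answer it.
    static void serveOnce(ServerSocket server) throws IOException {
        try (Socket conn = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            String request = in.readLine();
            out.println("echo: " + request); // the server's response
        }
    }

    // Client side: connect, send a request, return the server's response.
    static String request(int port, String msg) throws IOException {
        try (Socket client = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // bind to any free port
        Thread serverThread = new Thread(() -> {
            try {
                serveOnce(server);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        serverThread.start();
        System.out.println(request(server.getLocalPort(), "hello")); // echo: hello
        serverThread.join();
        server.close();
    }
}
```

Note that the separation is purely logical: here both roles happen to run in one process, yet the client would work unchanged against a server on another machine.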
Terms related to cloud computing, including definitions about on-demand, distributed computing and words and phrases about software-as-a-service, infrastructure-as-a-service and storage-as-a-service.
Cloud IoT and IT Security
More organizations are deploying Internet of Things devices and platforms to improve efficiency, enhance customer service, open up new business opportunities and reap other benefits. But the IoT can expose enterprises to new security threats, with every connected object becoming a potential entry point for attackers.
This eBook will discuss:
Please read the attached ebook.
Cloud Mechanics: Delivering Performance in Shared Environments
Expedient Data Centers, a leader in Managed and Data Center Services with locations from Cleveland to Memphis to Boston, unpacks the mechanics of how it consistently meets Service Level Agreements for its customers. This whitepaper explores how service providers use VMTurbo to provide consistent performance across all workloads, as well as the three roles a responsible managed service provider (MSP) takes in order to accomplish that directive.
Please read the attached whitepaper.
Posted by Margaret Rouse
A cloud orchestrator is software that manages the interconnections and interactions among cloud-based and on-premises business units. Cloud orchestrator products use workflows to connect various automated processes and associated resources. The products usually include a management portal.
To orchestrate something is to arrange various components so they achieve a desired result. In an IT context, this involves combining tasks into workflows so the provisioning and management of various IT components and their associated resources can be automated. This endeavor is more complex in a cloud environment because it involves interconnecting processes running across heterogeneous systems in multiple locations.
Cloud orchestration products can simplify the intercomponent communication and connections to users and other apps and ensure that links are correctly configured and maintained. Such products usually include a Web-based portal so that orchestration can be managed through a single pane of glass.
When evaluating cloud orchestration products, it is recommended that administrators first map the workflows of the applications involved. This step will help the administrator visualize how complicated the internal workflow for the application is and how often information flows outside the set of app components. This, in turn, can help the administrator decide which type of orchestration product will help automate workflow best and meet business requirements in the most cost-effective manner.
Orchestration, in an IT context, is the automation of tasks involved with managing and coordinating complex software and services. Processes and transactions have to cross multiple organizations, systems and firewalls.
The goal of cloud orchestration is to, insofar as is possible, automate the configuration, coordination and management of software and software interactions in such an environment. The process involves automating workflows required for service delivery. Tasks involved include managing server runtimes, directing the flow of processes among applications and dealing with exceptions to typical workflows.
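As a toy illustration of "combining tasks into workflows," the following runs named automated steps in order and logs each result. The step names are hypothetical, and a real orchestrator would invoke cloud-provider APIs in each action and deal with exceptions to the typical workflow.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class WorkflowDemo {

    // A workflow step: a name plus an automated action.
    record Step(String name, Supplier<String> action) {}

    // Run the steps in order and collect a log -- a toy stand-in for the
    // workflow engine at the heart of an orchestrator.
    static List<String> run(List<Step> workflow) {
        List<String> log = new ArrayList<>();
        for (Step step : workflow) {
            log.add(step.name() + ": " + step.action().get());
        }
        return log;
    }

    public static void main(String[] args) {
        // Hypothetical provisioning workflow for one application.
        List<Step> provisionApp = List.of(
                new Step("provision-vm", () -> "ok"),
                new Step("configure-network", () -> "ok"),
                new Step("deploy-app", () -> "ok"));
        run(provisionApp).forEach(System.out::println);
    }
}
```

Mapping an application's workflow into explicit steps like this is exactly the exercise the evaluation advice above recommends before choosing an orchestration product.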
Vendors of cloud orchestration products include Eucalyptus, Flexiant, IBM, Microsoft, VMware and V3 Systems.
The term “orchestration” originally comes from the study of music, where it refers to the arrangement and coordination of instruments for a given piece.
Cloud vs. on-premises
Cloud vs. on-premises: Finding the right balance
The process of figuring out which apps work in the cloud vs. on-premises doesn't yield the same results for everyone.
Greg Downer, senior IT director at Oshkosh Corp., a manufacturer of specialty heavy vehicles in Oshkosh, Wisc., wishes he could tip the balance of on-premises vs. cloud more in the direction of the cloud, which currently accounts for only about 20% of his application footprint. However, as a contractor for the Department of Defense, his company is beholden to strict data requirements, including where data is stored.
"Cloud offerings have helped us deploy faster and reduce our data center infrastructure, but the main reason we don't do more in the cloud is because of strict DoD contract requirements for specific types of data," he says.
In Computerworld's Tech Forecast 2017 survey of 196 IT managers and leaders, 79% of respondents said they have a cloud project underway or planned, and 58% of those using some type of cloud-based system gave their efforts an A or B in terms of delivering business value.
Downer counts himself among IT leaders bullish on the cloud and its potential for positive results. "While we don't have a written cloud-first statement, when we do make new investments we look at what the cloud can offer," he says.
Oshkosh has moved some of its back-office systems, including those supporting human resources, legal and IT, to the cloud. He says most of the cloud migration has been from legacy systems to software as a service (SaaS). For instance, the organization uses ServiceNow's SaaS for IT and will soon use it for facilities management.
According to the Forecast report, a third of respondents plan to increase spending on SaaS in the next 12 months.
Cordell Schachter, CTO of New York City's Department of Transportation, says he allies with the 22% of survey respondents who plan to increase investments in a hybrid cloud computing environment. The more non-critical applications he moves out of the city's six-year-old data center, the more room he'll have to support innovative new projects such as the Connected Vehicle Pilot Deployment Program, a joint effort with the U.S. Department of Transportation's Intelligent Transportation Systems Joint Program Office.
The Connected Vehicle project, in the second year of a five-year pilot, aims to use dedicated short-range communication coupled with a network of in-vehicle and roadway sensors to automatically notify drivers of connected vehicles of traffic issues. "If there is an incident ahead of you, your car will either start braking on its own or you'll get a warning light saying there's a problem up ahead so you can avoid a crash," Schachter says. The program's intent is to reduce the more than 30,000 vehicle fatalities that occur in the U.S. each year.
Supporting that communication network and the data it generates will require more than the internal data center, though. Schachter says the effort will draw on a hybrid of on-premises and cloud-based applications and infrastructure. He expects to tap a combination of platform as a service, infrastructure as a service, and SaaS to get to the best of breed for each element of the program.
"We can use the scale of cloud providers and their expertise to do things we wouldn't be able to do internally," he says, adding that all providers must meet NYC DOT's expectations of "safer, faster, smarter and cheaper."
Apps saved for on-premises
In fact, Schachter has walled off only a few areas that aren't candidates for the cloud -- such as emergency services and email. "NYC DOT is one of the most sued entities in New York City, and we constantly need to search our corpus of emails. We have shown a net positive by keeping that application on-premises to satisfy Freedom of Information Law requests as well as litigation," he says.
The City of Los Angeles also has its share of applications that are too critical to go into the cloud, according to Ted Ross, CIO and general manager of the city's Information Technology Agency. For instance, supervisory control and data acquisition (SCADA), 911 Dispatch, undercover police operations, traffic control and wastewater management are the types of data sets that will remain on-premises for the foreseeable future.
"The impact of an abuse is so high that we wouldn't consider these applications in our first round of cloud migrations. As you can imagine, it's critical that a hacker not gain access to release sewage into the ocean water or try to turn all streetlights green at the same time," he says.
The cloud does serve as an emergency backup to the $108 million state-of-the-art emergency operations center. "If anything happens to the physical facility, our software, mapping and other capabilities can quickly spin up in the cloud," he says, adding that Amazon Web Services and Microsoft Azure provide many compelling use cases.
The city, with more than 1,000 virtual servers on-premises, considers the cloud a cost-effective godsend. "We very much embrace the cloud because it provides an opportunity to lower costs, makes us more flexible and agile, offers off-site disaster recovery, empowers IT personnel, and provides a better user experience," he says.
As an early adopter of Google's Gmail in 2010, Ross appreciates the value of the cloud, so much so that in 2014, the city made cloud a primary business model, starting with SaaS, which he calls "a gateway drug to other cloud services."
Eventually, the city ventured into infrastructure as a service, including using "a lot of Amazon Web Services," which Ross describes as more invasive than SaaS and more in need of collaboration between the service provider and the network team. "You have to be prepared to have a shared security model and to take the necessary steps to enact it," he says. Cloud computing also requires additional network bandwidth to reduce latency and maximize performance, he adds.
Other reasons for saying no to the cloud
As much as Ross is a cloud promoter, he says he fully understands the 21% of respondents to Computerworld's Forecast survey who say they have no plans to move to the cloud. "I get worried when users simply want to spin up anything anywhere and are only concerned about functionality, not connectivity and security."
Ron Heinz, founder and managing director of venture capital firm Signal Peak Ventures, says there will always be a market for on-premises applications and infrastructure. For instance, one portfolio client that develops software for accountants found that 40% of its market doesn't want to move its workflow to the cloud.
Heinz attributes the hesitation to more mature accounting professionals and to those with security concerns. "Everybody automatically assumes there is a huge migration to the cloud. But there will always be a segment that will never go to the cloud as long as you have strong virtual private networks and strong remote access with encrypted channels," he says.
Greg Collins, founder and principal analyst at analyst firm Exact Ventures, has found clients usually stick with on-premises when they are still depreciating their servers and other gear. "They have the attitude 'if it ain't broke, don't fix it,'" he says.
Still, he believes the cloud is in its early days and will only grow as the installed base of on-premises equipment reaches end of life.
"We have seen a significant shift in the last couple of years in the interest for public cloud," says Matthew L. Taylor, managing director of consulting firm Accenture Strategy. Accenture, a company of more than 394,000 employees, has most of its own applications hosted in the public cloud.
Many of his clients are not moving as fast. "I wouldn't say the majority of our clients' application loads are in the public cloud today; that's still the opportunity," he says.
Of the clients that have moved to the cloud, very few have gone back to on-premises. "If they did, it wasn't because the cloud-based capabilities were not ready; it was because the company wasn't ready and hadn't thought the migration, application or value case through," Taylor says, adding that others who floundered did so because they couldn't figure out how to wean off their legacy infrastructure and run it in tandem with the cloud.
Most of his clients have been surprised to find that lower service costs have not been the biggest benefit of the cloud. "In the end, savings don't come from technology tools, they come from operational shifts and performance gains," he says.
For instance, a bank in Australia that he wouldn't name moved a critical application to the cloud but had two other applications on-premises, causing performance problems. The performance problems arose because the cloud app relied heavily on the on-premises applications, so performance was slowed as they tried to communicate with one another. Once the bank moved all three applications to the cloud, it found the applications had never performed better, and downtime and maintenance improved.
Kas Naderi, senior vice president of Atlanticus Holdings Corp., a specialty finance company focused on underserved consumers in the U.S., U.K., Guam and Saipan, had a similar experience when the company "lifted and shifted" its entire application portfolio to the cloud. "Every one of our applications performed as well as or better than in our data center, which had hardware that was ten years old," he says.
In 2014, the company took all existing applications and ran them "as is" in the cloud environment. Atlanticus relied on consulting firm DISYS to not only validate Atlanticus' migration approach, but also to help staff a 24-hour, "follow the sun" implementation. "They enabled us to accelerate our timeline," he says. In addition, DISYS, an Amazon Web Services partner, lent its expertise to explain what would and wouldn't work in Amazon's cloud.
Atlanticus deployed a federated cloud topology distributed among Amazon Web Services, Microsoft Azure, Zadara cloud storage, InContact Automatic Call Distribution, and Vonage phone system, with applications sitting where they operate best -- such as Microsoft Active Directory on Azure. The company front-ends Amazon Web Services with a private cloud that handles security tasks including intrusion detection/prevention and packet inspection. "There is an absolute need for private cloud services to encapsulate a level of security and control that might not be available in the public cloud," Naderi says.
In its next phase of cloud migration, Atlanticus will assess whether the legacy applications it moved to the cloud "as is" have SaaS or other cloud-based alternatives that perform even better.
Oshkosh ran a similar exercise and found that cloud-based SharePoint outperformed on-premises SharePoint and improved functionality. For instance, the company has been able to create a space where external suppliers can interact with internal employees, safely exchanging critical information. "That was challenging for on-premises," Downer says.
He adds: "We also are using various CRM cloud applications within some segments, and have started to meet niche business requirements on the shop floor with cloud solutions."
Staffing the cloud
As organizations move to the cloud, they sometimes harbor the misconception that migration means they need fewer IT staff. These IT leaders say that's not the case. Instead, they've gotten more value out of their skilled workforce by retraining them to handle the demands of cloud services.
Greg Downer, senior IT director at specialty vehicle manufacturer Oshkosh Corp.: "We retrained our legacy people, which went well. For instance, we trained our BMC Remedy administrators on the ServiceNow SaaS. We're not just using 10% to 20% of a large on-premises investment, but getting the full value of the platform subscription we are paying for."
Kas Naderi, senior vice president of technology, specialty finance company Atlanticus Holdings Corp.: "Our staff used to be extended beyond the normal 40-hour week, handling ad-hoc requests, emergencies, upgrades, security, etc. We were blessed to have a very flexible and high-IQ staff and were happy to shift their day-to-day responsibilities away from upkeep and maintenance to leadership of how to best leverage these cloud-based platforms for better quality of service. We have become a lot more religious on operating system upgrades and security postures and a lot more strategic on documentation and predictability of services. We went from racking and stacking and maintaining the data center to a business purpose."
Ted Ross, general manager of information technology and CIO, city of Los Angeles: "Moving to the cloud requires a sizeable skills change, but it's also a force multiplier that lets fewer hands do a lot more. We're not a start-up; we're a legacy enterprise. Our data center had a particular set of processes and its own ecosystem and business model. We want to continue that professionalism, but make the pivot to innovative infrastructure. We still have to be smart about data, making sure it's encrypted at rest, and working through controls. The cloud expands our ecosystem considerably, but of course we still don't want to allow critical information into the hands of the wrong people."

-- Sandra Gittlen
Cloud-Based Disaster Recovery on AWS
Best Practices: Cloud-Based Disaster Recovery on AWS
This book compares cloud-based disaster recovery with traditional DR, explains its benefits, discusses preparation tips, and provides an example of a globally recognized, highly successful cloud DR deployment.
Using AWS for Disaster Recovery
Disaster recovery (DR) is one of the most important use cases we hear about from our customers. With AWS, you can keep a DR site in the cloud ready and on standby without paying for hardware, power, bandwidth, cooling, space or system administration, and quickly launch resources when you really need them (when disaster strikes your data center). This makes the AWS cloud the perfect solution for DR: you can recover quickly and ensure business continuity for your applications while keeping your costs down.
Disaster recovery is about preparing for and recovering from a disaster. Any event that has a negative impact on your business continuity or finances could be termed a disaster. This could be hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, human error, or some other significant disaster.
In that regard, we are very excited to release the Using AWS for Disaster Recovery whitepaper. The paper highlights various AWS features and services that you can leverage in your DR processes and shows different architectural approaches for recovering from a disaster. Depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO), two commonly used industry terms when building a DR strategy, you have the flexibility to choose the approach that fits your budget. The approaches range from simple backup and restore from the cloud to a full-scale multi-site solution deployed across your own site and AWS, with data replication and mirroring.
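The idea of matching a DR approach to RTO and RPO targets can be illustrated with a small sketch. The tier names and hour thresholds below are hypothetical illustrations, not figures from the AWS whitepaper:

```python
# Illustrative sketch: picking a DR approach from RTO/RPO targets.
# Tier names and thresholds are invented examples, not AWS recommendations.

def choose_dr_approach(rto_hours: float, rpo_hours: float) -> str:
    """Return a DR strategy tier for the given recovery targets.

    RTO: how long the business can tolerate being down.
    RPO: how much data (measured in time) the business can afford to lose.
    """
    if rto_hours <= 1 and rpo_hours <= 1:
        return "multi-site (active/active, data mirrored to AWS)"
    if rto_hours <= 4:
        return "warm standby (scaled-down copy running in AWS)"
    if rto_hours <= 24:
        return "pilot light (core services replicated, rest launched on demand)"
    return "backup and restore (cheapest; restore from cloud backups)"

print(choose_dr_approach(0.5, 0.25))  # tight targets favor multi-site
print(choose_dr_approach(48, 24))     # loose targets favor backup and restore
```

The point of the sketch is the trade-off the paper describes: the tighter the RTO/RPO, the more standing infrastructure (and cost) the approach requires.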
The paper further provides recommendations on how you can improve your DR plan and leverage the full potential of AWS for your Disaster Recovery processes.
The AWS cloud not only makes it cost-effective to do DR in the cloud but also makes it easy, secure and reliable. With APIs and the right automation in place, you can fire up and test whether your DR solution really works (and do that every month, if you like) and be prepared ahead of time. You can reduce your recovery times by quickly provisioning pre-configured resources (AMIs) when you need them, or by cutting over to an already provisioned DR site and then scaling gradually as you need. You can bake the necessary security best practices into an AWS CloudFormation template and provision the resources in an Amazon Virtual Private Cloud (VPC). All at a fraction of the cost of conventional DR.
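As a sketch of that last point, a CloudFormation template fragment along these lines could define a VPC and a restrictive security group for the DR environment. The resource names and CIDR ranges here are invented examples, not AWS-recommended settings:

```yaml
# Minimal illustrative CloudFormation fragment for a DR VPC.
# All names and address ranges are examples only.
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of a DR VPC with a restrictive security group
Resources:
  DrVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  DrAdminSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH only from a trusted corporate range
      VpcId: !Ref DrVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.0/24   # example trusted range (TEST-NET-3)
```

Because the template is declarative, the same locked-down environment can be re-provisioned on demand during a DR test or an actual failover.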
AWS Architecture Blog
A cloudlet is a small-scale data center or cluster of computers designed to quickly provide cloud computing services to mobile devices, such as smartphones, tablets and wearable devices, within close geographical proximity.
The goal of a cloudlet is to improve the response time of applications running on mobile devices by using low-latency, high-bandwidth wireless connectivity and by hosting cloud computing resources, such as virtual machines, physically closer to the mobile devices accessing them. This is intended to eliminate the wide area network (WAN) latency delays that can occur in traditional cloud computing models.
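A toy response-time model makes the latency argument concrete. All of the numbers below (round-trip times, bandwidth, payload size, compute time) are illustrative assumptions, not measurements:

```python
# Toy model contrasting a nearby cloudlet with a distant cloud data center.
# All numeric values are illustrative assumptions, not measurements.

def response_time_ms(network_rtt_ms: float, payload_kb: float,
                     bandwidth_mbps: float, compute_ms: float) -> float:
    """One request/response round trip: network RTT + transfer time + compute."""
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return network_rtt_ms + transfer_ms + compute_ms

# Hypothetical speech-recognition request: 200 KB of audio, 50 ms of compute.
cloudlet = response_time_ms(network_rtt_ms=5, payload_kb=200,
                            bandwidth_mbps=300, compute_ms=50)
cloud = response_time_ms(network_rtt_ms=80, payload_kb=200,
                         bandwidth_mbps=50, compute_ms=50)
print(f"cloudlet: {cloudlet:.0f} ms, distant cloud: {cloud:.0f} ms")
```

Under these assumptions the cloudlet answers in well under half the time, which is the kind of gap that matters for the interactive applications the cloudlet targets.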
The cloudlet was specifically designed to support interactive and resource-intensive mobile applications, such as those for speech recognition, language processing, machine learning and virtual reality.
Key differences between a cloudlet and a public cloud data center
A cloudlet is considered a form of cloud computing because it delivers hosted services to users over a network. However, a cloudlet differs from a public cloud data center, such as those operated by public cloud providers like Amazon Web Services, in a number of ways.
First, a cloudlet is self-managed by the businesses or users that employ it, while a public cloud data center is managed full-time by a cloud provider. Second, a cloudlet predominantly uses a local area network (LAN) for connectivity, rather than the public Internet. Third, a cloudlet serves fewer, more localized users than a major public cloud service. Finally, a cloudlet contains only "soft state" data, such as cached copies of data or code whose master copy is stored elsewhere.
The cloudlet prototype
A prototype implementation of a cloudlet was originally developed by Carnegie Mellon University as a research project, starting in 2009. The term cloudlet was coined by computer scientists Mahadev Satyanarayanan, Victor Bahl, Ramón Cáceres and Nigel Davies.
cmdlet
A cmdlet (pronounced "command-let") is a lightweight command used in the Windows PowerShell environment. A command, in this context, is a specific order from a user to the computer's operating system or to an application to perform a service, such as "Show me all my files" or "Run this program for me." Although Windows PowerShell includes more than two hundred basic core cmdlets, administrators can also write their own cmdlets and share them.
A cmdlet is expressed as a verb-noun pair, such as Get-Process. (Cmdlets themselves are implemented as .NET classes; it is PowerShell script files that use the .ps1 extension.) Each cmdlet has a help file that can be accessed by typing Get-Help <cmdlet-Name> -Detailed. The detailed view of the cmdlet help file includes a description of the cmdlet, the command syntax, descriptions of the parameters and an example that demonstrates the use of the cmdlet.
Popular basic cmdlets include:
- Get-Help: displays help about PowerShell commands and concepts
- Get-Command: lists the commands available in the current session
- Get-ChildItem: lists the items in a location, such as the files in a directory
- Get-Process: lists the processes running on the local computer
- Set-Location: changes the current working location, similar to cd
- Copy-Item, Move-Item and Remove-Item: copy, move and delete files and other items
Common Vulnerabilities and Exposures (CVE)