Glosario KW | KW Glossary
Ontology Design | Diseño de Ontologías
C (DATA CENTER)
C (OPEN SOURCE)
C (WEB SERVICES)
CABLING: A KEY ELEMENT FOR THE DATA CENTER OF THE FUTURE
The data center of the future will depend on every aspect of the network, including the often-overlooked cabling plant, to operate at an optimal level while occupying the minimum amount of valuable space. Administrators and owners need to review how they can use the physical layer to support future growth, or, conversely, how the physical cabling layer can impede that growth. Read on to learn about the relevance of cabling in the data center of the future.
Please read the attached whitepaper.
Blockchain (Cadena de Bloques)
Formation of a blockchain. The main chain (black) consists of the longest series of blocks from the genesis block (green) to the current block. Orphan blocks (purple) exist outside the main chain.
A blockchain, also known by the initials BC, is a distributed database made up of chains of blocks designed to prevent the modification of data once it has been published, using trusted timestamping and a link to the previous block. For this reason it is especially well suited to storing ever-growing, time-ordered data with no possibility of modification or revision. This approach has several aspects:
The blockchain concept was first applied in 2009 as part of Bitcoin.
The data stored in a blockchain is usually a set of transactions (e.g., financial ones), so it is common to refer to the data as transactions. However, the data need not be transactions: what is really recorded are atomic changes to the state of the system. For example, a blockchain can be used to timestamp documents and secure them against tampering.
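The linking scheme described above can be sketched in a few lines of Python. This is an illustrative toy, not the implementation of any real blockchain: each block records its data, a timestamp and the hash of the previous block, so altering any earlier block breaks every later link.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash: str, timestamp: float) -> dict:
    """A block stores some data, a timestamp, and its predecessor's hash."""
    return {"data": data, "timestamp": timestamp, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny chain: a genesis block, then two blocks of "transactions".
genesis = make_block("genesis", prev_hash="0" * 64, timestamp=0.0)
chain = [genesis]
for tx in (["A pays B 5"], ["B pays C 2"]):
    chain.append(make_block(tx, block_hash(chain[-1]), time.time()))

assert verify_chain(chain)           # chain is intact
chain[1]["data"] = ["A pays B 500"]  # tamper with an earlier block...
assert not verify_chain(chain)       # ...and a later link no longer matches
```

A real system adds a consensus mechanism (such as proof of work) on top of this linking so that no single party controls which block comes next.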
The blockchain concept is used in the following fields:
Blockchains can be classified based on access to the data stored in them:
Both types of chains should be considered extreme cases; intermediate cases are possible.
Blockchains can also be classified based on the permissions required to generate blocks in them:
Public blockchains can be permissionless (e.g., Bitcoin) or permissioned (e.g., federated sidechains). Private blockchains must be permissioned. Permissioned blockchains need not be private, since there are different ways to access the data in a blockchain, for example:
While the third form of access in permissioned blockchains is restricted to a certain limited set of entities, it is not obvious that the remaining forms of access should also be restricted. For example, a blockchain for financial institutions would be permissioned but could:
A sidechain (cadena lateral) is a blockchain that validates data from another blockchain, called the main chain. Its main use is to provide new functionality, possibly still in a testing period, while relying on the trust offered by the main blockchain. Sidechains work in a way similar to how traditional currencies once worked under the gold standard.
One example of a blockchain that uses sidechains is Lisk. Because of Bitcoin's popularity and the enormous strength its network lends to trust through its proof-of-work consensus algorithm, there is interest in using it as a main blockchain and building pegged sidechains that rely on it. A pegged sidechain is a sidechain whose assets can be imported from and exported to the other chain. This kind of chain can be achieved in the following ways:
CONCEPTS RELATED TO QUALITY
Requirement: A need or expectation that is stated, generally implied or obligatory.
Grade: A category or rank given to different quality requirements for products, processes or systems that have the same functional use.
Quality: The degree to which a set of inherent characteristics fulfils requirements.
Capability: The ability of an organization, system or process to realize a product that fulfils the requirements for that product.
Customer satisfaction: The customer's perception of the degree to which their requirements have been fulfilled.
Knowledge management is an engine of change. Moreover, it is "the guarantee of change, adaptability/scalability, dynamism and the creation of new products and services."
The facts indicate, then, that the future of software will depend on its ability to manage the End User's knowledge, not on locking that knowledge away inside a closed development process (whether open source or proprietary). Underscoring this point are the hard facts of the advance of Linux and the collapse of the initial Internet model, based on "portals" of "dead" content whose financial return simply never materialized. Who has survived? Those who bill (Amazon, MedLine) or those tied to software companies and/or subsidized projects (leaving aside those that maintain a web presence for strategic reasons).
What do we have in hand? Our project addresses both the new technological requirements and the reasonable expectations of those who will invest in this idea and want a return in the medium term.
International consulting firms agree on three major paradigms that unquestionably support investment in KW technology:
We add a fourth paradigm, one that is already part of the strategies of several multinationals:
Experience has confirmed to us again and again that End Users are most satisfied when they acquire a solid tool that can capture their know-how simply and effectively, and that can survive technological obsolescence as abrupt as today's.
It is only a matter of time, and not much of it. In Latin America, the boom in refurbished/recertified equipment is making functionality fashionable over raw power, a power that, in the case of new equipment, in practice goes unused.
CONCEPTS RELATED TO CHARACTERISTICS
Characteristic: A distinguishing feature.
Dependability: A collective term used to describe availability performance and the factors that influence it: reliability performance, maintainability performance and maintenance support performance.
Traceability: The ability to trace the history, application or location of whatever is under consideration.
Quality characteristic: An inherent characteristic of a product, process or system related to a requirement.
Posted by: Margaret Rouse
A case study, in a business context, is a report of an organization's implementation of something, such as a practice, a product, a system or a service.
The case study can be thought of as a real-world test of how the implementation works, and how well it works. If documentation is sufficiently comprehensive, a case study should yield valuable information about the costs and benefits, both financial and otherwise. The organization itself and external parties can use case studies to gain more information about the specific implementation that can also help guide decisions about similar projects.
An organization might deploy, for example, a new business intelligence (BI) platform. A case study of its implementation would involve an exploration of each stage of the implementation, lessons learned along the way and the ultimate effects.
A case study is often compiled by an external party, perhaps for publication. Less formally, a case study may just be a content item, such as a brief article, discussing an organization's business implementation of something or a decision of some sort and one or more significant results.
Column database management system (CDBMS)
Posted by Margaret Rouse
There are different types of CDBMS offerings, with the common defining feature being that data is stored by column (or column families) instead of as rows.
In a relational database, data access is always by row. Shifting the focus from the row to the column can improve performance when large amounts of data are aggregated in a few columns. Generally speaking, a row-oriented focus is preferable for online transaction processing (OLTP) systems, and a column-oriented focus is preferable for online analytical processing (OLAP) systems. Column stores are less useful for systems with wildly varying queries or for those that must support ACID transactions.
CDBMSes are well-suited for use cases in which writes are uncommon and applications need to access a few columns of many rows at once. For example, column databases are well-suited for data marts that query large amounts of data aggregated for a small number of columns. The CDBMS can speed up analytical queries of the data in the data mart because it can focus just on the columns that need to be read and doesn't need to read through thousands of rows.
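The row-versus-column trade-off can be illustrated with a small Python sketch (the layouts and figures are invented for illustration): aggregating one column from a columnar layout reads only the one contiguous array it needs, while the row layout must walk every field of every record.

```python
# Row-oriented layout: each record is stored together.
rows = [
    {"id": 1, "region": "north", "sales": 120.0},
    {"id": 2, "region": "south", "sales": 95.5},
    {"id": 3, "region": "north", "sales": 210.0},
]

# Column-oriented layout: each column is stored contiguously.
columns = {
    "id": [1, 2, 3],
    "region": ["north", "south", "north"],
    "sales": [120.0, 95.5, 210.0],
}

# Aggregating one column from the row layout touches every record in full.
total_row_layout = sum(r["sales"] for r in rows)

# The column layout reads only the one array it needs -- the access pattern
# that lets a CDBMS skip the columns a query never mentions.
total_col_layout = sum(columns["sales"])

assert total_row_layout == total_col_layout == 425.5
```

At data-mart scale the difference is not the arithmetic but the I/O: the columnar layout lets the engine read a small fraction of the stored bytes for a query that aggregates a few columns of millions of rows.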
Column stores are also useful when data has an expiration date, because it is possible to set up a column so it will expire automatically after a certain date. Because the data stored in a column is typically similar, and the columns are stored adjacent to one another, some CDBMSes can compress data and help storage capacity to be used more efficiently.
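Column expiration can be sketched as follows. This is a hypothetical in-memory model, not the API of any particular product (systems such as Cassandra implement TTLs natively): each value carries a time-to-live, and expired entries are dropped on read.

```python
import time

class ExpiringColumn:
    """Toy model of a column whose cells expire after a time-to-live."""

    def __init__(self):
        self._cells = {}  # row_key -> (value, expires_at)

    def put(self, row_key, value, ttl_seconds):
        self._cells[row_key] = (value, time.time() + ttl_seconds)

    def get(self, row_key):
        entry = self._cells.get(row_key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._cells[row_key]  # lazily purge expired data
            return None
        return value

col = ExpiringColumn()
col.put("user1", "session-token", ttl_seconds=0.05)
assert col.get("user1") == "session-token"  # still live
time.sleep(0.1)
assert col.get("user1") is None             # expired and purged
```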
The concept of a column store is not new, and variations of the idea have been implemented as part of relational databases in the past. NoSQL and relational column stores both focus on the column as the unit of storage, but NoSQL column stores permit columns to differ across column-family rows, which is not permitted in a relational database.
A CDBMS may also be called a columnar database management system, a column-oriented database management system or a column store database management system.
CHANGES FOR JAVA: LAMBDAS IN JAVA 8
by Jason Tee
EMBRACING LAMBDA AND JAVA 8? HOW THE JAVA LANDSCAPE WILL CHANGE IN 2014
While the latest changes introduced into the Java language are more iterative than transformative, the sheer size of the Java world means there is so much going on that it can be tough to keep up. Fortunately, TheServerSide recently checked in with a number of high-profile Java experts, including Kirk Pepperdine and Adam Bien, to see what they like about the current state of Java, what their concerns are for the future, and what short-term challenges and achievements the Java community can expect throughout 2014. Here are a few of the insights they shared...
Please read the attached eGuide.
CISO (chief information security officer)
Posted by: Margaret Rouse | Contributor(s): Emily McLaughlin, Taina Teravainen
The CISO (chief information security officer) is a senior-level executive responsible for developing and implementing an information security program, which includes procedures and policies designed to protect enterprise communications, systems and assets from both internal and external threats. The CISO may also work alongside the chief information officer to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.
The chief information security officer may also be referred to as the chief security architect, the security manager, the corporate security officer or the information security manager, depending on the company's structure and existing titles. While the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the chief security officer (CSO).
CISO role and responsibilities
Instead of waiting for a data breach or security incident, the CISO is tasked with anticipating new threats and actively working to prevent them from occurring. The CISO must work with other executives across different departments to ensure that security systems are working smoothly to reduce the organization's operational risks in the face of a security attack.
The chief information security officer's duties may include conducting employee security awareness training, developing secure business and communication practices, identifying security objectives and metrics, choosing and purchasing security products from vendors, ensuring that the company is in regulatory compliance with the rules for relevant bodies, and enforcing adherence to security practices.
Other duties and responsibilities CISOs perform include ensuring the company's data privacy is secure, managing the Computer Security Incident Response Team and conducting electronic discovery and digital forensic investigations.
CISO qualifications and certifications
A CISO is typically an individual who is able to effectively lead and manage employees and who has a strong understanding of information technology and security, but who can also communicate complicated security concepts to technical and nontechnical employees. CISOs should have experience with risk management and auditing.
Many companies require CISOs to have advanced degrees in business, computer science or engineering, and to have extensive professional working experience in information technology. CISOs also typically have relevant certifications such as Certified Information Systems Auditor and Certified Information Security Manager, issued by ISACA, as well as Certified Information Systems Security Professional, offered by (ISC)2.
According to the U.S. Bureau of Labor Statistics, computer and information systems managers, including CISOs, earned a median annual salary of $131,600 as of May 2015. According to Salary.com, the annual median CISO salary is $197,362. CISO salaries appear to be increasing steadily, according to research from IT staffing firms. In 2016, IT staffing firm SilverBull reported the median CISO salary had reached $224,000.
Continue Reading About CISO (chief information security officer)
Certified Information Systems Security Professional (CISSP)
Posted by Margaret Rouse
The Certified Information Systems Security Professional (CISSP) is an information security certification that was developed by the International Information Systems Security Certification Consortium, also known as (ISC)².
The Certified Information Systems Security Professional (CISSP) exam is designed to ensure that someone handling computer security for a company or client has mastered a standardized body of knowledge. The six-hour exam, which asks 250 questions, certifies security professionals in ten different areas:
The exam is designed for professionals with a minimum of 3-5 years of experience.
This architecture basically consists of a client that makes requests to another program (the server), which answers them. Although the idea can be applied to programs running on a single computer, it is more advantageous in a multiuser operating system distributed across a network of computers. Processing capacity is split between the clients and the servers. The organizational advantages of centralizing information management and separating responsibilities are significant, as they simplify and clarify the system design. The separation between client and server is a logical one: the server does not necessarily run on a single machine, nor is it necessarily a single program. Specific types of servers include web servers, file servers, mail servers, and so on. While their purposes vary from one service to another, the basic architecture remains the same. A very common arrangement is the multitier system, in which the server is decomposed into different programs that can run on different computers, increasing the system's degree of distribution. The client-server architecture replaces the monolithic architecture, in which there is no distribution at either the physical or the logical level.
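A minimal sketch of the request/response pattern described above, using Python's standard socket module. The echo protocol and the port handling are invented for the demo, and the "server" runs as a thread in the same process purely for convenience; in a real deployment it would be a separate program, usually on another machine.

```python
import socket
import threading

HOST = "127.0.0.1"
ready = threading.Event()
addr = {}

def server() -> None:
    """The server: accepts one connection, answers one request, exits."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                  # port 0: let the OS pick a free port
        srv.listen(1)
        addr["port"] = srv.getsockname()[1]
        ready.set()                          # tell the client it is safe to connect
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("echo: " + request).encode())

t = threading.Thread(target=server)
t.start()
ready.wait()

# The client: opens a connection, sends a request, reads the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, addr["port"]))
    cli.sendall(b"hello")
    reply = cli.recv(1024).decode()

t.join()
print(reply)  # echo: hello
```

The same pattern scales from this toy to web, file and mail servers: only the protocol spoken over the connection changes, not the basic architecture.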
Terms related to cloud computing, including definitions about on-demand, distributed computing and words and phrases about software-as-a-service, infrastructure-as-a-service and storage-as-a-service.
Cloud IoT and IT Security
More organizations are deploying Internet of Things devices and platforms to improve efficiency, enhance customer service, open up new business opportunities and reap other benefits. But the IoT can expose enterprises to new security threats, with every connected object becoming a potential entry point for attackers.
This eBook will discuss:
Please read the attached ebook.
Cloud Mechanics: Delivering Performance in Shared Environments
Expedient Data Centers, a leader in Managed and Data Center Services with locations from Cleveland to Memphis to Boston, unpacks the mechanics of how it consistently meets Service Level Agreements for its customers. This whitepaper explores how service providers use VMTurbo to provide consistent performance across all workloads, as well as the three roles a responsible managed service provider (MSP) takes in order to accomplish that directive.
Please read the attached whitepaper.
Posted by Margaret Rouse
A cloud orchestrator is software that manages the interconnections and interactions among cloud-based and on-premises business units. Cloud orchestrator products use workflows to connect various automated processes and associated resources. The products usually include a management portal.
To orchestrate something is to arrange various components so they achieve a desired result. In an IT context, this involves combining tasks into workflows so the provisioning and management of various IT components and their associated resources can be automated. This endeavor is more complex in a cloud environment because it involves interconnecting processes running across heterogeneous systems in multiple locations.
Cloud orchestration products can simplify the intercomponent communication and connections to users and other apps and ensure that links are correctly configured and maintained. Such products usually include a Web-based portal so that orchestration can be managed through a single pane of glass.
When evaluating cloud orchestration products, it is recommended that administrators first map the workflows of the applications involved. This step will help the administrator visualize how complicated the internal workflow for the application is and how often information flows outside the set of app components. This, in turn, can help the administrator decide which type of orchestration product will help automate workflow best and meet business requirements in the most cost-effective manner.
Orchestration, in an IT context, is the automation of tasks involved with managing and coordinating complex software and services. The endeavor is more complex in a cloud environment because it involves interconnecting processes running across heterogeneous systems in multiple locations. Processes and transactions have to cross multiple organizations, systems and firewalls.
The goal of cloud orchestration is to, insofar as is possible, automate the configuration, coordination and management of software and software interactions in such an environment. The process involves automating workflows required for service delivery. Tasks involved include managing server runtimes, directing the flow of processes among applications and dealing with exceptions to typical workflows.
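The workflow idea described above can be sketched in a few lines of Python. The task names are hypothetical and the engine is a deliberate simplification of what real orchestration products do: run steps in order, thread state between them, and deal with exceptions by reporting which step broke.

```python
# Hypothetical provisioning tasks; names and payloads are illustrative only.
def provision_vm(ctx):      ctx["vm"] = "vm-01"
def configure_network(ctx): ctx["net"] = "net for " + ctx["vm"]
def deploy_app(ctx):        ctx["app"] = "deployed on " + ctx["vm"]

def run_workflow(tasks, ctx):
    """Run tasks in order, recording progress; on failure, stop and
    report which step broke (the 'dealing with exceptions' part)."""
    completed = []
    for task in tasks:
        try:
            task(ctx)
            completed.append(task.__name__)
        except Exception as exc:
            return completed, "failed at %s: %r" % (task.__name__, exc)
    return completed, "ok"

ctx = {}
done, status = run_workflow([provision_vm, configure_network, deploy_app], ctx)
assert status == "ok"
assert done == ["provision_vm", "configure_network", "deploy_app"]
```

Production orchestrators layer much more on top (parallel branches, rollback, retries, a management portal), but the core is the same: automated workflows connecting otherwise separate processes.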
Vendors of cloud orchestration products include Eucalyptus, Flexiant, IBM, Microsoft, VMware and V3 Systems.
The term “orchestration” originally comes from the study of music, where it refers to the arrangement and coordination of instruments for a given piece.
Continue Reading About cloud orchestrator
Dig Deeper on Cloud data integration and application integration
Cloud vs. on-premises: Finding the right balance
The process of figuring out which apps work in the cloud vs. on-premises doesn't yield the same results for everyone.
Greg Downer, senior IT director at Oshkosh Corp., a manufacturer of specialty heavy vehicles in Oshkosh, Wisc., wishes he could tip the balance of on-premises vs. cloud more in the direction of the cloud, which currently accounts for only about 20% of his application footprint. However, as a contractor for the Department of Defense, his company is beholden to strict data requirements, including where data is stored.
"Cloud offerings have helped us deploy faster and reduce our data center infrastructure, but the main reason we don't do more in the cloud is because of strict DoD contract requirements for specific types of data," he says.
In Computerworld's Tech Forecast 2017 survey of 196 IT managers and leaders, 79% of respondents said they have a cloud project underway or planned, and 58% of those using some type of cloud-based system gave their efforts an A or B in terms of delivering business value.
Downer counts himself among IT leaders bullish on the cloud and its potential for positive results. "While we don't have a written cloud-first statement, when we do make new investments we look at what the cloud can offer," he says.
Oshkosh has moved some of its back-office systems, including those supporting human resources, legal and IT, to the cloud. He says most of the cloud migration has been from legacy systems to software as a service (SaaS). For instance, the organization uses ServiceNow's SaaS for IT and will soon use it for facilities management.
According to the Forecast report, a third of respondents plan to increase spending on SaaS in the next 12 months.
Cordell Schachter, CTO of New York City's Department of Transportation, says he allies with the 22% of survey respondents who plan to increase investments in a hybrid cloud computing environment. The more non-critical applications he moves out of the city's six-year-old data center, the more room he'll have to support innovative new projects such as the Connected Vehicle Pilot Deployment Program, a joint effort with the U.S. Department of Transportation's Intelligent Transportation Systems Joint Program Office.
The Connected Vehicle project, in the second year of a five-year pilot, aims to use dedicated short-range communication coupled with a network of in-vehicle and roadway sensors to automatically notify drivers of connected vehicles of traffic issues. "If there is an incident ahead of you, your car will either start braking on its own or you'll get a warning light saying there's a problem up ahead so you can avoid a crash," Schachter says. The program's intent is to reduce the more than 30,000 vehicle fatalities that occur in the U.S. each year.
Supporting that communication network and the data it generates will require more than the internal data center, though. Schachter says the effort will draw on a hybrid of on-premises and cloud-based applications and infrastructure. He expects to tap a combination of platform as a service, infrastructure as a service, and SaaS to get to the best of breed for each element of the program.
"We can use the scale of cloud providers and their expertise to do things we wouldn't be able to do internally," he says, adding that all providers must meet NYC DOT's expectations of "safer, faster, smarter and cheaper."
Apps saved for on-premises
In fact, Schachter has walled off only a few areas that aren't candidates for the cloud -- such as emergency services and email. "NYC DOT is one of the most sued entities in New York City, and we constantly need to search our corpus of emails. We have shown a net positive by keeping that application on-premises to satisfy Freedom of Information Law requests as well as litigation," he says.
The City of Los Angeles also has its share of applications that are too critical to go into the cloud, according to Ted Ross, CIO and general manager of the city's Information Technology Agency. For instance, supervisory control and data acquisition (SCADA), 911 Dispatch, undercover police operations, traffic control and wastewater management are the types of data sets that will remain on-premises for the foreseeable future.
"The impact of an abuse is so high that we wouldn't consider these applications in our first round of cloud migrations. As you can imagine, it's critical that a hacker not gain access to release sewage into the ocean water or try to turn all streetlights green at the same time," he says.
The cloud does serve as an emergency backup to the $108 million state-of-the-art emergency operations center. "If anything happens to the physical facility, our software, mapping and other capabilities can quickly spin up in the cloud," he says, adding that Amazon Web Services and Microsoft Azure provide many compelling use cases.
The city, with more than 1,000 virtual servers on-premises, considers the cloud a cost-effective godsend. "We very much embrace the cloud because it provides an opportunity to lower costs, makes us more flexible and agile, offers off-site disaster recovery, empowers IT personnel, and provides a better user experience," he says.
As an early adopter of Google's Gmail in 2010, Ross appreciates the value of the cloud, so much so that in 2014, the city made cloud a primary business model, starting with SaaS, which he calls "a gateway drug to other cloud services."
Eventually, the city ventured into infrastructure as a service, including using "a lot of Amazon Web Services," which Ross describes as more invasive than SaaS and more in need of collaboration between the service provider and the network team. "You have to be prepared to have a shared security model and to take the necessary steps to enact it," he says. Cloud computing also requires additional network bandwidth to reduce latency and maximize performance, he adds.
Other reasons for saying no to the cloud
As much as Ross is a cloud promoter, he says he fully understands the 21% of respondents to Computerworld's Forecast survey who say they have no plans to move to the cloud. "I get worried when users simply want to spin up anything anywhere and are only concerned about functionality, not connectivity and security."
Ron Heinz, founder and managing director of venture capital firm Signal Peak Ventures, says there will always be a market for on-premises applications and infrastructure. For instance, one portfolio client that develops software for accountants found that 40% of its market don't want to move their workflow to the cloud.
Heinz attributes the hesitation to more mature accounting professionals and those with security concerns. "Everybody automatically assumes there is a huge migration to the cloud. But there will always be a segment that will never go to the cloud as long as you have strong virtual private networks and strong remote access with encrypted channels," he says.
Greg Collins, founder and principal analyst at analyst firm Exact Ventures, has found clients usually stick with on-premises when they are still depreciating their servers and other gear. "They have the attitude 'if it ain't broke, don't fix it,'" he says.
Still, he also believes the cloud is still in the early days and will only grow as the installed base of on-premises equipment hits end of life.
"We have seen a significant shift in the last couple of years in the interest for public cloud," says Matthew L. Taylor, managing director of consulting firm Accenture Strategy. Accenture, a company of more than 394,000 employees, has most of its own applications hosted in the public cloud.
Many of his clients are not moving as fast. "I wouldn't say the majority of our clients' application loads are in the public cloud today; that's still the opportunity," he says.
Of the clients that have moved to the cloud, very few have gone back to on-premises. "If they did, it wasn't because the cloud-based capabilities were not ready; it was because the company wasn't ready and hadn't thought the migration, application or value case through," Taylor says, adding that others who floundered did so because they couldn't figure out how to wean off their legacy infrastructure and run it in tandem with the cloud.
Most of his clients have been surprised to find that lower service costs have not been the biggest benefit of the cloud. "In the end, savings don't come from technology tools, they come from operational shifts and performance gains," he says.
For instance, a bank in Australia that he wouldn't name moved a critical application to the cloud while keeping two other applications on-premises. Because the cloud app relied heavily on the on-premises applications, performance slowed as they tried to communicate with one another. Once the bank moved all three applications to the cloud, it found the applications had never performed better, and downtime and maintenance improved.
Kas Naderi, senior vice president of Atlanticus Holdings Corp., a specialty finance company focused on underserved consumers in the U.S., U.K., Guam and Saipan, had a similar experience when the company "lifted and shifted" its entire application portfolio to the cloud. "Every one of our applications performed as good or better than in our data center, which had hardware that was ten years old," he says.
In 2014, the company took all existing applications and ran them "as is" in the cloud environment. Atlanticus relied on consulting firm DISYS to not only validate Atlanticus' migration approach, but also to help staff a 24-hour, "follow the sun" implementation. "They enabled us to accelerate our timeline," he says. In addition, DISYS, an Amazon Web Services partner, lent its expertise to explain what would and wouldn't work in Amazon's cloud.
Atlanticus deployed a federated cloud topology distributed among Amazon Web Services, Microsoft Azure, Zadara cloud storage, InContact Automatic Call Distribution, and Vonage phone system, with applications sitting where they operate best -- such as Microsoft Active Directory on Azure. The company front-ends Amazon Web Services with a private cloud that handles security tasks including intrusion detection/prevention and packet inspection. "There is an absolute need for private cloud services to encapsulate a level of security and control that might not be available in the public cloud," Naderi says.
In its next phase of cloud migration, Atlanticus will assess whether legacy applications have SaaS or other cloud-based alternatives that perform even better. In other words, the company took all its applications "as is," including legacy, and put them in the cloud. Now they are going to see if there are better alternatives to those legacy apps available to adopt.
Oshkosh ran a similar exercise and found that cloud-based SharePoint outperformed on-premises SharePoint and improved functionality. For instance, the company has been able to create a space where external suppliers can interact with internal employees, safely exchanging critical information. "That was challenging for on-premises," Downer says.
He adds: "We also are using various CRM cloud applications within some segments, and have started to meet niche business requirements on the shop floor with cloud solutions."
Staffing the cloud
As organizations move to the cloud, they sometimes harbor the misconception that migration means they need fewer IT staff. These IT leaders say that's not the case. Instead, they've gotten more value out of their skilled workforce by retraining them to handle the demands of cloud services.
Greg Downer, senior IT director at specialty vehicle manufacturer Oshkosh Corp.: "We retrained our legacy people, which went well. For instance, we trained our BMC Remedy administrators on the ServiceNow SaaS. We're not just using 10% to 20% of a large on-premises investment, but getting the full value of the platform subscription we are paying for."
Kas Naderi, senior vice president of technology, specialty finance company Atlanticus Holdings Corp.: "Our staff used to be extended beyond the normal 40-hour week, handling ad-hoc requests, emergencies, upgrades, security, etc. We were blessed to have a very flexible and high-IQ staff and were happy to shift their day-to-day responsibilities away from upkeep and maintenance to leadership of how to best leverage these cloud-based platforms for better quality of service. We have become a lot more religious on operating system upgrades and security postures and a lot more strategic on documentation and predictability of services. We went from racking and stacking and maintaining the data center to a business purpose."
Ted Ross, general manager of information technology and CIO, city of Los Angeles: "Moving to the cloud requires a sizeable skills change, but it's also a force multiplier that lets fewer hands do a lot more. We're not a start-up; we're a legacy enterprise. Our data center had a particular set of processes and its own ecosystem and business model. We want to continue that professionalism, but make the pivot to innovative infrastructure. We still have to be smart about data, making sure it's encrypted at rest, and working through controls. The cloud expands our ecosystem considerably, but of course we still don't want to allow critical information into the hands of the wrong people." -- Sandra Gittlen
Cloud-Based Disaster Recovery on AWS
Best Practices: Cloud-Based Disaster Recovery on AWS
This book explains cloud-based disaster recovery in comparison to traditional DR, outlines its benefits, discusses preparation tips, and provides an example of a globally recognized, highly successful cloud DR deployment.
Using AWS for Disaster Recovery
Disaster recovery (DR) is one of the most important use cases we hear about from our customers. Having your own DR site in the cloud, ready and on standby, without having to pay for the hardware, power, bandwidth, cooling, space and system administration, and being able to launch resources quickly when you really need them (when disaster strikes in your data center), makes the AWS cloud a natural fit for DR. You can quickly recover from a disaster and ensure business continuity for your applications while keeping your costs down.
Disaster recovery is about preparing for and recovering from a disaster. Any event that has a negative impact on your business continuity or finances could be termed a disaster. This could be hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, human error, or some other significant disaster.
In that regard, we are very excited to release the Using AWS for Disaster Recovery whitepaper. The paper highlights various AWS features and services that you can leverage for your DR processes and shows different architectural approaches to recovering from a disaster. Depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) -- two commonly used industry terms when building your DR strategy -- you have the flexibility to choose the right approach that fits your budget. The approaches range from simple backup and restore from the cloud to a full-scale multi-site solution deployed both on-site and in AWS with data replication and mirroring.
The paper further provides recommendations on how you can improve your DR plan and leverage the full potential of AWS for your Disaster Recovery processes.
The AWS cloud not only makes it cost-effective to do DR in the cloud, but also makes it easy, secure and reliable. With APIs and the right automation in place, you can spin up and test whether your DR solution really works (and do that every month, if you like) and be prepared ahead of time. You can reduce your recovery times by quickly provisioning pre-configured resources (AMIs) when you need them, or by cutting over to an already provisioned DR site (and then scaling gradually as needed). You can bake the necessary security best practices into an AWS CloudFormation template and provision the resources in an Amazon Virtual Private Cloud (VPC). All at a fraction of the cost of conventional DR.
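The RTO/RPO trade-off described above can be sketched as a simple decision helper. The tier names echo the backup-and-restore to multi-site spectrum the paper describes, but the hour thresholds below are illustrative assumptions, not AWS guidance.

```python
# Illustrative sketch: pick a DR approach from recovery targets.
# Thresholds are assumptions for demonstration, not recommendations.

def choose_dr_approach(rto_hours: float, rpo_hours: float) -> str:
    """Return a DR strategy name for the given RTO/RPO targets."""
    if rto_hours >= 24 and rpo_hours >= 24:
        return "backup-and-restore"  # cheapest: restore from cloud backups
    if rto_hours >= 4:
        return "pilot-light"         # minimal core services kept warm
    if rto_hours >= 1:
        return "warm-standby"        # scaled-down copy always running
    return "multi-site"              # full active-active deployment

print(choose_dr_approach(48, 24))    # relaxed targets
print(choose_dr_approach(0.5, 0.1))  # aggressive targets
```

The tighter the recovery targets, the more infrastructure must be kept running ahead of time, which is exactly the budget trade-off the whitepaper asks you to weigh.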
AWS Architecture Blog
A cloudlet is a small-scale data center or cluster of computers designed to quickly provide cloud computing services to mobile devices, such as smartphones, tablets and wearable devices, within close geographical proximity.
The goal of a cloudlet is to improve the response time of applications running on mobile devices by using low-latency, high-bandwidth wireless connectivity and by hosting cloud computing resources, such as virtual machines, physically closer to the mobile devices accessing them. This is intended to eliminate the wide area network (WAN) latency delays that can occur in traditional cloud computing models.
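A back-of-the-envelope latency model makes the point concrete. All of the numbers below (round-trip times, bandwidth, compute time) are illustrative assumptions, not measurements.

```python
# Rough model of mobile offload cost: total response time is the
# network round trip plus transfer time plus server-side compute time.
# Every number here is an illustrative assumption.

def response_time_ms(rtt_ms: float, payload_mb: float,
                     bandwidth_mbps: float, compute_ms: float) -> float:
    transfer_ms = payload_mb * 8 / bandwidth_mbps * 1000  # MB -> Mb
    return rtt_ms + transfer_ms + compute_ms

# Offloading a 2 MB speech sample for recognition (50 ms of compute):
wan = response_time_ms(rtt_ms=80, payload_mb=2, bandwidth_mbps=20, compute_ms=50)
lan = response_time_ms(rtt_ms=2, payload_mb=2, bandwidth_mbps=500, compute_ms=50)
print(f"distant cloud: {wan:.0f} ms, nearby cloudlet: {lan:.0f} ms")
```

Under these assumed numbers the nearby cloudlet responds an order of magnitude faster, which is the motivation for moving resources to the network edge.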
The cloudlet was specifically designed to support interactive and resource-intensive mobile applications, such as those for speech recognition, language processing, machine learning and virtual reality.
Key differences between a cloudlet and a public cloud data center
A cloudlet is considered a form of cloud computing because it delivers hosted services to users over a network. However, a cloudlet differs from a public cloud data center, such as those operated by public cloud providers like Amazon Web Services, in a number of ways.
First, a cloudlet is self-managed by the businesses or users that employ it, while a public cloud data center is managed full-time by a cloud provider. Second, a cloudlet predominantly uses a local area network (LAN) for connectivity, versus the public Internet. Third, a cloudlet serves fewer, more localized users than a major public cloud service. Finally, a cloudlet contains only "soft state" copies of data, such as a cached copy of data or code whose master copy is stored elsewhere.
The cloudlet prototype
A prototype implementation of a cloudlet was originally developed by Carnegie Mellon University as a research project, starting in 2009. The term cloudlet was coined by computer scientists Mahadev Satyanarayanan, Victor Bahl, Ramón Cáceres and Nigel Davies.
Continue Reading About cloudlet
A command, in this context, is a specific order from a user to the computer's operating system or to an application to perform a service, such as "Show me all my files" or "Run this program for me." Although Windows PowerShell includes more than two hundred basic core cmdlets, administrators can also write their own cmdlets and share them.
A cmdlet, which is expressed as a verb-noun pair, has a .ps1 extension. Each cmdlet has a help file that can be accessed by typing Get-Help <cmdlet-Name> -Detailed. The detailed view of the cmdlet help file includes a description of the cmdlet, the command syntax, descriptions of the parameters and an example that demonstrates the use of the cmdlet.
Popular basic cmdlets include Get-Help, Get-Command, Get-Process, Get-Service, Get-ChildItem, Get-Content and Set-Location.
Common Vulnerabilities and Exposures (CVE)
How to detect and mitigate advanced malware evasion techniques
by Nick Lewis
As long as there are targets to exploit and money to be made, malware will keep advancing.
To stay relevant and get paid, malware authors will adopt advanced evasion techniques and add new features to satisfy their customers' requests, so that malware-based attacks can become more effective and profitable. Recent months have seen many cases of increasingly sophisticated malware, including Zeus moving from 32-bit to 64-bit and the advance of the iBanking malware targeting Android devices.
Beyond new malware features, there is a relatively new idea known as "living off the land," in which attackers use built-in or legitimate tools to keep their attacks from being detected by antimalware software. The Poweliks malware is the most recent example of this.
In this tip, I will discuss the most recent malware advances and the enterprise controls needed to detect and contain them.
TROJ_POWELIKS.A, or Poweliks, is fileless malware designed to download other malware samples that will control the compromised system. Poweliks requires a separate initial infection vector to compromise the local system and install the malware, reportedly a malicious Word file. After the initial infection, the malware installs itself and is stored in the registry as an encoded dynamic link library (DLL) that is extracted and injected into the legitimate dllhost.exe processes running on the system, which then execute it.
While storing a DLL in the registry is not a common method of installing malware on an endpoint, it makes the malware harder to detect, because not all antimalware tools check the registry. For tools that do check the registry, however, finding a registry key holding a significant amount of data would certainly be something to alert on. The Poweliks malware also runs PowerShell commands to complete the attack. PowerShell commands may have been used to evade detection by relying on legitimate tools, since PowerShell is installed on most systems and has the advanced operating system interaction capabilities needed to complete the attack.
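The registry-based detection idea described above can be sketched as follows. This is a hedged illustration, not how any particular antimalware product works: it scans a .reg-style text export for values carrying unusually large inline payloads, and the size threshold is an assumption.

```python
# Sketch of the detection heuristic discussed above: flag registry
# values with anomalously large data. Parses a .reg-style text export
# rather than the live registry, so it runs anywhere; a real endpoint
# tool would read the registry directly. The threshold is an assumption.

SIZE_THRESHOLD = 4096  # bytes of inline data considered suspicious

def suspicious_values(reg_export: str, threshold: int = SIZE_THRESHOLD):
    """Yield (key, value_name, size) for oversized registry values."""
    current_key = None
    for line in reg_export.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            current_key = line[1:-1]          # a [HKEY\...] section header
        elif "=" in line and current_key:
            name, _, data = line.partition("=")
            if len(data) > threshold:
                yield current_key, name.strip('"'), len(data)

# Hypothetical export with one value hiding a large encoded payload:
sample = '[HKCU\\Software\\Example]\n"payload"="' + "41" * 3000 + '"'
for key, name, size in suspicious_values(sample):
    print(f"{key}\\{name}: {size} bytes of inline data")
```

Correlating a hit like this with other stages (PowerShell activity, odd dllhost.exe behavior) is what makes it actionable, as the article discusses below.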
Other malware has also continued to advance in order to remain profitable for its creators. The mature Zeus malware keeps adding new features; the most recently reported addition was an improved social engineering attack in which the malware spoofed a browser warning message to get the user to install the malicious software. Similarly, iBanking.Android has added functionality that uses fake security software to get the user to install the malware. It then steals SMS messages used in two-factor authentication.
Enterprise controls needed to detect and contain advanced malware
Advanced malware can be detected in many different ways. Multi-stage malware such as Poweliks, and multi-stage attacks in general, may give enterprises more time to detect the malware, since each step takes time; however, each step will not necessarily be detected, because the individual steps by themselves may not be malicious.
In the Poweliks example, each individual stage of the attack can be hard to detect as it happens, but correlating all of the stages and actions can help detect and mitigate the malicious activity.
For example, while PowerShell scripts are useful to system administrators and power users, few end users develop and use them. Detecting malicious PowerShell commands is difficult because there are many legitimate enterprise uses of PowerShell functionality. However, for PowerShell scripts used by end users, system administrators can require that scripts be signed before execution; this would help block the execution of malicious scripts by any malware. Although this policy would not stop a dedicated attacker, it could raise the bar enough to frustrate one and prevent an attack.
Although detecting the PowerShell aspect of the Poweliks malware may be difficult, detecting its command-and-control infrastructure and network connections could be easier. The TrendMicro blog mentions a specific IP address that can be used as an indicator of compromise, so an enterprise can monitor its network for any connection to that IP and investigate each one. Monitoring for anomalous network connections could also help identify a compromised system that requires further investigation. This could include reviewing NetFlow records to see which systems talk the most to external IPs, or which systems show a significant number of failed authentication attempts.
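The network-monitoring idea above can be sketched in a few lines. The flow-record format and the indicator-of-compromise address below are made up for illustration (the IP comes from the documentation-reserved 203.0.113.0/24 range, not from any actual threat report).

```python
# Sketch of IoC-based flow monitoring: scan simplified flow records for
# connections to a known-bad IP. The IoC address and record format are
# invented for illustration.

IOC_IPS = {"203.0.113.7"}  # documentation-range example, not a real IoC

def flag_compromised_hosts(flows):
    """Return the set of internal hosts that contacted an IoC IP.

    Each flow is a (src_ip, dst_ip, bytes_sent) tuple.
    """
    return {src for src, dst, _ in flows if dst in IOC_IPS}

flows = [
    ("10.0.0.5", "93.184.216.34", 1200),
    ("10.0.0.9", "203.0.113.7", 48000),  # talks to the IoC address
    ("10.0.0.5", "203.0.113.7", 96),
]
print(sorted(flag_compromised_hosts(flows)))  # hosts worth investigating
```

In practice the flows would come from a NetFlow collector and the IoC list from threat intelligence feeds; the matching logic stays this simple.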
The recently modified Zeus malware and the iBanking.Android malware can be identified through steps similar to those used to identify Poweliks, since they depend on security awareness. The Zeus variant can be detected by monitoring the network for connections to its command-and-control IP; iBanking.Android can be detected with a mobile antimalware tool that scans the system for malicious files.
Keep in mind that detection is only one part of effective enterprise malware control. Rigorous response to malware-related incidents is critical to minimizing the effects of a compromised system.
It should come as no surprise that malware will continue to advance and to automate some of its most effective manual attack techniques. As enterprise defenses against malware become more sophisticated, malware will inevitably find new ways around them. This will require constant attention from enterprises in order to control and mitigate potential attacks. Enterprise security controls and technologies will have to be reviewed continually to make sure they are effective against current attacks. Adapting security programs and controls as new attacks or vulnerabilities are discovered is essential to staying ahead of the curve.
It is also critical for an enterprise not only to evaluate how it manages its systems, but also to assess whether certain functionality, such as PowerShell scripting, could introduce new risks into its environment and require additional policies aimed at preventing vulnerabilities from being exploited.
About the author: Nick Lewis, CISSP, is the former information security officer at Saint Louis University. Nick received a master of science degree in information assurance from Norwich University in 2005 and one in telecommunications from Michigan State University in 2002. Before joining Saint Louis University in 2011, Nick worked at the University of Michigan and at Boston Children's Hospital, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University.
More news and tutorials
RELATED GLOSSARY TERMS
Related term from our online computing dictionary.
Posted by Margaret Rouse
A compliance audit is a comprehensive review of an organization's adherence to regulatory guidelines. Independent accounting, security or IT consultants evaluate the strength and thoroughness of compliance preparations. Auditors review security policies, user access controls and risk management procedures over the course of a compliance audit.
Continue Reading About compliance audit
"Now ACME SA health insurance is the most 'secure' on the planet. Our information system detects prescription and dispensing errors in real time and alerts you to any preventive action needed to avoid potential problems, improving your quality of life. This is possible thanks to the integration of 2 million components developed by the DKW MEDICINE INSTITUTE OF HARVARD UNIVERSITY into the primary care, hospital administration and pharmacy information systems."
We also discussed the three arenas for planting the "KW seed": universities, social networks, and traditional B2B/B2C. The first will validate the second and third. Many companies will fund development in the university environment, creating a new value chain that benefits both sides.
University students will be able to write their theses by building HKW and DKW content instead of producing lengthy documents that turn yellow and obsolete in the university library. Departmental research will be 100% reusable, generating "registered trademark" content that brings prestige and, why not, becomes a legitimate, universal source of copyright income.
The Digital Home is an enormously fertile scenario for KW technology, perhaps the one that will popularize and spread its use fastest.
The interesting thing is that, in order to build "KW Compatible" products, manufacturers will use this technology in their own production lines and commercial strategies.
Any "sales" message for KW technology must center on the starring role the end user can play with it.
CONCEPTS RELATED TO CONFORMITY
Defect: Non-fulfillment of a requirement related to an intended or specified use.
Nonconformity: Non-fulfillment of a requirement.
Conformity: Fulfillment of a requirement.
Release: Authorization to proceed to the next stage of a process.
Preventive action: Action taken to eliminate the cause of a potential nonconformity or other potentially undesirable situation.
Corrective action: Action taken to eliminate the cause of a detected nonconformity or other undesirable situation.
Correction: Action taken to eliminate a detected nonconformity.
Rework: Action taken on a nonconforming product to make it conform to the requirements.
Repair: Action taken on a nonconforming product to make it acceptable for its intended use.
Regrade: Alteration of the grade of a nonconforming product so that it conforms to requirements differing from the initial ones.
Scrap: Action taken on a nonconforming product to preclude its originally intended use.
Concession: Authorization to use or release a product that does not conform to specified requirements.
Deviation permit: Authorization to depart from the originally specified requirements of a product prior to its realization.
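The nonconforming-product dispositions above (rework, repair, regrade, scrap) can be modeled as an explicit decision, which helps keep the distinctions straight. The ordering of the checks below is an illustrative assumption, not part of any standard's text.

```python
# Illustrative model of the nonconforming-product dispositions defined
# above. The decision order is an assumption for demonstration.

from enum import Enum

class Disposition(Enum):
    REWORK = "make it conform to the original requirements"
    REPAIR = "make it acceptable for its intended use"
    REGRADE = "reclassify it under different requirements"
    SCRAP = "preclude its originally intended use"

def disposition(can_meet_requirements: bool, usable_after_fix: bool,
                fits_other_grade: bool) -> Disposition:
    """Pick a disposition for a nonconforming product."""
    if can_meet_requirements:
        return Disposition.REWORK
    if usable_after_fix:
        return Disposition.REPAIR
    if fits_other_grade:
        return Disposition.REGRADE
    return Disposition.SCRAP

print(disposition(False, True, False).name)  # -> REPAIR
```

The point of the model is the contrast the definitions draw: rework restores full conformity, repair only restores usability, regrade changes the requirements, and scrap abandons the intended use.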
5 ways a Connection Broker Simplifies Hosted Environments
With all the moving parts to think about when moving resources into the data center, a connection broker might be the last thing on your mind.
Waiting until you've designed the rest of your data center to consider the connection broker can be detrimental to the overall usability of your system.
This is why we've created our new eBook, which outlines five scenarios where including a connection broker into your design from the get-go can future-proof and improve your hosted desktop solution.
Download our new eBook and learn about:
Please read the attached whitepaper
Application development tips
The application development life cycle is getting shorter and shorter. How can you achieve secure, tested development before going to production?
Software testing can always benefit from a more structured approach. The scientific method is not really a set of methods, but a broader set of guiding principles. As you will see in this content, there are many similarities between the experiments scientists run and the software tests developers perform.
In this handbook you will learn about:
Please read the attached document...
Up, Up and Away: Java App Development Heads to the Cloud
Cloud computing has changed the way software is being developed. Learn how smart organizations are using the cloud to save money and make production and security more efficient.
Please read the attached whitepaper...
Posted by: Margaret Rouse
Content marketing is the publication of material designed to promote a brand, usually through a more oblique and subtle approach than that of traditional push advertising. Content marketing is most effective when it provides the consumer with accurate and unbiased information, the publisher with additional content and the advertiser with a larger audience and ultimately, a stronger brand.
On the internet, content marketing campaigns involve publishing custom content on specific destination sites the target audience respects and visits often. During the campaign, the advertiser creates custom content that is tightly aligned with the publisher’s website and editorial mission. The goal is to provide prospective customers with an integrated user experience (UX) that encourages engagement and interest in the brand. The challenge is to ensure the content is topically relevant and meets the audience's needs. If the content is simply a thinly veiled sales-pitch, it risks turning the buyer off.
Content marketing can be delivered through a variety of media, including television and magazines, and take a lot of different forms, including articles, infographics, videos and online games. The strategy may be referred to by several different names, including infomercial, sponsored content or native advertising. Whatever the label, content marketing is often integrated in such a way that it doesn't stand out from other material served by the host.
Although native advertising might not look like marketing, the content should explicitly state that it was provided by the advertiser. The Federal Trade Commission (FTC) guidelines for all advertising emphasize transparency and include stipulations that advertising claims must be truthful and supported by evidence. The more similar content marketing is in format and topic to the publisher's editorial content, the more important a disclosure is, in order to prevent deception.
Joe Pulizzi explains how large enterprise organizations implement content marketing:
Achieve Your IT Vision With Converged Infrastructure
Whether you've already deployed a converged system or have future deployment plans, you can maximize that investment with automation. This paper outlines 4 steps to reduce your IT complexity with converged infrastructure so your team gains the freedom to innovate and drive bottom-line results.
Please read the attached whitepaper.
Converged Infrastructures Deliver the Full Value of Virtualization
Converged Infrastructures Deliver the Full Value of Virtualization
By Ravi Chalaka | Hitachi Data Systems
Satisfied with your virtualization efforts?
How does an organization modernize IT and get more out of infrastructure resources? That’s a question many CIOs ask themselves. With hundreds or even thousands of physical hardware resources, increasing complexity and massive data growth, you need new, reliable ways to deliver IT services in an on-demand, flexible and scalable fashion. You also must address requests for faster delivery of business services, competition for resources and trade-offs between IT agility and vendor lock-in.
Please read the attached whitepaper.
Copyleft is the idea, and the specific stipulation when distributing software, that the user will be able to copy it freely, examine and modify the source code, and redistribute the software to others (free or priced) as long as the redistributed software also carries the copyleft stipulation. The term was originated by Richard Stallman and the Free Software Foundation. Copyleft favors the software user's rights and convenience over the commercial interests of the software makers. It also reflects the belief that freer redistribution and modification of software would encourage users to make improvements to it. ("Free software" is not the same as freeware, which is usually distributed with copyright restrictions.)
Stallman and his adherents do not object to the price or profit aspects of creation and redistribution of software - only to the current restrictions placed on who can use how many copies of the software and how and whether the software can be modified and redistributed.
The de facto collaboration that developed and refined Unix and other collegially developed programs led to the idea of "free" software and copyleft. In 1983, Richard Stallman launched a "free software" project that would both demonstrate the concept and provide value to users. The project was called GNU, an operating system similar to a Unix system. GNU and its various components are currently available and are distributed with copyleft stipulations. Built from GNU components, the popular Linux system is also issued with a copyleft.
RELATED GLOSSARY TERMS: Hardy Heron (Ubuntu 8.04 LTS Server Edition) , high-performance computing (HPC), Open Directory Project (ODP), LiveDistro, Yellowdog Updater, Modified (YUM), BSD (Berkeley Software Distribution) , shell, Free Software Foundation (FSF) , Tcl/Tk (Tool Command Language), open source beer
This was last updated in September 2005
Copyright is the ownership of an intellectual property within the limits prescribed by a particular nation's or international law. In the United States, for example, the copyright law provides that the owner of a property has the exclusive right to print, distribute, and copy the work, and permission must be obtained by anyone else to reuse the work in these ways. Copyright is provided automatically to the author of any original work covered by the law as soon as the work is created. The author does not have to formally register the work, although registration makes the copyright more visible. (See Circular 66, "Copyright Registration for Online Works," from the U.S. Copyright Office.) Copyright extends to unpublished as well as published works. The U.S. law extends copyright for 70 years beyond the life of the author. For reviews and certain other purposes, the "fair use" of a work, typically a quotation or paragraph, is allowed without permission of the author.
The Free Software Foundation fosters a new concept called copyleft in which anyone can freely reuse a work as long as they in turn do not try to restrict others from using their reuse.
EditPros, an editing and marketing communications firm, has allowed us to reprint below an article about copyright as it applies to the Internet.
Are You Violating Copyright on the Internet?
The Internet, inarguably one of the most remarkable developments in international communication and information access, is fast becoming a lair of copyright abuse. The notion of freedom of information and the ease of posting, copying and distributing messages on the Internet may have created a false impression that text and graphic materials on World Wide Web sites, postings in "usenet" news groups, and messages distributed through e-mail lists and other electronic channels are exempt from copyright statutes.
In the United States, copyright is a protection provided under title 17 of the U.S. Code, articulated in the 1976 Copyright Act. Copyright of a creative work extends 70 years beyond the lifespan of its author or designer. Works afforded copyright protection include literature, journalistic reports, musical compositions, theatrical scripts, choreography, artistic matter, architectural designs, motion pictures, computer software, multimedia digital creations, and audio and video recordings. Copyright protection encompasses Web page textual content, graphics and design elements, as well as postings on discussion groups. Canada's Intellectual and Industrial Property Law, Great Britain's Copyright, Designs and Patents Act of 1988, and legislation in other countries signatory to the international Berne Convention copyright principles provide similar protections.
Generally speaking, facts may not be copyrighted; but content related to presentation, organization and conclusions derived from facts certainly can be. Never assume that anything is in the "public domain" without a statement to that effect. Here are some copyright issues important to companies, organizations and individuals.
Handling of External Links
Even though links are addresses and are not subject to copyright regulations, problems can arise in their presentation. If your Web site is composed using frames, and linked sites appear as a window within your frame set, you may be creating the deceptive impression that the content of the linked site is yours. Use HTML coding to ensure that linked external sites appear in their own window, clearly distinct from your site. Incidentally, you may wish to disavow responsibility for the content of sites to which you provide links.
Work for Hire
While copyright ordinarily belongs to the author, copyright ownership of works for hire belong to the employer. The U.S. Copyright Act of 1976 provides two definitions of a work for hire: 1. a work prepared by an employee within the scope of his or her employment; or 2. a work specially ordered or commissioned for use as a contribution to a collective work, as a part of a motion picture or other audiovisual work, as a translation, as a supplementary work, as a compilation, as an instructional text, as a test, as answer material for a test, or as an atlas, if the parties expressly agree in a written instrument signed by them that the work shall be considered a work made for hire. U.S. Copyright Office documentation further states, "Copyright in each separate contribution to a periodical or other collective work is distinct from copyright in the collective work as a whole and vests initially with the author of the contribution."
Just as making bootleg tapes of recorded music and photocopying books are illegal activities, printing and distributing contents of Web pages or discussion group postings may constitute copyright infringement. And companies may be liable for such activities conducted by their employees using company computing or photocopying equipment. However, the law does not necessarily prohibit downloading files or excerpting and quoting materials. The doctrine of fair use preserves your right to reproduce works or portions of works for certain purposes, notably education, analysis and criticism, parody, research and journalistic reporting. The amount of the work excerpted and the implications of your use on the marketability or value of the works are considerations in determining fair use. Works that are not fixed in a tangible form, such as extemporaneous speeches, do not qualify for copyright protection. Titles of works, and improvisational musical or choreographic compositions that have not been annotated, likewise cannot be copyrighted. Names of musical groups, slogans and short phrases may gain protection as trademarks when registered through the U.S. Patent & Trademark Office.
Protecting Your Own Works
Although copyright automatically applies to any creative work you produce, you can strengthen your legal copyright protection by registering works with the U.S. Copyright Office. Doing so establishes an official record of your copyright, and must be done before filing an infringement civil lawsuit in Federal district court. Registration costs $20. For information, visit the Copyright Office Web site or call (202) 707-3000; TTY is (202) 707-6737.
If you appoint an independent Web developer to create and maintain your Web site, make sure through written agreement that you retain the copyright to your Web content.
Place a copyright notice on each of your Web pages and other published materials. Spell out the word "Copyright" or use the encircled "c" symbol, along with the year of publication and your name, as shown in this example:
Copyright 1998 EditPros marketing communications
If you're concerned about copyright protection in other nations, add: "All rights reserved."
How to Stay Legal
If you'd like to share the contents of an interesting Web page with your company employees, describe the page and tell them the URL address of the Web site so they can look for themselves. And if the latest edition of a business newspaper contains an article you'd like to distribute to your 12 board members, either ask the publication for permission to make copies, or buy a dozen copies of the newspaper. Retention of value through sales of that newspaper, after all, is what copyright law is intended to protect.
The United States Copyright Office contains an explanation of American copyright basics and a list of frequently asked questions, as well as the complete text of the United States Copyright Act of 1976. Topics include copyright ownership and transfer, copyright notice, and copyright infringement and remedies. The site is maintained by the U.S. Library of Congress.
Most of the material in this definition/topic was reprinted from an EditPros newsletter with their permission. EditPros is a writing, editing, and publishing management firm in Davis, California with their own Web site.
RELATED GLOSSARY TERMS: FERPA (Family Educational Rights and Privacy Act of 1974), Electrohippies Collective, Carnivore, lawful interception (LI), cypherpunk, Information Awareness Office (IAO), lifestyle polygraph, Electronic Signatures in Global and National Commerce Act (e-signature bill), cyberstalking, I-SPY Act -- Internet Spyware Prevention Act of 2005 (H.R. 744)
This was last updated in September 2005
Posted by Margaret Rouse
Cowboy coding describes an undisciplined approach to software development that allows individual programmers to make up their own rules.
Cowboy coding is programming lingo for an approach to software development that gives programmers almost complete control over the development process. In this context, cowboy is a synonym for maverick -- an independent rebel who makes his own rules.
An organization might permit cowboy coding because there are not enough resources to commit to the design phase or a project deadline is looming. Sometimes cowboy coding is permitted because of a misguided attempt to stimulate innovation or because communication channels fail and there is little or no business stakeholder involvement or managerial oversight. An individual developer or small team might be given only a minimal description of requirements and no guidance regarding how these objectives should be achieved. They are free to select frameworks, coding languages, libraries, technologies and other build tools as they see fit.
The cowboy approach to coding typically focuses on quick fixes and getting a working product into production as quickly as possible. There is no documentation or formal process for quality assurance testing, as required by continuous integration and other Agile software development methodologies. Instead of producing lean, well-written code, cowboy code often has errors that cause failures upon deployment or make it difficult to maintain over time. Integrating the various components of the code may also be a challenge since with cowboy coding there are no agreed-upon best practices to provide continuity.
Creating and Testing Your IT Recovery Plan
Regular tests of your IT disaster recovery plan can mean the difference between a temporary inconvenience and going out of business.
Testing at least once per month is important to maintain engineering best practices, to comply with stringent standards for data protection and recovery, and to gain confidence and peace of mind. In the midst of disaster is not the time to determine the flaws in your backup and recovery system. Backup alone is useless without the ability to efficiently recover, and technologists know all too well that the only path from “ought to work” to “known to work” is through testing.
A recent study found that only 16 percent of companies test their disaster recovery plan each month, with over half testing just once or twice per year, if ever. Adding to the concern, almost one-third of tests resulted in failure.
The reasons cited for infrequent testing include the usual litany: tight budgets, disruption to employees and customers, interruption of sales and revenue, and of course the scarcity of time. The survey covered mostly large enterprises, and according to its findings the challenges are even greater for smaller firms.
Yet new systems have arrived that allow daily automated testing of full recovery, putting such assurances in reach of every business. Backup without rapid recovery and testing will soon be as obsolete as buildings without sprinklers or cars without seatbelts.
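The article's point that the only path from "ought to work" to "known to work" is through testing can be made concrete. Below is a minimal sketch of an automated restore verification in Python, assuming a simple directory-based backup; the function names and the use of `shutil.copytree` as a stand-in for a real restore step are illustrative, not any particular product's API:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source: Path, backup: Path) -> bool:
    """Restore the backup into a scratch directory and compare every
    file's checksum against the live source. A backup only counts as
    'known to work' if this round trip succeeds."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup, restored)  # stand-in for a real restore step
        for src_file in source.rglob("*"):
            if src_file.is_file():
                restored_file = restored / src_file.relative_to(source)
                if not restored_file.is_file():
                    return False
                if checksum(src_file) != checksum(restored_file):
                    return False
    return True
```

A scheduler can run a check like this daily, which is exactly the kind of routine, automated full-recovery testing the paragraph above describes.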
Please read the attached whitepaper.
Not Creating a Disaster Recovery Plan Could Cost You Everything
Disaster recovery planning is a very large topic, and backing up and recovering your data is just one part of it. For a real-life example of what I mean, consider a recent post on Reddit. A system admin describes getting a ticket saying that the power is out in their Kiev office and that the UPS battery is down to 13%. In response, the technician at the office simply shuts down the gear. The next day they see a news report stating that the building that housed their Kiev office is no longer functional: fire and collapsed floors had completely devastated it. The admin ends the post by asking: how is your disaster recovery plan, and have you tested it?
When you start planning out your disaster recovery plan, you need to think about disasters that seem completely unrealistic, along with the normal types of crisis scenarios. If you already have a disaster recovery plan in place, does it account for the office being completely destroyed or inaccessible? For multiple points of connectivity? When was the last time the plan was actually tested?
When to Test Your Disaster Recovery Plan
It is good practice to update and test your disaster recovery plan whenever large changes are made. But what happens when everything is set the way you want it and nothing huge has changed? My suggestion is to treat it like your smoke detector: twice a year, when the time changes and you change the batteries in your smoke detectors, test your entire disaster recovery plan. Testing that plan should include asking yourself "what if" questions: what happens if Bob, the main system admin, goes missing or is hit by the proverbial bus that hunts down system admins? What happens if the building burns down and everything inside is gone? What happens if the cloud service you rely on for production, backup, or disaster recovery suddenly closes its doors? All of these scenarios, along with many others, need to be accounted for in order to recover from a disaster and keep your business running.
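One low-effort way to run the twice-yearly exercise described above is to keep the "what if" scenarios as reviewable data rather than tribal knowledge. The sketch below assumes nothing beyond the scenarios named in this article; the scenario names and mitigation notes are illustrative placeholders:

```python
# Each scenario maps to the mitigation the plan is supposed to provide.
SCENARIOS = {
    "key admin unavailable": "documented runbooks let another engineer recover",
    "office destroyed or inaccessible": "off-site backups and remote access tested",
    "primary connectivity lost": "secondary link or failover ISP verified",
    "cloud provider shuts down": "backups exportable to a second provider",
}

def review_plan(covered: set) -> list:
    """Return the scenarios the current plan does not yet address."""
    return [name for name in SCENARIOS if name not in covered]

# Example tabletop review: only one scenario is currently covered.
gaps = review_plan({"office destroyed or inaccessible"})
```

During the semiannual test, any name left in `gaps` becomes an action item before the next review.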
Not Making Time for Disaster Recovery Could Cost You
One of the hardest things to do, it seems, is to make the time to create or test your disaster recovery plan. Not having enough time is the excuse most often given, and that almost always comes down to priorities: when creating or testing the plan sits too low on your priority list, it simply never gets done.
One of the best ways to push up the priority of disaster recovery is simply to think about how much each minute, hour, day, and week of downtime will cost the company. For instance, say an hour of downtime on the company website costs $3,000 in lost e-commerce revenue. Multiply that over hours or even days and you're talking about huge potential losses that could have been avoided. And that is without factoring in the potential revenue lost from new customers who may never consider your company after being unable to read about your products, or the negative effect on the company's image. The costs, even in this small-scale disaster scenario, add up quickly.
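The arithmetic behind that example is simple enough to work through directly. The $3,000/hour figure below is the article's own hypothetical, not real data:

```python
def downtime_cost(hourly_loss: float, hours_down: float) -> float:
    """Direct revenue lost for a given outage duration."""
    return hourly_loss * hours_down

# The article's hypothetical: $3,000 per hour of lost e-commerce revenue.
HOURLY_LOSS = 3_000

one_hour = downtime_cost(HOURLY_LOSS, 1)       # $3,000
one_day = downtime_cost(HOURLY_LOSS, 24)       # $72,000
one_week = downtime_cost(HOURLY_LOSS, 24 * 7)  # $504,000
```

Even before indirect costs (lost prospects, reputation damage), a week-long outage in this scenario exceeds half a million dollars, which is the kind of number that reorders a priority list.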
The reality is, if you think data loss won't happen to your company, think again: 74% of companies have experienced data loss at the workplace, and 32% of companies take several days to recover from it. The scary truth is that 16% of companies that experience data loss never recover. Thinking in terms of the potential cost to the company should help you prioritize disaster recovery planning and testing, and justify the costs of the planning, infrastructure, and testing alike.
I think Benjamin Franklin said it best when he stated “If you fail to plan, you plan to fail.” When it comes to disaster recovery, failing to have a plan is a sure-fire way to set the company up for failure in the event of a disaster, and it could cost the company everything.
Creative Commons (COPYRIGHT)
Part of the Open source glossary:
Creative Commons is a nonprofit organization that offers copyright licenses for digital work.
No registration is necessary to use the Creative Commons licenses. Instead, content creators select which of the organization's six licenses best meets their goals, then tag their work so that others know under which terms and conditions the work is released. Users can search the CreativeCommons.org website for creative works such as music, videos, academic writing, code or images to use commercially or to modify, adapt or build upon.
The six categories of licenses offered are:
Attribution (CC BY)
Attribution-ShareAlike (CC BY-SA)
Attribution-NoDerivs (CC BY-ND)
Attribution-NonCommercial (CC BY-NC)
Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)
Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
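The selection step described above (a creator picks the license matching their goals) can be sketched as a small decision function. This is an illustrative mapping of the conditions onto the six license codes, not an official Creative Commons tool:

```python
def choose_cc_license(commercial: bool, derivatives: bool, share_alike: bool) -> str:
    """Map a creator's choices onto one of the six Creative Commons
    license codes. All six require attribution (BY); the other
    conditions are NonCommercial (NC), NoDerivs (ND), and
    ShareAlike (SA)."""
    parts = ["BY"]
    if not commercial:
        parts.append("NC")
    if not derivatives:
        parts.append("ND")  # share-alike is meaningless without derivatives
    elif share_alike:
        parts.append("SA")
    return "CC " + "-".join(parts)
```

For example, allowing commercial use and derivatives with no conditions yields "CC BY", while forbidding both commercial use and derivatives yields the most restrictive code, "CC BY-NC-ND".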
This was last updated in July 2013
Contributor(s): Emily McLaughlin
Customer Relationship Management
Customer relationship management (CRM) is part of a customer-centered business strategy. A fundamental part of the idea is precisely to gather as much information as possible about customers in order to add value to the offering: to provide solutions that fit their needs exactly, without creating new ones. The name CRM therefore refers to a business strategy based primarily on customer satisfaction. The same acronym also identifies the information systems that support this strategy.
The closest related concept is relationship marketing (Marketing Relacional). It is also related to clienting, one-to-one marketing, direct marketing, and similar approaches.
Please read the attached handbook.
Crowdsourced Testing
Crowdsourced Testing is a web platform that connects companies specializing in software and website development with an international network of quality assurance professionals (testers) who try out their products to find flaws and report them quickly and efficiently so they can be fixed. The clients are the companies that pay for the service, and the users are the testers responsible for the improvements. Crowdsourced Testing's testers are independent workers who work from home, all with prior experience in quality assurance for software products.
The solution to speedy mobile app delivery? It's crowdsourced testing
Sometimes you just need a lot of users playing with your app to find out how it's really working. Enter crowdsourced testing. It's the latest strategy to speed up your mobile dev.
At a time when the pressure to develop, test and release mobile apps quickly has never been more intense, the idea of crowdsourced testing is growing in popularity. The concept is simple: A crowdsourced testing company can offer thousands of testers in different locations around the world and a wide swath of devices, and by literally throwing a "crowd" at the problem, testing that might take weeks with a small internal team can be done over a weekend, said Peter Blair, vice president of marketing at Applause. And it's an idea that has apparently caught hold. According to data from market research firm Gartner Group, there were 30 crowdsourced testing companies offering fully vetted (qualified) testers operating at the end of last year, up from just 20 companies in 2015.
Priyanka Halder, director of quality assurance at HomeMe, is no stranger to crowdsourced testing. She participated in a number of "bug battles" at uTest, a software testing community that also offers crowdsourced testing opportunities. So when she joined the small startup HomeMe she immediately began thinking about a crowdsourced testing solution.
"We're a pretty small company and we needed a larger number of people looking at our app and on a tight budget," she said. "This is the perfect model for us because we can't afford a big team on our site."
"People just do things that no system, no automation and no engineer could ever predict they'd do."
Peter Blair, vice president of marketing, Applause
With crowdsourced testing it is all about the big team. Blair said Applause has over 250,000 fully vetted testers, most of whom are QA professionals with full-time jobs who do this on the side. These testers are located around the world, and are paired with "pretty much every mobile device you can think of," he said. So a crowdsourced customer wouldn't have to worry about having access to every single version of an Android phone, which Blair said is a huge selling point.
But the biggest issue, he said, is that companies are hungry to see how real users actually interface with their products. "People just do things that no system, no automation and no engineer could ever predict they'd do," he explained. "Customers who've used us just to augment their teams many times end up staying on because they like seeing the results of our exploratory testing," he said, and they can't get that information easily any other way.
Halder said she looked at a number of crowdsourced testing options before settling on Applause. The biggest plus for her was how easy it was to get the testing feedback and how mature the company's process was. "It can be a nightmare to coordinate how to get the information back from the testers. This ended up being a way for us to get more people actually using our app for less money and get all the feedback we need."
CROWDSOURCING FOR ENTERPRISE IT
10 KEY QUESTIONS (AND ANSWERS) ON CROWDSOURCING FOR ENTERPRISE IT
A starting guide for augmenting technical teams with crowdsourced design, development and data science talent
A crowdsourcing platform is essentially an open marketplace for technical talent. The requirements, timelines, and economics behind crowdsourced projects are critical to successful outcomes. Different crowdsourcing communities offer an equally varied range of payments for open innovation challenges. Crowdsourcing is meritocratic: contributions are rewarded based on value. However, the cost-efficiencies of a crowdsourced model reside in the model's direct access to talent, not in underpaying for that talent; fair market value is expected for any work output. The major cost difference between legacy sourcing models and a crowdsourcing model is (1) the ability to directly tap into technical expertise, and (2) that costs are NOT based around time or effort.
Please read the attached whitepaper.
Customer Journey Map
Customer Service Model
Mastering the Modern Customer Service Model
by Wheelhouse Enterprises
Perfecting your in-house customer service system has never been easy, until now. The cloud has made customer service tools readily available and revolutionized how they are implemented. Our newest white paper details the most modern, up-to-date customer service tools for your organization. Whether you're looking for specific tools for your contact center or CRM, we have you covered.
Please read the attached whitepaper.