Glosario KW | KW Glossary
Ontology Design | Diseño de Ontologías
Leave outdated traditional two-factor authentication in the past. Mobile authentication transforms a smartphone into a mobile smart credential. Users can now securely identify themselves simply by clicking "OK" on their device, eliminating the hassle of a traditional hard token or physical card.
Enterprise Resource Planning
Enterprise resource planning (ERP) is an industry term for the broad set of activities that helps an organization manage its business.
An important goal of ERP is to facilitate the flow of information so business decisions can be data-driven. ERP software suites are built to collect and organize data from various levels of an organization to provide management with insight into key performance indicators (KPIs) in real time.
ERP software modules can help an organization's administrators monitor supply chain, inventory, purchasing, finance, product lifecycle, projects, human resources and other mission-critical components of a business through a web portal or series of interconnected executive dashboards. In order for an ERP software deployment to be useful, however, it needs to be integrated with other software systems the organization uses. For this reason, deployment of a new ERP system in-house can involve considerable business process analysis, employee retraining and back-end information technology (IT) support for database integration, business intelligence and reporting.
Legacy ERP systems tend to be architected as large, complex homogeneous systems which do not lend themselves easily to a software-as-a-service (SaaS) delivery model. As more companies begin to store data in the cloud, however, ERP vendors are responding with cloud-based services to perform some functions of ERP -- particularly those relied upon by mobile users. An ERP implementation that uses both on-premises ERP software and cloud ERP services is called two-tiered ERP.
How to deploy temporary desktops and survive the attempt
The scenarios are many and usually challenging: task outsourcing, contact centers with many operators dedicated to specific tasks, offshoring, temporary partnerships with third parties to carry out projects of limited duration... The common factor in all these scenarios is the need to deploy desktops, without that deployment creating security risks or difficulties when it comes time to scale. VMware's executives propose a solution.
As companies choose to focus on their core business and handle the rest of their tasks as separate or remote operations, the infrastructure needed to support those operations becomes more complex. The particular challenges companies face when outsourcing or offshoring stem from geographic distribution, the type of user, the sensitivity of the information, service levels (SLAs) and operating costs. Outsourcing, offshoring (moving operations to other countries) and the ability to scale the infrastructure up and down according to the projects at hand (staff hired for specific projects or on business demand) are some of the most frequent scenarios in these cases, and the common factor among them is that they all require flexible, economical and secure desktop deployment.
The Business Process Desktop can help companies meet the demands of these scenarios. It is a working environment for users who perform one or very few specific tasks or activities. This class of deployment helps minimize the risk arising from challenges such as:
To address the needs of this type of customer, VMware proposes some concrete solutions:
VMware's Business Process Desktop enables organizations facing the challenges described above to increase security and regulatory compliance by centralizing critical business information. At the same time, it simplifies and centralizes desktop management, lowering operating costs. It also makes it possible to meet and exceed SLAs by ensuring fast, uninterrupted end-user access to data and applications across the WAN. Finally, it can ensure that desktops are protected (backup, disaster recovery) and can be deployed on demand (as a service), in line with the changing dynamics of the business.
Evaluating the different types of DBMS products
Expert contributor Craig S. Mullins describes the types of database management system products on the market and outlines their strengths and weaknesses.
by: Craig S. Mullins
The database management system (DBMS) is the heart of today's operational and analytical business systems. Data is the lifeblood of the organization and the DBMS is the conduit by which data is stored, managed, secured and served to applications and users. But there are many different forms and types of DBMS products on the market, and each offers its own strengths and weaknesses.
Relational databases, or RDBMSes, became the norm in IT more than 30 years ago as low-cost servers became powerful enough to make them widely practical and relatively affordable. But some shortcomings became more apparent in the Web era and with the full computerization of business and much of daily life. Today, IT departments trying to process unstructured data or data sets with a highly variable structure may also want to consider NoSQL technologies. Applications that require high-speed transactions and rapid response rates, or that perform complex analytics on data in real time or near real time, can benefit from in-memory databases. And some IT departments will want to consider combining multiple database technologies for some processing needs.
The DBMS is central to modern applications, and choosing the proper database technology can affect the success or failure of your IT projects and systems. Today's database landscape can be complex and confusing, so it is important to understand the types and categories of DBMSes, along with when and why to use them. Let this document serve as your roadmap.
DBMS categories and models
Until relatively recently, the RDBMS was the only category of DBMS worth considering. But the big data trend has brought new types of worthy DBMS products that compete well with relational software for certain use cases. Additionally, an onslaught of new technologies and capabilities are being added to DBMS products of all types, further complicating the database landscape.
The RDBMS: However, the undisputed leader in terms of revenue and installed base continues to be the RDBMS. Based on the sound mathematics of set theory, relational databases provide data storage, access and protection with reasonable performance for most applications, whether operational or analytical in nature. For more than three decades, the primary operational DBMS has been relational, led by industry giants such as Oracle, Microsoft (SQL Server) and IBM (DB2). The RDBMS is adaptable to most use cases and reliable; it also has been bolstered by years of use in industry applications at Fortune 500 (and smaller) companies. Of course, such stability comes at a cost: RDBMS products are not cheap.
Support for ensuring transactional atomicity, consistency, isolation and durability -- collectively known as the ACID properties -- is a compelling feature of the RDBMS. ACID compliance guarantees that all transactions are completed correctly or that a database is returned to its previous state if a transaction fails to go through.
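A minimal sketch can make the atomicity guarantee concrete. The example below uses Python's standard-library sqlite3 module (SQLite is an ACID-compliant RDBMS); the table and account names are invented for illustration. A transfer that would violate a constraint fails partway through, and the rollback returns the database to its previous state.

```python
import sqlite3

# Illustrative accounts table; the CHECK constraint forbids negative balances.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY,"
             " balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# Transfer more than alice has: the first UPDATE succeeds, the second
# violates the CHECK constraint. The connection context manager commits
# on success and rolls back on error, so no partial update survives.
try:
    with conn:
        conn.execute("UPDATE accounts SET balance = balance + 200"
                     " WHERE name = 'bob'")
        conn.execute("UPDATE accounts SET balance = balance - 200"
                     " WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass  # the failed transaction was rolled back as a unit

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # both balances are unchanged: {'alice': 100, 'bob': 50}
```

Without atomicity, bob's credit would survive while alice's debit failed, leaving the data inconsistent.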
Given the robust nature of the RDBMS, why are other types of database systems gaining popularity? Web-scale data processing and big data requirements challenge the capabilities of the RDBMS. Although RDBMSes can be used in these realms, DBMS offerings with more flexible schemas, less rigid consistency models and reduced processing overhead can be advantageous in a rapidly changing and dynamic environment. Enter the NoSQL DBMS.
The NoSQL DBMS: Where the RDBMS requires a rigidly defined schema, a NoSQL database permits a flexible schema, in which every data element need not exist for every entity. For loosely defined data structures that may also evolve over time, a NoSQL DBMS can be a more practical solution.
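The flexible-schema idea can be sketched with plain Python dictionaries standing in for documents in a document store; the field names here are invented for illustration. Each record carries only the fields that apply to it, and new fields can appear without altering a table definition.

```python
# Two "documents" in the same logical collection: a company record and a
# consumer record share an id and name but otherwise have different fields.
customers = [
    {"id": 1, "name": "Acme Ltd", "vat_number": "GB123456789"},
    {"id": 2, "name": "J. Smith", "loyalty_points": 420, "newsletter": True},
]

# Queries must tolerate missing fields rather than assume a fixed row shape.
with_vat = [c for c in customers if "vat_number" in c]
points = sum(c.get("loyalty_points", 0) for c in customers)
print(len(with_vat), points)  # 1 420
```

In an RDBMS, the same data would need either nullable columns for every possible field or a schema migration each time a new field appeared.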
Another difference between NoSQL and relational DBMSes is how data consistency is provided. The RDBMS can ensure the data it stores is always consistent. Most NoSQL DBMS products offer a more relaxed, eventually consistent approach (though some provide varying consistency models that can enable full ACID support). To be fair, most RDBMS products also offer varying levels of locking, consistency and isolation that can be used to implement eventual consistency, and many NoSQL DBMS products are adding options to support full ACID compliance.
So NoSQL addresses some of the problems encountered by RDBMS technologies, making it simpler to work with large amounts of sparse data. Data is considered to be sparse when not every element is populated and there is a lot of "empty space" between actual values. For example, think of a matrix with many zeroes and only a few actual values.
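The matrix analogy can be made concrete with a dictionary-of-keys representation, a common way to store sparse data: only the populated cells are kept, and the "empty space" is implied.

```python
# A 1,000 x 1,000 matrix with only three non-zero values: a dense layout
# would store a million cells, while a sparse (dictionary-of-keys) layout
# stores only the populated ones -- the same idea that lets a NoSQL record
# omit attributes an entity does not have.
rows, cols = 1000, 1000
sparse = {(3, 7): 1.5, (412, 9): -2.0, (999, 999): 8.25}

def lookup(matrix, i, j):
    """Return the stored value, or 0.0 for the implied empty space."""
    return matrix.get((i, j), 0.0)

print(len(sparse), "stored values instead of", rows * cols)
print(lookup(sparse, 412, 9))  # -2.0
print(lookup(sparse, 0, 0))    # 0.0
```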
But while certain types of data and use cases can benefit from the NoSQL approach, using NoSQL databases can come at the price of giving up transactional integrity, flexible indexing and ease of querying. Further complicating the issue is that NoSQL is not a specific type of DBMS, but a broad descriptor covering four primary categories of DBMS offerings: key-value stores, document databases, wide-column stores and graph databases.
Each of these types of NoSQL DBMS uses a different data model with different strengths, weaknesses and use cases to consider. A thorough evaluation of NoSQL DBMS technology requires more in-depth knowledge of each NoSQL category, along with the data and application needs that must be supported by the DBMS.
The in-memory DBMS: One last major category of DBMS to consider is the in-memory DBMS (IMDBMS), sometimes referred to as a main memory DBMS. An IMDBMS relies mostly on memory to store data, as opposed to disk-based storage.
The primary use case for the IMDBMS is to improve performance. Because the data is maintained in memory, as opposed to on a disk storage device, I/O latency is greatly reduced. Mechanical disk movement, seek time and transfer to a buffer can be eliminated because the data is immediately accessible in memory.
An IMDBMS can also be optimized to access data in memory, as opposed to a traditional DBMS that is optimized to access data from disk. IMDBMS products can reduce overhead because the internal algorithms usually are simpler, with fewer CPU instructions.
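The idea can be tried out with SQLite's in-memory mode. SQLite is not a full IMDBMS, but connecting to ":memory:" keeps the entire database in RAM, so reads are served with no disk I/O in the path; the table and figures below are purely illustrative.

```python
import sqlite3
import time

# ":memory:" keeps the whole database in RAM instead of a file on disk.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
mem.executemany("INSERT INTO readings VALUES (?, ?)",
                ((i, i * 0.5) for i in range(10_000)))

# An aggregate over 10,000 rows runs entirely against memory-resident pages.
start = time.perf_counter()
total = mem.execute("SELECT SUM(value) FROM readings").fetchone()[0]
elapsed = time.perf_counter() - start
print(f"summed 10,000 in-memory rows in {elapsed * 1000:.2f} ms -> {total}")
```

A dedicated IMDBMS goes further than this sketch: its internal algorithms are designed for memory-resident data from the start rather than adapted from a disk-oriented engine.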
The multi-model DBMS: A growing category of DBMS is the multi-model DBMS, which supports more than one type of storage engine. Many NoSQL offerings support more than one data model -- for example, document and key-value. RDBMS products are evolving to support NoSQL capabilities, such as adding a column store engine to their relational core.
Other DBMS categories exist, but are not as prevalent as relational, NoSQL and in-memory:
As you examine the DBMS landscape, you will inevitably encounter many additional issues that require consideration. At the top of that list is platform support. The predominant computing environments today are Linux, Unix, Windows and the mainframe. Not every DBMS is supported on each of these platforms.
Another consideration is vendor support. Many DBMS offerings are open source, particularly in the NoSQL world. The open source approach increases flexibility and reduces initial cost of ownership. However, open source software lacks support unless you purchase a commercial distribution. Total cost of ownership can also be higher when you factor in the related administration, support and ongoing costs.
You might also choose to reduce the pain involved in acquisition and support by using a database appliance or deploying in the cloud. A database appliance is a preinstalled DBMS sold on hardware that is configured and optimized for database applications. Using an appliance can dramatically reduce the cost of implementation and support because the software and hardware are designed to work together.
Implementing your databases in the cloud goes one step further. Instead of implementing a DBMS at your shop, you can contract with a cloud database service provider to implement your databases using the provider's service.
The next step
If your site is considering a DBMS, it's important to determine your specific needs as well as examine the leading DBMS products in each category discussed here. Doing so will require additional details on each of the different types of DBMS, as well as a better understanding of the specific use cases for which each database technology is optimized. Indeed, there are many variables that need to be evaluated to ensure you make a wise decision when procuring database management system software.
Learn about some database management rules of thumb from author Craig S. Mullins
See why consultant William McKnight says you should give some thought to in-memory databases
This was first published in January 2015
Posted by: Margaret Rouse
An exit strategy is a planned approach to terminating a situation in a way that will maximize benefit and/or minimize damage.
The idea of having a strategic approach can be applied to exiting any type of situation but the term is most often used in a business context in reference to partnerships, investments or jobs.
Understanding the most graceful exit strategy should be part of due diligence when establishing partnerships and vetting potential suppliers and service providers. In cloud services, for example, termination or early-withdrawal fees, cancellation notification and data extraction are just a few of the factors to be considered.
An entrepreneur's plan for exiting a startup might include selling the company at a profit or running the business as long as the return on investment (ROI) is attractive and simply terminating it when that ceases to be the case. In the stock market, an exit strategy might include a stop-loss order that instigates a sale when the value of a stock drops below a specified price.
In an employment context, exit strategies are becoming increasingly important not just for corporate executives but for all employees. People change jobs much more frequently than they did in the past, whether voluntarily or involuntarily through firing, downsizing or outsourcing. An employee's exit strategy might include negotiating a severance agreement, updating a resume, maintaining lists of potentially helpful contacts and saving enough money to cover a period of unemployment.
See also: supplier risk management
Posted by: Margaret Rouse
An extension strategy is a practice used to increase the market share for a given product or service and thus keep it in the maturity phase of the marketing product lifecycle rather than letting it go into decline.
Extension strategies include rebranding, price discounting and seeking new markets. Rebranding is the creation of a new look and feel for an established product in order to differentiate the product from its competitors. At its simplest, rebranding may consist of creating updated packaging to change the perception of the product.
FIDO (Fast Identity Online) definition
Posted by: Margaret Rouse
FIDO (Fast Identity Online) is a set of technology-agnostic security specifications for strong authentication. FIDO is developed by the FIDO Alliance, a non-profit organization formed in 2012.
FIDO specifications support multifactor authentication (MFA) and public key cryptography. A major benefit of FIDO-compliant authentication is that users don't need to create complex passwords, deal with strong-password rules or go through recovery procedures when they forget a password. Unlike password databases, FIDO stores personally identifiable information (PII), such as biometric authentication data, locally on the user's device to protect it. FIDO's local storage of biometrics and other personal identification is intended to ease user concerns about personal data stored on an external server in the cloud. By abstracting the protocol implementation with application programming interfaces (APIs), FIDO also reduces the work required for developers to create secure logins for mobile clients running different operating systems (OSes) on different types of hardware.
FIDO supports the Universal Authentication Framework (UAF) protocol and the Universal Second Factor (U2F) protocol. With UAF, the client device creates a new key pair during registration with an online service and retains the private key; the public key is registered with the online service. During authentication, the client device proves possession of the private key to the service by signing a challenge, which involves a user-friendly action such as providing a fingerprint, entering a PIN or speaking into a microphone. With U2F, authentication requires a strong second factor such as a Near Field Communication (NFC) tap or USB security token.
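The registration and challenge/response flow can be sketched in a few lines of Python. One loud caveat: real UAF uses an asymmetric key pair (private key kept on the device, public key registered with the service), and the Python standard library has no asymmetric signing, so this sketch deliberately swaps in an HMAC over a shared secret purely to show the shape of the message flow; it is not FIDO's actual cryptography, and the names are invented.

```python
import hashlib
import hmac
import secrets

# Registration: the device creates a credential and the service records
# what it needs for later verification. In real FIDO the service stores
# a public key; here (HMAC stand-in) it stores the shared secret.
device_key = secrets.token_bytes(32)                    # stays on the device
service_record = {"user": "alice", "key": device_key}   # service-side record

# Authentication, step 1: the service sends a fresh random challenge.
challenge = secrets.token_bytes(16)

# Step 2: the device "signs" the challenge -- in real UAF this happens
# only after a local gesture such as a fingerprint or PIN (not shown).
response = hmac.new(device_key, challenge, hashlib.sha256).digest()

# Step 3: the service verifies the response against its record.
expected = hmac.new(service_record["key"], challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))
```

Because each challenge is random and fresh, a captured response cannot be replayed, and no password ever crosses the wire.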
The history of the FIDO Alliance
In 2007, PayPal was trying to increase security by introducing MFA to its customers in the form of its one-time password (OTP) key fob: Secure Key. Although Secure Key was effective, adoption rates were low -- it was generally used only by a few security-conscious individuals. The key fob complicated authentication, and most users just didn't feel the need to use it.
In talks exploring the idea of integrating fingerprint-scanning technology into PayPal, Ramesh Kesanupalli (then CTO of Validity Sensors) spoke to Michael Barrett (then PayPal's CISO). It was Barrett's opinion that an industry standard was needed that could support all authentication hardware. Kesanupalli set out from there to bring together industry peers with that end in mind. The FIDO Alliance was founded as a result of those meetings. The Alliance went public in February 2013, and since that time many companies have become members, including Google, ARM, Bank of America, MasterCard, Visa, Microsoft, Samsung, LG, Dell and RSA. Microsoft has announced the inclusion of FIDO authentication in Windows 10.
The proliferation of smartphones and other mobile devices continues to drive demand for standards that support multifactor authentication. Methods such as biometrics are being incorporated into smartphones and PCs to prevent identity theft. Today a variety of products are on the market, including EMC RSA Authentication Manager, Symantec Verisign VIP, CA Strong Authentication and Vasco Identikey Digipass.
For enterprise file sync-and-share, security is king
IT should rest easy about where their data lives in the consumerization age, but there's no one-size-fits-all approach to reaching that peace of mind.
The thought of data ending up in the wrong hands can keep IT admins awake at night.
When it comes to enterprise file sync-and-share options, IT can take many different approaches to secure access on all devices.
Security should be top of mind when considering an enterprise file sync-and-share platform, said James Gordon, first VP of IT and operations at Needham Bank in its namesake city in Massachusetts.
"[Security] is the end-all, be-all," he said. "When the IT admin doesn't have the apps that people perceive they need to do their job efficiently on their device, you've created this yin-yang symbol of give and take, or fighting back."
Vendors small and large offer products for data collaboration, sharing and storage both through cloud and on-premises installations. Enterprises can also secure third-party apps through an enterprise mobility management (EMM) platform. Here's how three companies take each of these individual approaches.
On-premises and encryption security
U.S. companies now have a secure enterprise file sync-and-share option previously only available across the Atlantic.
Two years ago, Berger Group, a financial advisory organization with companies based in Italy and Switzerland, began to look for a secure way to store, transfer and edit confidential documents between its two companies and third parties like clients and legal counsel.
Berger Group researched data loss prevention vendors but found implementation and maintenance would require additional staff and changes to its infrastructure it couldn't afford, said Claudio Ciapetti, Berger Group's controller and IT operations manager.
Eventually the company found Boole Server, an enterprise file sync-and-share vendor based in Milan, Italy, with an on-premises product that provides encryption for data in transit, at rest, within applications and even when in use. In addition to 256-bit Advanced Encryption Standard, Boole Server uses a proprietary algorithm that applies a 2048-bit random encryption key to each file.
Enterprises hold the encryption keys for Boole Server, unlike some cloud-based enterprise file sync-and-share competitors such as Dropbox and Amazon Zocalo.
With Boole Server, Berger Group maintains ownership of its files even when accessed outside the company and sets restrictions on actions like copying, pasting and printing.
"We set it up to make sure a third party can connect with our server to look at documentation and make amendments, but still leave the document in our server," Ciapetti said.
Boole Server recently launched its product offerings in the U.S. after previously being available only in Europe. Boole Server comes in three versions: Small to Medium Business (SMB), Corporate and Enterprise. Storage space is capped at 1 TB for SMB and unlimited for Corporate and Enterprise. Enterprise customers receive an unlimited number of guest and user profiles per license, while Corporate is capped at 1,000 and SMB at 150. Boole Server is available as a one-time purchase starting at $10,000 for SMB and Corporate and $25,000 for Enterprise, which includes two server licenses.
Securing highly regulated industries
Security is even more important in highly regulated industries, and one enterprise file sync-and-share company builds its products specifically for those industries.
Comfort Care Services Ltd., based in Slough, England, provides support for adults with mental health and learning disabilities to help them integrate back into communities after leaving hospital care. As recently as three years ago there were 15 corporate-issued laptops in the whole company and most other business was conducted on paper, said Gee Bafhtiar, director of IT operations at Comfort Care Services.
"It's cumbersome and takes an amazing amount of time to get data from one place to another," Bafhtiar said.
Comfort Care Services began its technological turnaround by implementing desktop virtualization from Terminal Service Plus, but still needed a quicker and more secure option for document editing, sharing and collaborating with external users.
When a patient sought to join Comfort Care Services, it previously took upward of a month to complete paperwork that involved sending medical records and support plans back and forth between the patient, Comfort Care Services and government commissioning bodies. While the company continues to use Terminal Service Plus, only internal users access the system.
The company considered Box and Citrix for enterprise file sync-and-share but found neither offered the granular control for auditing capabilities Comfort Care Services required, Bafhtiar said. Enter Workshare, which focuses on secure collaboration products and applications for highly regulated industries such as legal, government, finance and healthcare. The London-based company also allows customers to hold encryption keys.
Comfort Care Services uses Workshare Connect, a cloud application providing collaboration and file sharing among employees and outside parties with permitted access. The company found that Workshare Connect offered granular controls it could not find in other platforms, both over access for specific internal and external users and over tracking changes to documents, Bafhtiar said.
At first, Comfort Care Services couldn't conduct remote wipes of files in Workshare if a device was lost or stolen or if an employee left the company. Workshare later added that capability.
"There's always a compromise that needs to be made but we found that we had to do a lot less compromising with Workshare," Bafhtiar said.
Through Workshare, Comfort Care Services can release an individual document to anybody it chooses by inviting them in and giving them access to that document for a limited amount of time. The company can see what changes are made and who made them for security and auditability.
Comfort Care Services has simplified documentation processing and cut the approval time for new patients in half. Employees can use Web and mobile versions of the Workshare app on laptops and mobile devices to securely edit and share documents.
Workshare is available in four formats that range from $30 to $175 per user per year. The formats include Protect for metadata removal and policies, Compare for document version management, Connect for secure file collaboration and Workshare Pro 8, which combines the other three formats into one platform.
EMM platforms secure cloud apps, repositories
Yet another option for IT is using file sync-and-share options directly from EMM platforms. Some of these include Citrix's ShareFile, AirWatch by VMware's Secure Content Locker, Good Technology's Secure Mobility Solution and MobileIron's Docs@Work.
MobileIron recently updated Docs@Work to allow companies to connect with cloud services including Box, Dropbox, Microsoft Office 365 and SharePoint Online. Users can search, download and save documents across all of those different services directly within the Docs@Work browser. From there, documents can be edited both locally on the device and remotely through the browser.
Secure Content Locker and ShareFile, by comparison, allow companies to integrate with content repositories for file access. ShareFile uses Personal Cloud Connectors to access Box, Dropbox, Google Drive and OneDrive accounts and allows users to edit files stored in content repositories like SharePoint and EMC Documentum.
MobileIron wants to ensure a consistent user experience across platforms with the update, said Needham Bank's Gordon, whose bank uses Docs@Work along with the rest of MobileIron's EMM platform.
Docs@Work helps Needham Bank employees securely access files within SharePoint on their iOS, Android and Windows Phone devices. It allows Gordon's IT department to keep track of which users access files and logs the access time.
"You're authenticating not only the user but the device, because the users are already enrolled with MobileIron certificates," Gordon said.
The connection of Docs@Work with these cloud applications is the first part of an overall secure content and collaboration platform that MobileIron currently has in development and would like to roll out to customers within the next year. This includes file-level encryption for files located in those cloud platforms, the company said.
Docs@Work is not available as a standalone product and can only be purchased as part of the Gold and Platinum bundles of MobileIron's EMM platform. AirWatch Secure Content Locker and ShareFile, by comparison, are available standalone. There is no additional cost for existing MobileIron customers, and list pricing for the bundles starts at $4 per device per month.
Flash Storage for Database Workloads
Flash storage technologies can immediately address the performance and I/O latency problems encountered by many database deployments.
Please read the attached whitepapers.
Fuzz Testing (Fuzzing)
Fuzz testing, or fuzzing, is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by inputting massive amounts of random data, called fuzz, to the system in an attempt to make it crash. If a vulnerability is found, a tool called a fuzz tester (or fuzzer) indicates potential causes. Fuzz testing was originally developed by Barton Miller at the University of Wisconsin in 1989.
Fuzzers work best for problems that can cause a program to crash, such as buffer overflow, cross-site scripting, denial of service attacks, format bugs and SQL injection. These schemes are often used by malicious hackers intent on wreaking the greatest possible amount of havoc in the least possible time. Fuzz testing is less effective for dealing with security threats that do not cause program crashes, such as spyware, some viruses, worms, Trojans and keyloggers.
Fuzz testing is simple and offers a high benefit-to-cost ratio. Fuzz testing can often reveal defects that are overlooked when software is written and debugged. Nevertheless, fuzz testing usually finds only the most serious faults. Fuzz testing alone cannot provide a complete picture of the overall security, quality or effectiveness of a program in a particular situation or application. Fuzzers are most effective when used in conjunction with extensive black box testing, beta testing and other proven debugging methods.
"Fuzz testing is most useful for software that accepts input documents, images, videos or files that can carry harmful content. These are the serious bugs that it's worth investing to prevent." - David Molnar
Posted by Margaret Rouse
Group think (also spelled groupthink) is a phenomenon that occurs when a group's need for consensus supersedes the judgment of individual group members. Group think often occurs when there is a time constraint and individuals put aside personal doubts so a project can move forward, or when one member of the group dominates the decision-making process.
In a group think scenario, consensus is often reached through social pressure or through workflow processes that cannot accommodate change. Group think, which carries a negative connotation, can be contrasted with collaboration, a scenario in which individual group members are encouraged to be creative, speak out and weigh many options before arriving at a consensus.
In acceding to group think, group members often choose not to explore alternative solutions as part of the decision-making process, either because it is easier to go with the flow or because they do not want to be perceived as troublemakers and lose status within the group. As such, group think can blind individuals to the future consequences, warnings and risks that result from their choices.
Guide to Retirement Income
by Fisher Investments
Please read the attached paper.
H-1B
Posted by: Margaret Rouse
H-1B is a United States Immigration Services visa classification that allows employers to hire highly skilled foreign workers who possess the theoretical and practical application of a body of specialized knowledge. Applicants must have a bachelor's degree or its equivalent in the specialty.
In addition to specialty occupations in fields such as science, medicine, health, education, information technology and business, the visa also applies to foreign nationals seeking to perform services of exceptional merit and ability related to a Department of Defense (DOD) research and development project, or services as a fashion model of distinguished merit or ability.
To be eligible for an H-1B visa, a foreign national must have an employer sponsor. The employer is required to declare or demonstrate that a U.S. worker will not be displaced by the H-1B applicant, and to file a petition with United States Citizenship and Immigration Services (USCIS) on the foreign national's behalf. In 2015, two-thirds of the petitions granted were for employees in computer-related occupations.
Current law limits the number of skilled foreign workers who can obtain a new visa to 65,000 annually, with an additional 20,000 available under the H-1B advanced-degree exemption. Foreign employees of government research organizations, institutions of higher education and nonprofit research organizations may be exempt from the cap.
Applications for new visas are accepted each year starting April 1. If the number of applications exceeds the congressionally approved cap after five days, a computerized selection process (sometimes referred to as a lottery) selects 20,000 advanced-degree applications from the applicant pool. Applicants who are not selected are added to the regular pool, and the computerized selection process continues until the remaining 65,000 visas have been granted.
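The two-phase selection process described above can be sketched as a short, purely illustrative Python function. The function name, the fixed seed and the petition IDs are hypothetical conveniences for the example, not part of any official USCIS process:

```python
import random

def h1b_lottery(advanced_degree, regular, adv_cap=20_000, regular_cap=65_000, seed=0):
    """Illustrative two-phase random selection.

    advanced_degree and regular are lists of petition IDs. Advanced-degree
    petitions not picked in the first phase fall into the regular pool.
    """
    rng = random.Random(seed)
    # Phase 1: draw up to adv_cap winners from the advanced-degree pool.
    adv_winners = rng.sample(advanced_degree, min(adv_cap, len(advanced_degree)))
    # Unselected advanced-degree petitions join the regular pool.
    leftover = [p for p in advanced_degree if p not in set(adv_winners)]
    pool = regular + leftover
    # Phase 2: draw up to regular_cap winners from the combined pool.
    reg_winners = rng.sample(pool, min(regular_cap, len(pool)))
    return adv_winners, reg_winners
```

With small toy pools, the function simply returns two disjoint lists of winners whose sizes match the caps.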
The length of stay permitted by an H-1B visa is up to three years, but extensions are allowed. H-1B visa holders who want to continue working in the United States after six years, but who have not obtained permanent residency, must live outside the United States for one year before applying for a new H-1B visa. The maximum duration of an H-1B visa is 10 years, for exceptional U.S. Department of Defense work.
In 2017, bills to reform the H-1B program were introduced in both the House and the Senate.
Hadoop 2 and YARN
Hadoop 2 definition
Posted by: Margaret Rouse
Apache Hadoop 2 (Hadoop 2.0) is the second iteration of the Hadoop framework for distributed data processing.
Hadoop 2 adds support for running non-batch applications through the introduction of YARN, a redesigned cluster resource manager that eliminates Hadoop's sole reliance on the MapReduce programming model. Short for Yet Another Resource Negotiator, YARN puts resource management and job scheduling functions in a separate layer beneath the data processing one, enabling Hadoop 2 to run a variety of applications. Overall, the changes made in Hadoop 2 position the framework for wider use in big data analytics and other enterprise applications. For example, it is now possible to run event processing as well as streaming, real-time and operational applications. The capability to support programming frameworks other than MapReduce also means that Hadoop can serve as a platform for a wider variety of analytical applications.
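As a point of reference for what YARN generalizes beyond, the MapReduce programming model itself can be sketched in a few lines of plain, single-process Python. This is illustrative only; real Hadoop distributes these map, shuffle and reduce phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Emit (word, 1) pairs, analogous to a Mapper.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word, analogous to a Reducer.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data analytics", "big data processing"]
pairs = list(chain.from_iterable(map_phase(d) for d in docs))
counts = reduce_phase(shuffle(pairs))
# counts == {"big": 2, "data": 2, "analytics": 1, "processing": 1}
```

Hadoop 1.x hard-wired this map/shuffle/reduce pipeline into the framework; YARN's contribution is letting other processing models run on the same cluster.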
Hadoop 2 also includes new features designed to improve system availability and scalability. For example, it introduced a Hadoop Distributed File System (HDFS) high-availability (HA) feature that brings a new NameNode architecture to Hadoop. Previously, Hadoop clusters had one NameNode that maintained a directory tree of HDFS files and tracked where data was stored in a cluster. The Hadoop 2 high-availability scheme allows users to configure clusters with redundant NameNodes, removing the chance that a lone NameNode will become a single point of failure (SPoF) within a cluster. Meanwhile, a new HDFS federation capability lets clusters be built out horizontally with multiple NameNodes that work independently but share a common data storage pool, offering better compute scaling than Apache Hadoop 1.x.
Hadoop 2 also added support for Microsoft Windows and a snapshot capability that makes read-only point-in-time copies of a file system available for data backup and disaster recovery (DR). In addition, the revision offers all-important binary compatibility with existing MapReduce applications built for Hadoop 1.x releases.
Apache Hadoop YARN (Yet Another Resource Negotiator) definition
Posted by: Margaret Rouse
Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster management technology.
YARN is one of the key features in the second-generation Hadoop 2 version of the Apache Software Foundation's open source distributed processing framework. Originally described by Apache as a redesigned resource manager, YARN is now characterized as a large-scale, distributed operating system for big data applications.
In 2012, YARN became a sub-project of the larger Apache Hadoop project. Sometimes called MapReduce 2.0, YARN is a software rewrite that decouples MapReduce's resource management and scheduling capabilities from the data processing component, enabling Hadoop to support more varied processing approaches and a broader array of applications. For example, Hadoop clusters can now run interactive querying and streaming data applications simultaneously with MapReduce batch jobs. The original incarnation of Hadoop closely paired the Hadoop Distributed File System (HDFS) with the batch-oriented MapReduce programming framework, which handles resource management and job scheduling on Hadoop systems and supports the parsing and condensing of data sets in parallel.
YARN combines a central resource manager that reconciles the way applications use Hadoop system resources with node manager agents that monitor the processing operations of individual cluster nodes. Running on commodity hardware clusters, Hadoop has attracted particular interest as a staging area and data store for large volumes of structured and unstructured data intended for use in analytics applications. Separating HDFS from MapReduce with YARN makes the Hadoop environment more suitable for operational applications that can't wait for batch jobs to finish.
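The division of labor described above -- a central resource manager granting containers based on capacity tracked for each node -- can be illustrated with a toy Python sketch. The class and method names here are invented for illustration and are not the real YARN API:

```python
class NodeManager:
    """Stands in for a per-node agent reporting its free capacity."""
    def __init__(self, name, memory_mb):
        self.name = name
        self.free_mb = memory_mb

class ResourceManager:
    """Stands in for the central scheduler that hands out containers."""
    def __init__(self, nodes):
        self.nodes = nodes

    def allocate(self, request_mb):
        # Place the container on the node with the most free memory.
        node = max(self.nodes, key=lambda n: n.free_mb)
        if node.free_mb < request_mb:
            return None  # no node can satisfy the request right now
        node.free_mb -= request_mb
        return node.name

rm = ResourceManager([NodeManager("node1", 4096), NodeManager("node2", 8192)])
print(rm.allocate(2048))  # node2, since it has the most free memory
```

Real YARN schedulers (capacity, fair) are far more sophisticated, but the shape is the same: applications ask the resource manager for containers, and node managers run and monitor them.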
See also: yacc (yet another compiler compiler)
Please read the attached whitepaper about "The Hitchhiker's Guide to Hadoop 2"
How to move from Web to mobile business apps
by Amy Reichert
Moving from Web to mobile business apps goes beyond reaching out to customers. Mobile apps extend enterprise application functionality to mobile workers.
Solid mobile business apps offer only a subset of features available in enterprise Web applications. When moving from Web to mobile apps, a development team's biggest challenge is deciding which features to develop for mobile apps and how to deliver them. With the right set of functions in place, mobile business apps drive productivity, delight users and easily provide ROI.
It's difficult to create mobile business apps that remain useful to users after the majority of features are stripped away. It's critical that the mobile application perform the basic features well -- and the more features it can handle, the better.
In order to present a mobile option quickly, many corporations attempt to outsource mobile application development. If you choose this road, be sure to explicitly define the requirements. Otherwise, you might find that your mobile business apps lack key functionality.
Determine the five main features required
The first rule of thumb is not to reduce a mobile user's productivity. In other words, figure out which application features customers use or need most on a mobile device. Remember, some features may not translate well, so they may require additional development effort and creativity to provide the most value.
For example, a mobile version of an electronic health record (EHR) application for hospital physicians needs access to nearly all of the EHR's features. Physicians need the ability to enter and edit patient data, such as current medications, as well as view existing information. Features related to patient insurance, however, may not be as important while doctors are making their rounds.
Determine which features provide the core functionality of the application and reproduce them in the mobile version. Reproducing a full application is unrealistic, so it's critical to pick the application's top functional features.
Include offline features in your mobile app
The ability to work offline is a required feature for many mobile business apps. Mobile connectivity has improved over the years, but it's not perfect. Users may not be able to connect to the network for various reasons. Don't rely on your end users having steady Internet connections -- even for the duration of a single session.
Mobile applications that provide offline features allow users to continue working in the application even when the device is not connected. Users can store work until they are able to reconnect. This is similar to saving work on a laptop, and then connecting to upload or send data to another location.
An example is allowing a physician to create orders for a patient and cache them in a file until they are ready or able to connect and update the record. Users can save email or text documents and place them in "draft" status until they connect. In this manner, users can continue working and save their work to upload for another time.
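The caching pattern described above -- save work locally in "draft" status, then flush it when connectivity returns -- might be sketched like this. This is a minimal, hypothetical Python example; the class name, the JSON file format and the upload callback are assumptions, not a prescribed design:

```python
import json
import os

class DraftQueue:
    """Caches records on disk as drafts until they can be uploaded."""

    def __init__(self, path):
        self.path = path

    def save_draft(self, record):
        # Append the record to the local cache in "draft" status.
        drafts = self._load()
        drafts.append({"status": "draft", "record": record})
        with open(self.path, "w") as f:
            json.dump(drafts, f)

    def sync(self, upload):
        # Try to upload every cached draft; keep any that fail for a retry.
        remaining = [d for d in self._load() if not upload(d["record"])]
        with open(self.path, "w") as f:
            json.dump(remaining, f)
        return len(remaining)

    def _load(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)
```

In the physician example, each order would be a record saved via save_draft while offline, and sync would run once the device reconnects, passing a function that posts the record to the EHR back end.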
Provide configuration options
Another consideration when planning to move an application from Web to mobile is how many configuration options to retain. As with features, the development design needs to narrow down the available options. Determine which configuration options customers use the most and which match up with the features selected for the mobile application version. Make sure a feature isn't included without its configuration options.
Similar to features, not all configuration options are necessary. However, to avoid reducing the mobile application's usefulness, it is critical to include configuration settings related to the included features. Providing useful and valuable features is essential for the success of mobile business apps.
Streamline the user experience
The user experience on a mobile device is different from that of a Web application, and not just because the screen is smaller. Without proper planning, smaller screen sizes can force users to scroll excessively or click through too many screens -- both of which are distractions to avoid.
More importantly, mobile business apps need to be simple to understand and learn. Try to keep the mobile version visually similar to the original Web version, using consistent wording and iconography. Try to keep menu options in the same order to prevent users from having to go on a treasure hunt to find them. The simpler it is for the end user, the more productive they will be.
If a software development team needs to move an application from Web to mobile, it's valuable to take the time and determine which application features need to be present in the mobile version, and then create a development plan and timeline. Keep the end users in mind and how they plan to use the mobile version. Many times, slapping together a mobile version that only allows users to view data or records is not useful enough. Build what the application customers need rather than being restricted by available development time. The essential mobile application must be feature rich, function in similar ways to the Web app, and -- above all -- not create additional work or negatively affect a user's productivity.
HPE Hyper Converged
HPE unveils a new SimpliVity appliance
by Peter Sayer
After the OmniCube comes the HPE SimpliVity 380 with OmniStack
Two months after acquiring SimpliVity for US$650 million, Hewlett Packard Enterprise is beginning to reshape the company's converged infrastructure offering in its own image.
SimpliVity’s hyperconverged infrastructure appliance, the OmniCube, replaces storage switches, cloud gateways, high-availability shared storage, and appliances for backup and deduplication, WAN optimization, and storage caching. The company also offers OmniStack, the software powering the OmniCube, packaged for other vendors’ hardware.
Now HPE has qualified that software on its workhorse ProLiant DL380 server and will sell it as the snappily titled HPE SimpliVity 380 with OmniStack, Mark Linesch, the vice president for global strategy and operations of HPE's enterprise group, said Tuesday at the Cebit trade show in Hanover, Germany.
SimpliVity's website already lists the 380 among the product options, alongside versions of OmniStack tailored for Dell PowerEdge, Lenovo System x, and Cisco UCS servers for which it provided first-line support, handing off hardware matters to the vendors.
The website still lists the OmniCube for sale, too.
Linesch said HPE will continue to provide the same support for that hardware as SimpliVity did, although going forward, it hopes to see more customers on the ProLiant version.
SimpliVity used to guarantee that the OmniCube would offer a 90 percent capacity saving across production and backup storage while improving application performance, and HPE will offer the same guarantee for the SimpliVity 380, Linesch said.
Three versions of the SimpliVity 380 are available, with five, nine or 12 SSDs of a capacity of 1.9 terabytes each. The servers have dual Intel E5-2600 v4 (Broadwell) processors, and customers can configure them with up to 44 cores. Depending on how much RAM is ordered, usable memory will range from about 140 GB to 1.4 TB. Depending on the configuration requested by the customer, the total cost will vary between $26,000 and $100,000, an HPE spokesman said.
Last November, HPE released a software update for another converged appliance built on the ProLiant 380, the Hyper Converged 380. Building on the existing stack of VMware virtualization software and HPE management tools, the update added integrated analytics and multi-tenant workspaces to simplify the management of servers as a single resource pool.
HPE's two hyperconvergence product lines will undergo some convergence of their own at some point in the future, combining the best features of the SimpliVity 380 and the Hyper Converged 380 into a new product line, Linesch said. However, HPE will continue to sell the existing products, at least according to the slide he showed.
This story has been corrected to give the correct capacity for the SSDs in the eighth paragraph.
IBM Predictive Customer Intelligence
Create personalized, relevant customer experiences with a focus on driving new revenue.
Please read the attached whitepaper.