Glosario KW | KW Glossary

Ontology Design | Diseño de Ontologías


Data Citizen

Posted by: Margaret Rouse

A data citizen is an employee who relies on digital information to make business decisions and perform job responsibilities.

In the early days of computing, it took a specialist with a strong background in data science to mine structured data for information. Today, business intelligence (BI) tools allow employees at every level of an organization to run ad hoc reports on the fly. Changes in how data can be analyzed and visualized allow workers who have no background in mathematics, statistics or programming to make data-driven decisions.

In both a government and data context, however, citizenship comes with responsibilities as well as rights. For example, a citizen who has been granted the right of free speech also has the responsibility to obey federal, state and local laws -- and an employee who has been granted the right to access corporate data also has a responsibility to support the company's data governance policies.

As data citizens increasingly expect more transparent, accessible and trustworthy data from their employers, it has become more important than ever for the rights and responsibilities of both parties to be defined and enforced through policy. To that end, data governance initiatives generally focus on high-level policies and procedures, while data stewardship initiatives focus on maintaining agreed-upon data definitions and formats, identifying data quality issues and ensuring that business users adhere to specified standards.

In addition to enforcing the data citizen's right to easily access trustworthy data, governance controls ensure that data is used in a consistent manner across the enterprise. To support ongoing compliance with external government regulations, as well as internal data policies, audit procedures should also be included in the controls.

Knowledge Management for Financial Services

The financial services industry faces many demanding challenges, from cost containment to changing regulations and cybersecurity threats. Succeeding in this increasingly complex and competitive environment requires having the right people, processes and technologies in place to ensure information is shared and acted upon properly and effectively. Download this special report to learn about the latest knowledge management technologies and strategies.

Digital Twins: Simulating Products and Processes to Improve Quality and Design

By Dick Weisinger

A digital twin is a virtual representation of a real-life product, object or process. The digital twin is used for testing and simulation. It makes it possible to track and simulate different conditions, dynamics and environmental changes. The accuracy and ease with which digital twins can be tested and observed is far better than field and condition monitoring. Using a virtual or digital twin means that problems can be identified and prevented proactively, reducing downtime, and that more accurate planning for the future becomes possible.

Digital twins are useful, for example, in the manufacturing industry. A report from ABI Research reveals that manufacturers see great potential in the use of digital twins. The report found that 83 percent of manufacturers have begun to investigate the technology, and 29 percent plan to pilot its use within the next year. Research and Markets estimates that the global digital twin market will grow to $15.66 billion by 2023.

Digital twins are also used in the areas of workflow automation, service quality measurement, CRM and knowledge management.

John Vickers, advanced manufacturing manager at NASA, said: "The ultimate vision for the digital twin is to create, test and build our equipment in a virtual environment. Only when we get it to where it meets our requirements do we physically manufacture it. We then want that physical build to tie back to its digital twin through sensors, so that the digital twin contains all the information we could have gained by inspecting the physical build."

Pierce Owen, principal analyst at ABI Research, said: "The idea of twinning technology has existed for decades. Now, machine learning, advanced physics-based simulations and CAD modeling have extended the advantages of digital twins to benefit use cases across all industries with high-value or mission-critical assets."

Digital Twins in the Energy Industry

By Eugenio Rodríguez

Four years ago, when General Electric president and CEO Steve Bolze went to Davos as co-chair of the Energy Committee, nobody was interested in the concept of a digital twin. "He likes to say they would have thrown him out if he had talked about the idea," Sham Chotai, Chief Technology Officer of GE Power & Water, tells us.

In 2017, that skepticism has almost completely disappeared. The concept of a digital twin works just as it sounds. Using data and intelligence, companies can obtain digital representations of their corresponding physical systems, allowing them to understand, predict and optimize the performance of their businesses. "It is a high-fidelity representation of your assets," says Chotai, "a virtual model of machines that uses data and analytics."

For consumer internet pioneers such as Google, Amazon or Apple, using data in this way is nothing new. "Think of companies like Amazon, where you have a digital representation of yourself," says Chotai, where data can reveal "buying patterns and different interests, such as the kind of books you read."

Although the industrial internet has been relatively slow in adoption, the benefits could be enormous. "Here we are talking about devices, machines, systems, manufacturing processes. Any process an industrial company can use," says Chotai.

Preventing machine failures is one of the most obvious and potentially transformative applications the digital twin could have for the energy sector. GE recently piloted a digital wind farm concept that uses a real-time simulation and modeling system to predict problems and optimize daily energy production. The company claims it could increase a farm's energy output by 20%.

"On wind farms we have deployed many low-cost sensors on very heavy equipment, and we want to be able to analyze that information," says Chotai. "These turbines are expected to operate over a period of several decades in some of the harshest environments a machine can be subjected to. If they are going to fail, we want to know as soon as possible."

In the wind sector, the digital twin can also offer a series of advantages beyond efficiency itself. As Chotai explains: "It is a very unreliable energy source. It is not like nuclear, gas or coal, where you know the energy output you are going to get. With the digital wind farm we are helping to make daily forecasts, react and better cope with highly vertically integrated markets such as North America and Europe, where multiple fuel sources exist."

"It is about managing those renewables, signaling to the market what can be produced, and stabilizing the grid across different fuel sources. That is the kind of technology a digital twin allows us to bring into play."

And the digital twin is not just for turbines, but for "all the remaining parts of a power plant," explains Chotai. "For example, a compressor could fail. In the case of a nuclear plant, that can cause a shutdown and cost millions of dollars to bring the plant back online. Using the digital twin concept combined with deep machine learning, we are able to predict 30 to 60 days in advance that a compressor will fail."

Digital twins also provide environmental benefits. "If you have a coal plant or a gas plant and you want to save fuel, you are not only providing an economic benefit by using this system, you are also providing a social benefit," says Chotai.

"The benefits we have seen with coal, for example, are incredibly significant. We have improved plant efficiency by 10%, which means 80,000 fewer tons of coal are needed for a 10MW facility of this type. Scaled across the whole industry, that is like not needing generating capacity the size of Western Europe, or taking 300 million cars off the road. That is the size and scale we are talking about."

As the energy industry becomes more complicated and workforces change, Chotai says the digital twin concept will likely attract more and more users. "In the old days, you had an engineer who would listen to a turbine and know something was wrong just by hearing it," he says. "Today, millennials wear noise-cancelling headphones and listen to music. They think very differently and work very differently. And it is not like before, when power generation was centralized. Now we have municipalities, universities and other institutions that are also generating power. Many of our customers are seeing this level of complexity and know it is very hard for a single human to manage it. That is why there is so much interest in the digital twin."

Dark Social | Social Oscuro

Dark social is a term used by marketers and search engine optimization (SEO) specialists to describe website referrals that are difficult to track.

Dark social traffic does not appear to have a specific source, which creates a challenge for companies trying to monitor website referrals and social media activity. Most of the time, dark traffic is the result of people sharing website links through email, text messages and private chats. Because dark social links have no tracking code automatically added to their URLs, it is not possible to know how the website visitor found the content.

The term "dark social" was coined by Alexis C. Madrigal, a senior editor at The Atlantic, in a 2012 article. According to Madrigal, data from the web analytics firm Chartbeat revealed that 56.5% of The Atlantic's social traffic came from dark referrals. When Chartbeat analyzed a broader set of websites, that statistic rose to almost 69%.

While Madrigal originally did not believe that mobile apps played a significant role in dark social, in a 2014 update on the subject he explained that Facebook's mobile apps, as well as other mobile apps, appear to be behind most of today's dark social traffic. This finding could represent a new problem: social media platforms such as Facebook may actually have the greatest influence over social traffic, but mobile apps make it difficult to track and analyze.

The Rise of Dark Social: Everything You Need to Know

By Jack Simpson

You would be forgiven for thinking the term "dark social" refers to some kind of demonic gathering at which attendees feast on blood to please their overlords.

While potentially annoying for social media managers, dark social is rather less sinister than that.

It simply refers to social sharing that cannot be accurately tracked, i.e. material that is not picked up by web analytics platforms.

In this post I will explain in more detail what dark social means, why it matters, and whether there is anything marketers can do about it.

If someone clicks on a link to your site from an open social platform such as Twitter, Facebook or LinkedIn, your analytics platform will tell you exactly where that referral came from (in theory).

However, people increasingly share links through private messaging apps such as WhatsApp or Snapchat, and continue to share through platforms such as email or SMS.

Think about it: you find an interesting article, simply copy and paste the link into a messaging app and hit send.

Millions of people do this every day, sending a lot of traffic to publishers. But links shared this way lack referral tags, so when the recipient clicks on one, the visit shows up as "direct" traffic.

Which is a little unfair, because it is not really direct traffic: nobody is likely to type "https://econsultancy.com/blog/67108-is-sms-the-most-underrated-and-overlooked-dark-social-channel" into their browser.

But you cannot reasonably expect an analytics platform to know the difference.

Dark social is essentially traffic that gets lumped in with direct traffic in your analytics platform but actually comes from untrackable referrals.

Here are some of the channels responsible for dark social traffic:

• Some native mobile apps - Facebook, Instagram, etc.
• Email - to protect users' privacy, referrers are not passed.
• Messaging apps - WhatsApp, WeChat, Facebook Messenger, etc.
• Secure browsing - if you click through from HTTPS to HTTP, the referrer is not passed.

Why does it matter?

According to a RadiumOne study, almost 70% of all online referrals globally come from dark social. For the UK, this figure rises to 75%.

Granted, that study is from 2014, but if anything I would argue the issue has only become more prevalent since then, given the growing use of private messaging apps.

This means that a large portion of referral traffic is extremely difficult to track accurately, and anything that puts a cloud over your data is not particularly welcome.

If you do not have the full picture, you could end up wasting your time and energy optimizing the wrong things.

But you also have to consider the value of this kind of traffic.

If I find a link to a product I know my wife is looking for, and I email that link to her, it is fair to say she is likely to convert.

Dark social traffic is therefore extremely valuable. It is effectively word of mouth between people who probably know each other well (it is safe to assume this if they are communicating through something like private messaging apps or SMS).

What can you do about it?

You will never be able to fully track dark social traffic, but there are some steps you can take to narrow things down.

If you look at your direct traffic in whatever analytics platform you are using, it is fair to say that long links, like the ones that show up in our own direct traffic in Analytics, were not typed in manually.

It is therefore safe to assume, at least with some accuracy, that most of those links actually come from dark social.

You could set up a segment in your analytics that accounts for all direct-traffic links with parameters; for us that would be links that are not econsultancy.com, econsultancy.com/blog and so on.

This lets you build a reasonably accurate picture of how much traffic is coming from dark social.
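The segmenting approach just described boils down to a simple heuristic: a "direct" visit landing on a deep URL was probably not typed by hand. A minimal sketch of that rule (the class name and entry paths below are invented for illustration, not taken from any analytics product):

```java
import java.util.List;

public class DarkSocialSegment {
    // Entry pages a visitor would plausibly type by hand. Any other landing
    // path arriving with no referrer ("direct" traffic) is a candidate for
    // dark social. These paths are illustrative placeholders.
    private static final List<String> KNOWN_ENTRY_PATHS = List.of("/", "/blog");

    public static boolean likelyDarkSocial(String landingPath) {
        return !KNOWN_ENTRY_PATHS.contains(landingPath);
    }

    public static void main(String[] args) {
        System.out.println(likelyDarkSocial("/"));                          // false
        System.out.println(likelyDarkSocial("/blog/67108-some-long-post")); // true
    }
}
```

In a real analytics tool this is expressed as a segment filter rather than code, but the classification logic is the same.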

It still does not help you in terms of where and how that content was originally shared, but it will help you explain the situation when your boss is barking at you to explain where all your traffic is coming from.

You should also include highly visible share buttons on your site (complete with UTM parameters so you can track them) to encourage people to share content using those rather than copying and pasting the link.
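Tagging share-button links is typically done with the standard utm_* query parameters. A small sketch of building such a link (the base URL and parameter values are placeholders, not real campaign settings):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ShareLink {
    // Appends UTM parameters so that a click on a share button is attributed
    // to its source instead of showing up as "direct" traffic.
    public static String withUtm(String baseUrl, String source, String medium, String campaign) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep
                + "utm_source=" + URLEncoder.encode(source, StandardCharsets.UTF_8)
                + "&utm_medium=" + URLEncoder.encode(medium, StandardCharsets.UTF_8)
                + "&utm_campaign=" + URLEncoder.encode(campaign, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // prints https://example.com/blog/post?utm_source=whatsapp&utm_medium=social&utm_campaign=share_buttons
        System.out.println(withUtm("https://example.com/blog/post", "whatsapp", "social", "share_buttons"));
    }
}
```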

This comes down to user experience. Make the share buttons the quickest and easiest option, and why would anyone not use them?

But make sure you include share buttons for email, WhatsApp and other dark social channels.

It is arguably more important to include these than buttons for networks such as Facebook and Twitter, where you can track traffic even if the link is copied and pasted.

Conclusion: let's be honest, nobody really knows what the hell to do about it

I am being a little facetious with that subheading. There are some really interesting conversations going on about the future of dark social.

But wherever you read or listen, the consensus seems to be the same: you can narrow things down and be aware of how much dark social traffic you are getting, but so far I have not seen a convincing solution for tracking it accurately.

Now that dark social seems to be on everyone's radar, however, I imagine some better tools and techniques will start to materialize. When they do, I will be sure to write about them.

Java SE 9 and Java EE


Java SE 9 and Java EE 8 are here

Oracle has just announced the general availability of Java SE 9, Java EE 8 and the Java EE 8 Software Development Kit (SDK). From now on, it’s all about faster releases and more open source engagement.

Things are definitely changing in the Java universe. After today’s releases, there will be two Java feature releases per year (so no need to wait years until the next version is out) and Java EE is moving to the Eclipse Foundation (and changing its name). Let’s enjoy the release of Java SE 9 and Java EE 8 though.

You can download Java SE 9 here and Java EE 8 here. And here is the official announcement.

Java SE 9

Java SE 9 has over 150 new features to offer, including a new module system and quite a few improvements which promise to bring boosted security, more scalability and better performance management.

The star of the release is, of course, the Java Platform Module System, also known as Project Jigsaw. Its goal is to help developers to reliably assemble and maintain sophisticated applications. Furthermore, developers can bundle only the parts of the JDK that are needed to run an application when deploying to the cloud so one could say that the module system also makes the JDK itself more flexible.
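As a rough illustration of what assembling an application from modules looks like, here is a minimal, hypothetical module descriptor. The module and package names are invented for this sketch, and the file will not compile on its own without the packages it references:

```java
// module-info.java -- a minimal, hypothetical module descriptor.
module com.example.orders {
    requires java.sql;              // depend on a platform module
    exports com.example.orders.api; // only this package is visible to consumers
}
```

Everything not exported stays internal to the module, which is what makes reliable assembly and a trimmed-down runtime possible.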

If you want to hear what experts think of the Java Platform Module System, here are a couple of statements:

The Java Platform Module System (JPMS) is not perfect, but it has reached a point where it is worth releasing. Most developers can continue to use the classpath, and be unaffected by the module changes.

Stephen Colebourne

For the long term, it’s a great boost to the Java Runtime Environment and hence the Java ecosystem. Imagine being able to build your application and its runtime environment in a modular format. Then, your customer can deploy it right off the bat without having to worry about the JDK version or the footprint.

Monica Beckwith

For the full list of features, visit this page. If you want to read more about other key features such as jshell, improved Javadoc and Streams API enhancements, read this article.

If you don’t want to dive into the modular ecosystem right away, you should know that it is possible to get started on JDK 9 without modules. As Georges Saab, vice president of development for the Java Platform Group at Oracle told us a few months ago, “the class path continues to work, and this is how many developers will likely get started with JDK 9.”

Moving to a 6-month release cadence

Oracle recently announced that they are planning to move to a 6-month release cadence using a time driven release model. Mark Reinhold, the Chief Architect of the Java Platform Group at Oracle, proposed that the Java SE Platform and the JDK go from “the historical feature-driven release model to a strict, time-based model with a new feature release every six months, update releases every quarter, and a long-term support release every three years.”

Post-Java 9 plans

• Feature releases can contain any type of feature, including not just new and improved APIs but also language and JVM features. New features will be merged only when they’re nearly finished, so that the release currently in development is feature-complete at all times. Feature releases will ship in March and September of each year, starting in March of 2018.
• Update releases will be strictly limited to fixes of security issues, regressions, and bugs in newer features. Each feature release will receive two updates before the next feature release. Update releases will ship quarterly in January, April, July, and October, as they do today.
• Every three years, starting in September of 2018, the feature release will be a long-term support release. Updates will be available for at least three years and quite possibly longer, depending upon your vendor.
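The cadence above reduces to a simple rule of thumb: feature releases ship every March and September from March 2018, and every third September release, starting September 2018, is a long-term support release. A small sketch of that rule as proposed at the time (class and method names are mine):

```java
public class ReleaseCadence {
    // Under the proposed time-based model, feature releases ship in March and
    // September starting in 2018, and every third September release
    // (2018, 2021, ...) is designated long-term support.
    public static boolean isLts(int year, int month) {
        if (month != 9 || year < 2018) {
            return false; // LTS releases only ship in September, from 2018 on
        }
        return (year - 2018) % 3 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isLts(2018, 9)); // true
        System.out.println(isLts(2019, 3)); // false
        System.out.println(isLts(2021, 9)); // true
    }
}
```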

Oracle will also be providing OpenJDK builds under the General Public License (GPL). Furthermore, they will continue to contribute previously commercial Oracle JDK features [*cough* Java Flight Recorder *cough*] to OpenJDK in order to make Oracle JDK and OpenJDK more aligned.

We talked with Donald Smith, Senior Director of Product Management for Java SE at Oracle, about the transition between OpenJDK and Oracle JDK binaries. Read the entire interview here.

Our intent is that transitioning between OpenJDK and Oracle JDK binaries should be seamless, and that implies there should be no feature differences at all.  Although it would be exciting to offer a list of projects we would like to include, we want to do so through the normal OpenJDK processes by discussing with other potential contributors first.

Donald Smith

If you want to meet Donald Smith and find out more about the current status of Java EE, don't miss his keynote at JAX London. Donald will give a quick overview of how OpenJDK plays a key role in the Java SE ecosystem, followed by details of the proposed plan and its current status. The keynote will be followed by a panel in which the two key proposals – increased cadence and Oracle-produced OpenJDK builds – will be discussed for pros and potential gotchas. Panelists include Daniel Bryant, Stephen Colebourne and Peter Lawrey.

Java EE 8

One of the reasons why the release of Java EE 8 is special has to do with its future — from now on, it will function under the stewardship of the Eclipse Foundation. Oracle, Eclipse and other community members are currently working out the details behind the technology transfer and ongoing governance and process within the Eclipse community.

Mike Lehmann, vice president of product management at Oracle said that “by open sourcing Java EE technologies to the Eclipse Foundation, we have set it up for ongoing success in the future. Oracle is committed to working with the Java EE community and the Eclipse Foundation to continue enterprise Java innovation, support and evolution.”

Oracle intends to:

• Relicense Oracle-led Java EE technologies, and related GlassFish technologies, to the foundation. This would include RIs, TCKs, and associated project documentation.
• Demonstrate the ability to build a compatible implementation, using foundation sources, that passes existing Java EE 8 TCKs.
• Define a branding strategy for the platform within the foundation, including a new name for Java EE to be determined. Oracle intends to enable use of existing javax package names and component specification names for existing JSRs to provide continuity.
• Define a process by which existing specifications can evolve, and new specifications can be included in the platform.
• Recruit and enable developers and other community members, as well as vendors, to sponsor platform technologies, and bring the platform forward within the foundation. This would include potential incorporation of Eclipse MicroProfile technologies into the platform.
• Begin doing the above as soon as possible after completion of Java EE 8 to facilitate a rapid transition.

Some of the key features in Java EE 8 are HTTP/2 support in Servlet 4.0, a new JSON binding API and various enhancements in JSON-P 1.1, the expansion of JAX-RS to support Server-Sent Events and a new reactive client API, a new security API for cloud and PaaS-based applications, and multiple CDI enhancements including support for asynchronous events.
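JAX-RS 2.1 exposes Server-Sent Events through its own API, but the underlying text/event-stream wire format is easy to see in isolation. The sketch below only illustrates that format using the standard library; it is not the JAX-RS API itself, and the class and event names are invented:

```java
public class SseEventFormat {
    // Formats one Server-Sent Event as it appears on the wire
    // (MIME type text/event-stream). JAX-RS hides this behind its SSE API;
    // this sketch only shows the underlying format.
    public static String format(String eventName, String data) {
        StringBuilder sb = new StringBuilder();
        if (eventName != null) {
            sb.append("event: ").append(eventName).append('\n');
        }
        // Multi-line payloads become multiple "data:" lines.
        for (String line : data.split("\n", -1)) {
            sb.append("data: ").append(line).append('\n');
        }
        sb.append('\n'); // a blank line terminates the event
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(format("price", "42.0"));
    }
}
```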

For a full list of features included in Java EE 8, visit this page.

If you want to read more about the future of Java EE, don’t miss this interview series with Ivar Grimstad, Martijn Verburg, Reza Rahman and Josh Juneau.

Customer Journey Map | Mapa de viaje del cliente

Content Marketing

Posted by: Margaret Rouse

Content marketing is the publication of material designed to promote a brand, usually through a more oblique and subtle approach than that of traditional push advertising. Content marketing is most effective when it provides the consumer with accurate and unbiased information, the publisher with additional content and the advertiser with a larger audience and ultimately, a stronger brand.

On the internet, content marketing campaigns involve publishing custom content on specific destination sites the target audience respects and visits often. During the campaign, the advertiser creates custom content that is tightly aligned with the publisher’s website and editorial mission. The goal is to provide prospective customers with an integrated user experience (UX) that encourages engagement and interest in the brand. The challenge is to ensure the content is topically relevant and meets the audience's needs. If the content is simply a thinly veiled sales-pitch, it risks turning the buyer off.

Content marketing can be delivered through a variety of media, including television and magazines, and take a lot of different forms, including articles, infographics, videos and online games. The strategy may be referred to by several different names, including infomercial, sponsored content or native advertising. Whatever the label, content marketing is often integrated in such a way that it doesn't stand out from other material served by the host.

Although native advertising might not look like marketing, the content should explicitly state that it was provided by the advertiser. The Federal Trade Commission (FTC) guidelines for all advertising emphasize transparency and include stipulations that advertising claims must be truthful and supported by evidence. The more similar content marketing is in format and topic to the publisher's editorial content, the more important a disclosure is, in order to prevent deception.


Air Gapping (Air Gap Attack)

Posted by: Margaret Rouse

Air gapping is a security measure that involves isolating a computer or network and preventing it from establishing an external connection. For example, an air gapped computer is one that is physically segregated and incapable of connecting wirelessly or physically with other computers or network devices.

Air-gapped networks are used to protect many types of critical systems, including those that support the stock market, the military, the government and industrial power industries. The U.S. National Security Agency TEMPEST project provides recommendations for using air gapping as a security measure. To prevent unauthorized data extrusion through electromagnetic or electronic exploits, there is often a specified amount of space between the air gapped system and outside walls and between its wires and the wires for other technical equipment. For a system with extremely sensitive data, a Faraday cage can be used to prevent electromagnetic radiation (EMR) escaping from the air-gapped equipment.

Although these measures seem extreme, van Eck phreaking can be used to intercept data such as key strokes or screen images from demodulated EMR waves, using special equipment from some distance away. Other proof-of-concept (POC) attacks for air gapped systems have shown that electromagnetic emanations from infected sound cards on isolated computers can be exploited and continuous wave irradiation can be used to reflect and gather information from isolated screens, keyboards and other computer components.

Perhaps the most important way to protect a computing device or network from an air gap attack is through end user security awareness training. The infamous Stuxnet worm, which was designed to attack air gapped industrial control systems, is thought to have been introduced by infected thumb drives found by employees or obtained as free giveaways.

Editor's note: The software-defined perimeter (SDP) framework is sometimes referred to as a method of virtual air gapping. SDP requires authentication of all external endpoints attempting to access internal infrastructure and ensures that only authenticated systems can see internal IP addresses.

ASP.Net - Intro, Life Cycle & Hello World Program

ASP.Net is a web development platform provided by Microsoft. It is used for creating web-based applications. ASP.Net was first released in the year 2002.

The first version of ASP.Net deployed was 1.0. The most recent version of ASP.Net is version 4.6. ASP.Net is designed to work with the HTTP protocol. This is the standard protocol used across all web applications.

ASP.Net applications can be written in a variety of .Net languages, including C#, VB.Net and J#. In this chapter, you will see some basic fundamentals of the .Net framework.


What is ASP.Net?

ASP.Net is a framework which is used to develop web-based applications.

The architecture of the .Net framework is based on the following key components:

1. Language – A variety of languages exist for the .Net framework, including VB.Net and C#. These can be used to develop web applications.
2. Library - The .NET Framework includes a set of standard class libraries. The most common library used for web applications in .Net is the Web library, which has all the necessary components used to develop .Net web-based applications.
3. Common Language Runtime - The Common Language Runtime (CLR) is the execution engine of the Common Language Infrastructure (CLI), the platform on which .Net programs run. The CLR performs key activities, including exception handling and garbage collection.

Below are some of the key characteristics of the ASP.Net framework

1. Code Behind Mode – This is the concept of separating design and code. This separation makes it easier to maintain an ASP.Net application. The general file extension of an ASP.Net page is .aspx. Assume we have a web page called MyPage.aspx. There will be another file called MyPage.aspx.cs which denotes the code part of the page. So Visual Studio creates separate files for each web page, one for the design part and one for the code.
2. State Management – ASP.Net has facilities for state management. HTTP is a stateless protocol. Let's take the example of a shopping cart application: when a user decides what he wants to buy from the site, he presses the submit button.

The application needs to remember the items the user chose for the purchase. This is known as remembering the state of the application at a given point in time. Because HTTP is stateless, when the user goes to the purchase page, HTTP will not carry over the information about the cart items; normally, additional coding would be needed to ensure the cart items are carried forward to the purchase page, and such an implementation can become complex. But ASP.Net can perform state management on your behalf, remembering the cart items and passing them over to the purchase page.

3. Caching – ASP.Net can implement caching, which improves the performance of the application. Pages which are often requested by users can be stored in a temporary location, from which they can be retrieved faster, so better responses can be sent to the user. Caching can therefore greatly improve the performance of an application.

ASP.Net Life cycle

When an ASP.Net application is launched, a series of steps is carried out. This series of steps makes up the lifecycle of the application.

Let's look at the various stages of a typical page lifecycle of an ASP.Net web Application.

1) Application Start - The life cycle of an ASP.NET application starts when a user makes a request to the web server for the application. This normally happens when the first user goes to the application's home page for the first time. During this time, the web server executes a method called Application_Start, in which all global variables are usually set to their default values.

2) Object creation - The next stage is the creation of the HttpContext, HttpRequest and HttpResponse objects by the web server. The HttpContext is just the container for the HttpRequest and HttpResponse objects. The HttpRequest object contains information about the current request, including cookies and browser information. The HttpResponse object contains the response that is sent to the client.

3) HttpApplication creation - This object is created by the web server, and it is this object that is used to process each subsequent request sent to the application. For example, assume we have 2 web applications: a shopping cart application and a news website. An HttpApplication object would be created for each application, and any further requests to each website would be processed by its respective HttpApplication.

4) Dispose - This event is called before the application instance is destroyed. During this time, one can use this method to manually release any unmanaged resources.

5) Application End - This is the final part of the application. In this part, the application is finally unloaded from memory.
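As a rough illustration of the Application Start, Dispose and Application End stages above, the corresponding handlers typically live in the Global.asax.cs file. The following is a minimal sketch; the VisitorCount variable is illustrative, not from the original text.

```csharp
// Global.asax.cs - minimal sketch of application lifecycle handlers
using System;
using System.Web;

namespace DemoApplication
{
    public class Global : HttpApplication
    {
        // Runs once, when the first request reaches the application
        protected void Application_Start(object sender, EventArgs e)
        {
            // Set global variables to their default values
            Application["VisitorCount"] = 0;
        }

        // Runs when the application is finally unloaded from memory
        protected void Application_End(object sender, EventArgs e)
        {
            // Release any remaining resources here
        }
    }
}
```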

ASP.Net Page Life cycle

When an ASP.Net page is called, it goes through a particular lifecycle before the response is sent to the user. A series of steps is followed in the processing of an ASP.Net page.

Let's look at the various stages of the lifecycle of an ASP.Net web page.

1. Page Request – This is when the page is first requested from the server. The server checks whether the page is being requested for the first time; if so, it compiles the page, generates the response and sends it across to the user. If it is not the first request, the cache is checked to see whether the page output exists; if so, the cached response is sent to the user.
2. Page Start – During this time, 2 objects, known as the Request and Response object are created. The Request object is used to hold all the information which was sent when the page was requested. The Response object is used to hold the information which is sent back to the user.
3. Page Initialization – During this time, all the controls on the web page are initialized. So if you have a label, textbox or any other control on the web form, they are all initialized.
4. Page Load – This is when the page is actually loaded with all the default values. So if a textbox is supposed to have a default value, that value is loaded during the page load time.
5. Validation – Sometimes validation is set on the form. For example, a validation rule might say that a listbox should have a certain set of values. If the condition fails, there will be an error in loading the page.
6. Postback event handling – This event is triggered if the same page is being loaded again, in response to an earlier event. For example, when a user clicks a submit button on the page, the same page is displayed again, and in that case the postback event handler is called.
7. Page Rendering – This happens just before all the response information is sent to the user. All the information on the form is saved, and the result is sent to the user as a complete web page.
8. Unload – Once the page output is sent to the user, there is no need to keep the ASP.net web form objects in memory. So the unloading process involves removing all unwanted objects from memory.

Hello World in ASP.Net

Let's look at an example of how we can implement a simple "hello world" application. For this, we would need to implement the below-mentioned steps.

Step 1) The first step involves the creation of a new project in Visual Studio. After launching Visual Studio, you need to choose the menu option New->Project.

Step 2) The next step is to choose the project type as an ASP.Net Web application. Here we also need to mention the name and location of our project.

1. In the project dialog box, you can see various options for creating different types of projects. Click the Web option on the left-hand side.
2. When we click the Web option in the previous step, we will be able to see an option for ASP.Net Web Application. Click this option.
3. We then give a name for the application, which in our case is DemoApplication. We also need to provide a location to store our application.
4. Finally, we click the 'OK' button to let Visual Studio create our project.

Step 3) In the next screen, you have to choose the type of ASP.net web application that needs to be created. In our case, we are going to create a simple Web Form application.

1. First, choose the project type as 'Empty'. This will ensure that we start with a basic application which is simple to understand.
2. We choose the option "Web Forms". This adds the basic folders required for a basic Web Forms application.
3. Finally, we click the 'OK' button to allow Visual Studio to create our application.

If the above steps are followed, you will get the below output in Visual Studio.

Output:-

In the Solution explorer, you will be able to see the DemoApplication Solution. This solution will contain 2 project files as shown above. At the moment, one of the key files in the project is the 'Global.asax.cs'. This file contains application specific information. In this file, you would initialize all application specific variables to their default values.

Step 4) Now, it's time to add a Web Form file to the project. This is the file which will contain all the web-specific code for our project.

Step 5) In the next screen we are going to be prompted to provide a name for the web form.

1. Give a name for the Web Form. In our case, we are giving it a name of Demo.
2. Click the Ok button.

Visual Studio will automatically create the Demo Web Form and open it in the editor.

Step 6) The next step is to add the code, which will do the work of displaying "Hello World." This can be done by just adding one line of code to the Demo.aspx file.
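The single line in question is an inline server-side expression. A sketch of how the Demo.aspx markup might look is shown below; the surrounding HTML is abbreviated, and the @Page directive generated by Visual Studio is omitted.

```aspx
<html>
<body>
    <%-- Inline ASP.Net code between the <% and %> markers --%>
    <% Response.Write("Hello World"); %>
</body>
</html>
```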

Code Explanation:-

• The Response object in ASP.Net is used to send information back to the user. So in our case, we are using the method "Write" of the Response object to write the text "Hello World." The <% and %> markers are used to add ASP.net specific code.

If you follow all of the above steps and run your program in Visual Studio, you will get the following output.

Output:-

From the output, you can clearly see that 'Hello World' was displayed in the browser.

Adding ASP.Net Controls to Web Forms

ASP.Net has the ability to add controls to a form such as textboxes and labels.

Let's look at the other controls available for Web forms and see some of their common properties.

In our example, we will create one form which will have the following functionality.

1. The ability for the user to enter his name.
2. An option to choose the city in which the user resides.
3. The ability for the user to choose a gender.
4. An option to choose a course which the user wants to learn. There will be choices for both C# and ASP.Net.

Let's look at each control in detail. Let's add them to build the form with the above mentioned functionality.

Step 1) The first step is to open the Forms Designer for the Demo web form. Once you do this, you will be able to drag controls from the toolbox to the Web form.

To open the Designer web form,

• Right click the Demo.aspx file in the Solution Explorer and
• Choose the menu option View Designer.

Once you perform the above step, you will be able to see your Form Designer as shown below.

Now let's start adding our controls one by one

Label Control

The label control is used to display text or a message to the user on the form. The label control is normally used along with other controls; a common example is a label added alongside a textbox control, where the label indicates to the user what is expected to be filled in the textbox. Let's see how we can implement this with the example shown below. We will use a label called 'name', which will be used in conjunction with the textbox control added in a later section.

Step 1) The first step is to drag the 'label' control on to the Web Form from the toolbox as shown below.

Step 2) Once the label has been added, follow the following steps.

1. Go to the properties window by right clicking on the label control
2. Choose the Properties menu option

Step 3) From the properties window, change the name of the Text property to Name

Similarly, also change the ID property value of the control to lblName. By specifying a meaningful ID to controls, it becomes easier to access them during the coding phase. This is shown below.

Once you make the above changes, you will see the following output

Output:-

You will see that the Name label appears on the Web Form.

Textbox

A textbox is used for allowing a user to enter some text on the Web form application. Let's see how we can implement this with an example shown below. We will add one textbox to the form in which the user can enter his name.

Step 1) The first step is to drag the textbox control onto the Web Form from the toolbox as shown below

Below is how this would look in the forms designer once the Textbox control is on the form

Step 2) Once the Textbox has been added, you have to change the ID property.

• Go to the properties window by right clicking on the Textbox control and
• Choose properties then
• Change the id property of the textbox to txtName.

Once you make the above changes, you will see the following output.

Output:-

List box

A Listbox is used to showcase a list of items on the Web form. Let's see how we can implement this with an example shown below. We will add a list box to the form to store some city locations.

Step 1) The first step is to drag the listbox control on to the Web Form from the toolbox as shown below

Step 2) Once you drag the listbox to the form, a separate side menu will appear. In this menu choose the 'Edit Items' menu.

Step 3) You will now be presented with a dialog box in which you can add the list items to the listbox.

1. Click the 'Add' button to add a new list item.
2. Give a name for the text value of the list item – in our case, Mumbai. Repeat steps 1 and 2 to add list items for Mangalore and Hyderabad.
3. Click on the OK button

Step 4) Go to the properties window and change the ID property value of the control to lstLocation.

Once you make the above changes, you will see the following output

Output:-

From the output, you can clearly see that the listbox was added to the form.

Radio Button

A radio button is used to showcase a list of items out of which the user can choose only one. Let's see how we can implement this with the example shown below. We will add radio buttons for a male/female option.

Step 1) The first step is to drag the 'radiobutton' control onto the Web Form from the toolbox (see image below). Make sure to add 2 radio buttons, one for the option of 'Male' and the other for 'Female'.

Step 2) Once the Radiobutton has been added, change the 'text' property.

• Go to the properties window by clicking on the 'Radiobutton control'.
• Change the text property of the Radio button to 'Male'.
• Repeat the same step to change it to 'Female.'
• Also, change the ID properties of the respective controls to rdMale and rdFemale.

Once you make the above changes, you will see the following output

Output:-

From the output, you can clearly see that the radio buttons were added to the form.

Checkbox

A checkbox is used to provide a list of options from which the user can choose multiple choices. Let's see how we can implement this with the example shown below. We will add 2 checkboxes to our Web form. These checkboxes will provide an option to the user on whether they want to learn C# or ASP.Net.

Step 1) The first step is to drag the checkbox control onto the Web Form from the toolbox as shown below

Step 2) Once the checkboxes have been added, change their properties.

• Go to the properties window by clicking on each Checkbox control.
• Change the ID properties of the respective controls to 'chkC' and 'chkASP'.
• Change the text property of the first Checkbox control to 'C#' and that of the other to 'ASP.Net'.

Once you make the above changes, you will see the following output

Output:-

From the output, you can clearly see that the checkboxes were added to the form.

Button

A button is used to allow the user to trigger the processing of the form. Let's see how we can implement this with our current example as shown below. We will add a simple button called 'Submit', which will be used to submit all the information on the form.

Step 1) The first step is to drag the button control onto the Web Form from the toolbox as shown below

Step 2) Once the button has been added, go to the properties window by clicking on the button control. Change the text property of the button control to Submit. Also, change the ID property of the button to 'btnSubmit'.

Once you make the above changes, you will see the following output

Output:-

From the output, you can clearly see that the button was added to the form.

Event Handler in ASP.Net

When working with a web form, you can add events to controls. An event is something that happens when an action is performed. Probably the most common action is the clicking of a button on a form.

In web forms, you can add code to the corresponding aspx.cs file. This code can be used to perform certain actions when a button is pressed on the form. The button click is generally the most common event in Web Forms. Let's see how we can achieve this.

We are going to make this simple. Just add an event to the button control to display the name which was entered by the user. Let's follow the below steps to achieve this.

Step 1) First you have to double click the Button on the Web Form. This will bring up the event code for the button in Visual Studio.

The btnSubmit_Click event is automatically added by Visual Studio, when you double click the button in the web forms designer.

Step 2) Let's now add code to the submit event to display the name textbox value and the location chosen by the user.
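Based on the explanation that follows, the handler in Demo.aspx.cs might look like the sketch below. The control IDs follow the earlier steps; the exact output formatting is an assumption.

```csharp
// Demo.aspx.cs - sketch of the submit handler
protected void btnSubmit_Click(object sender, EventArgs e)
{
    // 1. Send the value of the Name textbox back to the client
    Response.Write(txtName.Text + "<br/>");

    // 2. Send the selected listbox value back to the client
    Response.Write(lstLocation.SelectedItem.Text);

    // 3. Hide the form controls so that only the response values are shown
    lblName.Visible = false;
    txtName.Visible = false;
    lstLocation.Visible = false;
    btnSubmit.Visible = false;
}
```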

Code Explanation:-

1. The above line of code does something very simple. It takes the value of the Name textbox control and sends it to the client via the Response object. So if you enter the string "Guru99" in the name textbox, the value of txtName.Text will be 'Guru99'.
2. The next line of code takes the selected value of the listbox via the property 'lstLocation.SelectedItem.text'. It then writes this value via the Response.Write method back to the client.
3. Finally, we make all the controls on the form invisible. If we don't do this, all the controls plus our response values would be displayed together.

Normally, when a person enters all the information on the form (such as the name, location, gender, etc.), the next page shown to the user should only contain the submitted information; the user does not want to see the name, gender and location controls again. But ASP.Net does not know this, and hence, by default, it will again show all the controls when the user clicks the Submit button. Hence, we need to write code to ensure all the controls are hidden, so that the user sees just the desired output.

Once you make the above changes, you will see the following output

Output:-

In the Output screen, carry out the following steps

1. Enter the name Guru99 in the name textbox
2. Choose Mumbai in the location listbox
3. Click on the Submit button

Once you do this, you will see 'Guru99' and the location 'Mumbai' displayed on the page.

ASP Net Session Management

The HTTP protocol on which all web applications work is a stateless protocol. By stateless, it just means that information is not retained from one request to another.

For instance, suppose you have a login page with 2 textboxes, one for the name and the other for the password. When you click the Login button on that page, the application needs to ensure that the user name and password get passed on to the next page.

In ASP.Net, this is done in a variety of ways. The first way is via a concept called ViewState, wherein ASP.Net automatically stores the contents of all the controls and ensures they are carried across requests of the page. This is done via a property called the ViewState.

It is not ideal for a developer to change anything in the view state, because it should be handled by ASP.Net only.

The other way is to use an object called the "Session" object. The Session object is available for the duration of a user's session with the application. You can store any number of key-value pairs in the Session object. So on any page, you can store a value in the Session object via the below line of code.

Session["Key"] = value;

This stores the value in a Session object and the 'key' part is used to give the value a name. This allows the value to be retrieved at a later point in time. To retrieve a value, you can simply issue the below statement.

Session["Key"]

In our example, we are going to use the Session object to store the name entered in the name textbox field in the page. We are then going to retrieve that value and display it on the page accordingly. Let's add the below code to the Demo.aspx.cs file.
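A sketch of what that code might look like is shown below; the handler name and the hidden controls follow the earlier example.

```csharp
// Demo.aspx.cs - sketch of storing and retrieving a session value
protected void btnSubmit_Click(object sender, EventArgs e)
{
    // 1. Store the value of the Name textbox under the key "Name"
    Session["Name"] = txtName.Text;

    // 2. Retrieve the stored value and write it back to the client
    Response.Write(Session["Name"]);

    // 3. Hide the form controls so that only the response value is shown
    lblName.Visible = false;
    txtName.Visible = false;
    btnSubmit.Visible = false;
}
```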

Code Explanation:-

1. The first line of code takes the value of the Name textbox control and stores it in the Session object. By specifying Session["Name"], we are giving the stored value a key called "Name". By specifying a key, it becomes easier to retrieve the value at a later point in time.
2. The next line of code retrieves the stored value from the Session object. It then writes this value via the 'Response.Write' method back to the client.
3. Finally, we make all the controls on the form invisible. If we don't do this, all the controls plus our response values would be displayed together.

Once you make the above changes, you will see the following output

Output:-

From the output, you can see that the Session value of name was retrieved and displayed in the browser.

Summary:

• ASP.Net is a web development framework used for constructing web-based applications. ASP.Net is designed to work with the standard HTTP protocol.
• In ASP.Net, you can add the standard controls to a form such as labels, textboxes, listboxes, etc.
• Each control can have an event associated with it. The most common event is the button click event. This is used when information needs to be submitted to the web server.
• Session management is a way in ASP.net to ensure that information is passed over from one page to the other.
• The view state property of a page is used to automatically preserve the information of controls across requests of a page.
• The 'Session' object is used to store and retrieve specific values across the pages of a web application.

Creating Asp.net Controls, Webforms and Web config file

In ASP.Net, it is possible to create re-usable code. The re-usable code can be used in many places without the need to write the code again.

Re-usable code helps reduce the amount of time a developer spends writing code. It can be written once and reused in multiple places.


Creating Asp.Net Controls

ASP.Net has the ability to create web controls. These controls contain code which can be re-used across applications as per the requirement.

Let's take a look at an example of how we can create a web user control in ASP.Net

In our example,

• We are going to create a web control.
• It will be used to create a header component.
• It will contain the below mentioned text.

"Guru99 Tutorials

"This Tutorial is for ASP.Net"

Let's work with our current web application created in the earlier sections. Let's follow the below steps to create a Web user control.

Step 1) The first step is to create a web user control and add it to our Visual Studio Solution.

1. Go to the Solution Explorer in Visual Studio, right click the DemoApplication solution and choose the option to add a new item

Step 2) In the next step, we need to choose the option of creating a web user control

1. In the project dialog box, we can see various options for creating different types of components. Click the "Web" option on the left-hand side.
2. When we click the "Web" option, you see an option for "Web Forms User control." Click this option.
3. We then give a name for the Web Control "Guru99Control".
4. Finally, click the 'Add' button to let Visual Studio add the web user control to our solution.

You will then see the "Guru99Control" added to the solution.

Step 3) Now it's time to add the custom code to the web user control. Our code will be based on pure HTML syntax. Add the following code to the 'Guru99Control.ascx' file
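Given the explanation that follows, the contents of Guru99Control.ascx might look like this sketch; the @Control directive generated by Visual Studio is omitted.

```aspx
<%-- Guru99Control.ascx - a table holding the 2 rows of text --%>
<table>
    <tr>
        <td>Guru99 Tutorials</td>
    </tr>
    <tr>
        <td>This Tutorial is for ASP.Net</td>
    </tr>
</table>
```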

Code Explanation:-

1. In our Web Control file, we first create a table element. This will be used to hold 2 rows of text: "Guru99 Tutorials" and "This Tutorial is for ASP.Net."
2. Next, we define our first table row and put the text as "Guru99 Tutorials."
3. We then define our second table row and put the text as "This Tutorial is for ASP.Net."

NOTE: We cannot execute this code and see the output on its own. The only way to see it work is to include the control in our application (aspx file). We will see this in the subsequent topic.

Registering a User Control on an ASP.NET Web Form

In the earlier section, we saw how we can create a custom web control. This can be used to display the following two lines in a web form

• "Guru99 Tutorials"
• "This Tutorial is for ASP.Net."

Once the custom control is created, we need to use it in our web application. The first step is to register the component in our application (Demo.aspx). This is a prerequisite for using any custom web control in an ASP.Net application.

Let's look at how we can achieve this. The below steps are a continuation to the previous section. In the previous section, we have created our custom control. In this section, we will use the control in our Demo.aspx web form.

First, we will register our custom 'control' into the Demo.aspx file.

Step 1) Ensure that you are working on the demo.aspx file. It is in this file that the web user control will be registered. This can be done by double-clicking the demo.aspx file in the Solution explorer of your .Net solution.

Once you double click the form, you will probably see the below code in the form. This is the default code added by Visual Studio when a web form is added to an ASP.Net project.

The default code consists of steps, which are required to ensure that the form can run as an ASP.Net web form in the browser.

Step 2) Now let's add our code to register the user control. The screenshot below shows registration of the user control to the above basic code.
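Putting the registration and the control reference together, the relevant part of Demo.aspx might look like the sketch below; the tag names match the explanation that follows, and the rest of the page markup is abbreviated.

```aspx
<%-- Register the user control at the top of Demo.aspx --%>
<%@ Register TagPrefix="TWebControl" TagName="WebControl" Src="~/Guru99Control.ascx" %>

<html>
<body>
    <form id="form1" runat="server">
        <%-- Reference the control via TagPrefix:TagName --%>
        <TWebControl:WebControl ID="Header" runat="server" />
    </form>
</body>
</html>
```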

Code Explanation:-

1. The first step is to register the web user control. The registration comprises the following basic parameters:
• The 'Register' keyword is used to register the web user control.
• The src parameter is used to define the source file of the control, which in our case is Guru99Control.ascx.
• The TagName and TagPrefix are names given to the control, so that it can be referenced in HTML pages like a normal HTML control.
2. Next, we reference our web user control via the TagPrefix:TagName which was assigned earlier. The TagPrefix:TagName is an indicator that we want to use our custom web control. When the page is processed by the web server, you can see we have used the TWebControl:WebControl tag; the 'Guru99Control' will then be processed accordingly.

In our example, it is TWebControl:WebControl.

1. An optional ID of "Header" is given to the control. It's generally good practice to give an ID to an HTML control.
2. Finally, the runat="server" attribute ensures that the control is run on the web server. For all ASP.Net controls, this is the default attribute: all ASP.Net controls (including custom controls) have to run on the server. Their output is then sent from the server to the client and displayed in the browser accordingly.

When the above code is in place and the project is executed using Visual Studio, you will get the below output.

Output:-

The output message displayed in the browser shows that the web user control was successfully executed.

Registering ASP.Net controls globally in the web.config file

Sometimes one might want to use user controls in multiple pages in a .Net application. At this point, you don't want to keep on registering user controls on each and every ASP.Net page.

• In .Net you can carry out the registration in the 'web.config' file.
• The web.config file is a common configuration file used by all web pages in a .Net project.
• It contains necessary configuration details for the ASP.Net web project. For example, one common configuration in the web.config file is the target framework parameter.
• This parameter is used to identify the .Net framework version used by the application.

Below is a snapshot of the default code in the web.config file. The highlighted part is the target framework part.

Let's see how we can register our Guru99Control in the web.config file.

Step 1) Open the web.config file from solution explorer by double clicking the file.

When you open the web.config file, you might see the below configuration. The 'web.config' is added automatically by Visual Studio when the project is created. This is the basic configuration required to make the ASP.Net project work properly.

Step 2) Now let's register our component in the web.config file. We need to add the below lines for that.
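The lines in question might look like the following sketch, placed inside the <system.web> section of web.config; the attribute values match the earlier registration.

```xml
<configuration>
  <system.web>
    <pages>
      <controls>
        <!-- Register the user control for every page in the project -->
        <add tagPrefix="TWebControl" tagName="WebControl"
             src="~/Guru99Control.ascx" />
      </controls>
    </pages>
  </system.web>
</configuration>
```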

The registration comprises the following sub-steps

1. Add a tag called <pages>. It means all the configuration for the controls will be applicable to all the ASP.Net pages in the solution.
2. The <controls> tag means that you are adding a configuration for the user control.
3. Then we register the user control with the add tag. The remaining parameters of tagPrefix, tagName and src remain the same as before.

Step 3) Remember to go to the 'demo.aspx' page and remove the lines which had the registration of the Guru99 component. If you don't perform this step, the registration in the 'demo.aspx' file will be used instead of the one in the 'web.config' file.

Once the above code is set and the project is executed using Visual Studio, you will get the below output.

Output:-

The output message shows that the web user control was successfully executed.

Adding public properties to a web control

A property is a key-value pair associated with a control. Let's take the example of the simple <div> HTML tag. A screenshot of how the tag looks is shown below.

The 'div' tag is used to create a section in an HTML document. The 'div' tag has a property called the style property, which can be used to give a different style to the text displayed in the div tag. Normally, you would see the code for the div tag as shown below.

<div style="color:#0000FF">

So the style attribute is nothing but a key-value pair which gives more information on the tag itself. In the above case, the key name is 'style' and the key value is 'color:#0000FF'.

Similarly, for user controls, you can create your own properties that describe the control.

Let's take a simple example and build upon our 'Guru99Control' created in the earlier sections.

In our example, we are going to add a simple integer property called MinValue. This value would represent the minimum number of characters in the text displayed in the user control.

Let's carry out the below-mentioned steps to get this in place.

Step 1) Open the Guru99Control.ascx file. Add the code for adding the MinValue property.
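Based on the explanation that follows, the addition to Guru99Control.ascx might look like this sketch; a public field is used here for simplicity, and a full property with a backing field would also work.

```aspx
<%-- Server-side code block inside Guru99Control.ascx --%>
<script runat="server">
    // MinValue represents the minimum number of characters
    // in the displayed text; the default value is 0
    public int MinValue = 0;
</script>
```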

Code Explanation:-

The script runat="server" attribute is used to indicate that we are adding .Net-specific code which needs to be run on the web server.

This is required for processing any property added to the user control. We then add our property MinValue, and give it a default value of 0.

Step 2) Now let's reference this property in our demo.aspx file. All we are doing now is just referencing the MinValue property and assigning a new value of 100.
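Assigning the new value is then just a matter of setting the attribute on the control tag in Demo.aspx, for example:

```aspx
<%-- Sketch: pass MinValue="100" to the user control --%>
<TWebControl:WebControl ID="Header" runat="server" MinValue="100" />
```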

NOTE: When you run this code, it will not show any visible change in output, because the displayed text falls under the 100-character limit.

Summary

• ASP.Net has the ability to create user controls. User controls are used to have code which is used multiple times across an application. The user control can then be reused across the application.
• The user control needs to be registered on the ASP.Net page before it can be used.
• To use user control across all pages in an application, register it into the web.config file.
• Properties can also be added to a web user control.

Insert, Update and Delete database records in Asp.Net

Accessing Data from a database is an important aspect of any programming language. It is necessary for any programming language to have the ability to work with databases.

ASP.Net has the ability to work with different types of databases. It can work with the most common databases such as Oracle and Microsoft SQL Server.

It can also work with other databases such as MongoDB and MySQL.

In this tutorial, you will learn-

Fundamentals of Database connectivity

ASP.Net has the ability to work with a majority of databases, the most common being Oracle and Microsoft SQL Server. But whichever database is used, the logic behind working with it is mostly the same.

In our examples, we will look at working with Microsoft SQL Server as our database. For learning purposes, one can download and use the Microsoft SQL Server Express Edition, which is free database software provided by Microsoft.

While working with databases, the following concepts are common across all databases.

1. Connection – To work with the data in a database, the first obvious step is the connection. The connection to a database normally consists of the below-mentioned parameters.
• Database name or Data Source – The first important parameter is the database name. Each connection can only work with one database at a time.
• Credentials – The next important aspect is the username and password used to establish the connection to the database.
• Optional parameters – You can specify optional parameters for how .Net should handle the connection to the database. For example, one can specify how long the connection should stay active.
2. Selecting data from the database – Once the connection is established, data is fetched from the database. ASP.Net can execute SQL 'select' commands against the database; an SQL statement can be used to fetch data from a specific table in the database.
3. Inserting data into the database – ASP.Net can be used to insert records into the database. Values for each row that needs to be inserted are specified in ASP.Net.
4. Updating data in the database – ASP.Net can also be used to update existing records in the database. New values can be specified in ASP.Net for each row that needs to be updated.
5. Deleting data from the database – ASP.Net can also be used to delete records from the database. Code is written to delete a particular row from the database.

Now that we have covered the theory behind each operation, let's see how to perform these database operations in ASP.Net.

Database Connections in .Net

Let's now look at the code which needs to be in place to create a connection to a database. In our example, we will connect to a database named Demodb. The credentials used to connect to the database are given below

• User name – sa

Let's work with our current web application created in the earlier sections.

• Start adding database operations to it.
• Our example looks at establishing a simple connection to the Demodb database. This is done when the page is first loaded.
• When the connection is established, a message will be shown to the user indicating that the connection has been made.

Let's follow the below-mentioned steps to achieve this.

Step 1) Let's first ensure that you have your web application (DemoApplication) opened in Visual Studio. Double click the 'demo.aspx.cs' file to enter the code for the database connection.

Step 2) Add the below code which will be used to establish a connection to the database.
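A sketch of such code, reconstructed from the explanation below, might look like this (the server name WIN-50GP30FGO75, the Demodb database, and the sa user come from this example; the password is a placeholder you must replace with your own):

```csharp
using System;
using System.Data.SqlClient;

namespace DemoApplication
{
    public partial class Demo : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Build the connection string: Data Source, Initial Catalog
            // and the credentials. 'your_password' is a placeholder.
            string connectionString = "Data Source=WIN-50GP30FGO75;" +
                                      "Initial Catalog=Demodb;" +
                                      "User ID=sa;Password=your_password";

            // SqlConnection is the class used to connect to the database
            SqlConnection cnn = new SqlConnection(connectionString);

            // Open the connection, confirm to the user, then close it
            cnn.Open();
            Response.Write("Connection established");
            cnn.Close();
        }
    }
}
```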

Code Explanation:-

1. The first step is to create the variables which will be used to hold the connection string and the connection to the SQL Server database.
2. The next step is to actually create the connection string. The connection string consists of the following parts
• Data Source – This is the name of the server on which the database resides. In our case, it resides on a machine called WIN-50GP30FGO75.
• The Initial Catalog is used to specify the name of the database
• The UserID and Password are the credentials required to connect to the database.
3. Next, we assign the connection string to the variable 'cnn'.
• The variable cnn is of type SqlConnection. This is used to establish a connection to the database.
• SqlConnection is a class in ASP.Net, which is used to create a connection to a database.
• To use this class, you have to first create an object of this class. Hence, here we create a variable called 'cnn' which is of the type SqlConnection.
4. Next, we use the Open method of the cnn variable to open a connection to the database. We then display a message to the user that the connection is established. This is done via the 'Response.Write' method. Finally, we close the connection to the database.

Once the above code is in place and the project is executed using Visual Studio, you will get the below output. When the form is displayed, click the Connect button.

Output:-

The output message displayed in the browser will show that the connection to the database is made.

Accessing a database from Asp.Net

To show data accessed using Asp.Net, let us assume the following artifacts in our database.

1. A table called demotb. This table will be used to store the ID and names of various Tutorials.
2. The table will have two columns, one called "TutorialID" and the other called "TutorialName."
3. For the moment, the table will have two rows as shown below.
TutorialID   TutorialName
1            C#
2            ASP.Net

Let's change the code, so that we can query for this data and display the information on the web page itself. Note that the code entered is in continuation to that written for the data connection module.

Step 1) Let's split the code into two parts,

• The first part will be to construct our "select" statement. It will be used to read the data from the database.
• We will then execute the "select" statement against the database. This will fetch all the table rows accordingly.
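A sketch of both parts, reconstructed from the explanation below, might look like this (it assumes the open SqlConnection 'cnn' from the connection example):

```csharp
// Variables: the command, the data reader, the SQL string and the output
SqlCommand sqlquery;
SqlDataReader dataReader;
string sql, Output = " ";

// The select statement to fetch all rows from demotb
sql = "Select TutorialID, TutorialName from demotb";

// Create the command from the connection object and the SQL string,
// then execute it to get a data reader over the result rows
sqlquery = new SqlCommand(sql, cnn);
dataReader = sqlquery.ExecuteReader();

// Read the rows one at a time and collect the column values
while (dataReader.Read())
{
    Output = Output + dataReader.GetValue(0) + " - "
                    + dataReader.GetValue(1) + "<br/>";
}
```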

Code Explanation:-

1. The first step is to create the following variables
• SqlCommand – The 'SqlCommand' is a class defined within C#. This class is used to perform operations of reading from and writing to the database. Hence, the first step is to create a variable of this class type. This variable will then be used in subsequent steps to read data from our database.
• The DataReader object is used to get all the data returned by the SQL query. We can then read the table rows one by one using the data reader.
• We then define two string variables: "SQL" to hold our SQL command string, and "Output", which will contain all the table values.
2. The next step is to define the SQL statement which will be run against our database. In our case, it is "Select TutorialID, TutorialName from demotb". This will fetch all the rows from the table demotb.
3. Next, we create the command object which is used to execute the SQL statement against the database. The SQL command takes the connection object and the SQL string.
4. Next, we execute the data reader command, which fetches all the rows from the demotb table.
5. Now that we have all the rows of the table with us, we need a mechanism to access them one by one.
• For this, we will use the 'while' statement.
• The 'while' statement will be used to access the rows from the data reader one at a time.
• We then use the 'GetValue' method to get the value of TutorialID and TutorialName.

Step 2) In the final step, we will just display the output to the user. Then we will close all the objects related to the database operation.
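Continuing the same sketch, this final step might be:

```csharp
// Display the collected values, then release the database objects
Response.Write(Output);
dataReader.Close();
sqlquery.Dispose();
cnn.Close();
```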

Code Explanation:-

1. We will continue our code by displaying the value of the Output variable. This is done using the Response.Write method.
2. We finally close all the objects related to our database operation. Remember this is always a good practice.

When the above code is set, and the project is run using Visual Studio, you will get the below output.

Output:-

From the output, you can clearly see that the program was able to get the values from the database. The data is then displayed in the browser to the user.

Insert Database Record

Just like accessing data, ASP.Net can insert records into the database as well. Let's take the same table structure used in the accessing data section.

TutorialID   TutorialName
1            C#
2            ASP.Net

Let's change the code in our form, so that we can insert the following row into the table

TutorialID   TutorialName
3            VB.Net

Step 1) As the first step, let's add the following code to our program. The below code snippet will be used to insert a new record into our database.
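A sketch of the insert code, reconstructed from the explanation below (it assumes the open SqlConnection 'cnn' from the earlier example):

```csharp
// The insert statement for the new row (TutorialID=3, TutorialName=VB.Net)
string sql = "insert into demotb (TutorialID, TutorialName) " +
             "values (3, 'VB.Net')";

// Command object built from the connection object and the SQL string
SqlCommand insertCommand = new SqlCommand(sql, cnn);

// Associate the insert command with a data adapter and execute it
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.InsertCommand = insertCommand;
adapter.InsertCommand.ExecuteNonQuery();

// Always release the database objects when done
insertCommand.Dispose();
cnn.Close();
```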

Code Explanation:-

1. The first step is to create the following variables
1. SQLCommand – This data type is used to define objects. These objects perform SQL operations against a database. This object will hold the SQL command which will run against our SQL Server database.
2. The DataAdapter object is used to perform insert, delete and update SQL commands
3. We then define a string variable, which is "SQL" to hold our SQL command string.
2. The next step is to define the SQL statement which will be run against our database. In our case, we are issuing an insert statement. This will insert the record with TutorialID=3 and TutorialName=VB.Net.
3. Next, we create the command object which is used to execute the SQL statement against the database. In the SQL command, you have to pass the connection object and the SQL string
4. In our data adapter command,
• Associate the insert SQL command to the adapter.
• Then issue the 'ExecuteNonQuery' method. This is used to execute the Insert statement against our database.
• The 'ExecuteNonQuery' method is used in C# to issue any DML statements (insert, delete and update operation) against the database.
• To issue any such statements against a table in ASP.Net, one needs to use the 'ExecuteNonQuery' method.
5. We finally close all the objects related to our database operation. Remember this is always a good practice.

Step 2) As a second step, let's add the same code as in the Accessing data section. This is to display the recent table data in the browser. For that, we will add the below code to the demo.aspx.cs file.

When the above code is set, and the project is executed in Visual Studio, you will get the below output.

Output:-

In the browser window, you will see that the row was successfully inserted into the database.

Update Database Record

ASP.Net has the ability to update existing records from a database. Let's take the same table structure which was used above for the example to insert records.

TutorialID   TutorialName
1            C#
2            ASP.Net
3            VB.Net

Let's change the code in our form so that we can update the following row. The old row has TutorialID "3" and TutorialName "VB.Net". We will update the name to "VB.Net complete" while the TutorialID remains the same. Old row

TutorialID   TutorialName
3            VB.Net

New row

TutorialID   TutorialName
3            VB.Net complete

Step 1) As the first step let's add the following code to our program. The below code snippet will be used to update an existing record in our database.
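A sketch of the update code, reconstructed from the explanation below (it assumes the open SqlConnection 'cnn' from the earlier example):

```csharp
// The update statement: change the name for the row with TutorialID=3
string sql = "update demotb set TutorialName = 'VB.Net Complete' " +
             "where TutorialID = 3";

// Command object built from the connection object and the SQL string
SqlCommand updateCommand = new SqlCommand(sql, cnn);

// Associate the update command with a data adapter and execute it
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.UpdateCommand = updateCommand;
adapter.UpdateCommand.ExecuteNonQuery();

// Always release the database objects when done
updateCommand.Dispose();
cnn.Close();
```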

Code Explanation:-

1. The first step is to create the following variables
1. SqlCommand – This data type is used to define objects which perform SQL operations against a database. This object will hold the SQL command which will run against our SQL Server database.
2. The DataAdapter object is used to perform insert, delete and update SQL commands.
3. We then define a string variable "SQL" to hold our SQL command string.
2. The next step is actually to define the SQL statement which will be used against our database. In our case, we are issuing an 'update' statement. This will update the Tutorial name to "VB.Net Complete". The TutorialID will remain unchanged, and the value will be 3.
3. Next, we create the command object which is used to execute the SQL statement against the database. The SQL command takes the connection object and the SQL string.
4. In our data adapter command, we now associate the update SQL command with our adapter. We then issue the ExecuteNonQuery method, which executes the Update statement against our database.
5. We finally close all the objects related to our database operation. Remember this is always a good practice.

Step 2) As a second step, let's add the same code as in the Accessing data section. This is to display the recent table data in the browser. For that, we will add the below code

When the above code is set, and the project is executed using Visual Studio, you will get the below output.

Output:-

In the browser window, you will see that the rows were successfully updated in the database.

Delete Database Record

ASP.Net can delete existing records from a database. Let's take the same table structure which was used in the update example.

TutorialID   TutorialName
1            C#
2            ASP.Net
3            VB.Net complete

Let's change the code in our form, so that we can delete the following row

TutorialID   TutorialName
3            VB.Net complete

Step 1) As the first step let's add the following code to our program. The below code snippet will be used to delete an existing record in our database.
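A sketch of the delete code, reconstructed from the explanation below (it assumes the open SqlConnection 'cnn' from the earlier example):

```csharp
// The delete statement: remove the row with TutorialID=3
string sql = "delete from demotb where TutorialID = 3";

// Command object built from the connection object and the SQL string
SqlCommand deleteCommand = new SqlCommand(sql, cnn);

// Associate the delete command with a data adapter and execute it
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.DeleteCommand = deleteCommand;
adapter.DeleteCommand.ExecuteNonQuery();

// Always release the database objects when done
deleteCommand.Dispose();
cnn.Close();
```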

Code Explanation:-

1. The Key difference in this code is that we are now issuing the delete SQL statement. The delete statement is used to delete the row in the demotb table in which the TutorialID has a value of 3.
2. In our data adapter command, we now associate the delete SQL command with our adapter. We also issue the 'ExecuteNonQuery' method, which executes the delete statement against our database.

Step 2) As a second step, let's add the same code as in the Accessing data section. This is to display the recent table data in the browser. For that, we will add the below code.

When the above code is set, and the project is executed using Visual Studio, you will get the below output.

Output:-

Connecting Asp.net Controls to Data

We have seen how we can use ASP.Net classes such as SqlCommand and SqlDataReader to fetch data from a database. We also saw how we can read each row of the table and display it on the web page.

There are also methods available to link controls directly to different fields in a table. At the moment, only the below controls can be bound in an ASP.Net application

1. CheckboxList
2. DropDownList
3. Listbox

So let's see an example of how we can use control binding in ASP.Net. Here we will take a listbox example. Let's say we have the following data in our database.

TutorialID   TutorialName
1            C#
2            ASP.Net
3            VB.Net complete

Let's use the Listbox control and see how it can automatically pick up the data from our Demotb table.

Let's follow the below-mentioned steps to achieve this.

Step 1) Construct the basic web form. From the toolbox in Visual Studio, drag and drop two types of components: labels and listboxes. Then carry out the following substeps:

1. Put the text value of the first label as TutorialID
2. Put the text value of the second label as TutorialName

Below is how the form would look like once the above-mentioned steps are performed.

Step 2) The next step is to start connecting each listbox to the database table.

1. First, click on the Listbox for Tutorial ID. This will bring up another dialog box to the side of the control.
2. From the dialog box, we need to click on the option of Choose Data source.

Step 3) You will then be presented with a dialog box. This can be used to create a new data source. The data source will represent a connection to the database. Choose the option of 'New data source'.

Step 4) The below screen will appear after choosing the new data source in the last step. Here we need to mention the type of data source we want to create.

1. Choose the database option to work with an SQL Server database.
2. Now we need to give a name to our data source. Here we are giving it a name of DemoDataSource.
3. Finally, we click the 'OK' button to proceed to the next screen.

Step 5) Now we need to create a connection to our database. In the next screen, click on the New Connection button

Step 6) Next you need to add the credentials to connect to the database.

1. Choose the server name on which the SQL Server resides
2. Enter the user id and password to connect to the database
3. Choose the database as 'demotb'
4. Click the 'OK' button.

Step 7) In the next screen, you will be able to see the Demotb table. Just click on the Next button to accept the default setting.

Step 8) You will now be able to test the connection on the next screen.

1. Click on the Test Query button to just see if you are able to get the values from the table
2. Click the Finish button to complete the wizard.

Step 9) Now in the final screen, you can click the 'OK' button. This will now bind the TutorialID listbox to the TutorialID field name in the 'demotb' table.

Step 10) Now it's time to bind the Tutorial Name listbox to the Tutorial Name field.

1. First, click on the Tutorial Name Listbox.
2. Next, choose the Data Source option in the dialog box which appears at the side of the Listbox.

Step 11) You will already see the DemoDataSource when choosing the Data Source in the next screen.

1. Choose the DemoDataSource
2. Click on the OK button.

If all the above steps are executed as shown, you will get the below-mentioned output.

Output:-

From the output, you can see that the listboxes display the Tutorial ID and Tutorial Names respectively

Summary

• ASP.Net can work with databases such as Oracle and Microsoft SQL Server.
• ASP.Net has all the commands which are required to work with databases. This involves establishing a connection to the database. You can perform operations such as select, update, insert and delete.
• The DataReader object in ASP.Net is used to hold all the data returned by the database. The while loop in ASP.Net can be used to read the data rows one at a time.
• The data adapter object is used to perform SQL operations such as insert, delete, and update.
• ASP.Net can bind controls to the various fields in a table. They are bound by defining a data source in ASP.Net. The data source is used to pull the data from the database and populate them in the controls.

Asp.Net - Tracing, Debugging, Error Handling

In any application, errors are bound to occur during the development process. It is important to be able to discover errors at an early stage.

In Visual Studio, it is possible to do this for ASP.Net applications. Visual Studio is used for Debugging and has error handling techniques for ASP.Net. We will look into all of these aspects in detail further.

In this tutorial, you will learn-

ASP.NET Debugging

Debugging is the process of adding breakpoints to an application. Breakpoints are used to pause the execution of a running program. This allows the developer to understand what is happening in the program at a particular point in time.

Let's take an example of a program. The program displays a string "We are debugging" to the user. Suppose when we run the application, for some reason, the string is not displayed. To identify the problem we need to add a breakpoint. We can add a breakpoint to the code line which displays the string. This breakpoint will pause the execution of the program. At this point, the programmer can see what is possibly going wrong. The programmer rectifies the program accordingly.

Here in the example, we will use our 'DemoApplication' that was created in earlier chapters. In the following example, we will see

• How to make the demo application display a string.
• How to add breakpoints to an application.
• How to debug the application using this breakpoint.

Step 1) Let's first ensure we have our web application open in Visual Studio. Ensure the DemoApplication is open in Visual Studio.

Step 2) Now open the Demo.aspx.cs file and add the below code line.

• We are just adding the code line Response.Write to display a string.
• So when the application executes, it should display the string "We are debugging" in the web browser.
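A minimal sketch of that code line inside the page's load handler (the Page_Load placement is an assumption, consistent with earlier chapters):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // The line we will later pause on with a breakpoint
    Response.Write("We are debugging");
}
```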

Step 3) Now let's add a breakpoint. A breakpoint is a point in Visual Studio where you want the execution of the program to stop.

1. To add a breakpoint, click in the margin column next to the line where you want the breakpoint to be inserted. In our case, we want our program to stop at the code line "Response.Write". You don't need to type any command to add a breakpoint; just click next to the line on which you want one.
2. Once this is done, you will notice that the code gets marked in red. Also, a red bubble comes up in the column next to the code line.

Note: - You can add multiple breakpoints in an application

Step 4) Now you need to run your application using Debugging Mode. In Visual Studio, choose the menu option Debug->Start Debugging.

Output:-

When you perform all the steps correctly, the execution of the program will break. Visual Studio will go to the breakpoint and mark the line of code in yellow.

Now, if the programmer feels that the code is incorrect, the execution can be stopped and the code modified accordingly. To continue running the program, the programmer presses the F5 key on the keyboard.

Application Tracing

In Application tracing, we can see how web pages work within an application. This is a feature available in Visual Studio.

Application tracing allows one to see whether any requested page results in an error. When tracing is enabled, an extra page called trace.axd is added to the application (see image below). This page is attached to the application and shows all the requests and their status.

Let's look at how to enable tracing for an application.

Step 1) Let's work on our 'DemoApplication'. Open the web.config file from the Solution Explorer.

Step 2) Add the below line of code to the Web.config file.
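The trace element sits inside the system.web section of the Web.config file; a sketch of the relevant fragment, matching the limit discussed below:

```xml
<configuration>
  <system.web>
    <!-- enable application-level tracing, keeping at most 40 requests -->
    <trace enabled="true" requestLimit="40" />
  </system.web>
</configuration>
```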

The trace statement is used to enable tracing for the application.

• The 'requestLimit' attribute in the trace statement specifies the number of page requests that should be traced.
• In our example, we set a limit of 40. We set a limit because a higher value can degrade the performance of the application.

Run the "demoapplication" in Visual Studio.

Output:-

If you now browse to the URL http://localhost:53003/trace.axd, you will see the information for each request. Here you can see if any errors occur in the application. The following types of information are shown on the above page

1. The time of the request for the web page.
2. The Name of the web page being requested.
3. The status code of the web request. (status code of 200 means that the request is successful).
4. A View Details link, which allows you to view more details about the web request. An example of this is shown below. One important piece of detailed information provided is the header information, which shows what was sent in the header of each web request.

Page Tracing

Page tracing shows all the general information about a web page when it is being processed. This is useful in debugging if a page does not work for any reason.

Visual Studio will provide detailed information about various aspects of the page, such as the time taken by each method called during the web request. For example, if your web application has a performance issue, this information can help in debugging the problem. This information is displayed when the application runs in Visual Studio.

Let's look at how to enable tracing for an application at a page level.

Step 1) Let's work on our DemoApplication. Open the demo.aspx file from the Solution Explorer

Step 2) Add the below line of code to enable page tracing. In the Page declaration, just append the line Trace="true". This code line will allow page level tracing.
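A sketch of the resulting Page declaration (the CodeBehind and Inherits values are assumptions based on the DemoApplication used throughout):

```aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="demo.aspx.cs"
    Inherits="DemoApplication.Demo" Trace="true" %>
```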

Run the application in Visual Studio.

Output:-

Now when the web page Demo.aspx is displayed, you will get a whole lot of information about the page. Information such as the time for each aspect of the page lifecycle is displayed on this page.

ASP.NET Error Handling

In ASP.Net, you can have custom error pages displayed to users. If an application encounters any sort of error, a custom page will display this error to the user.

In our example, we are first going to add an HTML page. This page will display a string to the user "We are looking into the problem". We will then add some error code to our demo.aspx page so that the error page is shown.

Step 1) Let's work on our DemoApplication. Let's add an HTML page to the application

1. Right click on the DemoApplication in Solution Explorer

Step 2) In the next step, we need to provide a name to the new HTML page.

1. Provide the name as 'ErrorPage.'
2. Click the 'OK' button to proceed.

Step 3) The Errorpage will automatically open in Visual Studio. If you go to the Solution Explorer, you will see the file added.

Add the code line "We are looking into the problem" to the HTML page. You don't need to close the HTML file before making the change to the web.config file.

Step 4) Now you need to make a change in the web.config file. This change will notify that whenever an error occurs in the application, the custom error page needs to be displayed.

The 'customErrors' tag allows you to define a custom error page. The defaultRedirect attribute is set to the name of the custom error page created in the previous step.
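A sketch of the relevant web.config fragment:

```xml
<configuration>
  <system.web>
    <!-- redirect every unhandled error to the custom page -->
    <customErrors mode="On" defaultRedirect="ErrorPage.html" />
  </system.web>
</configuration>
```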

Step 5) Now let's add some faulty code to the demo.aspx.cs page. Open this page by double clicking the file in Solution Explorer

Add the below code to the Demo.aspx.cs file.
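A sketch of such faulty code, based on the description below:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // 'Example.txt' does not exist on the D drive, so ReadAllLines
    // throws an exception at run time
    string[] lines = System.IO.File.ReadAllLines(@"D:\Example.txt");
    foreach (string line in lines)
    {
        Response.Write(line);
    }
}
```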

• These lines of code are designed to read the lines of a text from a file.
• The file is supposed to be located in the D drive with the name 'Example.txt.'
• But in our situation, this file does not really exist. So this code will result in an error when the application runs.

Now execute the code in Visual Studio and you should get the below output.

Output:-

The above page shows that an error was triggered in the application. As a result, the ErrorPage.html page is displayed to the user.

ASP.NET Unhandled Exception

Even in the best of scenarios, there can be errors which are just not foreseen.

Suppose a user browses to the wrong page in the application. This is something that cannot be predicted. In such cases, ASP.Net can redirect the user to ErrorPage.html.

Let's see an example on this.

• We are going to use our same 'DemoApplication' which has the Errorpage.html.
• And we will try to view a web page which does not exist in our application.
• We should be redirected to our ErrorPage.html page in this case. Let's see the steps to achieve this.

Step 1) Let's work on our DemoApplication. Open the Global.asax.cs file from the Solution Explorer

NOTE: The global.asax.cs file is used to add code that will be applicable throughout all pages in the application.

Step 2) Add the below line of code to the global.asax.cs. These lines will be used to check for errors and display the ErrorPage.html page accordingly.
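A sketch of such a handler, reconstructed from the explanation below (the cast to HttpException and the Server.ClearError call are one common way to write this; treat them as assumptions):

```csharp
void Application_Error(object sender, EventArgs e)
{
    // Get the details of the last error raised in the application
    HttpException exc = (HttpException)Server.GetLastError();

    // 404 means the requested page was not found; show the custom page
    if (exc.GetHttpCode() == 404)
    {
        Server.ClearError();
        Server.Transfer("ErrorPage.html");
    }
}
```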

Code Explanation:-

1. The first line is the Application_Error event handler. This event is called whenever an error occurs in an application. Note that the event name has to be 'Application_Error'. And the parameters should be as shown above.
2. Next, we define an object of the class type HttpException. This is a standard object which will hold all the details of the error. We then use the Server.GetLastError method to get all the details of the last error which occurred in the application.
3. We then check if the error code of the last error is 404. (The error code 404 is the standard code returned when a user browses to a page which is not found). We then transfer the user to the ErrorPage.html page if the error code matches.

Now run the code in Visual Studio and you should get the below output

Output:-

Browse the page http://localhost:53003/Demo1.aspx . Remember that Demo1.aspx does not exist in our application. You will then get the below output.

The above page shows that an error was triggered in the application. As a result, the ErrorPage.html page is displayed to the user.

Logging Application Errors

Logging application errors helps the developer debug and resolve them at a later point in time. ASP.Net has the facility to log errors. This is done in the Global.asax.cs file when the error is captured; during capture, the error message can be written to a log file.

Let's see an example on this.

• We are going to use our same DemoApplication which has the Errorpage.html.
• And we will try to view a web page which does not exist in our application.
• We should be redirected to our ErrorPage.html page in this case.
• And at the same time, we will write the error message to a log file. Let's see the steps to achieve this.

Step 1) Let's work on our DemoApplication. Open the Global.asax.cs file from the Solution Explorer

Step 2) Add the below line of code to the global.asax.cs. It will check for errors and display the ErrorPage.html page accordingly. Also at the same time, we will log the error details in a file called 'AllErrors.txt.' For our example, we will write code to have this file created on the D drive.
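A sketch of the logging version of the handler, reconstructed from the explanation below (AppendAllText is used here so earlier messages are kept; treat the exact file-writing call as an assumption):

```csharp
void Application_Error(object sender, EventArgs e)
{
    // Get the last error and extract its message text
    Exception exc = Server.GetLastError();
    string str = exc.Message;

    // Write the error message to the log file on the D drive
    System.IO.File.AppendAllText(@"D:\AllErrors.txt", str);

    // Finally, send the user to the custom error page
    Server.Transfer("ErrorPage.html");
}
```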

Code Explanation:-

1. The first line is to get the error itself by using the 'Server.GetLastError' method. This is then assigned to the variable 'exc'.
2. We then create an empty string variable called 'str'. We get the actual error message using the 'exc.Message' property. The exc.Message property will have the exact message for any error which occurs when running the application. This is then assigned to the string variable.
3. Next, we define the file called 'AllErrors.txt'. This is where all the error messages will be sent. We write the string 'str', which contains the error message, to this file.
4. Finally, we transfer the user to the ErrorPage.html file.

Output:-

Browse the page http://localhost:53003/Demo1.aspx . Remember that Demo1.aspx does not exist in our application. You will then get the below output.

And at the same time, if you open the 'AllErrors.txt' file you will see the below information.

The error message can then be passed on to the developer at a later point in time for debugging purposes.

Summary

• ASP.Net has the facility to perform debugging and Error handling.
• Debugging can be achieved by adding breakpoints to the code. One then uses the Start Debugging option in Visual Studio to debug the code.
• Tracing is the facility to provide more information while running the application. This can be done at the application or page level.
• At the page level, the code Trace=true needs to be added to the page directive.
• At the application level, an extra page called Trace.axd is created for the application. This provides all the necessary tracing information.

Deploying a website on IIS

For users to access a website, it is required that the website is hosted on some sort of web server. There are different web servers available for different technologies. In .Net, the web server available is called Internet Information Services or IIS.

Once the web application is developed, it is then deployed on an IIS Server. This web application can then be accessed by the end users. There are two ways to deploy an application to the server, you will see both over here.

• Using the File Copy method.
• Using the Web publish method.

In this tutorial, you will learn-

What is IIS and Install IIS

IIS, or Internet Information Services, is the server used to host .Net web applications. IIS is normally installed on a Windows Server.

The below diagram shows the process flow for an IIS Server.

1. The first part is the request sent by the user. The request will normally be for a web page. An example could be http://example.com/Default.aspx .
• Here 'example.com' is a web site hosted on the IIS Server.
• 'Default.aspx' is a web page on the example.com web site.
• So the user will enter the URL http://example.com/Default.aspx in the web browser. The request will then go to the IIS Server, which hosts the example.com application.
2. Once the request reaches the IIS server, it is processed. The IIS Server will perform all the required operations as per the request.
3. Finally, the IIS Server sends the output back to the user. The output will generally be HTML content, which will be displayed in the web browser.

Let's look at how we can install IIS on a Windows Server. The following steps need to be carried out to install it.

Step 1) On Windows Server 2012, the default dashboard is shown as below.

• The first step is to click on the 'Add roles and features' on the dashboard.
• This allows one to install additional features on a server.

Step 2) In the next screen, you need to click the Next button to proceed.

Step 3) In the next step, we need to perform two sub steps

1. The first is to choose the 'Role-based or feature-based installation' option. This will allow us to perform the IIS installation.
2. Click the 'Next' button to proceed.

Step 4) In the next screen, you will see the name of the server on which the installation is taking place. Click the Next button to proceed.

Step 5) In the next step, we need to perform two sub steps

1. Choose the Web server option. This will ensure that IIS gets installed.
2. Click the 'Next' button to proceed.

Step 6) In the subsequent screen, click the next button to proceed.

Step 7) In the final screen, click the Install button to begin the installation.

Once IIS has been installed, you can launch it by going to search in Windows 2012.

1. Enter the string 'inetmgr', which is the command for IIS.
2. The Internet Information Services Manager will come up. Click on it.

After you click on the above link, IIS will open, and you will be presented with the below screen.

In IIS, you will have an initial site setup called Default Web Site.

If you open your browser and go to the URL http://localhost, you will see the below output. This URL goes to the Default Web Site shown in the previous screen. This default page indicates that the IIS server is up and running.

Deploying to IIS via File copy

After developing a web application, the next important step is to deploy the web application. The web application needs to be deployed so that it can be accessed by other users. The deployment is done to an IIS Web server.

There are various ways to deploy a web application. Let's look at the first method which is the File copy.

We use the web application created in the earlier sections. Let's follow the below-mentioned steps to achieve this.

Step 1) Let's first ensure we have our web application 'DemoApplication' open in Visual Studio.

Step 2) Open the 'Demo.aspx' file and enter the string "Guru 99 ASP.Net."

Now just run the application in Visual Studio to make sure it works.

Output:-

The text 'Guru 99 ASP.Net' is displayed. You should get the above output in the browser.

Step 3) Now it's time to publish the solution.

1. Right click the 'DemoApplication' in the Solution Explorer
2. Choose the 'Publish' Option from the context menu.

It will open another screen (see step below).

Step 4) In the next step, choose 'New Profile' to create a new publish profile. The publish profile will have the settings for publishing the web application via File copy.

Step 5) In the next screen we have to provide the details of the profile.

1. Give a name for the profile such as FileCopy
2. Click the OK button to create the profile

Step 6) In this step, we specify that we are going to publish the website via file copy.

1. Choose the Publish method as File System.
2. Enter the target location as C:\inetpub\wwwroot – This is the standard file location for the Default Web site in IIS.
3. Click 'Next' button to proceed.

Step 7) In the next screen, click the Next button to proceed.

Step 8) Click the 'Publish' button in the final screen

When all of the above steps are executed, you will get the following output in Visual Studio

Output:-

From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx

You can see from the output that now when you browse to http://localhost/Demo.aspx , the page appears. It also displays the text 'Guru 99 ASP.Net'.

Publishing the website

Another method to deploy the web application is to publish the website. The key differences in this method are:

• You have more control over the deployment.
• You can specify the Web site to which you want to deploy your application.
• For example, suppose you had two websites, WebSiteA and WebSiteB. With the Web publish method, you can publish your application to either website, and you don't need to know the physical path of the Web site.
• With the File copy method, you have to know the physical path of the website.

Let's use the same Demo Application and see how we can publish using the "website publish method."

Step 1) In this step,

1. Right click the 'DemoApplication' in the Solution Explorer
2. Choose the Publish Option from the context menu.

Step 2) In the next screen, select the 'New Profile' option to create a new Publish profile. The publish profile will have the settings for publishing the web application via Web Deploy.

Step 3) In the next screen we have to provide the details of the profile.

1. Give a name for the profile such as 'WebPublish'
2. Click the 'OK' button to create the profile

Step 4) In the next screen, you need to give all the details for the publish process

1. Choose the Publish method as Web Deploy
2. Select the server as Localhost
3. Enter the site name as Default Web Site – remember that this is the name of the website in IIS
4. Enter the destination URL as http://localhost
5. Finally, click the Next button to proceed

Step 5) Click the 'Next' button in the following screen to continue

Step 6) Finally, click the Publish button to publish the Web site

When all of the above steps are executed, you will get the following output in Visual Studio.

Output:-

From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx

You can see from the output that now when you browse to http://localhost/Demo.aspx , the page appears. It also displays the text Guru 99 ASP.Net.

Summary

• After an ASP.Net application is developed, the next step is to deploy it.
• In .Net, IIS is the default web server for ASP.Net applications.
• ASP.Net web applications can be deployed using the File copy method.
• ASP.Net web applications can also be deployed using the Web publish method.

How to Create and Run an ASP.Net Unit Testing Project

Testing is an important aspect of any programming language. Testing for ASP.Net applications is possible with the help of Visual Studio.

Visual Studio is used to create test code, and to run that test code for an ASP.Net application. In this way, it becomes simple to check for errors in an ASP.Net application. In Visual Studio, the testing module comes as out-of-the-box functionality, so you can straightaway test an ASP.Net project.

Introduction to testing for ASP.Net

The first level of testing an ASP.Net project is unit-level testing. This tests the functionality of an application. The testing is conducted to ensure that the application behaves as expected. In ASP.Net, the first task is to create a test project in Visual Studio. The test project will contain the necessary code to test the application.

Let's consider the below web page. In the page, we have the message "Guru99 – ASP.Net" displayed. Now, how can we confirm that the correct message is displayed when the ASP.Net project runs? This is done by adding a test project to the ASP.Net solution. This test project will ensure that the right message is displayed to the user.

Let's look into more detail now and see how we can work on testing in ASP.Net

Creating a .NET Unit Testing Project

Before we create a test project, we need to perform the below high-level steps.

1. Use our 'DemoApplication' used in the earlier sections. This will be our application which needs to be tested.
2. We will add a new class to the DemoApplication. This class will contain a string called 'Guru99 – ASP.Net.' This string will be tested in our testing project.
3. Finally, we will create a testing project. This is used to test the ASP.Net application.

So let's follow the above high-level steps and see how to implement testing.

Step 1) Ensure the DemoApplication is open in Visual Studio.

Step 2) Let's now add a new class to the DemoApplication. This class will contain a string called 'Guru99 – ASP.Net.' This string will be tested in our testing project.

1. In Visual Studio, right-click the 'DemoApplication' in the Solution Explorer.

Step 3) In this step,

1. Give a name 'Tutorial.cs' for the new class.
2. Click the 'Add' button to add the file to the DemoApplication.

Now, a new class has been added to the 'DemoApplication' project.

Step 4) Open the new Tutorial.cs file from "DemoApplication". Add the string "Guru99 – ASP.Net."

To open the file, double click on the Tutorial.cs file in the Solution Explorer.

The file will have some default code already written. Do not bother about that code, just add the below line of code.

Code Explanation:-

1. The Name variable is of type string.
2. Finally, in the constructor of the Tutorial class, the value "Guru99 – ASP.Net" is assigned to the Name variable.
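
Based on this explanation, the Tutorial.cs class might look like the following sketch (the class name, the Name variable, and the string come from the steps above; the exact file contents were only shown on screen):

```csharp
using System;

namespace DemoApplication
{
    public class Tutorial
    {
        // 1. The Name variable is of type string.
        public string Name;

        // 2. In the constructor, the value "Guru99 – ASP.Net" is assigned to Name.
        public Tutorial()
        {
            Name = "Guru99 – ASP.Net";
        }
    }
}
```

The demo.aspx page can then create a Tutorial object and display the value of its Name variable.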

Step 5) Now go to the demo.aspx file and add the lines of code to display the text "Guru99 – ASP.Net."

Code Explanation:-

1. The first line creates an object of the class 'Tutorial'. This is the first step when working with classes and objects. The name given to the object is 'tp'.
2. Finally, the demo.aspx page uses the object defined in 'Tutorial.cs' and displays the value of the Name variable.

When you run the above program in Visual Studio, you will get the following output.

Output:-

From the output, you see the message "Guru99 – ASP.Net" displayed.

Step 6) Now let's add our test project to the Demo Application. This is done with the help of Visual Studio.

1. Right click the Solution – DemoApplication.
2. In the context menu, choose the option 'New Project'.

Step 7) The step involves the addition of the Unit Test project to the demo application.

1. Click the 'Test' item type in the left-hand panel.
2. Choose the item as 'Unit Test Project' from the list, which appears in the center part of the dialog box.
3. Give a name for the test project. In our case, the name given is 'DemoTest'.
4. Finally, click the 'OK' button.

You will eventually see the DemoTest project added to the Solution Explorer, along with other files, such as UnitTest1.cs and Properties, that are generated by default.

Running the Test Project

The test project created in the earlier section is used to test our ASP.Net application. In the following steps, we are going to see how to run the Test project.

• The first step would be to add a reference to the ASP.Net project. This step is carried out so that the test project has access to the ASP.Net project.
• Then we will write our test code.
• Finally, we will run the test using Visual Studio.

Step 1) To test our Demo Application, the test project first needs to reference it. Add a reference to the DemoApplication project.

1. Right click the Demo Test project

Step 2) The next step is to add a reference to the DemoApplication

1. Select the Projects option from the left-hand side of the dialog box
2. Click on the check box next to DemoApplication
3. Click on the 'OK' button.

This will allow the DemoTest project to test our DemoApplication.

Step 3) Now it's time to add the test code to our test project.

• To do this, first double-click the UnitTest1.cs file in the Solution Explorer (this file is added automatically by Visual Studio when the Test project is created).
• This is the file which will be run to test the ASP.Net project.

You will see the below code added by Visual Studio in the UnitTest1.cs file. This is the basic code needed for the test project to run.

Step 4) The next step is to add the code which is used to test the string "Guru99 – ASP.Net."

1. Create a new object called 'tp' of the type Tutorial
2. The Assert.AreEqual method is used in .Net to test whether two values are equal. In our case, we are comparing the value of tp.Name to "Guru99 – ASP.Net".
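
Putting the two sub-steps together, the test method in UnitTest1.cs might look like this sketch (it assumes the MSTest reference that Visual Studio adds to a Unit Test Project, and the Tutorial class from the earlier steps):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DemoTest
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            // 1. Create a new object 'tp' of the type Tutorial.
            var tp = new DemoApplication.Tutorial();

            // 2. Compare the value of tp.Name to the expected string.
            Assert.AreEqual(tp.Name, "Guru99 – ASP.Net");
        }
    }
}
```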

Step 5) Now let's run our test project. For this, we need to go to the menu option Test->Run->All Tests

Output:-

The Test Explorer window will appear in Visual Studio. It will show the above result and confirm that the test ran successfully.

Summary

• Unit testing can be added to ASP.Net applications.
• To test an application, you add a Unit Test project to the ASP.Net solution.
• All tests can be run in Visual Studio; the Test Explorer shows the results of all the tests.

Internet Meme

Source: https://es.wikipedia.org

The term Internet meme is used to describe an idea, concept, situation, expression or humorous thought expressed in any kind of virtual medium (comic, video, text, image or any other multimedia construction) that is replicated over the Internet from person to person until it reaches wide diffusion.[1] Memes can spread through hyperlinks, forums, imageboards, websites and any other mass channel, above all, as is the case today, through social networks. The concept of the meme has been proposed as a possible mechanism of cultural evolution.[1] Viral marketing strategies are based on the propagation of memes to promote a product or concept.

History

The name meme has its origin in a concept conceived by Richard Dawkins, zoologist and scientist. In his book The Selfish Gene (1976) he sets out the memetic hypothesis of cultural transmission. He proposes the existence of two distinct information processors in human beings: one acts from the genome, through the replication of genes across generations, and the other acts at the level of the brain, replicating the individual's cultural information, which is received through teaching, imitation or simple assimilation. Dawkins calls the minimum unit of information that can be transmitted a "meme". According to the author, memes form the mental basis of our culture, just as genes form the first basis of our life.[2] Years later, Dawkins himself described Internet memes as a "hijacking of the original idea", implying that even the concept of the meme has mutated and evolved on its own.[3]

Other authors, such as the biologist Edward Wilson, have pointed out that the concept of replicable cultural units had already appeared at the end of the 1960s, under various names such as "mnemotype", "idene", "sociogen", "culturgen" or "cultural type".[4]

One of the most recent investigations of this subject was carried out by Knobel and Lankshear in 2007. They suggest that most memes are not replicated intact, but rather pass through different processes of reinterpretation and modification. This yields different versions of the same meme while respecting the original idea, which in turn enables its massive propagation.

The nature of the Internet, based on the premise of sharing information, has contributed to the diffusion of memes. One of the first documented memes transmitted over the Internet was the animated GIF of a dancing baby known as the "Ooga-Chaka Baby", which appeared in 1996. According to the Mexican communication specialist Gabriel Pérez Salazar, the Internet meme as an image accompanied by text, as a replicated cultural unit, appears "in an identifiable, fully recognized way" between 2000 and 2002.[1]

Evolution and propagation

Internet memes may stay unchanged or evolve over time, whether at random or through imitation, parody or the addition of new content. Memes normally arise as a form of social interaction, as cultural references or as a way of describing situations from people's real lives.[5] The speed with which they can spread worldwide, and their social impact, have attracted the interest of researchers and communication-industry professionals.[6] In academia, models are investigated to predict which memes will propagate across the network and how they will evolve. Commercially, they are used in advertising and marketing.

A study of the characteristics of Internet memes reached several conclusions about their propagation: for example, memes "compete" with one another for the public's attention, which results in a shorter lifespan, but they can also "collaborate", thanks to the creativity of Internet users, which contributes to their diffusion and permanence.[7][8] There are examples of very popular memes that die out quickly, while others, without achieving the same rapid recognition, survive thanks to their association with other memes.[7][8] For Shifman (2011), it is important that memes be shared within specific subcultures: the motivation for users to participate in the circulation and reinterpretation of memes online arises precisely from the need to belong to a community defined, at least partially, by these cultural practices. This is the dimension that, following Giménez (2000), lets us establish a relationship between the use of memes on the Internet and the identity-building processes of the users who participate in such interpretive communities: the sense of belonging.[citation needed]

In 2013, Dominic Basulto wrote in The Washington Post that the growth of the Internet and the tactics of the advertising industry have negatively affected the ability of memes to transmit durable units of human culture and, in his opinion, contribute to spreading banalities instead of important ideas.[9]

References

1. Martínez Torrijos, Reyes (July 8, 2014). "El significado cultural del meme se propaga con el relajo cibernético". La Jornada.
2. Dawkins, Richard (1989). The Selfish Gene (2nd ed.). Oxford University Press. p. 192. ISBN 0-19-286092-5.
3. Dawkins, Richard (June 22, 2013). The Saatchi & Saatchi New Directors' Showcase.
4. Wilson, Edward (1998). Consilience: The Unity of Knowledge. New York: Alfred A. Knopf, Inc. ISBN 978-0679768678.
5. Pérez Salazar, Gabriel; Aguilar Edwards, Andrea; Guillermo Archilla, María Ernestina (2014). "El meme en internet. Usos sociales, reinterpretación y significados, a partir de Harlem Shake". Argumentos [online] 27. ISSN 0187-5795.
6. Kempe, David; Kleinberg, Jon; Tardos, Éva (2003). "Maximizing the spread of influence through a social network". Int. Conf. on Knowledge Discovery and Data Mining (ACM Press).
7. Coscia, Michele (April 5, 2013). "Competition and Success in the Meme Pool: a Case Study on Quickmeme.com". Center for International Development, Harvard Kennedy School. Association for the Advancement of Artificial Intelligence.
8. Mims, Christopher (June 28, 2013). "Why you'll share this story: The new science of memes". Quartz. Archived from the original on July 18, 2013.
9. Basulto, Dominic (July 5, 2013). "Have Internet memes lost their meaning?". The Washington Post. Archived from the original on July 9, 2013.
10. Flor, Nick (December 11, 2000). "Memetic Marketing". InformIT. Retrieved July 29, 2011.
11. Carr, David (May 29, 2006). "Hollywood bypassing critics and print as digital gets hotter". New York Times.

DSC pull server

A DSC pull server (desired state configuration pull server) is an automation server that allows configurations to be maintained on many servers, computer workstations and devices across a network.

DSC pull servers use Microsoft Windows PowerShell DSC's declarative scripting to maintain current version software and also monitor and control the configuration of computers and services and the environment they run in. This capacity makes DSC pull servers very useful for administrators, allowing them to ensure reliability and interoperability between machines by stopping the configuration drift that can occur through making individual machine setting changes over time.

DSC pull servers use PowerShell on Windows Server 2012, and client servers must be running Windows Management Framework (WMF) 4. Microsoft has also developed PowerShell DSC for Linux.

Examples of what built-in DSC resources can automatically configure and manage on a set of computers or devices:

• Enabling or disabling server roles and features.
• Managing registry settings.
• Managing files and directories.
• Starting, stopping, and managing processes and services.
• Managing groups and user accounts.
• Deploying new software.
• Managing environment variables.
• Running Windows PowerShell scripts.
• Fixing configurations that drift away from the desired state.
• Discovering the actual configuration state on a given client.
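
As an illustration, a minimal DSC configuration covering two of the capabilities above (a server role and a service) might look like the following sketch; the configuration name is hypothetical:

```powershell
Configuration WebServerBaseline
{
    Node "localhost"
    {
        # Enable a server role or feature (here, the IIS Web Server role).
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Start and manage a service.
        Service W3SVC
        {
            Name  = "W3SVC"
            State = "Running"
        }
    }
}
```

Clients pulling this configuration will converge to the described state, and DSC will correct drift if the feature is removed or the service stops.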

Melbourne shuffle algorithm

The Melbourne shuffle algorithm is a sequence of actions intended to obscure the patterns by which cloud-based data is accessed. The goal is to make it difficult for unauthorized parties to draw conclusions about what type of data is being stored in the cloud by observing patterns that emerge as the data is accessed.

Even when data is encrypted, details about how often the data is accessed or what action is taken after the data has been accessed can be revealing. By analyzing digital footprints, an outsider can predict such things as who is likely to own a particular data set or what business announcement is likely to correlate with a particular access pattern.

As with a deck of cards, a data shuffle rearranges the array to achieve a random permutation of its elements. The Melbourne shuffle moves small amounts of data from the cloud server to the user's local memory, where it is rearranged before being returned to the server. Even when the same user repeatedly accesses the same data, shuffling ensures the access path will not be consistent.

The algorithm, which obfuscates access patterns by making them look quite random, was written by computer scientists at Brown University in 2014. It is named for another kind of shuffle -- a popular dance move in Australia during the 1990s.
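
The core idea can be pictured as follows. This is only an illustrative sketch of the shuffle step (download blocks, permute locally, write back), not the published algorithm, which works obliviously in passes over fixed-size buckets; the class and method names are invented for the example:

```csharp
using System;
using System.Collections.Generic;

class ShuffleSketch
{
    // Rearranges server-held data blocks via the user's local memory, so the
    // stored order no longer reveals anything about earlier access patterns.
    public static List<string> ShuffleViaClient(List<string> serverBlocks, Random rng)
    {
        // 1. Download the blocks into local (client-side) memory.
        var local = new List<string>(serverBlocks);

        // 2. Apply a random permutation locally (Fisher-Yates shuffle).
        for (int i = local.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            string tmp = local[i];
            local[i] = local[j];
            local[j] = tmp;
        }

        // 3. Upload the permuted blocks back to the cloud server.
        return local;
    }

    static void Main()
    {
        var blocks = new List<string> { "A", "B", "C", "D" };
        Console.WriteLine(string.Join(",", ShuffleViaClient(blocks, new Random())));
    }
}
```

Because the permutation is chosen on the client side, an observer of the server sees only that all blocks were read and rewritten, not where any particular block ended up.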

Tips to create flexible but clear manuals

Good application deployment manuals are thorough but usable. Follow these tips to create flexible but clear manuals that contribute to release management best practices.

Blue/Green deployment

Posted by: Margaret Rouse | Contributor: Stephen J. Bigelow

A blue/green deployment is a change management strategy for releasing software code. Blue/green deployments, which may also be referred to as A/B deployments, require two identical hardware environments that are configured exactly the same way. While one environment is active and serving end users, the other environment remains idle.

Blue/green deployments are often used for consumer-facing applications and applications with critical uptime requirements. New code is released to the inactive environment, where it is thoroughly tested. Once the code has been vetted, the team makes the idle environment active, typically by adjusting a router configuration to redirect application traffic. The process reverses when the next software iteration is ready for release.

If problems are discovered after the switch, traffic can be directed back to the idle configuration that still runs the original version. Once the new code has proven itself in production, the team may choose to update code in the idle configuration environment to provide an added measure of capability for disaster recovery.

In a blue/green deployment, identical environments run with one active while the other is updated and thoroughly tested. Once the deployment is ready, a simple network change flips the active and idle environments.
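
The flip itself can be modeled as nothing more than swapping a pointer between the two environments. A minimal sketch (the class and environment names are invented for illustration):

```csharp
using System;

// Models the router-level switch in a blue/green deployment: user traffic
// always goes to Active, while new code is staged and tested on Idle.
class BlueGreenRouter
{
    public string Active { get; private set; } = "blue";
    public string Idle { get; private set; } = "green";

    // Once the idle environment is vetted, flip which one serves traffic.
    public void Flip()
    {
        string previous = Active;
        Active = Idle;
        Idle = previous;
    }
}

class Demo
{
    static void Main()
    {
        var router = new BlueGreenRouter();
        Console.WriteLine(router.Active); // "blue" is serving users
        router.Flip();                    // release: "green" goes live
        Console.WriteLine(router.Active);
        router.Flip();                    // rollback: traffic returns to "blue"
        Console.WriteLine(router.Active);
    }
}
```

A rollback is just another flip, which is why recovery from a bad release is fast in this model.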

Blue/green deployments need two identical sets of hardware, and that hardware carries added costs and overhead without actually adding capacity or improving utilization. Organizations that cannot afford to duplicate hardware configurations may use other strategies such as canary testing or rolling deployments. A canary test deploys new code to a small group of users, while a rolling deployment staggers the rollout of new code across servers.

A software deployment process that begins but never ends

Starting a project is one thing. Finishing it, well, that's a whole other challenge. But what about when a project never quite ends?

This is where developers and operations specialists find themselves when an IT organization decides its software deployment process will be a continuous one. It's difficult terrain on which to operate, with application development and integration essentially never reaching an endpoint. Updates follow updates in a continuous cycle.

As TechTarget contributor Paul Korzeniowski notes in this handbook's first article, this release strategy is the framework for the creation of faster, better and more efficient applications.

But this endless wave of rollouts can strain an organization. Staff training is useful, but, depending on the discipline, not always readily available. Testing methods need to be tried, accepted and adjusted to. App developers and ops teams need to learn how best to work with one another, for the sake of efficiency as well as for the quality of each iteration they roll out.

When continuous development and continuous integration fuse together in an effective way, an organization puts itself in the land of DevOps -- whether it deliberately set out to do so or not. This software deployment process can mean big changes, but the work is seen as beneficial and all but inevitable. Recent TechTarget data found that 40% of IT shops already do some form of this; another 50% are working on it.

An ongoing, never-ending software deployment process is how a business moves at modern speed. And once you've started, there's no stopping.

3 Reasons Why Marketing Emails Don't Work (and How to Fix Them)

In 2015, around 205 billion marketing emails were sent per day. And you may be surprised to learn that around 246 billion emails per day are expected by the end of 2019. So what? Email marketing is not dead, my friend. Despite the constant buzz of social media, most marketers still consider email their preferred marketing tool.

"Of all the channels I have tested as a marketer, email always outperforms most of them. Not only does it have a high conversion rate, but as you build your list you can continuously monetize it by pitching multiple products,"

says Neil Patel, a renowned Internet marketer. However, not every company is able to harness the true power of email marketing. In today's post, I am going to discuss the top 3 reasons marketing emails fall flat. After reading this post, you will know how to avoid them. Without further ado, let's go straight to the points:

1. Not optimizing for mobile phones

53% of emails are opened on mobile devices, and 74% of smartphone users check their email on their phones. If your emails are not optimized for mobile, it will seriously hurt their open rate and ROI, because many people will not interact with them. But how can you optimize emails for mobile? Here is a quick cheat sheet to make sure the emails you send look great on mobile devices:
• Go easy on the images; use simple, lightweight images
• Use a responsive template
• Break the text into smaller paragraphs
• Write a short subject line
When you write copy for your marketing emails, keep it simple, clear and concise. Nobody is going to read long, wordy copy on a mobile phone.

2. Not writing an irresistible subject line

The subject line is the most important part of a marketing email. No matter how great your email copy is, nobody is going to open your emails if the subject line is not irresistible. In fact, 35% of email recipients consider the subject line the deciding factor in opening an email. That is why it is imperative to write compelling subject lines for your marketing emails. But how can you do it? Here is a list of proven tactics for creating compelling subject lines:
• Keep the subject line short and simple
• Reveal what is inside the email
• Start your subject line with action-oriented verbs
• Use numbers in the subject line
• Create a sense of urgency in the subject line
Remember that the emotional element of the subject line is what makes people click. So never forget to make an emotional appeal in your subject lines.

3. Not giving subscribers any offer

Whether you call it an ethical bribe or a bonus, an offer in your emails is a surefire way to increase their ROI. Everyone is busy these days; if people are reading your emails, they are spending time on them and expect to be paid for that time, as simple as that. That does not mean you should always offer big discount coupons in your marketing emails. The point is that there must be some exchange of value. Even a small eBook can bring you great results.
As the legendary Harvard Business School marketing professor Theodore Levitt says,
"People don't want to buy a quarter-inch drill. They want a quarter-inch hole!"
Your subscribers always wonder what you will offer them when they open your marketing emails.
Next time you run an email marketing campaign, follow these practical email marketing tips and you will certainly increase your ROI.

Conclusion

With inboxes flooded by marketing emails, people simply don't open them very often anymore. Worse, they regularly move batches of emails to their spam folders. If you want to make sure your emails actually get read, you have to act smart. Never forget to optimize your emails for mobile, write an irresistible subject line and make a valuable offer. This will increase the open rate of your emails and boost your ROI.

H-1B

H-1B is a United States Immigration Service visa classification that allows employers to hire highly skilled foreign workers in occupations that require the theoretical and practical application of a specialized body of knowledge. The applicant must hold a bachelor's degree or its equivalent in the specialty.

In addition to specialty occupations in fields such as science, medicine, health, education, information technology and business, the visa also applies to foreigners seeking to perform services of exceptional merit and ability related to a Department of Defense (DOD) development project, or services as a fashion model of distinguished merit or ability.

To be eligible for an H-1B visa, a foreign worker must have an employer sponsor. The employer is required to declare or demonstrate that a U.S. worker will not be displaced by the H-1B applicant, and to file a petition with U.S. Citizenship and Immigration Services (USCIS) on the foreign worker's behalf. Two-thirds of the petitions granted in 2015 were for employees in computer-related occupations.

Current law caps the annual number of skilled foreign workers who can obtain a new visa at 65,000, with an additional 20,000 allowed under the H-1B advanced-degree exemption. Foreign employees of government research organizations, institutions of higher education or nonprofit research organizations may be exempt from the cap.

Applications for new visas are accepted each year starting April 1. If the number of applications exceeds the congressionally approved cap after five days, a computerized selection process (sometimes called a lottery) selects 20,000 advanced-degree applications from the applicant pool. Applicants who are not accepted are added to the regular pool, and the computerized selection continues until the additional 65,000 visas have been granted.

The length of stay permitted by an H-1B visa is up to three years, but extensions are allowed. H-1B visa holders who want to keep working in the United States after six years, but who have not obtained permanent residency, must live outside the United States for one year before applying for a new H-1B visa. The maximum duration of an H-1B visa is ten years, for exceptional work for the U.S. Department of Defense.

In 2017, bills to reform the H-1B program were introduced in both the House and the Senate.

 Keyword(s): H-1B

.NET Standard FAQ

Summary

.NET Standard 2.0 is final.

You can now start producing .NET Standard 2.0 libraries and NuGet packages. Please use the latest .NET Core 2.0 Preview 2 as it contains many improvements that were necessary to provide a good experience.

Details

• Bigger API Surface: We have more than doubled the set of available APIs, from 13K in .NET Standard 1.6 to 32K in .NET Standard 2.0. Most of the added APIs are .NET Framework APIs. These additions make it much easier to port existing code to .NET Standard and, by extension, to any .NET implementation of .NET Standard, such as .NET Core 2.0 and the upcoming version of UWP.

• .NET Framework compatibility mode: The vast majority of NuGet packages currently still target .NET Framework. Many projects are blocked from moving to .NET Standard because not all of their dependencies target .NET Standard yet. That's why we added a compatibility mode that allows .NET Standard projects to depend on .NET Framework libraries as if they were compiled for .NET Standard. Of course, this may not work in all cases (for instance, if the .NET Framework binaries use WPF), but we found that it works often enough with .NET Standard 2.0 that, in practice, it unblocks many projects.

• Broad platform support. .NET Standard 2.0 is supported on the following platforms:
• .NET Framework 4.6.1
• .NET Core 2.0
• Mono 5.4
• Xamarin.iOS 10.14
• Xamarin.Mac 3.8
• Xamarin.Android 7.5
• UWP support is a work in progress and will ship later this year.
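The compatibility mode described above requires no special syntax in the project file. As a minimal sketch (the project and package names are hypothetical, for illustration only), a .NET Standard 2.0 library can reference a NuGet package that was compiled only for .NET Framework, and the tooling resolves it through the compat shim:

```xml
<!-- MyStandardLibrary.csproj (hypothetical): a .NET Standard 2.0 project
     referencing a .NET Framework-only package via the compatibility mode. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- This package targets only .NET Framework; the compat shim makes it usable here. -->
    <PackageReference Include="Contoso.Legacy.Helpers" Version="1.0.0" />
  </ItemGroup>
</Project>
```

If the referenced binaries use APIs outside .NET Standard (WPF, for example), the reference still restores and compiles, but calls into unsupported APIs can fail at runtime.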

Tooling Prerequisites

In general, make sure you run the latest version of the tooling:

• .NET Core SDK. You always need to install .NET Core 2.0 Preview 2. This also includes the CLI (dotnet) for building packages, so if you only want to use the CLI, you can stop here.
• Visual Studio. If you want to use Visual Studio for authoring .NET Standard 2.0 libraries, you also need to install Visual Studio 2017 15.3. Make sure to use 15.3 and not an earlier version, as this release addressed a couple of key issues to provide a good experience.
• Visual Studio for Mac. The latest version of Visual Studio for Mac supports building .NET Standard 2.0 libraries.
• Rider. The latest version also has support for .NET Standard 2.0.

What is .NET Standard?

.NET Standard is a specification that represents a set of APIs that all .NET platforms have to implement. This unifies the .NET platforms and prevents future fragmentation. Think of .NET Standard as POSIX for .NET.

Having a standard solves the code sharing problem for .NET developers by bringing all the APIs that you expect and love across the environments that you need: desktop applications, mobile apps & games, and cloud services.

For more details, take a look at the Introducing .NET Standard blog post.

The blog post has 15 pages. Why so complicated?

The general idea of .NET Standard is pretty simple indeed. The blog post is a bit longer because it also provides more context in related areas, specifically how we use .NET Standard in tooling, which additions we're making in .NET Standard 2.0, how we model platform specific APIs, and what .NET Standard means for .NET Core.

I still don't get it. Can you provide an analogy that makes sense for a dev?

David Fowler provided a developer analogy that explains .NET Standard in terms of interfaces and classes.

How is .NET Standard different from .NET Core?

Here is the difference:

• .NET Standard is a specification that covers which APIs a .NET platform has to implement.
• .NET Core is a concrete .NET platform and implements the .NET Standard.

What APIs are part of .NET Standard and which platforms support it?

We have a version document that points you to the platform support matrix as well as which APIs are available in a given .NET Standard version.

As a library author, which version of .NET Standard should I target?

When choosing a .NET Standard version you should consider this trade-off:

• The higher the version, the more APIs are available to you.
• The lower the version, the more platforms you can run on.

So generally speaking, you should target the lowest version you can get away with. The version document will help inform your decision.

How does .NET Standard versioning work?

Think of the .NET Standard versions as concentric circles: higher versions incorporate all APIs from previous versions.

From a project that targets .NET Standard version X, you'll be able to reference other libraries and NuGet packages that target .NET Standard 1.0 up to, and including, version X. For example, when you target .NET Standard 1.6, you'll be able to use packages that target any version from .NET Standard 1.0 up to 1.6. However, you won't be able to use a package that targets a higher version, for example, .NET Standard 2.0.
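The targeting rules above come down to a single property in the project file. As a minimal sketch (the project name is hypothetical), a library that targets .NET Standard 1.6 declares:

```xml
<!-- MyLibrary.csproj (hypothetical): targets .NET Standard 1.6, so it can
     consume packages targeting .NET Standard 1.0 through 1.6 -- but a
     package targeting .NET Standard 2.0 would fail to resolve. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.6</TargetFramework>
  </PropertyGroup>
</Project>
```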

From a project that targets a specific .NET platform, the .NET Standard versions you can reference depend on which version of .NET Standard the platform implements.

Starting with .NET Standard 2.0 we also enable referencing binaries compiled for .NET Framework through a compat shim.

How does .NET Standard work with Portable Class Libraries (PCLs)?

Certain PCL profiles are mapped to .NET Standard versions. The mapping can be found in our documentation.

For profiles that have a mapping, these two library types will be able to reference each other.

What about the breaking change between .NET Standard 1.x and 2.0?

Based on community feedback, we decided not to make .NET Standard 2.0 be a breaking change from 1.x. Instead, .NET Standard 2.0 is a strict superset of .NET Standard 1.6. The plan for handling .NET Framework 4.6.1 and .NET Standard 2.0 is outlined in the spec.

If there is no breaking change, why call it .NET Standard 2.0?

We think .NET Standard 2.0 is such a large change that bumping the major version is justified:

• We more than doubled the API surface
• We added a compat shim that allows referencing existing binaries, even if they weren't built against .NET Standard or Portable Class Libraries
Version  #APIs    Growth %
1.0      7,949
1.1      10,239   +29%
1.2      10,285   +0%
1.3      13,122   +28%
1.4      13,140   +0%
1.5      13,355   +2%
1.6      13,501   +1%
2.0      32,638   +142%

Is the API set of a .NET Standard version fixed?

Yes. A specific version of .NET Standard remains frozen once shipped. New APIs will first become available in specific .NET platforms, such as .NET Core. If we believe the new APIs should be made available everywhere, we'll create a new .NET Standard version.

What's the difference between targeting, implementing and supporting?

• A library targets a specific framework. .NET Standard is a synthetic framework, which represents a standardized set of APIs across all .NET platforms. A library can also target a specific .NET platform, in which case it gets access to platform-specific APIs. For example, when targeting Xamarin.iOS you also get access to iOS APIs.
• A .NET platform implements a specific .NET Standard version.
• A .NET platform supports all .NET Standard versions that are equal to or lower than the version it implements. For instance, if a .NET platform implements .NET Standard 1.5, it supports 1.0 - 1.5. If it implements .NET Standard 2.0, it supports 1.0 - 2.0.

Is .NET Standard specific to C#?

There is nothing language-specific about .NET Standard. From a language viewpoint, the only tie-ins to .NET Standard are the language-specific runtime APIs (for example, Microsoft.CSharp, Microsoft.VisualBasic, FSharp.Core, etc.) and the project templates that allow you to target .NET Standard.

We don't plan to add any language-specific runtime APIs to .NET Standard. The expectation is that they sit on top of .NET Standard and are referenced as needed, for example, from the project template.

Who decides what is in .NET Standard?

The current versions of .NET Standard (1.x - 2.0) were mostly computed:

• The 1.x version range was effectively constrained by what was available in .NET Core.
• For 2.0, we've started with the intersection of .NET Framework and Xamarin, deliberately excluding .NET Core to make sure the resulting set isn't held back by it. We decided we'll simply add the delta to .NET Core.

So for the most part, the decision maker was Microsoft, although, as explained above, with very few degrees of freedom, as the outcome was mostly a result of what was feasible at the time.

Moving forward, we want to open up .NET Standard to be driven by the .NET Standard review board. The board is comprised of implementers of .NET.

The idea is that the standard doesn't drive new APIs: rather, the work of the standards body is to decide which of the APIs available on some .NET platforms should be available on all .NET platforms and thus be added to the standard. In other words, the innovation happens in the context of a particular .NET platform, and standardization follows afterwards.

Is AppDomain part of .NET Standard?

The AppDomain type is part of .NET Standard. Not all platforms will support the creation of new app domains (for example, .NET Core will not), so the method AppDomain.CreateDomain, while available in .NET Standard, might throw PlatformNotSupportedException.

The primary reason we expose this type in .NET Standard is because the usage is fairly high and typically not associated with creating new app domains but for interacting with the current app domain, such as registering an unhandled exception handler or asking for the application's base directory.

Is MarshalByRefObject (remoting) part of .NET Standard?

We don't plan to add remoting support to .NET Standard. However, in order to avoid potential breaking changes, we'll have the MarshalByRefObject type as many other types derive from it but we will not expose any remoting-specific members on it, such as CreateObjRef.

Is System.Data part of .NET Standard?

.NET Standard will contain the abstractions (DbConnection, DbProviderFactory, DbProviderFactories, IDbConnection, etc.) as well as the general ADO.NET facilities (DataSet, DataTable, etc.).

We don't plan on adding any specific providers to .NET Standard as their applicability varies (for example, it's a highly unlikely scenario to use the SQL Server client from an iOS device, but it would make sense to use a provider that can store data on the device, such as SQLite). The expectation is that those sit on top of .NET Standard or remain platform-specific.

Why is JSON.NET not part of .NET Standard?

Today, one of the most popular libraries for dealing with JSON is JSON.NET. But by adding it to the .NET Standard we'd do the community a disservice. What matters is that JSON support is widely available. And James, the author of JSON.NET, does a great job making sure that JSON.NET is available everywhere. His ability to do this successfully depends on how easily he can make changes. The best way to enable that is a library that targets .NET Standard, because it can be updated independently of the standard itself, and everyone immediately benefits.

Of course, this doesn't mean we can't or shouldn't provide some built-in JSON support. We've talked with James about this in the past and I believe there is a lot of opportunity for us to collaborate with him on an even more performant way to provide JSON support in .NET. However, we're very interested in doing this with him rather than just building "another" JSON.NET. We want a strong ecosystem for .NET, but this can only happen if we embrace libraries based on merit, rather than by who wrote it. That's what open source is all about.

Why is XYZ not part of .NET Standard?

As explained in the JSON.NET example above, there is a trade-off between adding components to .NET Standard and having components that are on top of .NET Standard and can be updated independently.

Check out the .NET Standard inclusion principles to see how we approach this.

Why do you include APIs that don't work everywhere?

We generally don't include APIs in .NET Standard that don't work everywhere, and instead provide them as libraries that sit above .NET Standard.

But if you think about it: we can't make type members (for example, methods, properties, etc.) additive. The only thing you can make additive are types, as two different types can live in separate assemblies but we don't have a mechanism to split a single type across two different assemblies. In those cases, we leave the members on the type and let platforms that cannot meaningfully implement them throw PlatformNotSupportedException.

Moving forward, we try to avoid creating types where only parts of it work everywhere. But as always, there will be cases where we couldn't predict the future and are forced to throw.

Will there be tooling to highlight APIs that don't work everywhere?

Our current focus is on providing APIs either as part of .NET Standard or as independent packages that sit on top of .NET Standard. In some cases, certain APIs will not be supported everywhere and throw PlatformNotSupportedException. While that isn't ideal, it's much simpler than the alternatives, which are:

• Using #if, also called cross-compiling
• Writing complicated reflection code, also called runtime light-up

A simple if statement with a platform check is much easier to express. Of course, there are limits to this: exceptions are only acceptable for corner cases, to avoid the complexities above. We will generally not expose large sets of APIs that aren't supported.
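As a minimal C# sketch of such a platform check (the fallback logic is illustrative only; RuntimeInformation comes from System.Runtime.InteropServices):

```csharp
using System;
using System.Runtime.InteropServices;

class PlatformCheckSketch
{
    static void Main()
    {
        // Guard platform-specific work with a simple runtime check...
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        {
            // Windows-only code path.
        }

        // ...or catch PlatformNotSupportedException from an API that is part
        // of .NET Standard but not implemented everywhere.
        try
        {
            AppDomain.CreateDomain("sandbox"); // throws on .NET Core
        }
        catch (PlatformNotSupportedException)
        {
            // Fall back to a supported alternative.
        }
    }
}
```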

In the future, my hope is that we can provide tooling to help you with this, by, for example, providing Roslyn analyzers that can give you squiggles in the IDE.

Will Unity implement .NET Standard?

Yes. We're working with Unity to make sure this is a smooth experience. In general, since Unity is a fork of Mono it will mostly get .NET Standard support for free. The work to support Unity is mostly in tooling.

I saw your video and I like your watch. What is it?

It's a Garmin Forerunner 920, a triathlon watch. Sadly, my watch is fitter than I am :-)

When will Visual Studio support creating .NET Standard libraries?

Visual Studio 2017 supports creation of .NET Standard Libraries, including building NuGet packages.

When will Xamarin Studio support creating .NET Standard libraries?

The upcoming version of Xamarin Studio will support this.

What's the difference between .NET Standard Library and .NET Platform Standard?

These were terms we used in earlier discussions. We only have one concept today called .NET Standard.

Can you explain the assemblies and type forwarding in more detail?

Yes. See this video explaining how .NET Standard works. You can also take a look at the .NET Standard 2.0 spec, as it has some diagrams.

Should I reference the meta package or should I reference individual packages?

In the past, we've given developers the recommendation to not reference the meta package (NETStandard.Library) from NuGet packages but instead reference individual packages, like System.Runtime and System.Collections. The rationale was that we thought of the meta package as a shorthand for a bunch of packages that were the actual atomic building blocks of the .NET platform. The assumption was: we might end up creating another .NET platform that only supports some of these atomic blocks but not all of them. There were also concerns regarding how our tooling deals with large package graphs.

Moving forward, we'll simplify this:

1. .NET Standard is an atomic building block. In other words, new platforms aren't allowed to subset .NET Standard -- they have to implement all of it.

2. We're moving away from using packages to describe our platforms, including .NET Standard.

This means you won't have to reference any NuGet packages for .NET Standard anymore. You express your dependency with the lib folder, which is exactly how it has worked for all other .NET platforms, in particular .NET Framework.

However, right now our tooling will still burn in the reference to NETStandard.Library. There is no harm in that either, it will just become redundant moving forward.

cmdlet

A cmdlet (pronounced "command-let") is a lightweight Windows PowerShell script that performs a single function.

A command, in this context, is a specific order from a user to the computer's operating system or to an application to perform a service, such as "Show me all my files" or "Run this program for me." Although Windows PowerShell includes more than two hundred basic core cmdlets, administrators can also write their own cmdlets and share them.

A cmdlet is expressed as a verb-noun pair; a cmdlet written as a script has a .ps1 extension. Each cmdlet has a help file that can be accessed by typing Get-Help <cmdlet-Name> -Detailed. The detailed view of the cmdlet help file includes a description of the cmdlet, the command syntax, descriptions of the parameters and an example that demonstrates the use of the cmdlet.

Popular basic cmdlets include:

Cmdlet        Function
Get-Location  get the current directory
Set-Location  change the current directory
Copy-Item     copy files
Remove-Item   remove a file or directory
Move-Item     move a file
Rename-Item   rename a file
New-Item      create a new empty file or directory
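For example, a short PowerShell session using a few of the cmdlets above (the file and directory names are hypothetical):

```powershell
# Show the current directory, then create and rename a file.
Get-Location
New-Item -Path .\notes.txt -ItemType File
Rename-Item -Path .\notes.txt -NewName todo.txt

# Verb-noun cmdlets compose on the pipeline as well.
Get-ChildItem -Filter *.txt | Copy-Item -Destination .\backup

# Every cmdlet has built-in help.
Get-Help Copy-Item -Detailed
```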

Cloud database

A cloud database is a collection of content, either structured or unstructured, that resides on a private, public or hybrid cloud computing infrastructure platform.

There are two cloud database environment models: traditional and database as a service (DBaaS).

In a traditional cloud model, a database runs on an IT department's infrastructure via a virtual machine. Tasks of database oversight and management fall to the organization's IT staff.

What is database as a service?

By comparison, the DBaaS model is a fee-based subscription service in which the database runs on the service provider's physical infrastructure. Different service levels are usually available. In a classic DBaaS arrangement, the provider maintains the physical infrastructure and the database, leaving the customer to manage the database's content and operation.

Alternatively, a customer can set up a managed hosting arrangement, in which the provider handles database maintenance and management. The latter option may be especially attractive to small businesses that have database needs but lack adequate IT expertise.

Cloud database benefits

Compared with operating a traditional database on an on-site physical server and storage architecture, a cloud database offers the following advantages:

• Elimination of physical infrastructure. In a cloud database environment, the cloud computing provider of servers, storage and other infrastructure is responsible for maintenance and availability. The organization that owns and operates the database is responsible only for supporting and maintaining the database software and its content. In a DBaaS environment, the service provider is responsible for maintaining and operating the database software, leaving DBaaS users responsible only for their own data.
• Cost savings. Through the elimination of a physical infrastructure owned and operated by an IT department, significant savings can be achieved from reduced capital expenditures, a smaller staff, decreased electrical and HVAC operating costs, and less physical space.

DBaaS benefits

Beyond the benefits of employing a cloud database environment model, contracting with a DBaaS provider offers additional advantages:

• Instantaneous scalability. If additional database capacity is needed because of seasonal business peaks or unexpected spikes in demand, a DBaaS provider can quickly deliver additional capacity, performance and access bandwidth on a fee basis through its own infrastructure. A database running on a traditional on-site infrastructure would likely have to wait weeks or months for the procurement and installation of additional server, storage or communications resources.
• Performance guarantees. Through a service-level agreement (SLA), a DBaaS provider may be obligated to provide guarantees that typically quantify minimum uptime availability and transaction response times. An SLA specifies the monetary and legal remedies if those performance thresholds are not met.
• Specialized expertise. In a corporate IT environment, with the exception of the largest multinational companies, finding world-class database experts can be difficult, and keeping them on staff can be cost-prohibitive. In a DBaaS environment, the provider may serve thousands of customers, so finding, affording and retaining world-class talent is less of a challenge.
• Latest technology. To remain competitive, DBaaS providers work hard to ensure that all database software, server operating systems and other aspects of the overall infrastructure stay current with the security and feature updates regularly released by software vendors.
• Failover support. For a database service provider to meet its performance and availability guarantees, it is incumbent on the provider to ensure uninterrupted operation if the primary data center fails for any reason. Failover support typically encompasses the operation of multiple mirrored data storage facilities and servers. Properly managed, failover to a backup data center should be imperceptible to any customer of the service.
• Declining prices. With advances in technology and an intensely competitive market among major service providers, prices for a wide range of cloud computing services undergo continual recalibration. Declining prices are a major impetus for migrating on-site databases and other IT infrastructure to the cloud.

Cloud database architecture

Cloud databases, like their traditional ancestors, can be divided into two broad categories: relational and nonrelational.

A relational database, typically queried using Structured Query Language (SQL), is composed of a set of interrelated tables organized into rows and columns. The relationship between tables and columns (fields) is specified in a schema. SQL databases, by design, rely on data that is highly consistent in format, such as banking transactions or a telephone directory. Popular options include MySQL, Oracle, IBM DB2 and Microsoft SQL Server.

Nonrelational databases, sometimes called NoSQL databases, do not employ a table model. Instead, they store content, regardless of its structure, as a single document. This technology is well suited for unstructured data, such as social media content, photos and videos.

Migrating legacy databases to the cloud

An on-premises database can be migrated to a cloud implementation. There are many reasons to do so, including the following:

• It allows IT to retire on-premises physical server and storage infrastructure;

• It fills the talent gap when IT lacks adequate in-house database expertise;

• It improves processing efficiency, especially when the applications and analytics that access the data also reside in the cloud; and

• It achieves cost savings through several means, including:

• Reduction of in-house IT staff;

• The continually decreasing price of cloud services; and

• Paying only for the resources actually consumed, known as pay-as-you-go pricing.

Relocating a database to the cloud can be an effective way to make enterprise application performance part of a broader software-as-a-service deployment. Doing so simplifies the processes required to make information available through internet-based connections. Storage consolidation can also be a benefit of moving a company's databases to the cloud. Databases in multiple departments of a large company, for example, can be combined in the cloud into a single hosted database management system.

How does a cloud database work?

From a structural and design perspective, a cloud database is no different from one that operates on a business's own on-premises servers. The key difference lies in where it resides.

Where an on-premises database is connected to local users through a corporation's local area network (LAN), a cloud database resides on servers and storage furnished by a cloud or DBaaS provider, and it is accessed solely through the internet. To a software application, for example, a SQL database residing on premises or in the cloud should appear identical.

Whether accessed through direct queries (such as SQL statements) or through API calls, the database should behave the same. However, it may be possible to discern small differences in response time. An on-premises database accessed over a LAN is likely to provide a slightly faster response than a cloud-based database, which requires an internet round trip for each database interaction. In practice, however, the differences are likely to be small.

Collaboration platform

Vendors are taking different approaches to building collaboration platforms. Some are adding a "social layer" to legacy enterprise applications, while others build collaboration tools into new products. All successful enterprise collaboration platforms share certain attributes: they need to be easily accessible and easy to use, they need to be built for integration, and they must include a common set of functions that support team collaboration, issue tracking and messaging. Many collaboration platforms are designed to resemble Facebook or other sites that employees are already accustomed to using in their personal lives.

IBM, Google and Lyft launch Istio, an open source microservices platform

by Darryl K. Taft

IBM, Google and Lyft have joined forces on Istio, an open source microservices platform that connects and manages networks of microservices, regardless of their source or vendor.

IBM, Google and Lyft have teamed up to deliver Istio, an open source microservices platform for developers.

The Istio project is a collaboration among the three companies to create an open technology that provides a uniform way to connect, secure, manage and monitor networks of microservices across cloud platforms, regardless of platform, source or vendor.

The open source microservices platform supports managing traffic between microservices, enforcing access policies and aggregating telemetry data, all without requiring changes to microservice code. The service also gives developers new levels of visibility, security and control when building cloud applications, as microservices and containers become the de facto way to build applications, IBM said.

A changing landscape

Indeed, Istio is a response to the changing landscape of application development with microservices, which focus on breaking large applications down into smaller, more manageable pieces, said Scott Laningham, IBM developerWorks strategist, in a video about the project.

"We've had this vision of trying to create a new paradigm around cloud-native applications based on microservices, and we've been on this journey for some time," Angel Diaz, vice president of cloud architecture and technology, told TechTarget. Diaz said the journey goes back to 2014, when IBM joined forces with Docker to bring containers under open governance. It continued with Big Blue pushing the Open Container Initiative, supporting the Cloud Native Computing Foundation and advancing the Kubernetes container orchestration technology.

However, Diaz noted that IBM's interest in decomposing large applications into smaller pieces goes back nearly as far as IBM's history in the software business.

IBM is returning to this now that its large enterprise customers have begun to adopt containers and microservices as an integral part of their development environments. "The microservices approach is particularly well suited to developing continuously available, large-scale software in the cloud," said Jason McGee, vice president and CTO of IBM Cloud Platform, in a blog post on Istio. However, managing microservices efforts at scale can be problematic, and that is where Istio comes in.

"As microservices scale dynamically, issues such as service discovery, load balancing and failure recovery need to be solved uniformly," Laningham said. "That's why the collaboration on Istio is so important."

The open source microservices platform helps software teams perform service discovery, load balancing, fault tolerance, end-to-end monitoring and dynamic routing for feature experimentation, as well as compliance and security, the three companies said in a joint blog post.

Tecnología de tres empresas

El proyecto Istio se basa en la tecnología de cada una de las empresas fundadoras: Amalgam8 de IBM, Service Control de Google y Envoy, de Lyft.

Construido en IBM Research, Amalgam8 es una malla de servicio unificado que proporciona un tejido de enrutamiento de tráfico con un plano de control programable para ayudar a los usuarios con pruebas tipo A/B y lanzamientos de canarios, así como para probar sistemáticamente la resistencia de sus servicios contra fallas, explicó McGee.

Una malla de servicio es una capa de infraestructura que se encuentra entre un servicio y la red que proporciona a los operadores el control que necesitan, al tiempo que libera a los desarrolladores de tener que resolver problemas comunes del sistema distribuido en su código.

Service Control de Google proporciona una malla de servicio con un plano de control que se centra en aplicar políticas, como listas de control de acceso, límites de velocidad y autenticación, además de recopilar datos de telemetría de varios servicios y proxies.

Y Lyft creó el proxy de Envoy para soportar su propio entorno de microservicios. La tecnología soporta el sistema de producción de Lyft, que abarca más de 10.000 máquinas virtuales (VM) que manejan más de 100 microservicios.

De hecho, "quedó claro para todos nosotros que sería sumamente beneficioso combinar nuestros esfuerzos creando una abstracción de primera clase para el enrutamiento y la administración de políticas en Envoy, y exponer las API del plano de administración para controlar las características de Envoys de una manera que pueda ser fácilmente integrada con los flujos de integración continua y entrega continua [CI/CD]", dijo.

Thus, in addition to developing the Istio control plane, IBM also contributed several features to Envoy, such as traffic splitting across service versions, distributed request tracing with Zipkin and fault injection. Google hardened Envoy in several areas related to security, performance and scalability, McGee noted.
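The traffic-splitting idea mentioned above can be sketched in a few lines. This is not Istio's actual API -- Istio expresses splits declaratively in route rules -- and the service names and weights here are hypothetical; the sketch only illustrates the underlying weighted-routing mechanism behind a canary release:

```python
import random

# Hypothetical canary split: 90% of requests go to the stable version,
# 10% to the canary under test.
ROUTES = [("reviews-v1", 90), ("reviews-v2", 10)]

def pick_version(routes):
    """Choose a service version according to its traffic weight."""
    total = sum(weight for _, weight in routes)
    point = random.uniform(0, total)
    cumulative = 0
    for version, weight in routes:
        cumulative += weight
        if point <= cumulative:
            return version
    return routes[-1][0]

# Route a batch of requests and tally where they went.
counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(10_000):
    counts[pick_version(ROUTES)] += 1
print(counts)  # roughly 9000 vs. 1000
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is what lets teams roll a new version out -- or back -- without redeploying, which is the operator control a service mesh aims to provide.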

Charles King, principal analyst at Pund-IT, said he finds the Istio announcement interesting both for the technology and for its three vendor partners.

"On the technology side, the problems Istio aims to solve are all too real -- the challenges development teams face in keeping the many moving parts of multiple microservices communicating effectively and working together smoothly. Istio's conversion of microservices into a network of integrated services subject to programmable routing and a shared management layer is an intriguing approach that looks likely to address key pain points and also allow microservices to scale more easily and efficiently."

Istio currently runs on Kubernetes platforms, such as the IBM Bluemix Container Service. IBM and Google plan to build additional support for platforms such as Cloud Foundry and VMs in the near future.

McGee noted that Istio has drawn early support from several sources, including Red Hat with Red Hat OpenShift and OpenShift Application Runtimes, Pivotal with Pivotal Cloud Foundry, Weaveworks with Weave Cloud and Weave Net 2.0, and Tigera with the Project Calico Network Policy Engine.

Disaster Recovery as a Service (DRaaS)

Posted by: Margaret Rouse | Contributors: Kim Hefner and Stan Gibilisco

Disaster recovery as a service (DRaaS) is the replication and hosting of physical or virtual servers by a third party to provide failover in the event of a man-made or natural catastrophe.

Typically, DRaaS requirements and expectations are documented in a service-level agreement (SLA) and the third-party vendor provides failover to a cloud computing environment, either through a contract or on a pay-per-use basis. In the event of an actual disaster, an off-site vendor is less likely than the enterprise itself to suffer the direct and immediate effects, which allows the provider to implement the disaster recovery plan even in the event of the worst-case scenario: a total or near-total shutdown of the affected enterprise.

How to pick a DRaaS provider

If you determine DRaaS is the right approach to disaster recovery planning for your organization, there are some important questions to consider, according to analyst George Crump:

• What percentage of customers can the service provider support concurrently during a regional disaster such as a hurricane?
• What DR resources are available for recovery?
• How does the provider manage, track and update these resources?
• What happens if the provider cannot deliver DR services?
• What are the rules for declaring a disaster?
• Is it first-come, first-served until resources are maxed out?
• What happens to customers who cannot be serviced?
• How will users access internal applications?
• Will virtual private networks be managed or rerouted?
• How does a virtual desktop infrastructure affect user access and who manages it during disaster recovery?
• How will customers, partners and users access outward-facing applications?
• Will domain name system nodes be updated for outward or customer-facing applications?
• What are the procedures for failback?
• What professional services, skills and/or experiences are available from the service provider to facilitate disaster recovery and how much do they cost?
• How much help can be expected in a DR event?
• What are the DRaaS provider's testing processes?
• Can customers perform their own testing?
• How long can a customer run in the service provider's data center after a disaster is declared?
• What are the costs associated with the various disaster recovery as a service options?
• Are they a la carte, bundled or priced upfront?
• Is there a mix of upfront and recovery event costs?

Some examples of disaster-recovery-as-a-service providers in the market include Acronis, Amazon Web Services, Axcient, Bluelock, Databarracks, EVault, IBM, iland, Infrascale, Net3 Technology, Peak 10, Quorum, RapidScale, Sungard Availability Services (AS), Unitrends, Verizon Communications, VMware, Windstream Communications and Zerto.

With disaster recovery as a service, the time to return applications to production is reduced because data does not need to be restored over the internet. DRaaS can be especially useful for small and medium-sized businesses that lack the necessary expertise to provision, configure and test an effective disaster recovery plan. Using DRaaS also means the organization doesn't have to invest in -- and maintain -- its own off-site DR environment.

The biggest disadvantage to DRaaS is that the business must trust its service provider to implement the plan in the event of a disaster and meet the defined recovery time and recovery point objectives. Additional drawbacks include possible performance issues with applications running in the cloud and potential migration issues when returning applications to a customer's on-premises data center.

DRaaS vs. backup as a service (BaaS)

DRaaS fails over processing to the cloud so an organization can continue to operate during a disaster. The failover notice can be automated or manual. The DRaaS operation remains in effect until IT can repair the on-premises environment and issue a failback order.
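The automated-failover logic described above can be sketched as follows. This is a minimal illustration, not any vendor's API: the health-check inputs, the three-strikes threshold and the site names are all hypothetical, and real DRaaS contracts typically require a formal disaster declaration and a manual failback order rather than the automatic failback shown here:

```python
# Minimal sketch of an automated failover trigger: fail over to the DRaaS
# provider's cloud only after the primary site misses several consecutive
# health checks, and fail back once the primary reports healthy again.
FAILOVER_THRESHOLD = 3  # consecutive failed checks before failing over

def route_traffic(health_checks, threshold=FAILOVER_THRESHOLD):
    """Return the active site after each health-check result (True = primary up)."""
    active, misses = "primary", 0
    sites = []
    for ok in health_checks:
        if ok:
            misses = 0
            active = "primary"          # failback once the primary recovers
        else:
            misses += 1
            if misses >= threshold:
                active = "draas-cloud"  # declare disaster, fail over
        sites.append(active)
    return sites

# Two outages: only the second lasts long enough to trigger failover.
print(route_traffic([True, False, False, True, False, False, False, True]))
```

The threshold matters because a brief network blip should not trigger a full disaster declaration; how long an outage must last before failover is exactly the kind of rule an SLA should spell out.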

In backup as a service, an organization decides which files it will back up to a BaaS provider's storage systems. The customer organization is also responsible for setting up its RPO and RTO service levels, as well as its backup windows. A BaaS provider is only responsible for data consistency and restoring backed up copies of data.

Better Business Bureau (BBB)

The Better Business Bureau (BBB) is a non-profit accreditor of ethical businesses. The BBB also acts as a consumer watchdog for questionable sales tactics and scams.

Accreditation allows companies to be certified as legitimate and reputable businesses. For consumers, BBB offers free business reviews of over four million businesses and investigates complaints.

As an avenue for voluntary industry self-regulation, the BBB offers a business code that organizations can pledge to adhere to, paying a fee and receiving a BBB logo to display as a sign of reputability. The BBB also intermediates customer complaints in an official capacity. The organization resolves about 75 percent of more than 885,000 consumer complaints per year.

Fraudulent activities the BBB has dealt with, and raised awareness of, include telephone cruise contest frauds, “Can you hear me?” scams and various tech support scams.

Although the BBB is not affiliated with any government department and endorses no particular business, the organization itself isn't without controversy. The non-profit has been alleged to give higher ratings to businesses that pay a membership fee to the organization, a charge the BBB denies.


Voice Signature | Firma Biométrica por Voz

Posted by: Margaret Rouse | Contributor(s): Kaitlin Herbert

A voice signature is a type of electronic signature that uses an individual’s recorded verbal agreement in place of a handwritten signature. It is considered legally binding in both the private and public sectors under certain conditions. A voice signature may also be referred to as a telephonic signature.

During the contracting phase of a telephone transaction, a company can use biometrics software to record a customer approving a transaction. The recording is used to create a unique voiceprint, which is comparable to a fingerprint or retinal scan, as no two voices are the same. Once a voiceprint has been collected, it can be used to validate a person’s identity on later phone calls.

Under the Electronic Signatures in Global and National Commerce (ESIGN) Act and common law in the United States, voice signatures are legally enforceable with the addition of certainty of terms. Certainty of terms documents that both contracting parties accepted the “I agree” statements. In order to be binding, the voice signature needs to be attached to the contract. A standard solution is to embed the digital voice recording in the contract and use encryption to prevent the files from being disassociated or altered.
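One simple way to make a recording-contract pairing tamper-evident -- a sketch of the binding idea above, not a full encryption-and-embedding scheme, and with a hypothetical key and data -- is to compute a keyed HMAC over both artifacts together, so that neither can be altered or swapped without invalidating the tag:

```python
import hashlib
import hmac

# Illustrative only: a real system would also encrypt and embed the audio.
SIGNING_KEY = b"server-side-secret-key"  # hypothetical; keep such keys out of source code

def bind(contract_text: bytes, recording: bytes, key: bytes = SIGNING_KEY) -> str:
    """Return a tag tying this exact recording to this exact contract."""
    payload = hashlib.sha256(contract_text).digest() + hashlib.sha256(recording).digest()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(contract_text: bytes, recording: bytes, tag: str, key: bytes = SIGNING_KEY) -> bool:
    """Check that neither the contract nor the recording has changed."""
    return hmac.compare_digest(bind(contract_text, recording, key), tag)

contract = b"I agree to the terms of this service contract."
audio = b"...raw voice recording bytes..."
tag = bind(contract, audio)
print(verify(contract, audio, tag))          # True: the pair is intact
print(verify(contract + b"!", audio, tag))   # False: the contract was altered
```

Because the tag covers hashes of both files, disassociating the recording from its contract -- for example, splicing the same "yes" onto a different agreement -- would also fail verification.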

Voice signatures benefit organizations by increasing conversion rates compared to conventional in-person, ink-to-paper methods. They can eliminate the long process that usually involves printing, distributing and waiting for the returned signed documents. In turn, organizations see real increases in customer service quality, levels of data security and conversion rates.

In the United States, voice signatures have played an important part in the implementation of the Affordable Care Act (ACA). Telephonic applications and signatures are accepted for coverage through the Health Information Exchange (HIE), Medicaid and Children’s Health Insurance Plan (CHIP) programs across the United States. Under the law, however, collection and storage of voice signatures can vary from state to state.

Unfortunately, voice signatures can be misused with potentially negative consequences. An example is the “Can you hear me?” telephone scam, in which the victim is recorded answering "yes" to a question that will most likely be answered affirmatively. The affirmative response is then spliced onto another audio file and used as a voice signature to authorize charges without the victim's knowledge.

To avoid becoming the victim of voice signature scams, the United States Federal Trade Commission (FTC) offers mobile and landline phone customers the following advice:

• Hang up immediately if a call begins "Can you hear me?"
• Be suspicious of robocalls.
• When speaking to an unfamiliar caller, be alert for any question that prompts the answer "yes."
• Check bank, credit card and cell phone bill statements regularly for unauthorized charges.
• Ignore incoming phone calls from unfamiliar numbers.
• Do not return missed calls from unfamiliar numbers.
• Report suspicious calls to the Better Business Bureau and/or FTC hotlines.
• Consider joining the Do Not Call Registry.

Seven Wastes

Posted by: Margaret Rouse

The seven wastes are categories of unproductive manufacturing practices. The seven wastes are an integral part of lean production, a just-in-time production model that seeks to limit overproduction, unnecessary wait times and excess inventory.

The idea of categorizing seven wastes is credited to engineer Taiichi Ohno, the father of the Toyota Production System (TPS). Although the classifications were intended to improve manufacturing, they can be adapted for most types of workplaces.

Following are the seven wastes, as categorized by Taiichi Ohno:

• Overproduction -- Manufacture of products in advance or in excess of demand wastes money, time and space.
• Waiting -- Processes are ineffective and time is wasted when one process waits to begin while another finishes. Instead, the flow of operations should be smooth and continuous. According to some estimates, as much as 99 percent of a product's time in manufacture is actually spent waiting.
• Transportation -- Moving a product between manufacturing processes adds no value, is expensive and can cause damage or product deterioration.
• Inappropriate processing -- Overly elaborate and expensive equipment is wasteful if simpler machinery would work as well.
• Excessive inventory -- This wastes resources through the costs of storage and maintenance.
• Unnecessary motion -- Resources are wasted when workers have to bend, reach or walk distances to do their jobs. Workplace ergonomics assessment should be conducted to design a more efficient environment.
• Defects -- Quarantining defective inventory takes time and costs money.

Since the categories of waste were established, others have been proposed for addition, including:

• Underutilization of employee skills -- Although employees are typically hired for a specific skill set, they always bring other skills and insights to the workplace that should be acknowledged and utilized.
• Unsafe workplaces and environments -- Employee accidents and health issues as a result of unsafe working conditions waste resources.
• Lack of information or sharing of information -- Research and communication are essential to keep operations working to capacity.
• Equipment breakdown -- Poorly maintained equipment can result in damage and cost resources of both time and money.

R programming language

Posted by: Margaret Rouse | Contributor(s): Ed Burns

The R programming language is an open source scripting language for predictive analytics and data visualization.

The initial version of R was released in 1995 to allow academic statisticians and others with sophisticated programming skills to perform complex statistical analyses of data and display the results in any of a multitude of visual graphics. The "R" name is derived from the first letter of the names of its two developers, Ross Ihaka and Robert Gentleman, who were associated with the University of Auckland at the time.

The R programming language includes functions that support linear modeling, non-linear modeling, classical statistics, classifications, clustering and more. It has remained popular in academic settings due to its robust features and the fact that it is free to download in source code form under the terms of the Free Software Foundation's GNU general public license. It compiles and runs on UNIX platforms and other systems including Linux, Windows and macOS.

The appeal of the R language has gradually spread out of academia into business settings, as many data analysts who trained on R in college prefer to continue using it rather than pick up a new tool with which they are inexperienced.

The R software environment

The R language programming environment is built around a standard command-line interface. Users leverage this interface to read data and load it into the workspace, specify commands and receive results. Commands can be anything from simple mathematical operators, including +, -, * and /, to more complicated functions that perform linear regressions and other advanced calculations.

Users can also write their own functions. The environment allows users to combine individual operations, such as joining separate data files into a single document, pulling out a single variable and running a regression on the resulting data set, into a single function that can be used over and over.

Looping functions are also popular in the R programming environment. These functions allow users to repeatedly perform some action, such as pulling out samples from a larger data set, as many times as the user wants to specify.

R language pros and cons

Many users of the R programming language like the fact that it is free to download, offers sophisticated data analytics capabilities and has an active online community of users they can turn to for support.

Because it's been around for many years and has been popular throughout its existence, the language is fairly mature. Users can download add-on packages that enhance the basic functionality of the language. These packages enable users to visualize data, connect to external databases, map data geographically and perform advanced statistical functions. There is also a popular user interface called RStudio, which simplifies coding in the R language.

The R language has been criticized for delivering slow analyses when applied to large data sets. This is because the language utilizes single-threaded processing, which means the basic open source version can only utilize one CPU at a time. By comparison, modern big data analytics thrives on parallel data processing, simultaneously leveraging dozens of CPUs across a cluster of servers to process large data volumes quickly.

In addition to its single-threaded processing limitations, the R programming environment is an in-memory application. All data objects are stored in a machine's RAM during a given session. This can limit the amount of data R is able to work on at one time.
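The single-threaded-versus-parallel contrast described above can be illustrated with a small sketch. This is not R code and not any particular framework's API -- the chunk count and data are hypothetical -- it simply shows the divide-and-combine pattern that parallel big data engines use to spread one analysis across several CPUs:

```python
from concurrent.futures import ProcessPoolExecutor

def summarize(chunk):
    """Stand-in for an expensive per-record analysis."""
    return sum(x * x for x in chunk)

data = list(range(1_000_000))

# Single-threaded, like base R: one CPU walks the entire data set.
serial_result = summarize(data)

# Parallel: split the data into chunks, process each chunk on its own
# CPU, then combine the partial results.
chunks = [data[i::4] for i in range(4)]
with ProcessPoolExecutor(max_workers=4) as pool:
    parallel_result = sum(pool.map(summarize, chunks))

print(serial_result == parallel_result)  # same answer either way
```

The answer is identical in both cases; the difference is that the parallel version's wall-clock time shrinks as more CPUs are added, which is exactly the property the basic open source R runtime lacks.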

R and big data

These limitations have limited the applicability of the R language in big data applications. Instead of putting R to work in production, many enterprise users leverage R as an exploratory and investigative tool. Data scientists will use R to run complicated analyses on sample data and then, after identifying a meaningful correlation or cluster in the data, put the finding into production through enterprise-scale tools.

Several software vendors have added support for the R programming language to their offerings, allowing R to gain a stronger footing in the modern big data realm. Vendors including IBM, Microsoft, Oracle, SAS Institute, TIBCO and Tableau, among others, include some level of integration between their analytics software and the R language. There are also R packages for popular open source big data platforms, including Hadoop and Spark.