KW Glossary

Ontology Design




Tag Management

by System Administrator - Wednesday, 8 July 2015, 8:45 PM

Forrester Report: Boost Digital Intelligence with Tag Management


Struggling with fragmented data sources from your digital marketing and analytics applications? Download our Forrester Research Report: Boost Digital Intelligence with Tag Management (a $499 value) to learn how tag management is becoming widely recognized as a foundational element in unifying data and fueling omnichannel marketing programs.

In this new report, you will learn:

  • How tag management practices have evolved to activate digital customer insights across touch points
  • How tag management can scale digital analytics and insights-driven customer engagements
  • Why it's important to keep tag management independent of your analytics technology

Please read the attached report.

The True Impact of Tags: IT's Guide to Third Party Monitoring

by System Administrator - Thursday, 9 July 2015, 4:14 PM

The True Impact of Tags: IT's Guide to Third Party Monitoring

This eBook addresses the impact various third parties have on your site, the challenges IT/Ops faces to meet both business and performance expectations, and provides a game plan to managing the risk and latency carried by content served from outside your infrastructure.
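One simple way to reason about the latency carried by content served from outside your infrastructure is to total measured per-tag latencies against a performance budget. A minimal sketch of that bookkeeping (the hosts and millisecond figures below are hypothetical, not taken from the eBook):

```python
# Hypothetical per-tag latency measurements, in milliseconds, as might be
# gathered from real-user monitoring or synthetic tests of each third-party host.
tag_latencies = {
    "analytics.example": 120,
    "ads.example": 340,
    "social.example": 95,
}

def audit_tags(latencies, budget_ms=200):
    """Return total third-party latency and the tags exceeding the budget."""
    total = sum(latencies.values())
    over_budget = sorted(host for host, ms in latencies.items() if ms > budget_ms)
    return total, over_budget

total, offenders = audit_tags(tag_latencies)
print(total, offenders)  # 555 ['ads.example']
```

Running an audit like this per release makes the "game plan" concrete: tags over budget become candidates for deferral, async loading, or removal.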

Please read the attached ebook.

Tesla Powerwall

by System Administrator - Wednesday, 20 May 2015, 9:32 PM

Tesla Powerwall

Posted by: Margaret Rouse

Tesla Powerwall is a wall-mounted battery system for storing energy generated by solar power panels and wind turbines. Powerwall is a product of Tesla Energy, a business division of Tesla Motors.

Powerwall, which is aimed at the residential market, is designed to store power generated at peak solar time for use during power outages and off-peak solar time, including night. The system's slim, modular design is adapted from the battery technology used in Tesla's electric cars, and a unit can hold up to 7 kilowatt-hours of energy, enough to power a typical home in the United States for about 7 hours. A similar product, Tesla Powerpack, is aimed at the business and utility market. Both products are based on lithium-ion battery technology.

Powerwall will be available in a 7 kilowatt-hour (kWh) model, designed for daily-use applications, and a 10 kWh model, designed for backup power. The units weigh about 220 lbs. Total capacity can be upgraded by modular connection of up to nine additional units. Tesla expects to start shipping Powerwall sometime during the summer of 2015.

Tesla provides the following specifications for Powerwall:

  • Mounting: Wall Mounted Indoor/Outdoor
  • Inverter: Pairs with growing list of inverters
  • Energy: 7 kWh or 10 kWh
  • Continuous Power: 2 kW
  • Peak Power: 3.3 kW
  • Round Trip Efficiency: 92%
  • Operating Temperature Range: -20°C (-4°F) to 43°C (110°F)
  • Warranty: 10 years
  • Dimensions: H: 1300mm W: 860mm D:180mm
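These figures allow a back-of-the-envelope runtime estimate. The sketch below assumes an average household load of about 1 kW, consistent with the article's "7 kWh powers a typical home for about 7 hours" claim; the load figure is an assumption, not a Tesla specification:

```python
def backup_runtime_hours(capacity_kwh, avg_load_kw, round_trip_efficiency=0.92):
    """Hours a fully charged battery can carry a given average load.

    Usable energy is discounted by round-trip losses (Tesla quotes 92%).
    """
    usable_kwh = capacity_kwh * round_trip_efficiency
    return usable_kwh / avg_load_kw

# The 10 kWh backup model at an assumed 1 kW average household load:
print(round(backup_runtime_hours(10, 1.0), 1))  # 9.2
```

Note that discounting for round-trip efficiency, as above, gives slightly less runtime than the headline capacity suggests.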


The CIO's And CMO's Blueprint For Strategy In The Age Of The Customer

by System Administrator - Friday, 20 February 2015, 4:23 PM

The CIO's And CMO's Blueprint For Strategy In The Age Of The Customer

Four Imperatives To Establish New Competitive Advantage

By Kyle McNabb and Josh Bernoff

Put Customer Obsession At The Center Of Your Corporate Strategy

Face it: Your technology-empowered customers now know more than you do about your products and services, your pricing, and your reputation. Technology has tipped the balance in favor of the customer.

Once, Technology Favored Companies; Now, It Empowers Customers


Customers can buy anything instantly and have it delivered anywhere. And in a world of global sourcing and efficient supply chains, your competitors can copy or undermine the moves you make to compete. Your only successful response — the only way to retain customers and their loyalty — is to become customer-obsessed. Here's what we mean:

A customer-obsessed enterprise focuses its strategy, its energy, and its budget on processes that enhance knowledge of and engagement with customers and prioritizes these over maintaining traditional competitive barriers.

Customer-obsessed enterprises invest differently. Some, like Macy's and USAA, have customer obsession in their budgeting DNA. Others, such as Delta Air Lines and Royal Bank of Canada, use the customer life cycle to support a continuous relationship with their customers.

The Customer Life Cycle Shapes Customer-Obsession Decisions

Customer obsession also drives firms, such as Hilton Worldwide and The Home Depot, to spend hundreds of millions on revamping technology to meet customer needs. And it pays off, regardless of whether the customer is a consumer or a business buyer. According to an analysis by Watermark Consulting, 10-year investment returns from publicly traded customer experience leaders (measured by Forrester's Customer Experience Index) were 43%, while investments from customer experience laggards generated negative returns.

Customer Experience Leaders Outperform The Market


A Blueprint For Strategy In The Age Of The Customer

Customer obsession is easy to talk about but hard to do. It requires remaking your company, systematically, to re-orient each element toward improved customer experience. It also requires embracing the technologies that make customer obsession real and actionable. We'll lay out a blueprint for how to do it, based on four key strategic imperatives:

  1. Transform the customer experience with a systematic, measurable approach.
  2. Accelerate your digital business future to deliver greater agility and customer value.
  3. Embrace the mobile mind shift to serve customers in their moments of need.
  4. Turn big data into business insights to continuously improve your efforts.
Four Market Imperatives In The Age Of The Customer


1. Transform The Customer Experience With A Systematic, Measurable Approach

While every company says that it's customer-focused, few act systematically on that impulse. In a global survey across all industries, seven out of 10 business leaders said that customer experience is a critical or high priority. Even so, less than one-third of customer experience professionals indicate that their firms consistently take the impact on customer experience into consideration when making business decisions. Customer experience success requires discipline and relentless focus. Here's how to approach it:
  • Embrace disciplines to design, implement, and manage customer experience. For example, a systematic approach to improving and measuring customer experience helped Fidelity Investments save more than $24 million per year, even as it grew investments from key customers by several billion dollars. Successful companies improve in four phases: repair, elevate, optimize, and differentiate. At each phase, employees must adopt new, increasingly sophisticated customer experience management practices.
  • Address defects in the customer experience ecosystem. To find unique strengths, opportunities, and differentiators, you need a full view of your ecosystem, including competitors, partners, and regulatory constraints. Delta Air Lines boosted its Customer Experience Index in a heavily regulated industry by embracing the process and operational changes needed to reduce flight cancellations and to improve on-time performance, service recovery, and baggage handling. To start, you must map painful customer journeys, including the parts of the ecosystem you don't control. Then, create improvements in collaboration with all the stakeholders you identify, including customers.
  • Design experiences that exceed customer expectations. Change happens so fast now that quick iterations and rapid prototyping and testing are mandatory. One major US home appliance manufacturer that embraced these Agile methods grew 30% annually for the past six years and is now approaching $1 billion in revenue. These approaches demand design expertise that encompasses customer understanding and empathy; effective prototyping, storyboarding, and envisioning; and building creative digital experiences.
Growth And Customer Experience Improvement Top Business Leaders' Priorities


2. Accelerate Your Digital Business Future For Greater Agility And Customer Value

How can you keep up with empowered customers? Be more digital. Your customers, your channels, and your competitors are digital. The future of your business is digital. But you probably lack a clear digital business vision. Executives in eBusiness and marketing often bolt on digital channels and processes rather than retool their company for digital agility. By contrast, truly digital businesses continuously exploit digital technologies to both create new sources of value for customers and increase operational agility in service of customers. This approach is how Mercedes-Benz can use digital sensors to transform the driving experience; Rolls-Royce can use digital sensors in its jet engines to revamp its business model; and Procter & Gamble can test product packaging and shelf layouts in virtual stores before committing to costly manufacturing. Here's how you can be like them:
  • Master digital customer experience and digital operational excellence. CMOs and CIOs should work with eBusiness professionals to embrace web and mobile customer touchpoints. Re-envision your business not as a standalone entity but as part of an ecosystem of suppliers that customers assemble according to their needs and an ecosystem of collaborating businesses that share data and services. And infuse all your processes with digital efficiency so you can react more quickly to customer demands.
  • Start your journey toward embracing digital business techniques everywhere. Digital business should be every person's job, every team's task, and every division's modus operandi. The size of this overhaul of your business varies: Some firms, like Amazon, were born digital — they find embracing these digital principles far easier. If you're not this lucky, start with a focus on just one business unit or product line. Use digital to reposition it within the value ecosystem. Then, build on that success.
  • Become comfortable with disruption. Digital disruption is here, and your company could be the next to be disrupted by the likes of Google News, Hailo, or Zipcar. Don't just wait for disruption to come to your industry — learn to disrupt your own business. Big companies from Intermountain Healthcare to Target are already doing this. Their strategy: 1) Define a clear vision of what digital disruption promises and 2) seek a frank understanding of the obstacles your specific organization must overcome to embrace disruption.
Executives Don't Believe They Have A Clear Vision For Digital Transformation

3. Embrace The Mobile Mind Shift To Serve Customers In Their Moments Of Need

The most urgent place to apply digital thinking is through mobile devices. One billion smartphones have trained people, your customers, to turn to mobile first. Both consumers and business buyers have experienced a mobile mind shift: They expect that they can get what they want in their immediate context and moments of need.

The Mobile Mind Shift Is Spreading Rapidly 


Companies like Citibank (with mobile check deposit) and Starbucks (with mobile payment) have leveraged these mobile moments to reinforce customer loyalty. The new focus on mobile moments differs from PC-based web interactions, and it requires new thinking on how to interact with customers in their moments of need. To embrace it, develop new interactions with the four-step IDEA process:

  • Identify your customers' mobile moments and context. This starts with mapping the customer's journey using your systematic approach to customer experience. Use techniques like ethnographic research to determine moments where mobile can solve problems, reduce friction, or answer questions. For example, Johnson & Johnson uses its bedtime app to solve problems at the mobile moment of getting the baby to sleep, with lullabies and a sleep routine. Air conditioner vendor Trane streamlines the mobile selling moment by enabling its independent reps with a tablet app.
  • Design the mobile engagement. Design is about choosing the moments that matter most — not only the ones that customers value but also those that drive revenue or reduce costs. It's also about using context to deliver more value. For example, American Airlines knows it's your day of travel, knows what seats are available, and knows your frequent-flyer status — and uses this information to present the opportunity to upgrade your seat right from its app.
  • Engineer your platforms, processes, and people for mobile. The true costs of mobile spring from the challenges of updating corporate systems to live up to mobile demands. This is what's behind Hilton Worldwide's $550 million mobile makeover. You'll need to retool your platforms with atomized, responsive APIs; remake your processes with mobile in mind; and reform your design, business, and development talent into agile teams.
  • Analyze to optimize performance and improve interactions. Apps and mobile sites must evolve — and rapidly. That's why it's key to build analytics into every mobile project. Mobile apps and sites spin off lots of performance data, but you'll also want to instrument them to make it easy to measure their impact on business metrics (like room nights for a hotel chain). Because mobile engagement happens so close to the customer, you'll also want to mine this data for new customer insights.
An Overview Of The Steps In The IDEA Cycle
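The "Analyze" step of the IDEA cycle can be sketched as a minimal aggregation over an instrumented event stream, tying app usage back to a business metric. Everything below is illustrative (the event log shape, versions, and values are hypothetical, not a real analytics API):

```python
from collections import defaultdict

# Hypothetical instrumented events: (app_version, event_type, business_value).
# For a hotel chain, business_value might be room nights booked in the session.
events = [
    ("2.1", "booking", 2),
    ("2.1", "booking", 1),
    ("2.0", "booking", 1),
    ("2.1", "app_open", 0),
    ("2.0", "app_open", 0),
]

def metric_by_version(events, event_type="booking"):
    """Total a business metric (e.g. room nights) per app version."""
    totals = defaultdict(int)
    for version, kind, value in events:
        if kind == event_type:
            totals[version] += value
    return dict(totals)

print(metric_by_version(events))  # {'2.1': 3, '2.0': 1}
```

Comparing the metric across versions is what lets a team judge whether a mobile release actually moved the business number, not just the engagement counters.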


4. Turn Big Data Into Business Insights To Continuously Improve Your Efforts

Your understanding of your customers' context will make, or break, your ability to succeed in a customer's moment of need. Thankfully, your customers now create and leave behind digital breadcrumbs through their activity across all their touchpoints, such as websites, mobile apps, store visits, and phone calls. Firms like Lowe's, Macy's, and Walgreens use this data to respond based on context and develop deeper insights. American Express used big data to achieve an eightfold improvement in identifying at-risk customers. Here's what you can do to infuse insights from data into your business:

  • Reset what big data means to you. What's so big about big data? It's the opportunities you uncover when you put increasingly novel sources and types of data to use. Large, diverse, and messy forms of data can create new sources of customer value and increase operational agility in service of your customers. Big data helped Clorox anticipate demand based on social media and achieve record sales of its cleaning products. To exploit big data, CMOs, eBusiness leaders, and CIOs must collaborate to develop the culture, competencies, and capabilities required to close the gap between the data available and your ability to turn that data into business insight.
  • Use data to fuel a real-time contextual marketing engine. Your edge will come from self-sustaining cycles of real-time, two-way, insight-driven interactions with individual customers. Brands that have seized on this potential — such as McCormick & Company, Mini USA, and Nike — are assembling proprietary digital platforms that Forrester calls contextual marketing engines. They create sticky, highly engaging environments for customer interaction and generate unique, proprietary data and insights. The results improve customer engagement, boost revenue, and enhance customer experiences.
  • Accelerate innovation by using big data to anticipate customer needs. It's time to change how, and especially when, you apply and perform analytics. Go beyond static segmentation; embed analytics in your business. Emerging methods like location analytics and device usage analysis can generate the real-time contextual insight you need to deliver engaging, contextual experiences.

A Business Technology Agenda Will Sustain New Competitive Advantage

It's a tall order, remaking your business for the imperatives of customer experience, digital business, mobile engagement, and big data insights. That's why it will take the combined efforts of your most senior leaders — from the CIO to the CMO. Most likely, your technology is not up to the task of supporting these market imperatives. If you're like most firms, you've concentrated your technology management efforts on traditional IT — supporting and transforming internal operations. Successful companies will refocus their technology efforts on business technology (BT) — technology, systems, and processes to win, serve, and retain customers. Your BT agenda will lead to new competitive advantage if you:

  • Center on technologies that support the customer life cycle. The customer life cycle, and the systematic approach to transforming the customer experience, will push you to prioritize new and different technologies. You'll prioritize life-cycle solutions and engagement platform technologies to deliver seamless and compelling customer experiences.
  • Place a premium on software skills. Software powers nearly all the touchpoints your customers use to engage with your brand. Software is also essential to empowering your employees (in sales and customer service, for example) to address your customer's moment of need. Software is now a core asset to deliver elements of your customer-obsessed brand: trusted, remarkable, unmistakable, and essential. CIOs and their teams must rethink software's role, striving to build a software competency that establishes and maintains a unique advantage.
  • Embrace modern approaches to application and solution delivery. Your BT agenda must be delivered at a new, faster pace to keep up with rapidly changing business and market dynamics. It must focus on iterative delivery and continuous improvement to deliver impact and business value in weeks, not months. Modern delivery approaches will be the de facto expectation when delivering customer-obsessed solutions.
  • Force difficult prioritization and organization conversations. Companies like Delta Air Lines and Wal-Mart jumpstarted their competitive advantage by acquiring software companies. Firms like FedEx and The Washington Post got executive support and had the scale to start on their own. Comcast and UBS turned to software engineering firms for help. What you do will be based on an assessment of what your firm can do with the finite capital and scarce qualified resources it has available.
  • Demand changes to your partner ecosystem. Your BT agenda will redefine what's strategic to your organization and reshape the partner ecosystem you use. CMOs, eBusiness leaders, and CIOs must prioritize the agencies, management consultancies, and systems integrators that help advance your market imperative efforts. You may not replace existing relationships, but you will have an expanded partner ecosystem to both navigate and manage.
Top Technologies To Support The Customer Life Cycle



What It Means For CIOs
IT Groups Will Wane, And Their Favored Vendors Will Be Replaced

The age of the customer spells the end of power for the traditional CIO. Those that make the BT transition will become valued partners in winning, serving, and retaining customers. But most won't make the transition.

  • The BT agenda will shrink technology departments. In many, half the staff — those with skills focused on maintaining systems — will lose their jobs. Smart CIOs will hire based on creativity and Agile skills with customer-facing technology. This structural change and the unemployment it creates could be the biggest labor retraining challenge of the 2020s.
  • Big tech vendors will lose power to data collectives. The IBMs, Microsofts, and Oracles of the world gain power from software. But power in the age of the customer comes not just from software, but from data. Already, companies like The Coca-Cola Company in consumer packaged goods, FICO in financial services, and UPS in shipping are building data collectives that will provide full-service, cloud-enabled shared data warehouses by industry. The vendors of the future will serve data the way that today's vendors serve software.
  • Systems integrators will find themselves competing with cloud players. The cloud is central to the agility required for digital business transformation, mobile engagement, and big data analytics. Cloud leaders like Amazon and Google will build out their software, eventually offering custom-built services similar to the way today's systems integrators build software solutions. The IBMs and KPMGs of the world will find Amazon their toughest competitor.
  • Vendors will merge based on where they live in the customer journey. Mergers in the tech world are commonplace. But in the age of the customer, vendors won't combine based on what department they sell into (marketing versus IT). Instead, they'll attempt to dominate elements of the customer life cycle. We'll see packaged app suites for discovery, for service, and for closing the sale.

What It Means For CMOs

CMOs Will Shift From Customer Acquisition To Customer Experience

Traditional marketing is out of step with the age of the customer. Instead, CMOs will learn to focus on the contextual marketing that matters most when empowered customers can switch at a moment's notice.

  • The Obama political machinery will become the template for effective marketing. The Obama presidential victory in 2012 came from intelligence about voters, focused in key places on the day that it mattered. In marketing, now every day is decision day. You'll see the shift first in the entertainment industry, which will take a portion of the millions it spends promoting movies and divert it to personalized, contextualized, mobile persuasion. These techniques will spread to any company that needs to catalyze decisions — from car companies to telecom vendors. More and more of the marketing budget will go to buying the data to fuel these algorithms.
  • Companies will reorganize around customer journeys. Customer experience standout USAA already organizes its company around the customer journey. This trend will catch on in highly competitive industries like consumer packaged goods and travel. Instead of product lines, companies will have departments that focus on getting real-time information to buyers, easing the closing process, and turning service into loyalty. Heavy-handed efforts like Comcast's recently revealed crude upselling tactics will be replaced with agile, digitally enhanced, just-in-time offers that buyers will welcome.
  • Get ready for the race to the middle. Forrester's Customer Experience Index reveals that while breakthrough customer experience is still rare, truly poor customer experiences are becoming rarer. In industries from travel to telecom, the lowest performers have realized that there's money to be made going from bad to middling. If you are not above the midpoint in your industry, you'll be left in the dust by this movement.
  • Google will be your new key partner. Google's Nest Labs acquisition is just the beginning. Technology companies that control the intelligent hardware — set-top boxes, home appliances, mobile phones — will know the most about their customers. If your products are intelligent, too, you'll find this to be an advantage. If not, you'll need to come begging to partner with companies that have these intimate customer connections.


The digital effect on BPM systems

by System Administrator - Friday, 26 June 2015, 7:14 PM

The digital effect on BPM systems

Business processes have changed radically in the past decade, with rapid developments in cloud, social networking, mobile and analytics driving today's businesses to transform into digital companies. This requires just as radical a transformation in these companies' IT systems, a reality reflected in today's modernized breed of business process management (BPM) systems and tools.

In this SearchCIO handbook, CIO expert Harvey Koeppel explains how the "whole new deal" of BPM really lies in the shift from traditional to digital processes itself, and while BPM technologies have been incrementally improved to adapt to this shift, the BPM lifecycle itself remains relatively unchanged. In our second story, CTO Niel Nickolaisen warns against the temptation of defaulting to BPM to deal with complex workflows – at least until you have standardized your processes and systems first. And in our last piece, Executive Editor Linda Tucci explores how a new generation of BPM systems has arisen to help companies make the transition into digital and tailor their applications to the customer experience.

Table Of Contents

  • BPM in the digital age: A whole new game?
  • How not to use BPM systems
  • Going digital with BPM

Please read the attached guide.

Threat Reduction

by System Administrator - Tuesday, 10 November 2015, 7:40 PM

Threat Reduction: How agencies are addressing the attack surface in an era of rising cyber incidents

by F5

Federal agencies face numerous threats from multiple points across their enterprise. Within this ever-growing threat surface, they have to make sure that their employees and users can access and use data and applications so they can do their work seamlessly and without incident.

FierceGovernmentIT presents the challenges, issues, perspectives and advancements for federal agencies and other public-sector organizations in managing the threat surface.

Download this eBook to learn:

  • How agencies view the threat surface as they continue to drive their missions.
  • What steps agencies are taking to manage the threat surface, including addressing access and new technologies and practices and implementing new tools.
  • The future of the threat surface, how the Internet of Things is changing the game and what the government must do to manage it.
Tips to create flexible but clear manuals

by System Administrator - Tuesday, 29 August 2017, 9:53 PM

Good application deployment manuals are thorough but usable. Follow these tips to create flexible but clear manuals that contribute to release management best practices.

Ways to make the application deployment process clear and flexible

Top 10 Considerations when Selecting a Secure Text Messaging Solution

by System Administrator - Monday, 13 July 2015, 5:25 PM

Top 10 Considerations when Selecting a Secure Text Messaging Solution

Evaluating Secure Text Messaging solutions can cause anyone’s eyes to glaze over in dreaded anticipation. But the process doesn’t have to be laborious, overwhelming, or fraught with perils when you know the right questions to ask.

Please read the attached whitepaper.

Top 10 Database Security Threats

by System Administrator - Monday, 12 October 2015, 9:34 PM

Top 10 Database Security Threats

by Imperva

Practical applications for Big Data have become widespread, and Big Data has now become the new "prize" for hackers. Worse, widespread lack of Big Data security expertise spells disaster. These threats are real. This whitepaper explains common injection points that provide an avenue for attackers to maliciously access database and Big Data components.
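As an illustration of the kind of injection point the whitepaper covers, here is a minimal sketch using Python's built-in sqlite3 module (the table, data, and payload are hypothetical), contrasting a concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# returning every row instead of just alice's.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal value,
# so it matches no rows at all.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # 2 0
```

The same concatenation-versus-binding distinction applies to the Big Data components the whitepaper discusses; only the query dialects differ.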

Please read the attached whitepaper.

Top 10 ways to have your project end up in court

by System Administrator - Thursday, 13 August 2015, 2:03 PM

Top 10 ways to have your project end up in court

By David Taber

David Letterman may be off the air, but his Top 10 List format remains in the comedic canon. In his honor, here's David Taber-man's Top 10 list of the worst practices for agile projects.

As someone who’s sometimes called to be an expert witness, I’ve had to testify in arbitrations and court proceedings about the best practices in agile project management. Of course, when things devolve to the point of legal action, there haven’t been a lot of “best practices” in play by either side. Suffice it to say I’ve seen more than a few blunders by clients.

Here are the ones that show up over and over again:

10. Give the consultant ambiguous requirements, then start using rubber yardsticks*

Nothing’s more comforting to the client than the idea that they'll get exactly what they want, even if they never put sufficient energy into specifying exactly what that is. This goes double for the high-risk area of custom software development. So state your requirements vaguely, to make sure that anything you dream up later can be construed as being within the bounds of your original wording. This tactic works best when you don't really understand the technology, but you do know you need the deliverable to be infinitely detailed yet easy enough for your grandmother to use without training. This tactic is even more effective when you start having detailed conversations about requirements during acceptance testing, when there are no development funds left.

[Related: Top 10 project management certifications]

[*What’s a rubber yardstick? Instead of being like a regular yardstick that is a straight line of fixed length, the rubber yardstick stretches and shrinks and even bends and twists to connect dots that aren’t even on the same plane.]

9. Don't put decisions in writing or email

Writing things down merely ties you down, and that just limits your flexibility in the future (see #10). Much better to give verbal feedback in wide-ranging meetings that never really come to a conclusion. During these meetings, make sure that many attendees are texting or responding to fire-drills unrelated to your project, so they lose focus and have no recollection of what was said. When it comes to signing off requirements, monthly reviews or acceptance testing – just ignore this bean-counting detail!

8. Under-staff your team

You're paying good money for the consultant to do their job, so there's no point in over-investing in your own team members. Put in no-loads and people who don't care, so that the people who actually know what they’re doing can stick to their regular tasks. Once you have your drones in place, make sure to undercut their authority by questioning every decision. No point in motivating anybody – you're already paying them more than they deserve!

7. Blow off approval cycles, wireframe reviews and validation testing

You've got to focus on the big picture of making your business more profitable, so you don’t have time to get into the niggling details of this software project. Besides, business processes and policy decisions are boring and can be politically charged. So when some pesky business analyst asks you to validate the semantic interpretation of a business rule, just leave that mail in your inbox. It'll keep. Later, when it comes to testing and reviews, just send a flunkie with no decision-making authority to check things out.

6. Blatantly intimidate your team

Review meetings should be an expression of your personal power, prestige and authority. Change your mind endlessly and capriciously about details. Change the subject when substantive issues are brought up. Discuss how much your new shoes cost. Punish any questioner. Trust no one (not even your own team members), and make sure that trust never gets a chance to build within the team. Make sure team members know to hide bad news. Use blame as a weapon.

5. Focus on big-bang, slash cut projects with lots of dependencies

Crash programs are the way to get big things done in a hurry. Incrementalism is for wimps and weenies without the imagination to see the big picture. Since complex projects probably involve several vendors, make sure that nothing can progress without your direction and approval. Do not delegate – or if you do, don't empower the delegate to do anything. You wouldn't want to lose control!

4. Switch deadlines and priorities frequently

If the generals are right in saying that all your plans go out the window as soon as the first shot is fired, there's no point in planning realistically in the first place. Make sure to have deadlines for things with lots of dependencies, and then move those deadlines three or four times during the project. This’ll ensure that the project will involve inordinate waste and overhead – but hey, that’s the consultant’s problem, not yours.


3. Have no contingency plan and no tolerance for budget shifts

It's pedal to the metal – nobody has time for insurance policies. You can't afford to run two systems in parallel, validate live transactions or reconcile variances before full production use. Make sure you dedicate 100 percent of your budgetary authority to the vendors, so there's no way to reallocate funds...let alone have a buffer to handle the unexpected. This works even better when your company enforces a use-it-or-lose-it financial regime.

2. Squeeze the vendor as early in the project as you can

Get your money’s worth. It's never too early to start squeezing your vendors to get the most out of them. Their negative profit margin is not your problem. Show 'em who's really boss. As the project nears its end-game, start modifying the production system yourself, and begin phase-2 work before phase-1 work has been signed off. Configuration control is for weenies.

And the #1 way to make sure your project ends up in court…

1. Don't monitor your own budget and pay little attention at status reviews

Ignore invoices and progress-against-goals reports. Make sure the integrator doesn't know you are not paying attention. Don’t ask questions at project review meetings. Delete emails that bore you. The vendor is there to deliver, so the details and consequences of project management are not your problem. As the project nears its deadline, insist on extra consultant personnel on site without giving any written authorization for extra charges.

Before I say anything more, I have to make it really clear that I’m not an attorney, and none of this is to be construed or used as legal advice. (Yes, my lawyer made me write that.) So get counsel from counsel about the best ways to remedy or prevent the issues above.

As I said at the start, projects that are deeply troubled have problems rooted in the behavior of both the client and the consultant. Next time, I’ll have a Top 10 list for consultants to make sure they end up in court, too.



Two-Factor Authentication (2FA)

by System Administrator - Tuesday, 16 June 2015, 9:00 PM

Two-Factor Authentication (2FA)


Posted by Margaret Rouse

Two-factor authentication is a security process in which the user provides two means of identification from separate categories of credentials; one is typically a physical token, such as a card, and the other is typically something memorized, such as a security code.

In this context, the two factors involved are sometimes spoken of as something you have and something you know. A common example of two-factor authentication is a bank card: the card itself is the physical item and the personal identification number (PIN) is the data that goes with it. Requiring both elements makes it more difficult for someone to access the user's bank account, because a thief would need to have the physical card in his possession and also know the PIN.

According to proponents, two-factor authentication can drastically reduce the incidence of online identity theft, phishing expeditions, and other online fraud, because stealing the victim's password is not enough to give a thief access to their information.

What are authentication factors?

An authentication factor is an independent category of credential used for identity verification. The three most common categories are often described as something you know (the knowledge factor), something you have (the possession factor) and something you are (the inherence factor). For systems with more demanding requirements for security, location and time are sometimes added as fourth and fifth factors.

Single-factor authentication (SFA) is based on only one category of identifying credential. The most common SFA method is the familiar user name and password combination (something you know). The security of SFA relies to some extent upon the diligence of users. Best practices for SFA include selecting strong passwords and refraining from automatic or social logins.

For any system or network that contains sensitive data, it's advisable to add additional authentication factors. Multifactor authentication (MFA) involves two or more independent credentials for more secure transactions.
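As a hedged illustration (the function and field names are hypothetical, not from any particular product), a login routine that enforces two independent factors -- something you know and something you have -- might look like this:

```python
# Sketch: require a knowledge factor (password, checked against a salted
# hash) AND a possession factor (a one-time code tied to the user's device).
import hashlib, hmac, secrets

def hash_password(password, salt):
    # PBKDF2 turns the password into a hash that is safe to store server-side
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_two_factors(record, password, submitted_otp, expected_otp):
    knowledge_ok = hmac.compare_digest(
        record["pw_hash"], hash_password(password, record["salt"]))
    possession_ok = hmac.compare_digest(submitted_otp, expected_otp)
    return knowledge_ok and possession_ok   # both factors must pass

# Illustrative account record
salt = secrets.token_bytes(16)
record = {"salt": salt, "pw_hash": hash_password("hunter2", salt)}
```

A stolen password alone fails the second check, which is exactly why independent factors matter.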

Single-factor authentication (SFA) vs. two-factor authentication (2FA)

Although an ID and password are two items, they belong to the same authentication factor (knowledge), so together they constitute single-factor authentication (SFA). Passwords have remained the most common form of SFA largely because of their low cost, ease of implementation and familiarity. They are not, however, the most secure SFA option: multiple challenge-response questions can provide more security, depending on how they are implemented, and standalone biometric verification methods can also provide more secure single-factor authentication.

One problem with password-based authentication is that it requires knowledge and diligence to create and remember strong passwords. Passwords also require protection from inside threats, such as carelessly discarded sticky notes and old hard drives, and from social engineering exploits. They are likewise prey to external threats, such as hackers using brute-force, dictionary or rainbow table attacks. Given enough time and resources, an attacker can usually breach a password-based security system. Two-factor authentication is designed to provide additional security.

2FA products

There are a huge number of devices and solutions for 2FA, from hardware tokens to RFID cards to smartphone apps. Offerings from some well-known companies include:

  • RSA SecurID is still very common (although the SecurID system was hacked in 2011).
  • Microsoft PhoneFactor offers 2FA for a reasonable cost and is free to small organizations of 25 users or fewer.
  • Dell Defender is a multifactor authentication suite that offers biometrics and various token methods for 2FA and higher.
  • Google Authenticator is a 2FA app that works with any supporting site or service.
  • Apple's iOS, iTunes Store and cloud services all support 2FA to protect user accounts and content.

2FA for mobile authentication

Apple's iOS, Google Android and BlackBerry 10 all have apps supporting 2FA and other multifactor authentication. Some phones have screens capable of recognizing fingerprints; a built-in camera can be used for facial recognition or iris scanning, and the microphone for voice recognition. Many smartphones have GPS to verify location as an additional factor. Voice or SMS may also be used as a channel for out-of-band authentication. There are also apps that provide one-time password tokens, allowing the phone itself to serve as the physical device that satisfies the possession factor.

Google Authenticator is a two-factor authentication app. To access a website or web-based service, the user types in a username and password and then enters the one-time passcode (OTP) displayed by the app. The six-digit one-time password changes every 30 to 60 seconds and serves to prove possession as an authentication factor.
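The codes such apps display come from the open TOTP algorithm (RFC 6238): an HMAC over the current time step, truncated to six digits. A minimal standard-library sketch follows; the base32 secret used in the test is the RFC's published test key, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Derive a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)             # 8-byte big-endian time step
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Server-side verification is the same computation with the same shared secret, usually allowing a time step or two of clock drift.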

Smartphones offer a variety of possibilities for 2FA, allowing companies to use what works best for them.

Is two-factor authentication secure?

Opponents argue (among other things) that, should a thief gain access to your computer, he can boot up in safe mode, bypass the physical authentication processes, scan your system for all passwords and enter the data manually, thus -- at least in this situation -- making two-factor authentication no more secure than the use of a password alone.

Higher levels of authentication for more secure communications

Some security procedures now require three-factor authentication (3FA), which typically involves possession of a physical token and a password used in conjunction with biometric data, such as a fingerprint scan or a voiceprint.

An attacker may occasionally break an authentication factor in the physical world. A persistent search of the target premises, for example, might yield an employee card or an ID and password in an organization’s trash or carelessly discarded storage containing password databases. If additional factors are required for authentication, however, the attacker would face at least one more obstacle.

The majority of attacks come from remote internet connections. 2FA can make distance attacks much less of a threat because accessing passwords is not sufficient for access and it is unlikely that the attacker would also possess the physical device associated with the user account. Each additional authentication factor makes a system more secure. Because the factors are independent, compromise of one should not lead to the fall of others.



Type 2 hypervisor (hosted hypervisor)

by System Administrator - Wednesday, 10 December 2014, 7:57 PM

Type 2 hypervisor (hosted hypervisor)


Posted by Margaret Rouse

A Type 2 hypervisor, also known as a hosted hypervisor, is a virtual machine manager that installs on top of a host's operating system (OS). 

A Type 2 hypervisor is a virtualization layer installed above a host operating system (OS), such as Windows Server, Linux or a custom OS installation. The host operating system has direct access to the server's hardware and is responsible for managing basic OS services. The Type 2 hypervisor creates virtual machine environments and coordinates calls for CPU, memory, disk, network and other resources through the host OS.

A Type 1 hypervisor, by contrast, is installed directly on the physical host server hardware. It does not require the presence of a full host OS and has direct access to the underlying physical hardware. Regardless of the implementation, virtual machines (VMs) and their guest OSes are typically unaware of which type of hypervisor is in use, as they interact only with the hypervisor itself.

From an implementation standpoint, each type of hypervisor has potential benefits and drawbacks. For example, the requirement for a full host OS can be an advantage in some areas (including hardware and driver compatibility, configuration flexibility, and reliance on familiar management tools) or a potential liability (given security issues exposed by the host OS, possible performance overhead, and the burden of configuring and maintaining the host OS). It is also important to note that current virtualization platforms can exhibit characteristics of both Type 1 and Type 2 hypervisors, and that vendors have provided features that can mitigate potential issues in both approaches.



Uncloud (de-cloud)

by System Administrator - Wednesday, 11 November 2015, 4:06 PM

Uncloud (de-cloud)

Posted by Margaret Rouse

Uncloud is the removal of applications and data from a cloud computing platform.

In recent years, organizations ranging from small and medium-sized businesses to large enterprises have turned to the cloud to run applications, store data and accomplish other IT tasks. Over time, however, an organization may elect to uncloud one, a few or, possibly, all of its cloud-based assets. Examples could include shutting down a server instance in a public cloud and moving the associated software and data to an in-house data center or colocation facility. De-cloud is another term used to describe this reverse cloud migration.

In the process of unclouding, the cloud customer -- or, potentially, a channel partner acting on its behalf -- works with the cloud vendor to extract the customer's applications and data. The task involves locating the data and mapping the application's dependencies within the cloud vendor's infrastructure. The unclouding customer and its channel partner may encounter higher levels of complexity in a public multi-tenant cloud setting: the customer may have to wait for the cloud vendor's scheduled downtime to migrate its applications and data, or the cloud provider may limit the customer's use of migration tools so as not to interfere with the application performance of other customers.

Customers may cite a number of reasons for wanting to uncloud. Factors include security issues, liability concerns and difficulty in integrating cloud-based applications with on-premises enterprise applications and data. Frustrated expectations with respect to the cloud's cost efficiency may also influence de-clouding decisions. Anecdotal evidence suggests that customers citing cost as a factor may elect to move applications to an in-house, hyper-converged infrastructure as the better economic choice.

Reverse migration on the rise: Channel partners see customers uncloud

by John Moore

Channel partners report that a small but increasing number of customers are moving some or all of their applications off the cloud.

Channel partners say some of their customers have begun to uncloud and are asking for help migrating back to in-house data centers or colocation facilities.

While cloud computing, in general, remains a high-growth area, a counter trend of reverse migration has started to surface. Organizations, industry executives said, cite a number of reasons for moving some or all of their applications off the cloud: security and compliance concerns, frustration over elusive cost savings, and the changing data center economics of hyper-converged infrastructure.

At Trace3 Inc., cost and hyper-convergence played key roles in one customer's off-the-cloud migration. Trace3, based in Irvine, Calif., focuses on data center, big data and cloud technologies. Mark Campbell, research principal and director of Innovation Research at Trace3, said a retail client recently completed a "back-sourcing" exercise in which it migrated its entire cloud footprint into a colocation data center, where the retailer could control the infrastructure.

"Cost was the primary driver," Campbell said of the retail customer. "They were estimating they could save 40% over their cloud IaaS [infrastructure as a service] and PaaS [platform as a service] provider by building their own private cloud built on hyper-converged and commodity infrastructure," he said.

Campbell noted that he hasn't had the opportunity to follow up with the company to see whether it actually realized the projected savings.

Getting off the cloud

Nevertheless, other Trace3 customers have taken steps to uncloud, a pattern Campbell began noticing last year. He said a few customers -- numbered in the dozens, out of a client base of some 2,000 companies -- have encountered issues in the cloud.

"The vast majority of our customers have moved at least some of their enterprise applications to the cloud, and the vast majority of those are continuing in the cloud," Campbell said. "There is a small minority, however, that are moving some or all of their applications back into their own data centers or colocation sites more under their control."

"We are exiting the honeymoon stage, and that is always a rude awakening -- and expectation readjustment -- for both parties."

Paul Dippell, CEO, Service Leadership

Irwin Teodoro, senior director of data center transformation at Datalink Corp., a data center services provider based in Eden Prairie, Minn., has also observed declouding among his company's customers. He said that for every 10 companies pursuing some form of cloud computing, he has seen two or three looking to get out of the cloud.

"This is definitely a trend we are going to see more of."

The counter-cloud migration may signal a resetting of expectations among channel partner customers. Paul Dippell, CEO of Service Leadership Inc., a company based in Plano, Texas, that provides a financial and operational benchmark for channel companies, said cloud vendors tell customers that their offerings are "wonderful, weightless, agile, low cost, mobile [and] fantastically free of the impediments of past computing models."

But that vision doesn't always line up with reality.

"What the customers are experiencing is different enough that a material number of customers are declouding or significantly changing -- reducing -- their cloud strategies to regain a more solid computing foundation and rational cost," Dippell said.

"I don't expect cloud to fail, by any means, and I do expect it to grow," Dippell said. "But we are exiting the honeymoon stage, and that is always a rude awakening -- and expectation readjustment -- for both parties."

Dippell added that he's heard anecdotal accounts of solution providers winning new customers by agreeing to decloud them.

When customers uncloud: Top factors

A number of factors influence migration decisions. Unforeseen security issues, for example, may drive some applications back in-house. In general, risk and liability concerns are tempering enthusiasm for the cloud, said Dan Liutikas, managing attorney at InfoTech Law Advocates P.C., and chief legal officer and secretary at CompTIA.

Channel partners, as well as customers, are questioning whether cloud is the correct answer for every customer. While the cloud adoption wave continues, more and more service providers are weighing whether cloud is the right approach for a particular customer or a subset of customers, Liutikas said. The latter includes companies in highly regulated industries such as healthcare and financial services.

"Sometimes … on-premises is the better answer based on their customers' needs," he said.

Organizations may also struggle to achieve deep integration between their cloud applications and their on-premises legacy applications and data, according to industry executives. But beyond legal and technical hurdles, cost has become a sticking point for some cloud users.

Unexpected cloud costs may stem from a customer's failure to quantify all the necessary services in its initial calculations. Campbell said most customers tend to be accurate in estimating traditional infrastructure and capacity costs for servers, storage capacity and bandwidth, among other components. They tend, however, to underestimate the cost of items beyond their data centers, including the cost of creating multiple storage snapshots to back up data, the cost of data replication and the cost of restoring data.

"This leads to budgetary surprises," Campbell said.

Cloud sprawl can also stress budgets.

"Much like [virtual machine] sprawl, it is not uncommon for the initial targets of a cloud installation to grow as both the IT and business discover new applications, features and snap-of-the-fingers capacity bursts," Campbell explained. "These all add line items to the monthly bill."

In addition, cloud offerings may run afoul of conventional budget controls.

Campbell said traditional IT organizations built their financial processes and controls to monitor big-ticket items, such as projects and large capex purchases, and smaller items, such as consumables and one-time opex expenditures handled on an approval basis.

"This works great in a data-center-centric operation, but imagine the befuddled expression on the comptroller's face when he gets his first 23,000-line-item bill from Amazon," Campbell said. "It is very hard to even decipher what these expenditures are for, let alone garner business justification."

Customers disappointed with cloud cost savings may end up migrating applications to hyper-converged infrastructures.

"Some [companies] are pulling in applications from the cloud to their data centers," Campbell said. "If they do that, we are seeing hyper-convergence as being one of those enabling mechanisms."

In addition to cost, corporate culture can play a role in a cloud reversal.

"Executives who are not fully aware of the concepts of the cloud feel somewhat apprehensive that data is somewhere else and feel lack of control," Teodoro said.

Managing the declouding challenge

Assisting customers as they back out of the cloud can prove difficult. Teodoro said public clouds in which multiple customers share a common infrastructure represent the greatest challenge. Dealing with maintenance windows is one issue. A customer can't just extract an application based on its own ad hoc maintenance timetable; they have to wait for the cloud provider's scheduled downtime.

"You can't move when you want to move," Teodoro said. "You've got to move at somebody else's pace and schedule."

Determining a cloud-based application's dependencies with respect to the cloud provider's infrastructure is another consideration. A channel partner working on an off-the-cloud migration project needs to figure out what virtual machines the application resides on and identify the virtual LANs and subnets in the compute infrastructure to which the application can be traced, Teodoro explained. The goal: extract the application without breaking something in the environment.

"The keys for us are really to understand the dependencies in the environment -- down to the infrastructure -- and find ways to carve out the environment into smaller chunks or workgroups," Teodoro said.

Another complication: Migration tools can help channel companies uncloud customers, automating the tasks of data gathering, analysis and forensics. But in a shared, multi-tenant cloud, service providers can't use their own tools, since they could impact a cloud provider's other clients, Teodoro said.

Seeking a happy balance

Customers juggling multiple IT environments provide yet another degree of difficulty. Jim Piazza, vice president of service management at CenturyLink Inc., which offers colocation, public cloud and IT services, said customers such as software as a service providers may offer multiple versions of their software to support different clients. And those different versions may be hosted on different computing platforms: in-house private clouds, colocation centers and public clouds, for example.

"It's an interesting mix … that is really quite a challenge to manage," Piazza said.

Piazza said CenturyLink, based in Monroe, La., provides customers a service catalog to help them keep track of what version of their software is deployed where. In addition, the company has built interconnects between customers' colocation footprints in CenturyLink facilities and CenturyLink's public cloud. The service catalog and interconnects enable the company's clients to move their end customers from one platform to another, Piazza said.

Piazza likened migrating customers and their workloads among the various platforms to supporting a 3D jigsaw puzzle.

For Campbell, the cloud conundrum boils down to harmonizing the computing platforms now available to customers.

"It's finding that happy balance -- what lives best in the cloud and what lives best in-house."


Unified Endpoint Management

by System Administrator - Thursday, 30 March 2017, 4:54 PM

Unified Endpoint Management (UEM)

Posted by: Margaret Rouse | Contributor(s): Colin Steele

Unified endpoint management (UEM) is an approach to securing and controlling desktop computers, laptops, smartphones and tablets in a connected, cohesive manner from a single console. Unified endpoint management typically relies on the mobile device management (MDM) application program interfaces (APIs) in desktop and mobile operating systems.

Microsoft's inclusion of MDM application program interfaces in Windows 10 made unified endpoint management a possibility on a large scale. Prior to the release of Windows 8.1, there was no way for MDM software to access, secure or control the operating system and its applications. 

In Windows 10, the tasks IT can perform through MDM software include:

  • configuring devices' VPN, email and Wi-Fi settings;
  • enforcing passcode and access policies;
  • installing patches and updates;
  • blacklisting and whitelisting applications; and
  • installing and managing Universal Windows Platform (.appx) and Microsoft Installer (.msi) applications.

Mobile device management is significantly less robust than traditional Windows management tools, however. Examples of tasks information technology (IT) administrators can't perform through Windows 10 MDM APIs include:

  • deploying and managing legacy executable (.exe) applications;
  • enforcing encryption;
  • deploying Group Policy Objects; and
  • managing printers, file shares and other domain-based resources.

Many vendors market UEM as a feature of their broader enterprise mobility management (EMM) software suites, and some EMM vendors have made strides to close the gap between MDM and traditional Windows management tools. For example, MobileIron Bridge allows IT administrators to use MDM to deploy scripts that modify the Windows 10 file system and registry and perform other advanced tasks, including deploying legacy .exe applications.

Other vendors that support UEM include VMware, Citrix, BlackBerry and Apple. Apple's Mac OS X operating system has included MDM APIs since at least 2012, when AirWatch and MobileIron announced support. Today, all of the major vendors that offer UEM also support OS X.



Unstructured Data

by System Administrator - Tuesday, 18 April 2017, 3:07 PM

Pulling Insights from Unstructured Data – Nine Key Steps

by Salil Godika

Data, data everywhere, but not a drop to use. Companies are increasingly confronted with floods of data, including “unstructured data,” which is information from within email messages, social posts, phone calls and other sources that doesn't fit easily into the rows and columns of a traditional database. Making sense of, and drawing actionable recommendations from, structured data is difficult; doing so with unstructured data is even harder.

Despite the challenge, the benefits can be substantial. Companies that commit to examining unstructured data from devices and other sources should be able to find hidden correlations and surprising insights. Such analysis promotes trend discovery and opens opportunities in ways that traditionally structured data cannot.

Analyzing unstructured data can be best accomplished by following these nine steps:

1. Gather the data

Unstructured data means there are multiple unrelated sources. You need to find the information that needs to be analyzed and pull it together. Make sure the data is relevant so that you can ultimately build correlations.

2. Find a method

You need a method in place to analyze the data and have at least a broad idea of what should be the end result. Are you looking for a sales trend, a more traditional metric, or overall customer sentiment? Create a plan for finding a result and what will be done with the information going forward.

3. Get the right stack

The raw data you pull will likely come from many sources, but the results have to be put into a tech stack or cloud storage in order for them to be operationally useful. Consider the final requirements that you want to achieve and then judge the best stack. Some basic requirements are real-time access and high availability. If you’re running an ecommerce firm, then you want real-time capabilities and also want to be sure you can manage social media on the fly based on trend data.


4. Put the data in a lake

Organizations that want to keep information will typically scrub it and then store it in a data warehouse. This is a clean way to manage data, but in the age of Big Data it removes the chance to find surprising results. The newer technique is to let the data swim in a “data lake” in its native form. If a department wants to perform some analysis, they simply dip into the lake and pull the data. But the original content remains in the lake so future investigations can find correlations and new results.

5. Prep for storage

To make the data useful (while keeping the original in the lake), it is wise to clean it up. For example, text files can contain a lot of noise, symbols or whitespace that should be removed. Duplicates and missing values should also be detected so analysis will be more efficient.
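A minimal sketch of that cleanup step in Python -- the noise pattern and names are illustrative, not a prescribed pipeline:

```python
import re

def clean_records(raw_records):
    """Strip symbols and extra whitespace, drop empties and duplicates."""
    seen, cleaned = set(), []
    for text in raw_records:
        if not text:                                  # drop missing values
            continue
        text = re.sub(r"[^\w\s.,!?'-]", " ", text)    # remove stray symbols
        text = re.sub(r"\s+", " ", text).strip()      # collapse whitespace
        if text and text.lower() not in seen:         # drop duplicates
            seen.add(text.lower())
            cleaned.append(text)
    return cleaned
```

The originals stay untouched in the lake; only the cleaned copies feed the analysis.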

6. Find the useful information amongst the clutter

Semantic analysis and natural language processing techniques can be used to pull out key phrases as well as the relationships around them. For example, “location” can be searched for and categorized from speech in order to establish a caller's location.
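As a toy stand-in for real NLP tooling, even a simple pattern can pull candidate location phrases out of transcribed speech (the pattern and names here are illustrative only):

```python
import re

# Capitalized phrases following a locational preposition, e.g. "from New York"
LOCATION_PATTERN = re.compile(
    r"\b(?:in|at|near|from)\s+([A-Z][a-zA-Z]+(?:\s[A-Z][a-zA-Z]+)*)")

def extract_locations(utterance):
    """Return candidate location phrases found in a transcript line."""
    return LOCATION_PATTERN.findall(utterance)
```

Real systems would use named-entity recognition rather than regexes, but the idea is the same: turn free text into labeled, queryable fields.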

7. Build relationships

This step takes time, but it's where the actionable insights lie. By establishing relationships among the various sources, you can build a more structured database which will have more layers and complexity (in a good way) than a traditional single-source database.

8. Employ statistical modeling

Segmenting and classifying the data comes next. Use tools such as K-means, Naïve Bayes and support vector machine algorithms to do the heavy lifting of finding correlations. You can use sentiment analysis to gauge customers' moods over time and how they are influenced by product offerings, new customer service channels and other business changes. Temporal modeling can be applied to social media and forums to find the most relevant topics your customers are discussing. This is valuable information for social media managers who want the brand to stay relevant.
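In practice you would reach for a library implementation (scikit-learn, for instance), but a tiny self-contained K-means over one-dimensional sentiment scores shows the mechanics; the data and names are illustrative:

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster numeric scores into k groups by iterative mean-refinement."""
    # Spread initial centroids across the sorted value range
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its assigned points
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Illustrative per-customer sentiment scores (0 = negative, 1 = positive)
scores = [0.1, 0.2, 0.15, 0.9, 0.95, 0.85]
```

The two centroids that emerge separate low-sentiment from high-sentiment customers, which is the kind of segmentation this step aims to produce.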

9. End results matter

The end result of all this work has to be condensed down to a simplified presentation. Ideally, the information can be viewed on a tablet or phone and helps the recipient make smart real-time decisions. They won’t see the prior eight steps of work, but the payoff should be in the accuracy and depth of the data recommendations.

Every company's management is pushing the importance of social media and customer service as the main drivers of company success. However, these services can provide another layer of assistance to firms after diagnostic tools are applied to their underlying data. IT staff need to develop the skills to properly collect, store and analyze unstructured data and compare it with structured data, to see the company and its users in a whole new way.


About the author: Salil Godika is Co-Founder, Chief Strategy & Marketing Officer and Industry Group Head at Happiest Minds Technologies. Salil has 18 years of experience in the IT industry across global product and services companies. Prior to Happiest Minds, Salil was with MindTree for 4 years as the Chief Strategy Officer. Before MindTree, Salil spent 12 years in the United States working for start-ups and large technology product companies like Dassault Systems, EMC and i2 Technologies. His accomplishments include incubating a new product to $30 million in revenue, successful market positioning of multiple products, global marketing for a $300 million business and multiple M&As.




UTM vs. NGFW: Unique products or advertising semantics?

by System Administrator - Wednesday, 18 February 2015, 6:49 PM

UTM vs. NGFW: Unique products or advertising semantics?

by: Michael Heller

In comparing UTM vs. NGFW, organizations find it difficult to see if there are differences between the two products or if it is just marketing semantics.

It can often be difficult to discern the difference between unified threat management (UTM) and next-generation firewalls (NGFW). Experts agree that the lines appear to be blurring between the two product sets, but enterprises that focus on defining each product type during the purchasing process may be making a mistake.

NGFWs emerged more than a decade ago in response to enterprises that wanted to combine traditional port and protocol filtering with IDS/IPS functionality and the ability to detect application-layer traffic; over time they added more features like deep-packet inspection and malware detection.

Meanwhile, UTMs were born of small and midsize businesses' need for not only firewall functionality, but also IDS/IPS, antimalware, antispam and content filtering in a single, easy-to-manage appliance. More recently, UTMs have added features like VPN, load balancing and data loss prevention (DLP), and are increasingly delivered as a service via the cloud.

According to Jody Brazil, CEO of Overland Park, Kan.-based security management firm FireMon LLC, SMBs and remote office locations were attracted to the UTM, but larger enterprises tended to prefer the NGFW over standalone devices throughout the network, minimizing the impact on firewall performance.

Greg Young, research vice president for Stamford, Conn.-based Gartner Inc., said larger enterprises have had the budgets to buy the best technology, and the staff to support the more advanced features and better performance afforded by NGFWs. On the other hand, SMBs not only wanted an all-in-one product, but also needed extra support from the channel to manage the device, even if it meant that each feature of the UTM was good but not the best.

"Service providers for ISPs have different needs than enterprises," said Young. "So, UTM vendors will only offer basic firewall features as a price-play for that market."

Young said those differences in ease of use and support demands still exist today, though they have become more nuanced; there is overlap in the underlying technology of NGFW and UTM, and spec sheets tend to look similar. Young said that the key differences now are more around quality of features, and the level of support from channel partners to meet customer needs.

Young also noted that vendors tend to excel in one market or the other, like Fortinet Inc. with UTM for SMBs, or Palo Alto Networks Inc. with NGFW for enterprises. Few vendors can succeed in both, he said, like Check Point Software Technologies Ltd. has done.

"The confusion came from SMB vendors trying to move into the enterprise market without making channel and quality changes," said Young. "It was an intentional campaign to confuse, but very few end users are confused about what they need. It is either a racecar [NGFW] or a family van [UTM]."

Brazil admitted that the differences between NGFW and UTM can be confusing, even for experienced practitioners, but described UTM as a collection of unrelated security features, one of which is the firewall.

"UTM generally refers to a firewall with a mix of other 'bolted-on' security functions like antivirus and even email spam protection," said Brazil. "These are not access control features that typically define a firewall."

What traditionally has defined next-gen firewalls, Brazil said, is robust Layer 7 application access control, though an increasing number of NGFWs are being augmented with integrated threat intelligence, enabling them to deny known threats based on a broad variety of automatically updated policy definitions.  

However, Brazil did caveat his distinctions by saying that a UTM could be considered an NGFW if it met the Layer 7 parameters, and an NGFW that included malware functions could be considered a UTM. Though, he was clear that despite these potential overlap points, he would keep the classifications separate because of a lack of similarities in other respects, like access control.

Brazil said that NGFW will eventually become the standard, and the terms NGFW and firewall will become synonymous. He said UTM will remain an important product for SMBs, especially when a company prioritizes simplicity of deployment over the depth of security and performance, but NGFW and UTM will not converge because of performance and management concerns.

"The idea of a 'converged' network security gateway will continue to have appeal, so vendors will continue to add functionality to reduce cost of firewall ownership to the customer and increase revenue to the vendor," said Brazil. "However, issues with performance and manageability will continue to force separate, purpose-built systems that will be deployed in enterprise networks. As such, there will continue to be enterprise firewalls that should not be considered UTMs."

Mike Rothman, analyst and president of Phoenix-based security firm Securosis LLC, said he believes that UTM and NGFW are essentially the same, and the differences are little more than marketing semantics. Rothman agreed that marketing from vendors caused confusion, but also blamed analysts for adopting the term NGFW and driving it into the vernacular.

He said that early UTMs did have problems scaling performance from SMBs to larger enterprises, especially when trying to enforce both positive rules (firewall access) and negative rules (IPS), but that early NGFW had the same issues keeping up with wire speed when implementing threat prevention. He said that the perceived disparities were used to enforce market differentiation, and persist today, despite these scaling issues not being relevant anymore.

According to Rothman, the confusion lies not only in comparing the two device types, but also in the term "next-generation firewall" itself, which he thinks minimizes what the device does.

"What an NGFW does is bigger than just a firewall," said Rothman. "A firewall is about access control, basically enforcing what applications, ports, protocols, users, etc., are allowed to pass through the firewall. The NGFW also can look for and deny access to threats, like an IPS. So it's irritating that the device is called an NGFW, as it does more than just a firewall. We call it the Network Security Gateway, as that is a more descriptive term."

Rothman said that today's UTMs can do everything an NGFW can do, as long as they are configured properly and have the right policy integration. He said he believes that arguments about feature sets or target markets are examples of artificial distinctions that only serve to confuse the issue.

"From a customer perspective, the devices do the same thing," Rothman said. "The NGFW does both access control and threat prevention, as does the UTM, just a little differently in some devices. Ultimately, the industry needs to focus on what's important: Will the device scale to the traffic volumes they need to handle with all of the services turned on? That's the only question that matters."

Moving forward, despite differences in opinions, the experts agree that enterprises shouldn't go into a purchasing process by trying to decide whether they need an NGFW or a UTM. Rather, the ultimate goal should always be to focus on the best product to solve their problems.

Rothman said that the distinctions will go away as low-end UTM vendors add more application-inspection capabilities and more traditional NGFW vendors go downmarket by offering versions suitable for SMBs. He also said he doesn't expect an end to confusing vendor marketing anytime soon, so enterprises need to be careful to ignore these semantics and focus on finding the right product to address security needs.

Young said that in the short term, UTM and NGFW will remain separate and will both continue to be mainstays for SMBs and larger enterprises respectively, and the decision around what device to use will be a question of need.

The question of UTM vs. NGFW is still divisive, and experts have different ideas regarding if and where the two technologies diverge when looking at the issue from a vendor perspective. However, when looking at the issue from a customer perspective, the experts agree that focusing on an enterprise's security needs will help to mitigate the confusion and lead to the right product.

"It isn't just about technology, it is about how a small company's security is different than a big company's security," said Young. "It's all about the use case, not a 'versus.'"




VDI Disaster Recovery

by System Administrator - Wednesday, 14 January 2015, 7:20 PM

Turning to the cloud for VDI disaster recovery

by Robert Sheldon

The cloud makes it possible for any business to have a strong VDI disaster recovery plan. You can choose to back up data by file, or you can back up the whole image.

Cloud services can provide the flexibility and low up-front costs that make implementing a DR strategy more feasible than ever, but it can be difficult to integrate the cloud with existing systems. Luckily, there are several ways to incorporate cloud services into your VDI disaster recovery strategy.

The servers hosting your virtual desktops are just as susceptible to floods, hurricanes and cyberattacks as your physical desktops, whether those servers reside on-premises or at a remote data center. Even a minor disaster can bring an entire operation to a standstill. But you can minimize the effects that a disaster can have on your business if you have a VDI disaster recovery (DR) plan in place.

Comprehensive disaster recovery plans are sometimes out of reach for small and medium-sized businesses that lack the resources or infrastructure necessary to build an effective strategy. The cloud has changed all that.

Backing up to the cloud

Many organizations already back up virtual machines (VMs) to cloud storage services, usually by file-level backup or image-level backup.

With file-level backup, companies copy individual files within the VM to the cloud, whereas image-level backup replicates the entire VM image. In both cases, the data is copied to an off-site location, away from your primary operations.

File-level backups are similar to the traditional types of backups that happen routinely on desktops and servers. An agent is installed on the guest operating system and it controls which files get backed up and when. File-level backup systems are easy to implement and make it simple to restore individual files. But this approach can be cumbersome and time-consuming if you have to restore an entire VM.

Image-level backups are snapshots of your VMs at a given point in time. One method you can use to create the snapshots is to run a backup script or similar mechanism to periodically copy the image files to a cloud storage provider. Then you can restore the entire VM when you need it without going through the tedious process of restoring many individual files. That being said, using a script can be slow and resource-intensive.
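A minimal sketch of the script-based approach in Python, assuming the cloud bucket is mounted or synced as a local path; the directory layout and the `.img` extension are illustrative, not tied to any particular provider:

```python
import shutil
import time
from pathlib import Path

def backup_vm_images(image_dir, bucket_dir):
    """Copy every VM image file into a timestamped folder under the
    (hypothetical) cloud-mounted bucket path, so earlier snapshots survive."""
    snapshot = Path(bucket_dir) / time.strftime("%Y%m%d-%H%M%S")
    snapshot.mkdir(parents=True, exist_ok=True)
    copied = []
    for img in sorted(Path(image_dir).glob("*.img")):
        shutil.copy2(img, snapshot / img.name)  # full-image copy: simple, but heavy
        copied.append(img.name)
    return copied
```

Run on a schedule (cron, for instance), this is exactly the slow, resource-intensive full-copy method the paragraph above describes, since every snapshot re-transfers each image in full.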

A better approach is to do what many services and tools already offer: Back up an initial copy of the entire VM image to storage, and then apply changed blocks to the image at regular intervals. This approach to VDI disaster recovery can help avoid much of the system overhead associated with the script method and provide an efficient and simple mechanism to back up and restore VMs in their entirety.
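The changed-block idea can be sketched as follows; the block size and hashing scheme here are illustrative, not any particular vendor's format:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real services use much larger blocks

def changed_blocks(old_image: bytes, new_image: bytes, block_size=BLOCK_SIZE):
    """Return (index, data) for each fixed-size block whose hash differs --
    the only blocks that need re-uploading after the initial full copy."""
    def digest(buf, i):
        return hashlib.sha256(buf[i * block_size:(i + 1) * block_size]).digest()
    n = (len(new_image) + block_size - 1) // block_size
    out = []
    for i in range(n):
        if i * block_size >= len(old_image) or digest(old_image, i) != digest(new_image, i):
            out.append((i, new_image[i * block_size:(i + 1) * block_size]))
    return out
```

After the initial full upload, each interval only ships the blocks this function returns, which is why the incremental approach avoids most of the overhead of re-copying whole images.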

Some VM storage services and backup tools support both file-level and image-level backups, often without requiring a change to the VM configuration, such as installing an agent. In this way, you're getting the best of both worlds: You can restore individual files if you need them, or restore the entire VM.




Web Server Security

by System Administrator - Wednesday, 15 July 2015, 2:00 AM

Web Server Security

Posted by Margaret Rouse

Web server security is the protection of information assets that can be accessed from a Web server.


Web server security is important for any organization that has a physical or virtual Web server connected to the Internet. It requires a layered defense and is especially important for organizations with customer-facing websites.

Separate servers should be used for internal and external-facing applications, and servers for external-facing applications should be hosted on a DMZ or containerized service network to prevent an attacker from exploiting a vulnerability to gain access to sensitive internal information.

Penetration tests should be run on a regular basis to identify potential attack vectors, which are often caused by out-of-date server modules, configuration or coding errors and poor patch management. Web site security logs should be audited on a continuous basis and stored in a secure location. Other best practices include using a separate development server for testing and debugging, limiting the number of superuser and administrator accounts and deploying an intrusion detection system (IDS) that includes monitoring and analysis of user and system activities, the recognition of patterns typical of attacks, and the analysis of abnormal activity patterns.
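As one concrete example of the pattern recognition an IDS performs on audited logs, a simple check might flag repeated failed logins from a single address. This is a hedged sketch: the common-log-style line format, the `/login` path, and the threshold are all hypothetical.

```python
import re
from collections import Counter

# Hypothetical signature: a POST to /login that the server answered with 401.
FAILED_LOGIN = re.compile(r'POST /login .* 401')

def flag_brute_force(log_lines, threshold=5):
    """Flag client IPs (assumed to be the first field of each log line) with
    at least `threshold` failed logins -- an abnormal-activity pattern."""
    failures = Counter(line.split()[0] for line in log_lines
                       if FAILED_LOGIN.search(line))
    return {ip for ip, n in failures.items() if n >= threshold}
```

A real deployment would feed such signatures into the IDS alongside anomaly baselines, rather than scanning logs ad hoc like this.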





Z-Wave

by System Administrator - Thursday, 29 January 2015, 1:08 PM


Posted by: Margaret Rouse

Z-Wave is a wireless communication technology that is used in security systems and also business and home automation.

Z-Wave is often used in locks, security systems, lighting, heating, cooling and home appliances. Support can be integrated in products or added by retrofitting standard electronics and devices.

Z-Wave communications use low-power radio signals in the 900MHz range, separate from Wi-Fi. The system supports automatic discovery of up to 230 devices per controller. Multiple controllers can also communicate with one another and pass commands to support additional devices. Z-Wave is optimized for low latency, with data rates of up to 100 Kbps.

Z-Wave is marketed primarily as a security product. However, vulnerabilities have been detected that allow attackers to spoof an access point to gain control, even on encrypted versions. Like most security automation products, Z-Wave increases a system’s attack surface because it adds connected devices and associated software. To prevent networked devices from increasing the overall vulnerability of a system, it’s important to consider the security of any connected element.

Over 80 percent of commercial home security systems use Z-Wave as the protocol by which their components communicate; the Z-Wave Alliance, a global consortium organized to bring compatible devices to market, includes more than 250 manufacturers among its members.

See a Black Hat conference video about hacking Z-Wave automation systems:





by System Administrator - Wednesday, 10 July 2013, 7:20 PM
  • .ogg - Vorbis is an open source patent-free audio compression format, develo...
  • 1170 - Spec 1170 was the working name of the standard UNIX programming inter...
  • 1170 (UNIX 98) - Spec 1170 was the working name of the standard UN...

7 templates for creating infographics without Photoshop

by System Administrator - Tuesday, 30 June 2015, 11:54 PM

7 templates for creating infographics without Photoshop

By Carolina Samsing

When you want to present information to a colleague, how do you do it? Do you write a report? Do you use a PowerPoint template you are already tired of seeing? This is a question many people have, and a good solution to the problem is to use infographics.

Why an infographic? An infographic is a very effective tool for communicating and capturing the reader's attention: it lets you present, in a simple way, information that would otherwise be hard to convey.

A recent survey showed us that this year, marketing professionals have made learning about original, visual content a priority. And as we also learned in our report on the state of Inbound Marketing in Latin America, 17% of companies consider visual content a priority.

But here is the problem: how are those with no design experience (or the budget to pay for an agency, a designer, or a design program) supposed to create professional, attractive infographics?

Glad you asked. Here is a little secret: you can be a professional designer using a program you have probably had on your computer for years: PowerPoint. PowerPoint can be your best friend when you want to create visual content.

And to help you get started, we have created 7 great infographic templates that you can use for free.

>>Download your 7 free infographic templates here<<

In the following video we will show you how to edit one of these templates and make your own infographic. Don't forget to download the templates so you can customize them.

Basic tools to use in any infographic

When you set out to create an infographic, there are four essential PowerPoint tools that will help you throughout the creation process:

  • Fill: sets the main color of the object or text
  • Lines: sets the outline color
  • Effects: adds design elements to the infographic
  • Shapes: lets you choose from a set of ready-made shapes


An infographic with different colors and images

Once you understand how the basic tools work, start by choosing the colors you would like to use. The best way to do this is to select two primary colors and two secondary ones. Try to keep these colors consistent with your corporate image.

If you want to use different shapes, icons and fonts, a good place to find them is PowerPoint itself, which has more than 400 downloadable icon options.


Show statistics using different fonts

It is very common to want to share statistics in an infographic. Charts can be monotonous and unappealing, so try using different colors. Another thing that helps this information stand out is using different fonts and sizes. You can also add icons relevant to each statistic, or to the ones you most want to highlight. Here is an example:


Compare alternatives

An infographic is a great way to compare two different things, because you can place them side by side and the differences are easy to visualize. Divide each slide into two parts and choose a different color scheme for each side so the contrast is stronger. Incorporate all the points covered in this post: use different fonts, sizes, charts and images to make the information clearer.


Find inspiration on Pinterest

Another good idea is to draw inspiration from Pinterest: for example, use large boxes to display important information and vary their sizes, always following the idea of using images.


Something a little different

If you want to present information and statistics in a less formal format, you can use this template: it is fun, yet it still helps you present your information in a clear and engaging way.


To finish

When you finish your infographic, save it in PNG format; this will give it better image quality if you want to use it on the web.





by System Administrator - Wednesday, 10 July 2013, 7:14 PM
  • Adaptive Server Enterprise - Adaptive Server Enterpris...
  • Adaptive Server Enterprise (ASE) - Adaptive Serv...
  • agnostic - Agnostic, in an information technology (IT) context, refers t...
  • Android OS - Android OS is a Linux-based open source platform for mobi...
  • android sdk - Android OS is a Linux-based open source platform for mo...
  • Ant - Ant is an open source build tool (a program for putting together all th...
  • AoE - ATA over Ethernet (AoE) is an open source network protocol designed to ...
  • Apache - Apache is a freely available Web server that is distributed under...
  • Apache Cassandra - Apache Cassandra is an open source distribute...
  • Apache HBase - Apache HBase is an open source columnar database buil...
  • Apache HTTP server project - The Apache HTTP server pr...
  • Apache Lucene - Apache Lucene is a freely available information ret...
  • Apache Solr - Apache Solr is an open source search platform built upo...
  • ASE - Adaptive Server Enterprise (ASE) is a relational database management sy...
  • ATA over Ethernet - ATA over Ethernet (AoE) is an open source n...
  • ATA over Ethernet (AoE) - ATA over Ethernet (AoE) is an o...

Application Modernization

by System Administrator - Friday, 12 September 2014, 12:03 AM

Application Modernization
