KW Glossary

Tesla Powerwall
Posted by: Margaret Rouse
Powerwall, which is aimed at the residential market, is designed to store power generated at peak solar time for use during power outages and off-peak solar hours, including at night. The system's slim, modular design is adapted from the battery technology used in Tesla's electric cars, and each unit can hold up to 7 kilowatt-hours (kWh) of energy, enough to power a typical home in the United States for about seven hours. A similar product, Tesla Powerpack, is aimed at the business and utility market. Both products are based on lithium-ion battery technology.
Powerwall will be available in a 7 kilowatt-hour (kWh) model, designed for daily-use applications, and a 10 kWh model, designed for backup power. The units weigh about 220 lbs. Total capacity can be upgraded by connecting up to nine additional units. Tesla expects to start shipping Powerwall sometime during the summer of 2015.
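That seven-hour figure is simple division of stored energy by household load. A quick sketch, where the 1 kW average draw is an assumed illustrative number, not a Tesla specification:

```python
# Back-of-envelope estimate: hours of backup = stored energy / average load.
# The 1.0 kW household load below is an assumed value for illustration.
def backup_hours(capacity_kwh: float, avg_load_kw: float) -> float:
    return capacity_kwh / avg_load_kw

print(backup_hours(7.0, 1.0))   # a 7 kWh unit at a 1 kW load lasts about 7 hours
print(backup_hours(10.0, 2.0))  # the 10 kWh model at a heavier 2 kW load
```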
Tips to create flexible but clear manuals
Good application deployment manuals are thorough but usable. Follow these tips to create flexible but clear manuals that contribute to release management best practices.
Top 10 Database Security Threats
Practical applications for Big Data have become widespread, and Big Data has now become the new "prize" for hackers. Worse, a widespread lack of Big Data security expertise spells disaster. These threats are real. This whitepaper explains common injection points that provide an avenue for attackers to maliciously access database and Big Data components.
Top 10 ways to have your project end up in court
By David Taber
David Letterman may be off the air, but his Top 10 List format remains in the comedic canon. In his honor, here's David Taber-man's Top 10 list of the worst practices for agile projects.
As someone who's sometimes called on to serve as an expert witness, I've had to testify in arbitrations and court proceedings about best practices in agile project management. Of course, when things devolve to the point of legal action, there haven't been many "best practices" in play on either side. Suffice it to say I've seen more than a few blunders by clients.
Here are the ones that show up over and over again:
10. Give the consultant ambiguous requirements, then start using rubber yardsticks*
Nothing’s more comforting to the client than the idea that they'll get exactly what they want, even if they never put sufficient energy into specifying exactly what that is. This goes double for the high-risk area of custom software development. So state your requirements vaguely, to make sure that anything you dream up later can be construed as being within the bounds of your original wording. This tactic works best when you don't really understand the technology, but you do know you need the deliverable to be infinitely detailed yet easy enough for your grandmother to use without training. This tactic is even more effective when you start having detailed conversations about requirements during acceptance testing, when there are no development funds left.
[*What’s a rubber yardstick? Instead of being like a regular yardstick that is a straight line of fixed length, the rubber yardstick stretches and shrinks and even bends and twists to connect dots that aren’t even on the same plane.]
9. Don't put decisions in writing or email
Writing things down merely ties you down, and that just limits your flexibility in the future (see #10). Much better to give verbal feedback in wide-ranging meetings that never really come to a conclusion. During these meetings, make sure that many attendees are texting or responding to fire-drills unrelated to your project, so they lose focus and have no recollection of what was said. When it comes to signing off requirements, monthly reviews or acceptance testing – just ignore this bean-counting detail!
8. Under-staff your team
You're paying good money for the consultant to do their job, so there's no point in over-investing in your own team members. Put in no-loads and people who don't care, so that the people who actually know what they’re doing can stick to their regular tasks. Once you have your drones in place, make sure to undercut their authority by questioning every decision. No point in motivating anybody – you're already paying them more than they deserve!
7. Blow off approval cycles, wireframe reviews and validation testing
You've got to focus on the big picture of making your business more profitable, so you don’t have time to get into the niggling details of this software project. Besides, business processes and policy decisions are boring and can be politically charged. So when some pesky business analyst asks you to validate the semantic interpretation of a business rule, just leave that mail in your inbox. It'll keep. Later, when it comes to testing and reviews, just send a flunkie with no decision-making authority to check things out.
6. Blatantly intimidate your team
Review meetings should be an expression of your personal power, prestige and authority. Change your mind endlessly and capriciously about details. Change the subject when substantive issues are brought up. Discuss how much your new shoes cost. Punish any questioner. Trust no one (not even your own team members), and make sure that trust never gets a chance to build within the team. Make sure team members know to hide bad news. Use blame as a weapon.
5. Focus on big-bang, slash cut projects with lots of dependencies
Crash programs are the way to get big things done in a hurry. Incrementalism is for wimps and weenies without the imagination to see the big picture. Since complex projects probably involve several vendors, make sure that nothing can progress without your direction and approval. Do not delegate – or if you do, don't empower the delegate to do anything. You wouldn't want to lose control!
4. Switch deadlines and priorities frequently
If the generals are right in saying that all your plans go out the window as soon as the first shot is fired, there's no point in planning realistically in the first place. Make sure to have deadlines for things with lots of dependencies, and then move those deadlines three or four times during the project. This’ll ensure that the project will involve inordinate waste and overhead – but hey, that’s the consultant’s problem, not yours.
3. Have no contingency plan and no tolerance for budget shifts
It's pedal to the metal – nobody has time for insurance policies. You can't afford to run two systems in parallel, validate live transactions or reconcile variances before full production use. Make sure you dedicate 100 percent of your budgetary authority to the vendors, so there's no way to reallocate funds...let alone have a buffer to handle the unexpected. This works even better when your company enforces a use-it-or-lose-it financial regime.
2. Squeeze the vendor as early in the project as you can
Get your money’s worth. It's never too early to start squeezing your vendors to get the most out of them. Their negative profit margin is not your problem. Show 'em who's really boss. As the project nears its end-game, start modifying the production system yourself, and begin phase-2 work before phase-1 work has been signed off. Configuration control is for weenies.
And the #1 way to make sure your project ends up in court…
1. Don't monitor your own budget and pay little attention at status reviews
Ignore invoices and progress-against-goals reports. Make sure the integrator doesn't know you are not paying attention. Don’t ask questions at project review meetings. Delete emails that bore you. The vendor is there to deliver, so the details and consequences of project management are not your problem. As the project nears its deadline, insist on extra consultant personnel on site without giving any written authorization for extra charges.
Before I say anything more, I have to make it really clear that I’m not an attorney, and none of this is to be construed or used as legal advice. (Yes, my lawyer made me write that.) So get counsel from counsel about the best ways to remedy or prevent the issues above.
As I said at the start, projects that are deeply troubled have problems rooted in the behavior of both the client and the consultant. Next time, I’ll have a Top 10 list for consultants to make sure they end up in court, too.
Two-Factor Authentication (2FA)
Posted by Margaret Rouse
Two-factor authentication is a security process in which the user provides two means of identification from separate categories of credentials; one is typically a physical token, such as a card, and the other is typically something memorized, such as a security code.
In this context, the two factors involved are often described as something you have and something you know. A common example of two-factor authentication is a bank card: the card itself is the physical item, and the personal identification number (PIN) is the data that goes with it. Requiring both elements makes it more difficult for someone to access the user's bank account, because they would need to possess the physical item and know the PIN.
According to proponents, two-factor authentication can drastically reduce the incidence of online identity theft, phishing expeditions, and other online fraud, because stealing the victim's password is not enough to give a thief access to their information.
What are authentication factors?
An authentication factor is an independent category of credential used for identity verification. The three most common categories are often described as something you know (the knowledge factor), something you have (the possession factor) and something you are (the inherence factor). For systems with more demanding requirements for security, location and time are sometimes added as fourth and fifth factors.
Single-factor authentication (SFA) is based on only one category of identifying credential. The most common SFA method is the familiar user name and password combination (something you know). The security of SFA relies to some extent upon the diligence of users. Best practices for SFA include selecting strong passwords and refraining from automatic or social logins.
For any system or network that contains sensitive data, it's advisable to add additional authentication factors. Multifactor authentication (MFA) involves two or more independent credentials for more secure transactions.
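The distinction between one factor and two can be made concrete with a small sketch. The user records, field names and token format below are invented for illustration; a real system would use salted, stretched password hashing and a proper token protocol:

```python
import hashlib
import hmac

# Hypothetical user store: a knowledge factor (password hash) plus a
# possession factor (ID of a registered hardware token). Illustrative only;
# real systems would salt and stretch the password hash.
USERS = {
    "alice": {
        "pw_hash": hashlib.sha256(b"correct horse").hexdigest(),
        "token_id": "YUBIKEY-1234",
    }
}

def verify_2fa(user: str, password: str, presented_token: str) -> bool:
    rec = USERS.get(user)
    if rec is None:
        return False
    # Factor 1: something you know
    pw_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), rec["pw_hash"]
    )
    # Factor 2: something you have, checked independently of the password
    token_ok = hmac.compare_digest(presented_token, rec["token_id"])
    return pw_ok and token_ok
```

Because the two checks draw on separate categories of credential, stealing the password alone is not enough to pass `verify_2fa`.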
Single-factor authentication (SFA) vs. two-factor authentication (2FA)
Although an ID and a password are two items, they belong to the same authentication factor (knowledge), so together they constitute single-factor authentication (SFA). Low cost, ease of implementation and familiarity are the reasons passwords have remained the most common form of SFA. As SFA solutions go, ID and password are not the most secure. Multiple challenge-response questions can provide more security, depending on how they are implemented, and standalone biometric verification methods of many kinds can also provide more secure single-factor authentication.
One problem with password-based authentication is that it requires knowledge and diligence to create and remember strong passwords. Passwords also require protection from insider threats, such as carelessly discarded password sticky notes, old hard drives and social engineering exploits. They are also prey to external threats, such as hackers using brute-force, dictionary or rainbow table attacks. Given enough time and resources, an attacker can usually breach password-based security systems. Two-factor authentication is designed to provide additional security.
2FA for mobile authentication
Apple's iOS, Google Android and BlackBerry OS 10 all have apps supporting 2FA and other multifactor authentication. Some phones have screens capable of recognizing fingerprints; a built-in camera can be used for facial recognition or iris scanning, and the microphone can be used for voice recognition. Many smartphones have GPS to verify location as an additional factor. Voice or SMS may also be used as a channel for out-of-band authentication. There are also apps that provide one-time password tokens, allowing the phone itself to serve as the physical device that satisfies the possession factor.
Google Authenticator is a two-factor authentication app. To access websites or web-based services, the user types in a username and password and then enters a one-time passcode (OTP) generated by the app on their device. The six-digit OTP changes every 30 to 60 seconds and serves to prove possession as an authentication factor.
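Under the hood, authenticator apps of this kind implement the open HOTP/TOTP standards (RFC 4226 and RFC 6238): the code is a truncated HMAC of a shared secret and the current 30-second time window, so the server and the phone can compute it independently. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then "dynamic truncation" (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    # The code changes every `step` seconds; phone and server derive it
    # independently from the shared secret and the clock.
    return hotp(secret, int(time.time()) // step)
```

The fixed secret `12345678901234567890` used in the RFC test vectors produces `755224` for counter 0 and `287082` for counter 1, which is a handy way to check an implementation.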
Smartphones offer a variety of possibilities for 2FA, allowing companies to use what works best for them.
Is two-factor authentication secure?
Opponents argue (among other things) that, should a thief gain access to your computer, he can boot up in safe mode, bypass the physical authentication processes, scan your system for all passwords and enter the data manually, thus -- at least in this situation -- making two-factor authentication no more secure than the use of a password alone.
Higher levels of authentication for more secure communications
Some security procedures now require three-factor authentication (3FA), which typically involves possession of a physical token and a password used in conjunction with biometric data, such as a fingerprint scan or a voiceprint.
An attacker may occasionally break an authentication factor in the physical world. A persistent search of the target premises, for example, might yield an employee card or an ID and password in an organization’s trash or carelessly discarded storage containing password databases. If additional factors are required for authentication, however, the attacker would face at least one more obstacle.
The majority of attacks come from remote internet connections. 2FA can make distance attacks much less of a threat because accessing passwords is not sufficient for access and it is unlikely that the attacker would also possess the physical device associated with the user account. Each additional authentication factor makes a system more secure. Because the factors are independent, compromise of one should not lead to the fall of others.
Type 2 hypervisor (hosted hypervisor)
Type 2 hypervisor (hosted hypervisor)
Posted by Margaret Rouse
A Type 2 hypervisor, also known as a hosted hypervisor, is a virtual machine manager that installs on top of a host's operating system (OS).
A Type 2 hypervisor is a virtualization layer installed on top of a host operating system (OS), such as Windows Server, Linux or a custom OS installation. The host operating system has direct access to the server's hardware and is responsible for managing basic OS services. The Type 2 hypervisor creates virtual machine environments and coordinates calls for CPU, memory, disk, network and other resources through the host OS.

A Type 1 hypervisor, by contrast, is installed directly on physical host server hardware. It does not require the presence of a full host OS and has direct access to the underlying physical hardware. Regardless of the implementation, virtual machines (VMs) and their guest OSes are typically unaware of which type of hypervisor is in use, as they interact only with the hypervisor itself.

From an implementation standpoint, there are potential benefits and drawbacks to both types of hypervisor. For example, the requirement of a full host OS can be seen as an advantage in some areas (including hardware and driver compatibility, configuration flexibility, and reliance on familiar management tools) or as a potential liability (because of security issues exposed by the host OS, possible performance overhead, and the management burden of configuring and maintaining the host OS). It is also important to note that current virtualization platforms can exhibit characteristics of both Type 1 and Type 2 hypervisors, and that vendors have provided features that can mitigate potential issues in both approaches.
Uncloud (de-cloud)

Posted by Margaret Rouse

Uncloud is the removal of applications and data from a cloud computing platform.
In recent years, organizations ranging from small and medium-sized businesses to large enterprises have turned to the cloud to run applications, store data and accomplish other IT tasks. Over time, however, an organization may elect to uncloud one, a few or, possibly, all of its cloud-based assets. Examples could include shutting down a server instance in a public cloud and moving the associated software and data to an in-house data center or colocation facility. De-cloud is another term used to describe this reverse cloud migration.
In the process of unclouding, the cloud customer or, potentially, a channel partner acting on its behalf, will work with the cloud vendor to extract the customer's applications and data. The task involves locating the data and mapping the application's dependencies within the cloud vendor's infrastructure. The unclouding customer -- and its channel partner -- may encounter higher levels of complexity in a public multi-tenant cloud setting. A customer may have to wait for the cloud vendor's scheduled downtime to migrate its applications and data, or the cloud provider may limit the customer's use of migration tools so as not to interfere with the application performance of other customers.
Customers may cite a number of reasons for wanting to uncloud. Factors include security issues, liability concerns and difficulty in integrating cloud-based applications with on-premises enterprise applications and data. Frustrated expectations with respect to the cloud's cost efficiency may also influence de-clouding decisions. Anecdotal evidence suggests that customers citing cost as a factor may elect to move applications to an in-house, hyper-converged infrastructure as the better economic choice.
Reverse migration on the rise: Channel partners see customers uncloud
by John Moore
Channel partners report that a small but increasing number of customers are moving some or all of their applications off the cloud.
Channel partners say some of their customers have begun to uncloud and are asking for help migrating back to in-house data centers or colocation facilities.
While cloud computing, in general, remains a high-growth area, a counter trend of reverse migration has started to surface. Organizations, industry executives said, cite a number of reasons for moving some or all of their applications off the cloud: security and compliance concerns, frustration over elusive cost savings, and the changing data center economics of hyper-converged architecture.
At Trace3 Inc., cost and hyper-convergence played key roles in one customer's off-the-cloud migration. Trace3, based in Irvine, Calif., focuses on data center, big data and cloud technologies. Mark Campbell, research principal and director of Innovation Research at Trace3, said a retail client recently completed a "back-sourcing" exercise in which it migrated its entire cloud footprint into a colocation data center, where the retailer could control the infrastructure.
"Cost was the primary driver," Campbell said of the retail customer. "They were estimating they could save 40% over their cloud IaaS [infrastructure as a service] and PaaS [platform as a service] provider by building their own private cloud built on hyper-converged and commodity infrastructure," he said.
Campbell noted that he hasn't had the opportunity to follow up with the company to see whether it actually realized the projected savings.
Getting off the cloud
Nevertheless, other Trace3 customers have taken steps to uncloud, a pattern Campbell began noticing last year. He said a few customers -- numbered in the dozens, out of a client base of some 2,000 companies -- have encountered issues in the cloud.
"The vast majority of our customers have moved at least some of their enterprise applications to the cloud, and the vast majority of those are continuing in the cloud," Campbell said. "There is a small minority, however, that are moving some or all of their applications back into their own data centers or colocation sites more under their control."
Irwin Teodoro, senior director of data center transformation at Datalink Corp., a data center services provider based in Eden Prairie, Minn., has also observed declouding among his company's customers. He said for every 10 companies pursuing some form of cloud computing, he has seen two or three looking to get out of the cloud.
"This is definitely a trend we are going to see more of."
The counter-cloud migration may signal a resetting of expectations among channel partner customers. Paul Dippell, CEO of Service Leadership Inc., a company based in Plano, Texas, that provides a financial and operational benchmark for channel companies, said cloud vendors tell customers that their offerings are "wonderful, weightless, agile, low cost, mobile [and] fantastically free of the impediments of past computing models."
But that vision doesn't always line up with reality.
"What the customers are experiencing is different enough that a material number of customers are declouding or significantly changing -- reducing -- their cloud strategies to regain a more solid computing foundation and rational cost," Dippell said.
"I don't expect cloud to fail, by any means, and I do expect it to grow," Dippell said. "But we are exiting the honeymoon stage, and that is always a rude awakening -- and expectation readjustment -- for both parties."
Dippell added that he's heard anecdotal accounts of solution providers winning new customers by agreeing to decloud them.
When customers uncloud: Top factors
A number of factors influence migration decisions. Unforeseen security issues, for example, may drive some applications back in-house. In general, risk and liability concerns are tempering enthusiasm for the cloud, said Dan Liutikas, managing attorney at InfoTech Law Advocates P.C., and chief legal officer and secretary at CompTIA.
Channel partners, as well as customers, are questioning whether cloud is the correct answer for every customer. While the cloud adoption wave continues, more and more service providers are weighing whether cloud is the right approach for a particular customer or a subset of customers, Liutikas said. The latter includes companies in highly regulated industries, such as healthcare and financial services.
"Sometimes … on-premises is the better answer based on their customers' needs," he said.
Organizations may also struggle to achieve deep integration between their cloud applications and their on-premises legacy applications and data, according to industry executives. But beyond legal and technical hurdles, cost has become a sticking point for some cloud users.
Unexpected cloud costs may stem from a customer's failure to quantify all the necessary services in its initial calculations. Campbell said most customers tend to be accurate in estimating traditional infrastructure and capacity costs for servers, storage capacity and bandwidth, among other components. But on the other hand, they tend to underestimate the cost of items beyond their data centers. Those items include the cost of creating multiple storage snapshots to back up data, the cost of data replication and the cost of restoring data.
"This leads to budgetary surprises," Campbell said.
Cloud sprawl can also stress budgets.
"Much like [virtual machine] sprawl, it is not uncommon for the initial targets of a cloud installation to grow as both the IT and business discover new applications, features and snap-of-the-fingers capacity bursts," Campbell explained. "These all add line items to the monthly bill."
In addition, cloud offerings may run afoul of conventional budget controls.
Campbell said traditional IT organizations built their financial processes and controls to monitor big-ticket items, such as projects and large capital expenditure (capex) purchases, and smaller items, such as consumables and one-time operating expenditure (opex) purchases handled on an approval basis.
"This works great in a data-center-centric operation, but imagine the befuddled expression on the comptroller's face when he gets his first 23,000-line-item bill from Amazon," Campbell said. "It is very hard to even decipher what these expenditures are for, let alone garner business justification."
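A first step toward deciphering such a bill is simply aggregating line items by service. The sketch below assumes a hypothetical three-column billing export; the column names and figures are invented, not any provider's actual format:

```python
import csv
import io
from collections import defaultdict

# Hypothetical billing export with invented columns: service, line_item, cost_usd
SAMPLE = """service,line_item,cost_usd
EC2,instance-hours,1200.50
EC2,ebs-snapshots,310.25
S3,storage,95.00
"""

def totals_by_service(csv_text: str) -> dict:
    # Collapse thousands of line items into one total per service
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["service"]] += float(row["cost_usd"])
    return dict(totals)

print(totals_by_service(SAMPLE))
```

Even this crude roll-up turns a line-item flood into figures a comptroller can question, which is where business justification starts.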
Customers disappointed with cloud cost savings may end up migrating applications to hyper-converged infrastructures.
"Some [companies] are pulling in applications from the cloud to their data centers," Campbell said. "If they do that, we are seeing hyper-convergence as being one of those enabling mechanisms."
In addition to cost, corporate culture can play a role in a cloud reversal.
"Executives who are not fully aware of the concepts of the cloud feel somewhat apprehensive that data is somewhere else and feel lack of control," Teodoro said.
Managing the declouding challenge
Assisting customers as they back out of the cloud can prove difficult. Teodoro said public clouds in which multiple customers share a common infrastructure represent the greatest challenge. Dealing with maintenance windows is one issue. A customer can't just extract an application based on its own ad hoc maintenance timetable; they have to wait for the cloud provider's scheduled downtime.
"You can't move when you want to move," Teodoro said. "You've got to move at somebody else's pace and schedule."
Determining a cloud-based application's dependencies with respect to the cloud provider's infrastructure is another consideration. A channel partner working on an off-the-cloud migration project needs to figure out what virtual machines the application resides on and identify the virtual LANs and subnets in the compute infrastructure to which the application can be traced, Teodoro explained. The goal: extract the application without breaking something in the environment.
"The keys for us are really to understand the dependencies in the environment -- down to the infrastructure -- and find ways to carve out the environment into smaller chunks or workgroups," Teodoro said.
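Conceptually, that dependency discovery is a graph walk: start at the application and traverse outward to the VMs, VLANs and subnets it touches. The edge list below is a hypothetical inventory, with all names invented:

```python
from collections import defaultdict, deque

# Hypothetical inventory as flat edges: app -> VMs, VMs -> network segments.
EDGES = [
    ("app-orders", "vm-01"), ("app-orders", "vm-02"),
    ("vm-01", "vlan-110"), ("vm-02", "vlan-110"),
    ("vm-02", "subnet-10.0.2.0/24"),
]

def dependencies(root: str) -> list:
    # Breadth-first walk from the application to everything it can be traced to
    graph = defaultdict(list)
    for src, dst in EDGES:
        graph[src].append(dst)
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)
```

The resulting set is the "carve-out" boundary: everything listed has to move, or be re-pointed, before the application can be extracted without breaking the environment.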
Another complication: Migration tools can help channel companies uncloud customers, automating the tasks of data gathering, analysis and forensics. But in a shared, multi-tenant cloud, service providers can't use their own tools, since they could impact a cloud provider's other clients, Teodoro said.
Seeking a happy balance
Customers juggling multiple IT environments provide yet another degree of difficulty. Jim Piazza, vice president of service management at CenturyLink Inc., which offers colocation, public cloud and IT services, said customers such as software as a service providers may offer multiple versions of their software to support different clients. And those different versions may be hosted on different computing platforms: in-house private clouds, colocation centers and public clouds, for example.
"It's an interesting mix … that is really quite a challenge to manage," Piazza said.
Piazza said CenturyLink, based in Monroe, La., provides customers a service catalog to help them keep track of what version of their software is deployed where. In addition, the company has built interconnects between customers' colocation footprints in CenturyLink facilities and CenturyLink's public cloud. The service catalog and interconnects enable the company's clients to move their end customers from one platform to another, Piazza said.
Piazza likened migrating customers and their workloads among the various platforms to supporting a 3D jigsaw puzzle.
For Campbell, the cloud conundrum boils down to harmonizing the computing platforms now available to customers.
"It's finding that happy balance -- what lives best in the cloud and what lives best in-house."
Unified Endpoint Management (UEM)
Unified endpoint management (UEM) is an approach to securing and controlling desktop computers, laptops, smartphones and tablets in a connected, cohesive manner from a single console. Unified endpoint management typically relies on the mobile device management (MDM) application program interfaces (APIs) in desktop and mobile operating systems.
Microsoft's inclusion of MDM application program interfaces in Windows 10 made unified endpoint management a possibility on a large scale. Prior to the release of Windows 8.1, there was no way for MDM software to access, secure or control the operating system and its applications.
In Windows 10, the tasks IT can perform through MDM software include:
Mobile device management is significantly less robust than traditional Windows management tools, however. Examples of tasks information technology (IT) administrators can't perform through Windows 10 MDM APIs include:
Many vendors market UEM as a feature of their broader enterprise mobility management (EMM) software suites, and some EMM vendors have made strides to close the gap between MDM and traditional Windows management tools. For example, MobileIron Bridge allows IT administrators to use MDM to deploy scripts that modify the Windows 10 file system and registry and perform other advanced tasks, including deploying legacy .exe applications.
Other vendors that support UEM include VMware, Citrix, BlackBerry and Apple. Apple's Mac OS X operating system has included MDM APIs since at least 2012, when AirWatch and MobileIron announced support. Today, all of the major vendors that offer UEM also support OS X.
Pulling Insights from Unstructured Data – Nine Key Steps
Data, data everywhere, but not a drop to use. Companies are increasingly confronted with floods of data, including "unstructured data": information from email messages, social posts, phone calls and other sources that isn't easily organized into the rows and columns of a traditional database. Making sense of structured data, and turning it into actionable recommendations, is difficult; doing so with unstructured data is even harder.

Despite the challenge, the benefits can be substantial. Companies that commit to examining unstructured data from devices and other sources should be able to find hidden correlations and surprising insights. Doing so promotes trend discovery and opens opportunities in ways that traditionally structured data cannot.
Analyzing unstructured data can be best accomplished by following these nine steps:
1. Gather the data
Unstructured data typically comes from multiple, unrelated sources. You need to find the information that needs to be analyzed and pull it together. Make sure the data is relevant so that you can ultimately build correlations.
2. Find a method
You need a method in place to analyze the data and have at least a broad idea of what should be the end result. Are you looking for a sales trend, a more traditional metric, or overall customer sentiment? Create a plan for finding a result and what will be done with the information going forward.
3. Get the right stack
The raw data you pull will likely come from many sources, but the results have to be put into a tech stack or cloud storage to be operationally useful. Consider the final requirements you want to achieve and then judge the best stack. Some basic requirements are real-time access and high availability. If you're running an e-commerce firm, you want real-time capabilities, and you also want to be sure you can manage social media on the fly based on trend data.
4. Put the data in a lake
Organizations that want to keep information will typically scrub it and then store it in a data warehouse. This is a clean way to manage data, but in the age of big data it removes the chance to find surprising results. The newer technique is to let the data swim in a "data lake" in its native form. If a department wants to perform some analysis, it simply dips into the lake and pulls the data. But the original content remains in the lake, so future investigations can find new correlations and results.
5. Prep for storage
To make the data useful (while keeping the original in the lake), it is wise to clean it up. For example, text files can contain a lot of noise, symbols or whitespace that should be removed. Duplicates and missing values should also be detected so that analysis will be more efficient.
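As a rough illustration, the cleanup described above (stripping noise, collapsing whitespace, dropping duplicates and skipping missing values) might look like this minimal Python sketch. The record format and the set of characters treated as noise are assumptions for the example:

```python
import re

def clean_record(text):
    """Strip non-text noise and collapse runs of whitespace."""
    text = re.sub(r"[^\w\s.,!?'-]", " ", text)   # drop stray symbols
    return re.sub(r"\s+", " ", text).strip()     # collapse whitespace

def prep_for_storage(records):
    """Clean each record, drop duplicates and skip missing values."""
    seen, cleaned = set(), []
    for raw in records:
        if not raw or not raw.strip():           # missing or empty value
            continue
        text = clean_record(raw)
        if text not in seen:                     # de-duplicate
            seen.add(text)
            cleaned.append(text)
    return cleaned

raw = ["  Great   product!! ###", "Great product!!", None, "Bad &*@ support"]
print(prep_for_storage(raw))  # ['Great product!!', 'Bad support']
```

A production pipeline would do this at scale (for example in a distributed job), but the shape of the work is the same: normalize, de-duplicate, drop what can't be analyzed.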
6. Find the useful information amongst the clutter
Semantic analysis and natural language processing techniques can be used to extract key phrases along with the relationships around them. For example, mentions of location can be searched for and categorized from speech transcripts in order to establish a caller's location.
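A toy version of that location extraction could look like the sketch below. The regex pattern is a stand-in for what a real NLP library's named-entity recognizer would do, and the trigger words are assumptions for the example:

```python
import re

# Hypothetical trigger phrases; a production system would use an NLP
# library's named-entity recognizer rather than hand-written patterns.
LOCATION_PATTERN = re.compile(
    r"\b(?:in|from|near|calling from)\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)?)"
)

def extract_locations(transcript):
    """Pull candidate location phrases out of a call transcript."""
    return LOCATION_PATTERN.findall(transcript)

transcript = "Hi, I'm calling from Buenos Aires about an order shipped from Miami."
print(extract_locations(transcript))  # ['Buenos Aires', 'Miami']
```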
7. Build relationships
This step takes time, but it's where the actionable insights lie. By establishing relationships between the various sources, you can build a more structured database that has more layers and complexity (in a good way) than a traditional single-source database.
8. Employ statistical modeling
Segmenting and classifying the data comes next. Use tools such as k-means clustering, naive Bayes and support vector machine algorithms to do the heavy lifting of finding correlations. You can use sentiment analysis to gauge customers' moods over time and how they are influenced by product offerings, new customer service channels and other business changes. Temporal modeling can be applied to social media and forums to find the most relevant topics your customers are discussing. This is valuable information for social media managers who want the brand to stay relevant.
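To make the clustering step concrete, here is a minimal, dependency-free k-means sketch over one-dimensional sentiment scores. The scores are invented for the example, and a real project would use a library implementation rather than this toy:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: cluster numeric scores into k groups."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest center
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # recompute each center as the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical per-message sentiment scores (-1 negative .. +1 positive)
scores = [-0.9, -0.7, -0.8, 0.6, 0.8, 0.7, 0.9]
print(kmeans_1d(scores))  # two centers: one negative, one positive
```

The two resulting centers separate the clearly negative messages from the clearly positive ones, which is exactly the kind of segmentation the step describes.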
9. End results matter
The end result of all this work has to be condensed down to a simplified presentation. Ideally, the information can be viewed on a tablet or phone and helps the recipient make smart real-time decisions. They won’t see the prior eight steps of work, but the payoff should be in the accuracy and depth of the data recommendations.
Every company's management is pushing social media and customer service as the main drivers of company success. However, these services can provide another layer of assistance to firms once diagnostic tools are applied to their underlying data. IT staff need to develop the skills to properly collect, store and analyze unstructured data and to compare it with structured data, seeing the company and its users in a whole new way.
About the author: Salil Godika is Co-Founder, Chief Strategy & Marketing Officer and Industry Group Head at Happiest Minds Technologies. Salil has 18 years of experience in the IT industry across global product and services companies. Prior to Happiest Minds, Salil was with MindTree for four years as the Chief Strategy Officer. Before MindTree, Salil spent 12 years in the United States working for start-ups and large technology product companies like Dassault Systems, EMC and i2 Technologies. His accomplishments include incubating a new product to $30 million in revenue, successful market positioning of multiple products, global marketing for a $300 million business and multiple M&As.
UTM vs. NGFW: Unique products or advertising semantics?
by: Michael Heller
In comparing UTM vs. NGFW, organizations find it difficult to tell whether there are real differences between the two products or whether it is just marketing semantics.
It can often be difficult to discern the difference between unified threat management (UTM) and next-generation firewalls (NGFW). Experts agree that the lines appear to be blurring between the two product sets, but enterprises that focus on defining each product type during the purchasing process may be making a mistake.
"Service providers for ISPs have different needs than enterprises," said Young. "So, UTM vendors will only offer basic firewall features as a price-play for that market."
Young said those differences in ease of use and support demands still exist today, though they have become more nuanced; there is overlap in the underlying technology of NGFW and UTM, and spec sheets tend to look similar. Young said the key differences now are more around the quality of features and the level of support from channel partners to meet customer needs.
Young also noted that vendors tend to excel in one market or the other, like Fortinet Inc. with UTM for SMBs, or Palo Alto Networks Inc. with NGFW for enterprises. Few vendors can succeed in both, he said, like Check Point Software Technologies Ltd. has done.
"The confusion came from SMB vendors trying to move into the enterprise market without making channel and quality changes," said Young. "It was an intentional campaign to confuse, but very few end users are confused about what they need. It is either a racecar [NGFW] or a family van [UTM]."
Brazil admitted that the differences between NGFW and UTM can be confusing, even for experienced practitioners, but described UTM as a collection of unrelated security features, one of which is the firewall.
"UTM generally refers to a firewall with a mix of other 'bolted-on' security functions like antivirus and even email spam protection," said Brazil. "These are not access control features that typically define a firewall."
What traditionally has defined next-gen firewalls, Brazil said, is robust Layer 7 application access control, though an increasing number of NGFWs are being augmented with integrated threat intelligence, enabling them to deny known threats based on a broad variety of automatically updated policy definitions.
However, Brazil did qualify his distinctions by saying that a UTM could be considered an NGFW if it met the Layer 7 parameters, and an NGFW that included malware functions could be considered a UTM. Still, he was clear that despite these potential overlap points, he would keep the classifications separate because of a lack of similarities in other respects, like access control.
Brazil said that NGFW will eventually become the standard, and the terms NGFW and firewall will become synonymous. He said UTM will remain an important product for SMBs, especially when a company prioritizes simplicity of deployment over the depth of security and performance, but NGFW and UTM will not converge because of performance and management concerns.
"The idea of a 'converged' network security gateway will continue to have appeal, so vendors will continue to add functionality to reduce cost of firewall ownership to the customer and increase revenue to the vendor," said Brazil. "However, issues with performance and manageability will continue to force separate, purpose-built systems that will be deployed in enterprise networks. As such, there will continue to be enterprise firewalls that should not be considered UTMs."
Mike Rothman, analyst and president for Phoenix-based security firm Securosis LLC, said he believes that UTM and NGFW are essentially the same, and the differences are little more than marketing semantics. Rothman agreed that marketing from vendors caused confusion, but also blamed analysts for adopting the term NGFW and driving it into the vernacular.
He said that early UTMs did have problems scaling performance from SMBs to larger enterprises, especially when trying to enforce both positive rules (firewall access) and negative rules (IPS), but that early NGFW had the same issues keeping up with wire speed when implementing threat prevention. He said that the perceived disparities were used to enforce market differentiation, and persist today, despite these scaling issues not being relevant anymore.
According to Rothman, the confusion lies not only in comparing the two device types, but also in the term "next-generation firewall" itself, which he thinks minimizes what the device does.
"What an NGFW does is bigger than just a firewall," said Rothman. "A firewall is about access control, basically enforcing what applications, ports, protocols, users, etc., are allowed to pass through the firewall. The NGFW also can look for and deny access to threats, like an IPS. So it's irritating that the device is called an NGFW, as it does more than just a firewall. We call it the Network Security Gateway, as that is a more descriptive term."
Rothman said that today's UTMs can do everything an NGFW can do, as long as they are configured properly and have the right policy integration. He said he believes that arguments about feature sets or target markets are examples of artificial distinctions that only serve to confuse the issue.
"From a customer perspective, the devices do the same thing," Rothman said. "The NGFW does both access control and threat prevention, as does the UTM, just a little differently in some devices. Ultimately, the industry needs to focus on what's important: Will the device scale to the traffic volumes they need to handle with all of the services turned on? That's the only question that matters."
Moving forward, despite differences in opinions, the experts agree that enterprises shouldn't go into a purchasing process by trying to decide whether they need an NGFW or a UTM. Rather, the ultimate goal should always be to focus on the best product to solve their problems.
Rothman said that the distinctions will go away as low-end UTM vendors add more application-inspection capabilities and more traditional NGFW vendors go downmarket by offering versions suitable for SMBs. He also said he doesn't expect an end to confusing vendor marketing anytime soon, so enterprises need to be careful to ignore these semantics and focus on finding the right product to address security needs.
Young said that in the short term, UTM and NGFW will remain separate and will both continue to be mainstays for SMBs and larger enterprises respectively, and the decision around what device to use will be a question of need.
The question of UTM vs. NGFW is still divisive, and experts have different ideas about whether and where the two technologies diverge when looking at the issue from a vendor perspective. However, when looking at the issue from a customer perspective, the experts agree that focusing on an enterprise's security needs will help to mitigate the confusion and lead to the right product.
"It isn't just about technology, it is about how small company's security is different than a big company's security," said Young. "It's all about the use case, not a 'versus.'"
VDI Disaster Recovery
Turning to the cloud for VDI disaster recovery
The cloud makes it possible for any business to have a strong VDI disaster recovery plan. You can choose to back up data by file, or you can back up the whole image.
Cloud services can provide the flexibility and low up-front costs that make implementing a DR strategy more feasible than ever, but it can be difficult to integrate the cloud with existing systems. Luckily, there are several ways to incorporate cloud services into your VDI disaster recovery strategy.
The servers hosting your virtual desktops are just as susceptible to floods, hurricanes and cyberattacks as your physical desktops, whether those servers reside on-premises or at a remote data center. Even a minor disaster can bring an entire operation to a standstill. But you can minimize the effects that a disaster can have on your business if you have a VDI disaster recovery (DR) plan in place.
Comprehensive disaster recovery plans are sometimes out of reach for companies, particularly small and medium-sized businesses, that lack the resources or infrastructure necessary to build an effective strategy. The cloud has changed all that.
Backing up to the cloud
Many organizations already back up virtual machines (VMs) to cloud storage services, usually by file-level backup or image-level backup.
With file-level backup, companies copy individual files within the VM to the cloud, whereas image-level backup replicates the entire VM image. In both cases, the data is copied to an off-site location, away from your primary operations.
File-level backups are similar to the traditional types of backups that happen routinely on desktops and servers. An agent is installed on the guest operating system, and it controls which files get backed up and when. File-level backup systems are easy to implement and make it simple to restore individual files. But this approach can be cumbersome and time-consuming if you have to restore an entire VM.
Image-level backups are snapshots of your VMs at a given point in time. One method you can use to create the snapshots is to run a backup script or similar mechanism to periodically copy the image files to a cloud storage provider. Then you can restore the entire VM when you need it without going through the tedious process of restoring many individual files. That being said, using a script can be slow and resource-intensive.
A better approach is to do what many services and tools already offer: Back up an initial copy of the entire VM image to storage, and then apply changed blocks to the image at regular intervals. This approach to VDI disaster recovery can help avoid much of the system overhead associated with the script method and provide an efficient and simple mechanism to back up and restore VMs in their entirety.
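The changed-block technique described above can be sketched in a few lines of Python. The block size is illustrative, and a plain dict stands in for the cloud object store:

```python
import hashlib

BLOCK_SIZE = 4096   # illustrative; real tools tune this per workload
cloud_store = {}    # stands in for a remote, content-addressed store

def backup_image(image_bytes):
    """Upload only blocks whose content hash is not already stored.

    Returns the ordered list of block hashes (the 'recipe' needed to
    reassemble the VM image on restore).
    """
    recipe = []
    for off in range(0, len(image_bytes), BLOCK_SIZE):
        block = image_bytes[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in cloud_store:   # new or changed block: upload it
            cloud_store[digest] = block
        recipe.append(digest)
    return recipe

def restore_image(recipe):
    """Reassemble the full image from stored blocks."""
    return b"".join(cloud_store[d] for d in recipe)

initial = bytes(8192)                    # initial full backup
backup_image(initial)
changed = bytes(4096) + b"\x01" * 4096   # one block modified since then
recipe = backup_image(changed)           # uploads only the changed block
assert restore_image(recipe) == changed
```

Because unchanged blocks are never re-uploaded, each incremental backup costs roughly the size of the delta rather than the size of the whole image, which is the overhead saving the paragraph above describes.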
Some VM storage services and backup tools support both file-level and image-level backups, often without requiring a change to the VM configuration, such as installing an agent. In this way, you're getting the best of both worlds: You can restore individual files if you need them, or restore the entire VM.
Web Server Security
Posted by Margaret Rouse
Web server security is the protection of information assets that can be accessed from a Web server.
Web server security is important for any organization that has a physical or virtual Web server connected to the Internet. It requires a layered defense and is especially important for organizations with customer-facing websites.
Separate servers should be used for internal and external-facing applications, and servers for external-facing applications should be hosted on a DMZ or containerized service network to prevent an attacker from exploiting a vulnerability and gaining access to sensitive internal information.
Penetration tests should be run on a regular basis to identify potential attack vectors, which often stem from out-of-date server modules, configuration or coding errors, and poor patch management. Website security logs should be audited continuously and stored in a secure location. Other best practices include using a separate development server for testing and debugging, limiting the number of superuser and administrator accounts, and deploying an intrusion detection system (IDS) that monitors and analyzes user and system activity, recognizes patterns typical of attacks and flags abnormal activity.
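One pattern typical of attacks that a log audit can catch is repeated failed logins from a single source. The sketch below shows the idea on a simplified, space-separated log format invented for the example; the threshold is likewise an assumption:

```python
from collections import Counter

FAILED_THRESHOLD = 3   # illustrative; tune to your environment

def flag_suspicious_ips(log_lines):
    """Flag source IPs with repeated failed logins (HTTP 401), a pattern
    typical of brute-force attacks. Each line is assumed to be in the
    simplified form '<ip> <status> <path>'."""
    failures = Counter()
    for line in log_lines:
        ip, status, _path = line.split(maxsplit=2)
        if status == "401":
            failures[ip] += 1
    return {ip for ip, count in failures.items() if count >= FAILED_THRESHOLD}

logs = [
    "10.0.0.5 401 /login",
    "10.0.0.5 401 /login",
    "10.0.0.5 401 /login",
    "192.168.1.9 200 /index.html",
]
print(flag_suspicious_ips(logs))  # {'10.0.0.5'}
```

A real IDS does far more (protocol analysis, signature matching, anomaly scoring), but continuous auditing of security logs starts with simple aggregations like this one.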
Posted by: Margaret Rouse
Z-Wave is a wireless communication technology that is used in security systems as well as business and home automation.
Z-Wave is often used in locks, security systems, lighting, heating, cooling and home appliances. Support can be integrated in products or added by retrofitting standard electronics and devices.
Z-Wave communications use low-power radio signals in the 900 MHz range, separate from Wi-Fi. The system supports automatic discovery of up to 230 devices per controller. Multiple controllers can also communicate with one another and pass commands to support additional devices. Z-Wave is optimized for low latency, with data rates of up to 100 Kbps.
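The per-controller limit and controller-to-controller hand-off described above can be modeled with a small sketch. This is a toy illustration of the capacity arithmetic only, not Z-Wave's actual inclusion protocol:

```python
MAX_DEVICES = 230   # per-controller limit cited above

class Controller:
    """Toy model of per-controller capacity with hand-off to a peer."""
    def __init__(self, peer=None):
        self.devices = []
        self.peer = peer          # optional secondary controller

    def add_device(self, device_id):
        if len(self.devices) < MAX_DEVICES:
            self.devices.append(device_id)
            return self
        if self.peer is not None:  # full: pass the request along
            return self.peer.add_device(device_id)
        raise RuntimeError("network full")

secondary = Controller()
primary = Controller(peer=secondary)
for i in range(235):
    primary.add_device(i)
print(len(primary.devices), len(secondary.devices))  # 230 5
```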
Z-Wave is marketed primarily as a security product. However, vulnerabilities have been detected that allow attackers to spoof an access point to gain control, even on encrypted versions. Like most security automation products, Z-Wave increases a system’s attack surface because it adds connected devices and associated software. To prevent networked devices from increasing the overall vulnerability of a system, it’s important to consider the security of any connected element.
Over 80 percent of commercial home security systems use Z-Wave as the protocol by which their components communicate; the Z-Wave Alliance, a global consortium organized to bring compatible devices to market, includes more than 250 manufacturers among its members.
See a Black Hat conference video about hacking Z-Wave automation systems:
7 templates for making infographics without Photoshop
When you want to present information to a colleague, how do you do it? Do you write a report? Do you use a PowerPoint template you're already tired of seeing? This is a question many people have, and a good solution is to use infographics.
Why an infographic? An infographic is a highly effective tool for communicating and capturing the reader's attention: it lets you present, in a simple way, information that would otherwise be difficult to communicate.
A recent survey showed us that this year, marketing professionals have made learning about original and visual content a priority. And as we also learned in our report on the state of Inbound Marketing in Latin America, 17% of companies consider visual content a priority.
But here's the problem: how are those without design experience, or without the budget to pay for an agency, a designer or a design program, going to create professional, attractive infographics?
Good thing you asked. Here's a little secret: you can be a professional designer using a program you've probably had on your computer for years: PowerPoint. PowerPoint can be your best friend when you want to create visual content.
And to help you get started, we've created 7 amazing infographic templates that you can use for free.
In the following video we'll show you how to edit one of these templates and make your own infographic. Don't forget to download the templates so you can customize them.
Basic tools to use in any infographic
When you're thinking about creating an infographic, there are four essential PowerPoint tools to consider that will help you throughout the creation process:
An infographic with different colors and images
Once you understand how the basic tools work, start by choosing the colors you'd like to use. The best way to do this is to select two main colors and two secondary ones. Try to keep these colors consistent with your corporate image.
If you want to use different shapes, icons and fonts, a good place to find them is PowerPoint itself, which has more than 400 icon options available for download.
Show statistics using different fonts
It's very common to want to share statistics within an infographic. Charts can be monotonous and unattractive, so try using different colors. Another thing that helps highlight this information is the use of different fonts and sizes. You can also add icons that are relevant to each statistic, or to the ones you want to emphasize most. Here's an example:
An infographic is a very good way to compare two different things because you can place them side by side, making the differences easy to visualize. Divide each slide into two parts and choose a different color scheme for each side; this way the contrast will be greater. Incorporate all the points we've discussed in this post: use different fonts, sizes, charts and images to make the information clearer.
Look for inspiration on Pinterest
Another good idea is to get inspiration from Pinterest. For example, use large boxes to display important information and vary their sizes, always following the idea of using images.
Something a little different
If you want to present information and statistics in a format that doesn't have to be so formal, you can use this template: it's fun, but at the same time it helps you show your information in a clear and engaging way.
When you finish the infographic, save it in PNG format; this will give it better image quality if you want to use it on the web.