Glosario eSalud | eHealth Glossary
Boost patient experience at first point of contact: The call center
I recently worked with a hospital to improve its cancer program. It had wonderful doctors and an up-to-date facility. The nurses were very patient-focused and the staff smiled a lot. What could be better?
Yet new patient volumes were sluggish and growth elusive. The hospital found the highly competitive local market very challenging, especially because differentiation--a meaningful point of difference--was virtually non-existent. In truth, the area hospitals were all much the same. How could it compete? Most of the ideas focused on the patient experience inside the hospital.
So instead, we decided to see what it was like, as outsiders, to find out more about the hospital options after a cancer diagnosis. We began our inquiry with observational research and by shopping the experience: we called hospitals in the region, as well as some nationally recognized leaders in cancer care, hoping to learn something of value.
We contacted 20 hospitals and quickly realized something was clearly missing: the basics of a good (let alone great) customer experience. I invite you to call your own call center and hear how it presents your excellent services to your consumers.
The typical call experience went something like this:
We experienced 18 of these types of encounters.
Then we called Cancer Treatment Centers of America, Dana Farber Cancer Institute, Johns Hopkins Medicine and Massachusetts General Hospital. While each was different, they at least had an approach to cancer inquiries and cancer care that demonstrated they might actually care about a caller requesting information.
Of these national brands, Cancer Treatment Centers of America was clearly in another space. The operator was immediately engaged, showed empathy towards me and expressed concern for my "father." She knew whom to connect me with--their cancer advocate, who introduced herself, expressed her concern for my father and explained how the Centers deliver care for cancer patients. Their well-thought-out call center process was all about making both the patient and the family feel important, cared about and listened to. The process was also easy to understand and made sense.
What startled us was the sorry state of the rest of the call centers. The basic caring of the other healthcare organizations was totally missing in action at the first point of contact. Any effort to understand the needs of a cancer patient at that crucial point was stuck in the dark ages. The operators are supposedly there to answer a call within three rings and direct the caller where he or she needs to go. We would have been happy if they had, at the very least, answered the phone in fewer than 10 rings and greeted us with kindness.
True, most calls to a hospital's central number are from people wanting to be connected to a patient, seeking a physician or looking for an administrative department--billing or admissions. We clearly threw them a curveball by asking for information about their cancer protocols. But was that enough of an excuse not to:
Which led us to wonder: Why? With all the innovative work going on these days to respond to healthcare reform, almost everyone, it seems, forgets the telephone center--a necessary evil.
From our perspective, the call center seems an easy point of differentiation. How can a healthcare institution make a person's overall experience satisfyingly patient-focused and person-centered if they can't even answer the phones well? And conversely, if they could create an amazing experience at that first touch point, maybe they could do the same throughout the entire patient and family experience.
Overwhelmingly, this whole experience felt like a time to pause and focus on the basics. While not innovative or sexy, the call center is essential. It must reflect well on you and add value to your organization, not dysfunction.
Remember: you don't get a second chance to make a first impression. Your call center is the first contact someone has with you. You certainly don't want to go to a hospital that cannot even get the phones answered satisfactorily, nor one that cannot provide an operator who genuinely engages with you with emotion and empathy. It may seem small, but really, it is huge. And healthcare organizations had better start paying attention, soon.
Andrea J. Simon, Ph.D., is a former marketing, branding and culture change senior vice president at Hurley Medical Center in Flint, Michigan. She also is president and CEO of Simon Associates Management Consultants.
BPM for Dummies
A few years ago, no one had heard of Business Process Management (BPM), but it has burst onto the global scene to become the most popular business-management and technology trend of the decade. If you work in any company or industry, public or private, you have almost certainly heard about the move toward process, or about topics such as process management or process improvement. You may know about process-improvement methods such as Six Sigma, or about new technologies such as Business Activity Monitoring (BAM) or Service-Oriented Architecture (SOA). BPM represents the culmination of a whole community's experience, thinking, and professional development in business management over recent decades. It puts the customer first. It focuses on the business. It empowers individuals in every corner of an enterprise to achieve greater success. It brings people and systems together. BPM is where all the lofty ambitions and best strategies come together. Put all of this together and you get a mixture that can seem rather confusing. In reality, though, BPM is a very simple concept: a set of methods, tools, and technologies used to design, represent, analyze, and control operational business processes; a process-centric approach to improving performance that combines information technology with process and governance methodologies.
Breakthrough Technologies in Surgery
Building smarter wearables for healthcare, Part 1: Examining how healthcare can benefit from wearables and cognitive computing
by Robi Sen
In this article, I examine the current trends in wearable computing in healthcare and explore the gaps between what can be done with current hardware offerings and their analytic capabilities. You'll learn how cognitive computing platforms like Watson can accelerate time to market for wearable device makers, and how Watson can fill the gap between the potential of wearables and their current, rather limited offerings.
Wearables and healthcare
One of the hottest trends among hardware developers is the development of small wearable sensors, or wearables, specifically for collecting health and lifestyle data. This trend includes everything from simple devices such as the Fitbit to more sophisticated Lab-on-a-Chip devices that measure everything from blood sugar and hormone levels to complex proteins.
Unfortunately, most of these devices generate data that is underutilized. Either the user cannot derive anything but simple metrics from the devices, such as step count, or the data is just not accessible to users. This underutilization often occurs because many hardware developers cannot afford to develop either the big data capabilities that are needed to manage all that data or the analytic capabilities that are needed to derive useful information from the wearables.
Services like the IBM Watson API, however, provide developers with the ability to offer valuable information to their users who use wearables, without having to build their own PaaS offerings. With the help of Watson, developers can create solutions that combine and compare data, find patterns and look for trends in that data, and even learn about the patients who are using the wearables.
For this article, I define a wearable as a device with central processing capability and sensors that is designed to provide services to the user with as little user interaction as possible for a specific task or need. A smartphone can be strapped to your arm as a fitness sensor, but it isn't designed for that: it requires a lot of interaction, such as downloading an application, enabling the application, and then strapping the setup to your arm in an often cumbersome manner. So a smartphone can be worn, but it is not a wearable. A good example of a wearable is a Fitbit device, which is designed to help a user track their steps or activity with the idea of promoting healthy behaviors. Keeping this definition in mind, wearables offer a plethora of potential capabilities within healthcare, if the information and data created by these devices can be turned into actionable intelligence and insights.
The analytics gap
The Mi Band, Fitbit, and other wearables can collect a lot of data on a user. However, data such as how many steps you take in a day, no matter its frequency or accuracy, bears little relation to fitness or health unless you can contextualize it. By contextualizing, I mean comparing your activity with your age, sex, weight, and overall health. For example, if you're 20 years old and in good health, taking 700 steps a day is not particularly active for your demographic. But if you're 80 and recovering from knee surgery, it's an impressive amount of activity. On their own, most activity sensors in wearables are of limited use: the accelerometers and magnetometers are not accurate, they can't differentiate between activities like walking and strength training, and they are often terrible at counting calories. Yet with enough computational power and data, you can make the data from these wearables far more relevant as a personal health and fitness monitoring tool.
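The contextualization idea above can be sketched in a few lines of code. This is a minimal illustration only: the cohort keys and baseline step counts below are invented placeholders, not real clinical reference values.

```python
# A minimal sketch of contextualizing raw step counts against a
# demographic baseline. The cohorts and baseline numbers are
# illustrative placeholders, not real clinical reference values.

BASELINES = {
    ("18-29", "healthy"): 8000,           # hypothetical average daily steps
    ("80+", "post-knee-surgery"): 500,    # hypothetical average daily steps
}

def contextualize_steps(steps: int, age_group: str, condition: str) -> str:
    """Compare a raw step count to a cohort baseline."""
    baseline = BASELINES.get((age_group, condition))
    if baseline is None:
        return "no baseline available for this cohort"
    if steps >= baseline:
        return f"above cohort baseline ({steps} vs {baseline})"
    return f"below cohort baseline ({steps} vs {baseline})"

# The article's example: the same 700 steps reads very differently by cohort.
print(contextualize_steps(700, "18-29", "healthy"))          # below baseline
print(contextualize_steps(700, "80+", "post-knee-surgery"))  # above baseline
```

The same raw number yields opposite interpretations once the cohort is known, which is exactly the gap a cognitive platform is meant to close at scale.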
Taking advantage of Watson APIs
IBM's Watson offers developers of wearables a sophisticated supercomputer and cognitive computational system as a service. This service allows savvy developers to rapidly design and develop applications that can fuse data that a user provides on weight, diet, health, and much more. For example, data can be collected from your activity sensors and potentially from other data sources, such as sleep monitors, glucose monitors, an Internet-connected scale, and even your electronic medical records. Watson APIs can even help intelligently fuse this data together, but more importantly, they can derive meaningful information from your data. For example, a Fitbit offers data visualization like that shown in Figure 1, which isn't that useful.
Figure 1. Example of the sort of visual analytics from Fitbit
Fusing data from wearables with personal health data
To make data from wearables more useful, you need to not only analyze a user's data from their wearable, but also fuse it with their personal health data. You can contextualize this fused data further with similar data from other individuals with similar metrics, thus providing you with a meaningful statistical analysis.
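A rough sketch of this fusion-and-comparison step follows. All field names, records, and cohort values are made up for illustration; a real system would draw the cohort from a large healthcare data set.

```python
# A sketch of fusing wearable readings with personal health data and
# comparing the result against a peer cohort. All field names and
# values are invented for illustration.

def fuse(wearable: dict, health_record: dict) -> dict:
    """Combine wearable metrics with health-record fields into one profile."""
    return {**health_record, **wearable}

def fraction_at_or_below(value: float, cohort_values: list) -> float:
    """Fraction of the cohort whose value this reading meets or exceeds."""
    return sum(1 for v in cohort_values if v <= value) / len(cohort_values)

profile = fuse(
    {"avg_daily_steps": 700, "resting_hr": 72},          # from the wearable
    {"age": 80, "recent_procedure": "knee surgery"},     # from health records
)

# Hypothetical cohort: other 80-year-olds recovering from knee surgery.
cohort_steps = [300, 450, 500, 650, 900]
print(fraction_at_or_below(profile["avg_daily_steps"], cohort_steps))  # 0.8
```

Here the fused profile selects the relevant cohort, and the comparison turns a raw step count into a statistically meaningful statement: this patient is more active than 80% of similar patients.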
For example, in Figure 2 you can see an example of Watson combining a patient's wearable data with their electronic medical records and then comparing it to patients with similar criteria. In this case, the goal is to get a sense of the patient's risk of heart disease by following the Framingham Criteria, a methodology that physicians use to evaluate the risk of cardiac failure. Read the full paper, "Interactive Intervention Analysis," presented by David Gotz and Krist Wongsuphasawat at the American Medical Informatics Association Annual Symposium in 2012. (See Resources for more information.)
Figure 2. An example of Watson-generated visualization of patient data compared to similar patients. This image is excerpted from the report "Interactive Intervention Analysis" by David Gotz and Krist Wongsuphasawat, American Medical Informatics Association Annual Symposium (AMIA), Chicago, IL (2012).
Because Watson has an open design as a platform with simple RESTful APIs, developers can pull data from popular sensors and from sites that store a user's DNA analysis. They can access sites where the user enters their diet information or their medical records, and even get data from the National Institutes of Health's updated data sets. Comparing that data based on when users are active, the type of their activity, where they live, and changes in their weight opens the door to more advanced statistical analysis beyond simple regression.
Developers can create applications that help users understand their basic wellness and diagnose medical problems. These apps can also potentially predict future medical problems based on early indicators of illness. The apps might even recommend that they see a specialist and have specific tests done based on the analysis. Policy makers and public health officials might also benefit from apps like this, since the apps might recognize disease outbreaks and even potential spikes in illness, before a major problem arises. Cognitive computing platforms, like Watson, can help developers bridge the analytic gap and allow wearables to move from simple devices that collect simple data, to potentially revolutionary platforms for understanding fitness and overall wellness.
The "Quantified Self" movement and cognitive computing
Wearables have in large part been driven by the Quantified Self movement, which focuses on individuals who use technology to monitor themselves in order to gain a greater understanding of their personal health and well-being. Unfortunately, few users have been able to truly benefit from current hardware tools and software offerings due to the previously mentioned analytics gap. This gap caused the Quantified Self movement to be almost completely dominated by a small group of highly technical individuals who have the resources and abilities to extract useful personal information from their wearables. Tools need to be able to help users who are not trained data scientists or physicians find outliers and trends that are specific to their individual health. Users also need tools that can understand or "learn" about them, and guide them to their health and fitness goals.
Currently, a platform like this isn't available to users, partly because it requires a level of intelligence that is hard to develop into software tools. However, IBM Watson is a cognitive computing platform that offers the foundations to help create this new breed of tools. For example, the Question and Answer service coupled with the Text to Speech and Natural Language services can enable individuals to manage, explore, and better understand their own well-being without having a sophisticated understanding of statistics, biology, physiology, and technology. With Watson, you can create cognitive applications for wearables that could truly transform the Quantified Self movement from a domain of the technology elite into a major fitness and health movement for the masses.
Wearables and cognitive computing applications will help deliver on two key benefits of the Quantified Self movement: patient-centered care and a more efficient and effective healthcare system.
Information about your health and wellness from even the most sophisticated of computer platforms cannot replace dedicated physicians or healthcare specialists anytime soon. Wearable developers need to consider how their devices and associated software platforms can help individuals engage with their healthcare providers to develop a more open and collaborative form of healthcare, which is commonly referred to as patient-centered care.
In patient-centered care, healthcare providers collaborate with patients to help them make not just informed choices, but choices that are best for their particular circumstance and situation. With this new model of collaborative care, developers of healthcare wearables can provide a critical role by making their devices and tools securely accessible to a patient's healthcare providers in formats that healthcare providers regularly use. Furthermore, wearable developers can create interfaces and services that are specifically designed to allow both patient and physician to explore the patient's data, and drill down into it. These services might provide the main user an important tool for monitoring their health. Also, they might provide their caregivers a method to more efficiently monitor their patients' health and collaborate with their patients and other caregivers.
In this patient-centered environment, patients might come in and sit down with their healthcare providers and talk to them about their issues. Then, along with their physician, they might review their medical records alongside of their wearable's data. The system might summarize the patients' medical records along with recent data, pointing out potential outliers to the physician that might require greater analysis. The physician might then walk through those outliers with their patient, calling up past medical tests or records. The physician might even compare recent wearable data to past data to help patients understand the physician's analysis or prognosis.
Even more exciting, wearable platform developers might add predictive modeling capabilities for the physician to show their patient likely outcomes of various treatments, therapies, or regimens. For example, a physician might have the system show a patient the outcome of what a modest exercise and diet change would have on their health, based on their specific medical case and aggregates of other medical cases like them. Wearable devices might help healthcare providers make better decisions faster, allowing them to provide better service for more patients.
A more efficient and effective healthcare system
Currently, the medical community is overwhelmed by both patients and data. Many physicians spend only 15-30 minutes with new patients, in which they must rapidly assess a medical history, often provided orally, and make a diagnosis. The result, according to some studies, is 12 million misdiagnoses a year in the United States alone. This issue is exacerbated by poor medical records and by low-fidelity, low-frequency lab tests that are often not even digitized, leaving physicians to make informed guesses. Wearable devices and cognitive applications might fundamentally change how physicians diagnose patients, by providing better quality analytics, by helping recommend treatments, and by providing higher quality and very high frequency data.
With this cognitive computing solution, physicians might review patient records, tapping into sensor data streams to get clearer views of what's really going on with a patient. And, the physician would benefit greatly from the analytic and decision support capabilities from a cognitive computing platform like Watson.
The next generation of wearable device providers might even create notifications for healthcare providers that can allow physicians to create rules to notify them when certain conditions are met. The physicians can follow up remotely by looking directly at a patient's data without having to meet with the patient at all. This enhancement would be extremely powerful, allowing healthcare providers to test various hypotheses and validate them in real time outside of a lab. This scenario is something that is currently only possible and practical under medical or scientific studies. But with wearables and cognitive computing, physicians might manage larger numbers of patients, with clearer visibility into their health, while using better data, and while reducing the potential for tragic mistakes and misdiagnosis.
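The physician-defined notification rules described above can be sketched as a simple rule engine over incoming sensor readings. The rule names, field names, and thresholds here are invented for illustration, not clinical guidance.

```python
# A sketch of rule-based notifications: a physician defines conditions
# over a patient's sensor stream and is notified when one fires.
# Thresholds and field names are illustrative, not clinical guidance.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]

# Hypothetical rules a physician might configure for one patient.
rules = [
    Rule("high resting heart rate", lambda r: r["resting_hr"] > 100),
    Rule("low activity", lambda r: r["daily_steps"] < 300),
]

def check_reading(reading: dict) -> list:
    """Return the names of all rules that a sensor reading triggers."""
    return [rule.name for rule in rules if rule.condition(reading)]

print(check_reading({"resting_hr": 110, "daily_steps": 250}))
print(check_reading({"resting_hr": 60, "daily_steps": 5000}))  # no alerts
```

In a real deployment the triggered rule names would be routed to the physician's notification channel, letting them follow up remotely only when a condition of interest is actually met.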
In this article, we briefly looked at how cognitive computing platforms like IBM Watson can help usher in a new generation of wearables that allow developers to enable better analysis, user interaction, and patient-centered care. We have also looked at how taking advantage of wearables to mix big data, historical user data, and sensor data can be used to more accurately diagnose illness and also predict illness. Finally, we looked at how cognitive applications combined with wearable sensors can help physicians in managing their workloads, reducing misdiagnoses, and providing them with an important tool in understanding their patients' health in real time.
In the next article in this series, "Designing cognitive applications that take advantage of the Watson services," I look at how you might design a cognitive application that uses IBM Watson for wearable sensors.
Building smarter wearables for healthcare, Part 2: Designing cognitive applications that take advantage of the Watson services
by Robi Sen
In this article, I explain why you want to use IBM Watson, a robust cognitive computing platform, for your wearables project. I also describe how you can integrate Watson into your wearables project.
Traditionally, to build the sort of sophisticated applications that were discussed in Part 1 of this series ("Examining how healthcare can benefit from wearables and cognitive computing"), wearable developers would have to build their own machine learning and data analysis services. This is no small feat without experienced software engineers, machine learning specialists, and data scientists. After creating these expert services, developers would then have to develop their application and system architectures, and test them over time, to make sure that they meet the needs of their users. For these reasons, few wearable vendors have provided anything more than the most basic of tools.
IBM Watson and the Watson services available on the IBM Bluemix platform offer wearable developers numerous benefits. Some of these benefits include faster time to market, lower up-front investment, help meeting regulatory requirements, built-in security, and of course the cognitive feature set. Using the Watson APIs can potentially save companies several person-years of effort. More importantly, wearable vendors and service providers can deliver paradigm-changing capabilities without having to hire their own machine learning developers or data scientists. With Watson services, wearable developers can now focus on their sensors and their product visions without being distracted by developing the complex services and infrastructure that are required to make them truly useful.
You can mix these Watson services with your own application code and other services, including those services that are offered by IBM as part of its Bluemix platform. The Watson APIs are RESTful services, which means you can create much more complex applications or systems driven by them than you might be able to build on your own. You can use the parts of the Watson APIs that best complement your efforts while you develop and build out a system that satisfies the needs of your business and application. Or, you can simply use the whole IBM ecosystem and platform within Bluemix to create your applications.
Watson Services RESTful APIs
To get a sense of how Watson services can be used within your wearable offering, you need to understand how Watson services work, how they fit into a larger developer ecosystem, and what benefits you can derive from them. Currently, Watson has a small set of APIs, which you can find in the Watson Services API catalog, that take advantage of its cognitive computing capabilities.
Currently, the Watson Services catalog includes the following services, in various release states (GA, Beta, Experimental):
Designing a wearable offering: A Watson API example
Let's examine a detailed example to more fully understand the benefits of developing wearable offerings with the Watson APIs. Figure 1 shows a diagram of an application that takes data from a wearable sensor and sends it to a smartphone. That smartphone communicates with an application that stores relevant user data and sensor data, but also pulls data from a large healthcare data set for doing statistical comparison.
Figure 1. Simple cognitive application using the Watson APIs that gathers data from wearables
In this simple example, the application uses several of the Watson services. Most notably, it uses the Question and Answer service, which helps users ask natural language questions, such as "How does my data compare to others like me?" or "What does it mean to have a prolonged, high resting heart rate after I go for a run?" This application also uses the Watson Tradeoff Analytics service, which lets users compare their diet, type of activity, weight, caloric burn, and sleeping patterns. Over time, this comparison can help them visualize what changes to make to their daily regimen.
This health app might take many months to build from scratch. For a company that already has a wearable analytic application, however, it can easily be done using the Watson RESTful APIs without making major changes to the existing application. Indeed, developers have used the Watson APIs to build similar cognitive applications in as little as 48 hours with little more than a basic web programming and development background. Developers who create a new application or service can do everything in their cloud of choice, with their programming language of choice, or directly in Bluemix.
While the Watson APIs are certainly easy to use and integrate into applications, successful applications still require good up-front designs. Because the power of Watson comes from the system's ability to learn, learning from good content and feedback from users and domain experts, you need to carefully consider and decide on the data that Watson will use to learn from and to provide responses back to users.
Guiding users through your application is often key to making a successful cognitive application. Watson's powerful machine learning capabilities work best in clear information domains, with clear queries that produce clear responses. For example, say we have a system that pulls wearable sensor data from a user and compares that data, along with their specific diet, exercise, and demographics, to national healthcare data. We then use the Watson Question and Answer API to let users ask the system a variety of specific health and fitness questions. "Is my blood pressure too high?" is a good question if the application is designed to contextualize it: for example, a screen that shows recent blood pressure data, where the system compares that data to aggregated data and might respond that the user's blood pressure and other collected metrics are well within national trends for their specific demographics. The same question might be a bad one to ask the system if it is not contextualized through good application design. Another major consideration when you design applications for Watson is making sure that you have good content for the system to work with.
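The shape of such a RESTful question request can be sketched as follows. The endpoint URL and JSON payload layout below are placeholders, not the real Watson service contract; consult the Watson Services API catalog for the actual endpoints and payload formats.

```python
# A sketch of building a JSON POST request for a QA-style REST service.
# The URL and payload shape are placeholders, not the real Watson API.
import json
import urllib.request

SERVICE_URL = "https://example.com/qa-service/v1/question"  # placeholder

def build_question_request(question: str) -> urllib.request.Request:
    """Construct (but do not send) a JSON POST for a QA-style service."""
    payload = json.dumps({"question": {"questionText": question}}).encode("utf-8")
    req = urllib.request.Request(SERVICE_URL, data=payload, method="POST")
    req.add_header("Content-Type", "application/json")
    return req

req = build_question_request("Is my blood pressure too high?")
print(req.get_method(), req.full_url)
```

Because the interface is plain HTTP plus JSON, the same request can be issued from any language or cloud the wearable vendor already uses, which is what makes the RESTful design easy to bolt onto an existing application.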
Data is still key
Because Watson uses machine learning to derive relationships and find relevant information, having good data that is related to your problem domain is critical to using Watson services. Watson services can pull data from a large variety of data sources, both structured and unstructured. In many cases, you need to prepare your data before you ingest it into a Watson Content Store by acquiring, cleansing, aggregating, and validating that data.
After you have selected and prepared the content that you want to use, you need to look at tuning Watson to use that data more effectively, by using Watson to enrich the data. IBM offers a set of tools that essentially focus on having a human domain expert work with Watson to help develop question and answer pairs and also to train the system on its responses. This tuning helps Watson get better at finding the best response to a user's questions from its knowledge base. Watson can be further trained by its users, who can vote on or rank responses by how useful they are; Watson takes this data and further tunes its responses based on it. You can accomplish this fine-tuning of your content by ensuring that the Alpha and Beta releases of your applications allow users to respond, thereby tuning the system with real users.
In this article, I presented some reasons why you should use Watson in your wearable offering. I also explored the process of designing a Watson cognitive application.
In the next article in this series, I will look at how to start building a Watson application for a hypothetical wearable offering. I'll explain in greater detail how to integrate Watson into a real world application.
BYOD on the decline in healthcare organizations, survey finds
Can Google Glass Transform Medical Education?
Google Glass looks exciting for the medical world, and presents a particularly powerful opportunity for medical education (for examples, see the Forbes article here or Phys.org here). A white paper by the Department of Emergency Medicine, Singapore General Hospital says, "simulation-based training has opened up a new educational application in medicine. It can develop health professionals' knowledge, skills, and attitudes, whilst protecting patients from unnecessary risks". Google Glass is taking simulation to the next level and making it more real, as the patients treated are real.
Yet the underlying concept of simulation-based-learning in medicine isn’t new. Neither are the individual components of Google Glass (such as the video recording feature and the possibility of sharing procedures online with any number of students). The biggest innovation might be having all this in one device. As Aristotle said, the whole is more than the sum of its parts.
Medical education is often a two-stage process. In stage one, doctors in training need to study voluminous tomes and pass exams; stage one is the collection and storing of knowledge, perhaps too much knowledge. Richard Barker says in his book 2030: The Future of Medicine that "as our bio-medical insights continue to fragment traditional diseases into multiple molecular disorders, keeping pace with advances gets tougher and tougher; ... 'head knowledge' needs to be complemented by online decision support, distilling the wisdom and experience of the best specialist and putting it at the fingertips of the practitioner". In other words, clinicians are starting to need real-time knowledge on tap.
Stage two focuses on learning through direct patient contact under the guidance of seniors, and Barker’s position suggests that stage two may never really end. Google Glass would support this stage of the curriculum, helping to simulate the practice of medicine, teach decision making, and then allow collaboration long after qualification. With a teacher demonstrating on patients (or that earlier revolution: a mannequin) with a headset camera, the learner is brought straight into the operating theater.
Google Glass is similar to a standard pair of glasses. It has an optical head-mounted display, sitting just above the right eye. Features include a built-in GPS, microphone and Bluetooth, and a camera which can record and live-stream videos to a Google hangout. Particularly useful is voice activation which would allow surgeons to, for example, do a web search for latest research or access EMRs or even real-time patient metrics without “breaking scrub” (compromising operating room sterility). As well as improving the provision of care, this ought to give students a more holistic understanding of each case.
Dr. Rafael J. Grossmann, Surgeon, mHealth Innovator and Google Glass Explorer, was the first to perform a Google Glass-aided surgery, including remote teaching contexts and offering clinical advice remotely via Google hangout. Orthopaedic surgeon Dr. Selene Parekh followed with a demo of foot and ankle surgery, and then plastic surgeon Dr. Anil Shah used the device while carrying out a rhinoplasty. Recently, Medical News Today wrote about a surgeon who live-streamed a procedure using Google Glass and a tablet device.
Grossmann says that exposing students to the real life of a surgeon and its problems is critical for training, and that students should learn and mimic best practices early on. Furthermore, he adds that Google Glass education goes beyond the operating room: “Google Glass is a great start with practically limitless opportunities. For example, how to connect with patients, how to teach bedside manner, how to prepare patients for surgery can all be best taught from real life examples. Google Glass records it and demonstrates best practice, from A to Z through the responsibilities of a practitioner,” he says.
Plus, of course, these Google Glass-recorded procedures can be shared across the globe. Innovator Armando Iandolo, co-founder of Surgery Academy, and his team have created an application for Google Glass that lets surgeons stream a heads-up view of procedures to students anywhere in the world. The big, bold innovation is to connect these streams in MOOCs (massive open online courses), says Iandolo. He and his co-founder are currently crowd-funding the idea on Indiegogo. “Students will access an operating theatre online and watch a surgical intervention, live, for the procedure of their choice”, says Iandolo. “As we enter universities, we want to become an integral part of the medical student’s study curriculum”.
MOOCs aren’t new either, but with the Surgery Academy everything seems to fall in place. By bringing the learner straight into theatre, simulation via Google Glass makes courses operate more like apprenticeships.
The patient would need to give their approval, but the arrangement is arguably quite reassuring: which practitioner – and one good enough to teach – wants to screw up while being live-streamed to hundreds of students and fellow physicians?
The speed at which Google Glass eventually becomes a standard educational support tool is less certain, and we can learn from previous waves of innovation. In 2010, the Northern Ontario School of Medicine introduced a new mobile device program (medical students received laptops, iPhones and iPads). To assess its value, educators there studied how medical learners use mobile technologies. Their white paper concluded, “Students would adapt their use of mobile devices to the learning cultures and contexts they find themselves in.” Device value needs to be taught. It depends on how welcome new tech is perceived to be in classrooms, by students, teachers, and the wider ecosystem.
A typical fear is that, especially early in the curriculum (stage one above), medical students will miss out on basic knowledge. Search and find functions make it easier to zero in on an answer, but perhaps without the rich context and basic knowledge provided by reading cover to cover. Students – and teachers – could work just ‘for the test’.
Well, books have always had indices. It’s the process of search which has been accelerated, and there is no evidence that students would treat a digital medical textbook differently than its paperback version. In fact, digital isn’t a replacement for the traditional textbook; it’s an opportunity to augment it. There is a generational shift in the learning styles of medical students, Mihir Gupta writes in a KevinMD article. Digital allows the stodgy textbook to be augmented with visuals and multimedia, which will suit certain learning styles. “Innovative digital resources are vital for helping students retain knowledge and simplify difficult concepts”, says Gupta. These new resources are great for quick access to updated medical knowledge, but “it will not replace textbook learning, nor should it”.
Lucien Engelen, Director of the Radboud Reshape Center at Radboud University Medical Center, is currently working on various applications for Google Glass in medicine. He says that the only way to get Google Glass into education is “to make it part of education innovation”. He says, “Take some high profile doctors, professors and nurses and some patients and have them run some tests. All of a sudden the advantages (of Google Glass) seem to fall in place seamlessly”.
Frances Dare is Managing Director of Accenture Connected Health Services, which has partnered with Philips on a Google Glass proof of concept. She agrees with Engelen, cautioning that it is important to create an environment in which experimentation can take place and to understand the type of training needed to prepare clinicians to use Google Glass effectively and safely in practice.
But don’t bet against Google Glass. After all, educators have argued for decades over calculators in math class. Engelen says that he really doesn’t think of Google Glass as something special: it’s just another computer form-factor facing the same barriers of acceptance. It will take some time and discussion over privacy to achieve it, but the new wave is coming.
by Nick Saalfeld and Ben Heubl, Contributing Writers at HIT Consultant
Nick Saalfeld is a corporate journalist and entrepreneur based in London, UK. He has written for seven years about health-tech for clients including Microsoft, Imprivata and Howareyou.com. He also co-founded Yoodoo, the online platform devoted to delivering behavioral change through learning experiences, with many applications in public health and clinical/pharmaceutical adherence.
Ben Heubl is a digital health advocate, activist and journalist for health 2.0 innovation. He is a speaker at various healthcare innovation conferences and events, a TEDMED delegate, founder of the not-for-profit organization Health 2.0 Copenhagen, and a mentor at the HealthXL accelerator, and he currently writes passionately for various online magazines on digital health innovation and technology. Ben also currently supports a UK health innovation SME in changing how citizens access healthcare. You can follow him on Twitter at @benheub
Can Google Glass Transform Medical Education? by Nick Saalfeld and Ben Heubl
Can innovative solutions for mobile care meet your needs?
The Visiting Nurse Association (VNA) California wanted to update its mobile technology to improve timely access to data. It had a few requirements:
Download the case study, Mobile Productivity for a Home Health Workforce, to learn how VNA California gave its personnel greater performance flexibility and improved security with Intel®-based mobile solutions.
Cancer Antibodies & Assays
Cancer research is dependent on reliable tools for interrogating and identifying the phenotypic differences between cancer cells and corresponding normal cells of the same lineage. The recent output of data from genomic, proteomic, and epigenomic studies comparing tumor and non-tumor cells points to several key traits or “hallmarks”, shared by most tumor types, that drive disease progression.
These hallmarks of cancer are important—not only because they represent opportunities for therapeutic intervention, but because they are opportunities to use tumors as models to decipher the signaling pathways underlying both normal and diseased cellular processes.
Recognizing both the tremendous opportunities and the challenges facing cancer research, Merck Millipore has been dedicated to developing and refining products for the study of cancer. With Merck Millipore’s comprehensive portfolio, including the Upstate, Chemicon, and Calbiochem brands of assay kits, reagents and antibodies, researchers can count on dependable, high quality solutions for analyzing the hallmarks of cancer.
Please read the attached whitepaper.