Glosario eSalud | eHealth Glossary
Biotechnology Accessible to All
Two Crowdfunded Machines Make Biotechnology Accessible to All
“I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.”
Freeman Dyson wrote these words in a piece called “Our Biotech Future” in The New York Review of Books in 2007. By ‘domestication’, he means, quite literally, getting the tools and technology traditionally reserved for high-tech labs and universities “into the hands of housewives and children.”
Seven years later, we have a thriving group of scientists, entrepreneurs, and enthusiastic amateurs bringing us technology to do just that.
A recent Kickstarter campaign called Open qPCR offers a sleek touch-screen PCR thermocycler with a price tag of $1500 (most commercial PCR machines cost upwards of $20,000). The machine not only copies DNA but also converts it into easily understandable data. It comes from the same team who brought us the original OpenPCR, which was successfully funded in 2010.
Open qPCR is a real-time diagnostic tool able to detect foodborne contaminants like E. coli and Listeria and to help track the spread of infections like HIV, malaria, and Ebola. It offers a web interface for visually designing experiments, powerful functionality for experimentation, and clear positive/negative results for the end user. It’s accessible enough that a tech-savvy mother can determine whether the tomatoes from the store are truly GMO-free or whether her dog really is purebred.
With under a week to go, the campaign has tripled its original goal of $50k, and new stretch goals have been announced to help in the fight against Ebola.
A concurrently running Kickstarter, called OpenTrons, offers a $2000 “liquid-handling” robot which automates manual micropipetting work, which accounts for much of the tedious and error-prone human labor needed to carry out an experiment.
OpenTrons allows users to design protocols in software called MixBio, a modular drag-and-drop interface, and to run the experiment on OpenTrons, which handles the micropipetting work. The software also lets you receive feedback on your protocols from the community, rapidly accelerating innovation, collaboration, and progress.
The campaign is down to its last few days, but it recently hit its $100k goal.
Truth is, biohacking is harder and way less glamorous than it sounds. So far, scientists and researchers who don’t have access to high cost robotic equipment to automate these processes spend most of their time moving tiny amounts of liquid around by hand. True progress takes a very long time when you’re running experiments manually and in the vacuum of an isolated lab.
This is precisely what makes OpenTrons and Open qPCR powerful – not only are they accessible and affordable, but they both offer software that makes data analysis and collaboration much easier.
As Dyson stated in his essay, building the tools necessary to carry out valuable scientific experiments while making them widely available and user friendly is the next step toward the domestication of biotech. But he also asked whether the domestication of biotechnology could be, and ought to be, stopped.
Projects like OpenTrons and Open qPCR make it hard to deny that we are indeed moving swiftly into an age of domesticated biotechnology. In fact, the movement is a diverse, decentralized, global network. Stanford recently opened a state-of-the-art bioengineering building on its campus, and teenagers from Brooklyn to Paris are taking ‘biohacking’ classes for fun.
There’s no ‘off’ switch to hit to shut momentum like this down, but there’s also no compelling reason to do so as the potential value greatly outweighs the risk.
Ellen Jorgensen makes a good case for this point in her TED talk, saying “[The United Nations] concluded the power of this technology for positive was much greater than the risk for negative, and they even looked specifically at the DIYbio community, and they noted, not surprisingly, that the press had a tendency to consistently overestimate our capabilities and underestimate our ethics.”
In terms of regulation, so far the community is doing a solid job of imposing limits and regulating itself. Jorgensen goes on to say that the global community got together in 2011 and wrote a common code of ethics, adding “that’s a lot more than conventional science has done.” Of course, this process of setting and regulating limitations will evolve over time and will eventually be adopted into legislation, but for the time being it’s happening in a very self-driven, organic way. Groups like Jorgensen’s GenSpace in Brooklyn are working directly with the FBI to educate law-enforcement officials on the work and culture of the movement and identify potentially dangerous situations.
Today, we’ve not even begun to imagine the value this technology can bring once it’s in the homes of individuals.
In 1943, Thomas Watson of IBM now-famously said, “I think there is a world market for maybe five computers.” Of course, the thing he called a computer took up an entire room. We don’t yet know what the personal computer version of biotech and synthetic biology will be, but we can guess that Open qPCR and OpenTrons are the forefathers of whatever it turns out to be.
The hope is that coming generations build the beautifully vibrant future that Dyson imagines, where designing genomes becomes a new personal art, as creative as painting or sculpture, and “a new generation of artists will be writing genomes as fluently as Blake and Byron wrote verses.”
Boost patient experience at first point of contact: The call center
I recently worked with a hospital to improve its cancer program. It had wonderful doctors and an up-to-date facility. Nurses were very patient-focused and the staff smiled a lot. What could be better?
Yet new patient volumes were sluggish and growth elusive. The hospital found the highly competitive local market very challenging, especially because differentiation--a meaningful point of difference--was essentially non-existent. In truth, the area hospitals were all much the same. How could it compete? Most of the ideas focused on the patient experience inside the hospital.
So instead, we decided to see what it was like as an outsider trying to find out more about the hospital options if we were diagnosed with cancer. We began our inquiry with observational research and by shopping the experience. We called hospitals in the region, as well as some nationally recognized leaders in cancer care, hoping to learn something of value.
We contacted 20 hospitals and quickly realized something was clearly missing: The basics of a good (let alone great) customer experience. I invite you to call your own call center and see how it presents your excellent services to your consumer.
The typical call experience went something like this:
We experienced 18 of these types of encounters.
Then we called Cancer Treatment Centers of America, Dana Farber Cancer Institute, Johns Hopkins Medicine and Massachusetts General Hospital. While each was different, they at least had an approach to cancer inquiries and cancer care that demonstrated they might actually care about a caller requesting information.
Of these national brands, Cancer Treatment Centers of America was clearly in another space. The operator was immediately engaged, showed empathy towards me and expressed concern for my "father." She knew whom to connect me with--their cancer advocate, who introduced herself, expressed her concern for my father and explained how the Centers deliver care for cancer patients. Their well-thought-out call center process was all about making both the patient and the family feel important, cared about and listened to. The process was also easy to understand and made sense.
What startled us was the sorry state of the rest of the call centers. The basic caring of the other healthcare organizations was totally missing in action at the first point of contact. Any effort to understand the needs of a cancer patient at that crucial point was back in the dark ages. The operators are supposedly there to answer a call within three rings and direct the caller to where he/she needs to go. For our part, we would have been happy if they had, at the very least, answered the phone in fewer than 10 rings and greeted us with kindness.
True, most calls to a hospital's central number are from people wanting to be connected to a patient, seeking a physician or looking for an administrative department--billing or admissions. We clearly threw them a curveball by asking for information about their cancer protocols. But was that enough of an excuse not to:
Which led us to wonder: Why? With all the innovative work going on these days to respond to healthcare reform, almost everyone, it seems, forgets the telephone center--a necessary evil.
From our perspective, the call center seems an easy point of differentiation. How can a healthcare institution make a person's overall experience satisfyingly patient-focused and person-centered if they can't even answer the phones well? And conversely, if they could create an amazing experience at that first touch point, maybe they could do the same throughout the entire patient and family experience.
Overwhelmingly, this whole experience felt like a time to pause and focus on the basics. While not innovative or sexy, the call center is essential. It must reflect well on you and add value, not dysfunction, to your organization.
Remember: you don't get a second chance to make a first impression. Your call center is the first contact someone has with you. You certainly don't want to go to a hospital that cannot even get the phones answered satisfactorily or provide an operator who can genuinely engage with you with emotion and empathy. It may seem small, but really, it is huge. And healthcare organizations had better start paying attention, soon.
Andrea J. Simon, Ph.D., is a former marketing, branding and culture change senior vice president at Hurley Medical Center in Flint, Michigan. She also is president and CEO of Simon Associates Management Consultants.
BPM for Dummies
A few years ago no one had heard of Business Process Management (BPM), but it has burst onto the global scene to become the most popular business-management and technology trend of the decade. If you work in any company or industry, public or private, you have almost certainly heard talk of the shift toward process, or of topics such as process management or process improvement. You may know about process-improvement methods such as Six Sigma, or about new technologies such as Business Activity Monitoring (BAM) or Service-Oriented Architecture (SOA). BPM represents the culmination of decades of collective experience, thinking, and professional development in business management. It puts the customer first. It focuses on the business. It empowers individuals in every corner of an enterprise to achieve greater success. It brings people and systems together. BPM is where all the lofty ambitions and best strategies come together. Put all of this together and you get a mix that can seem rather confusing. But in reality, BPM is a very simple concept: it is a set of methods, tools, and technologies used to design, represent, analyze, and control operational business processes; a process-centered approach to improving performance that combines information technologies with process and governance methodologies.
Breakthrough Technologies in Surgery
Building smarter wearables for healthcare - Part 1
Building smarter wearables for healthcare, Part 1: Examining how healthcare can benefit from wearables and cognitive computing
by Robi Sen
In this article, I examine current trends in wearable computing in healthcare and explore the gap between what current hardware offerings can do and what their analytic capabilities actually deliver. You'll learn how cognitive computing platforms like Watson can accelerate time to market for wearable device makers, and how Watson can close the gap between the potential of wearables and their current, rather limited offerings.
Wearables and healthcare
One of the hottest trends among hardware developers is the development of small wearable sensors, or wearables, specifically for collecting health and lifestyle data. This trend includes everything from simple devices such as the Fitbit to more sophisticated Lab-on-a-Chip devices that measure everything from blood sugar and hormone levels to complex proteins.
Unfortunately, most of these devices generate data that is underutilized. Either the user cannot derive anything but simple metrics, such as step count, from the devices, or the data is just not accessible to users. This underutilization often occurs because many hardware developers cannot afford to develop either the big data capabilities that are needed to manage all that data or the analytic capabilities that are needed to derive useful information from the wearables.
Services like the IBM Watson API, however, provide developers with the ability to offer valuable information to their users who use wearables, without having to build their own PaaS offerings. With the help of Watson, developers can create solutions that combine and compare data, find patterns and look for trends in that data, and even learn about the patients who are using the wearables.
For this article, I define a wearable as a device with central processing capability and sensors, designed to provide services to the user with as little user interaction as possible for a specific task or need. A smartphone can be strapped to your arm as a fitness sensor, but it isn't designed for that: it requires a lot of interaction, such as downloading an application, enabling it, and then strapping the setup to your arm in an often cumbersome manner. A smartphone can be worn, but it is not a wearable. A good example of a wearable is a Fitbit device, which is designed to help a user track their steps or activity with the idea of promoting healthy behaviors. Keeping this definition in mind, wearables offer a plethora of potential capabilities within healthcare, if the information and data created by these devices can be turned into actionable intelligence and insights.
The analytics gap
The Mi Band, Fitbit, and other wearables can collect a lot of data on a user. However, data such as how many steps you take in a day, no matter its frequency or accuracy, has little actual correlation with fitness and health unless you can contextualize it. By contextualizing, I mean comparing your activity to your age, sex, weight, and overall health. For example, if you're 20 years old and in good health, taking 700 steps a day is not particularly active for your demographic. But if you're 80 and recovering from knee surgery, it's an impressive amount of activity. Most activity sensors in wearables are not very useful on their own: the accelerometers and magnetometers are not accurate, they can't differentiate between activities like walking and strength training, and they are often terrible at counting calories. Yet, with enough computational power and data, you can make the data from wearables like these far more relevant as a personal health and fitness monitoring tool.
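The contextualization step described above can be sketched in a few lines of Python. The baseline values, age bands, and thresholds below are invented for illustration; they are not clinical data or part of any real API.

```python
# Illustrative sketch: contextualizing a raw step count against a
# demographic baseline. All baseline numbers are made up for demonstration.
BASELINES = {
    ("18-29", "healthy"): 9000,        # assumed average daily steps
    ("80+", "post-knee-surgery"): 500, # assumed average daily steps
}

def contextualize_steps(steps, age_band, health_status):
    """Label a step count relative to the user's demographic baseline."""
    baseline = BASELINES[(age_band, health_status)]
    ratio = steps / baseline
    if ratio < 0.25:
        return "low"
    if ratio < 1.0:
        return "below baseline"
    return "active"

# The same 700 steps reads very differently in context:
print(contextualize_steps(700, "18-29", "healthy"))          # low
print(contextualize_steps(700, "80+", "post-knee-surgery"))  # active
```

The point is not the particular thresholds but that raw sensor output only becomes meaningful once it is compared against who the user is.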
Taking advantage of Watson APIs
IBM's Watson offers developers of wearables a sophisticated supercomputer and cognitive computing system as a service. This service allows savvy developers to rapidly design and develop applications that can fuse data that a user provides on weight, diet, health, and much more. For example, data can be collected from your activity sensors and potentially from other data sources, such as sleep monitors, glucose monitors, an Internet-connected scale, and even your electronic medical records. Watson APIs can help intelligently fuse this data together, but more importantly, they can derive meaningful information from it. For example, a Fitbit offers data visualization like that shown in Figure 1, which isn't that useful.
Figure 1. Example of the sort of visual analytics from Fitbit
Fusing data from wearables with personal health data
To make data from wearables more useful, you need to not only analyze a user's data from their wearable, but also fuse it with their personal health data. You can contextualize this fused data further with similar data from other individuals with similar metrics, thus providing you with a meaningful statistical analysis.
For example, in Figure 2 you can see an example of Watson combining a patient's wearable data with their electronic medical records and then comparing it to patients with similar criteria. In this case, the goal is to get a sense of the patient's risk of heart disease by following the Framingham Criteria, a methodology that is used by physicians to evaluate the risk of cardiac failure. Read the full paper, "Interactive Intervention Analysis," presented by David Gotz and Krist Wongsuphasawat at the American Medical Informatics Association Annual Symposium in 2012. (See Resources for more information.)
Figure 2. An example of Watson-generated visualization of patient data compared to similar patients. This image is excerpted from this report: "Interactive Intervention Analysis." by David Gotz and Krist Wongsuphasawat. American Medical Informatics Association Annual Symposium (AMIA), Chicago, IL (2012).
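The cohort comparison described above can be reduced to a minimal sketch: place a user's fused metric within a distribution of similar users. The cohort values here are invented for illustration.

```python
# Sketch: approximate percentile rank of a user's daily steps within a
# cohort of demographically similar users (values invented for illustration).
from bisect import bisect_left

def percentile_in_cohort(value, cohort_values):
    """Rough percentile rank of `value` among the cohort's values."""
    ordered = sorted(cohort_values)
    return 100.0 * bisect_left(ordered, value) / len(ordered)

# Invented daily step totals for users with similar age, weight, and health:
cohort_daily_steps = [3000, 4500, 5200, 6100, 7000, 8400, 9100, 10500]

print(percentile_in_cohort(6100, cohort_daily_steps))  # 37.5
```

A real system would of course condition the cohort on many more variables, but the statistical idea of "compared to patients with similar criteria" starts here.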
Because Watson has an open design as a platform with simple RESTful APIs, developers can pull data from popular sensors and from sites that store a user's DNA analysis. They can access sites where users enter their diet information or their medical records, and even pull from the National Institutes of Health's updated data sets. Comparing that data based on when users are active, the type of their activity, where they live, and changes in their weight opens the door to more advanced statistical analysis beyond simple regression.
Developers can create applications that help users understand their basic wellness and diagnose medical problems. These apps can also potentially predict future medical problems based on early indicators of illness. The apps might even recommend that they see a specialist and have specific tests done based on the analysis. Policy makers and public health officials might also benefit from apps like this, since the apps might recognize disease outbreaks and even potential spikes in illness, before a major problem arises. Cognitive computing platforms, like Watson, can help developers bridge the analytic gap and allow wearables to move from simple devices that collect simple data, to potentially revolutionary platforms for understanding fitness and overall wellness.
The "Quantified Self" movement and cognitive computing
Wearables have in large part been driven by the Quantified Self movement, whose adherents use technology to monitor themselves in order to gain a greater understanding of their personal health and well-being. Unfortunately, few users have been able to truly benefit from current hardware tools and software offerings because of the previously mentioned analytics gap. This gap has caused the Quantified Self movement to be almost completely dominated by a small group of highly technical individuals who have the resources and abilities to extract useful personal information from their wearables. Tools need to help users who are not trained data scientists or physicians find the outliers and trends that are specific to their individual health. Users also need tools that can understand, or "learn about," them and guide them toward their health and fitness goals.
Currently, a platform like this isn't available to users, partly because it requires a level of intelligence that is hard to build into software tools. However, IBM Watson is a cognitive computing platform that offers the foundations for creating this new breed of tools. For example, the Question and Answer service, coupled with the Text to Speech and Natural Language services, can enable individuals to manage, explore, and better understand their own well-being without needing a sophisticated understanding of statistics, biology, physiology, and technology. With Watson, you can create cognitive applications for wearables that would transform the Quantified Self movement from the domain of the technology elite into a major fitness and health movement for the masses.
Wearables and cognitive computing applications will help deliver on two key benefits of the Quantified Self movement: patient-centered care and a more efficient and effective healthcare system.
Patient-centered care
Information about your health and wellness from even the most sophisticated computer platform cannot replace dedicated physicians or healthcare specialists anytime soon. Wearable developers need to consider how their devices and associated software platforms can help individuals engage with their healthcare providers to develop a more open and collaborative form of healthcare, commonly referred to as patient-centered care.
In patient-centered care, healthcare providers collaborate with patients to help them make not just informed choices, but choices that are best for their particular circumstances. With this new model of collaborative care, developers of healthcare wearables can play a critical role by making their devices and tools securely accessible to a patient's healthcare providers in formats that those providers regularly use. Furthermore, wearable developers can create interfaces and services specifically designed to let both patient and physician explore the patient's data and drill down into it. These services might give the primary user an important tool for monitoring their health, and give caregivers a method to more efficiently monitor their patients' health and collaborate with their patients and other caregivers.
In this patient-centered environment, patients might come in, sit down with their healthcare providers, and talk to them about their issues. Then, along with their physician, they might review their medical records alongside their wearable's data. The system might summarize the patient's medical records along with recent data, pointing out potential outliers that might require greater analysis. The physician might then walk through those outliers with the patient, calling up past medical tests or records. The physician might even compare recent wearable data to past data to help the patient understand the physician's analysis or prognosis.
Even more exciting, wearable platform developers might add predictive modeling capabilities that let the physician show their patient the likely outcomes of various treatments, therapies, or regimens. For example, a physician might have the system show a patient the effect that a modest exercise and diet change would have on their health, based on their specific medical case and aggregates of other medical cases like theirs. Wearable devices might help healthcare providers make better decisions faster, allowing them to provide better service for more patients.
A more efficient and effective healthcare system
Currently, the medical community is overwhelmed by both patients and data. Many physicians spend only 15-30 minutes with new patients, in which they must rapidly assess a medical history, often provided orally, and make a diagnosis. The result, according to some studies, is 12 million misdiagnoses a year in the United States alone. The issue is exacerbated by poor medical records and by low-fidelity, low-frequency lab tests that are often not even digitized, leaving physicians to make informed guesses. Wearable devices and cognitive applications might fundamentally change how physicians diagnose patients by providing better-quality analytics, by helping recommend treatments, and by providing higher-quality and very high-frequency data.
With this cognitive computing solution, physicians might review patient records, tapping into sensor data streams to get clearer views of what's really going on with a patient. And, the physician would benefit greatly from the analytic and decision support capabilities from a cognitive computing platform like Watson.
The next generation of wearable device providers might even create notifications for healthcare providers that can allow physicians to create rules to notify them when certain conditions are met. The physicians can follow up remotely by looking directly at a patient's data without having to meet with the patient at all. This enhancement would be extremely powerful, allowing healthcare providers to test various hypotheses and validate them in real time outside of a lab. This scenario is something that is currently only possible and practical under medical or scientific studies. But with wearables and cognitive computing, physicians might manage larger numbers of patients, with clearer visibility into their health, while using better data, and while reducing the potential for tragic mistakes and misdiagnosis.
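As a rough sketch of the physician-defined notification rules described above (the metric names and thresholds here are hypothetical, not part of any Watson or wearable-vendor API):

```python
# Hypothetical notification rules evaluated over a stream of wearable readings.
def make_rule(metric, op, threshold):
    """Build a predicate such as resting_heart_rate > 100."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    compare = ops[op]
    return lambda reading: metric in reading and compare(reading[metric], threshold)

high_resting_hr = make_rule("resting_heart_rate", ">", 100)

def pending_alerts(readings, rule):
    """Return the readings that should trigger a physician notification."""
    return [r for r in readings if rule(r)]

stream = [
    {"patient": "A", "resting_heart_rate": 72},
    {"patient": "B", "resting_heart_rate": 110},
]
print(pending_alerts(stream, high_resting_hr))  # only patient B's reading
```

In a production system the rules would live server-side and fire asynchronously, but the core idea is exactly this: a condition the physician authors, evaluated continuously against incoming sensor data.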
In this article, we briefly looked at how cognitive computing platforms like IBM Watson can help usher in a new generation of wearables that allow developers to enable better analysis, user interaction, and patient-centered care. We have also looked at how taking advantage of wearables to mix big data, historical user data, and sensor data can be used to more accurately diagnose illness and also predict illness. Finally, we looked at how cognitive applications combined with wearable sensors can help physicians in managing their workloads, reducing misdiagnoses, and providing them with an important tool in understanding their patients' health in real time.
In the next article in this series, "Designing cognitive applications that take advantage of the Watson services," I look at how you might design a cognitive application that uses IBM Watson for wearable sensors.
Building smarter wearables for healthcare - Part 2
Building smarter wearables for healthcare, Part 2: Designing cognitive applications that take advantage of the Watson services
by Robi Sen
In this article, I explain why you want to use IBM Watson, a robust cognitive computing platform, for your wearables project. I also describe how you can integrate Watson into your wearables project.
Traditionally, to build the sort of sophisticated applications that were discussed in Part 1 of this series ("Examining how healthcare can benefit from wearables and cognitive computing"), wearable developers would have to build their own machine learning and data analysis services. This task is no small feat without experienced software engineers, machine learning specialists, and data scientists. After creating these expert services, developers would have to design their application and system architectures, and test them over time, to make sure that they meet the needs of their users. For these reasons, few wearable vendors have provided anything more than the most basic of tools.
IBM Watson and the Watson services available on the IBM Bluemix platform offer wearable developers numerous benefits, including faster time to market, minimized investment, easier regulatory compliance, built-in security, and of course the cognitive feature set. Using the Watson APIs can potentially save companies several person-years of effort. More importantly, wearable vendors and service providers can deliver paradigm-changing capabilities without having to hire their own machine learning developers or data scientists. With Watson services, wearable developers can now focus on their sensors and their product visions without being distracted by developing the complex services and infrastructure that are required to make them truly useful.
You can mix these Watson services with your own application code and other services, including those offered by IBM as part of its Bluemix platform. The Watson APIs are RESTful services, which means you can create much more complex applications or systems driven by them than you might be able to build on your own. You can also use the parts of the Watson APIs that best complement your efforts while you develop and build out a system that satisfies the needs of your business and application. Or, you can simply use the whole IBM ecosystem and platform within Bluemix to create your applications.
Watson Services RESTful APIs
To get a sense of how Watson services can be used within your wearable offering, you need to understand how Watson services work, how they fit into a larger developer ecosystem, and what benefits you can derive from them. Currently, Watson has a small set of APIs, which you can find in the Watson Services API catalog, that take advantage of its cognitive computing capabilities.
Currently, the Watson Services catalog includes the following services, in various release states (GA, Beta, Experimental):
Designing a wearable offering: A Watson API example
Let's examine a detailed example to more fully understand the benefits of developing wearable offerings with the Watson APIs. Figure 1 shows a diagram of an application that takes data from a wearable sensor and sends it to a smartphone. That smartphone communicates with an application that stores relevant user data and sensor data, but also pulls data from a large healthcare data set for doing statistical comparison.
Figure 1. Simple cognitive application using the Watson APIs that gathers data from wearables
In this simple example, the application uses several of the Watson services. Most notably, it uses the Question and Answer service, which helps users ask natural language questions, such as "How does my data compare to others like me?" or "What does it mean to have a prolonged, high resting heart rate after I go for a run?" This application also uses the Watson Tradeoff Analytics service, which lets users compare their diet, type of activity, weight, caloric burn, and sleeping patterns. Over time, this comparison can help them visualize what changes to make to their daily regimen.
This health app might take many months to build from scratch. For a company that already has a wearable analytic application, however, it can easily be done using the Watson RESTful APIs without making major changes to the existing application. Indeed, developers have used the Watson APIs to make similar cognitive applications in as little as 48 hours without having much more than a basic web programming and development background. For developers who create a new application or service, everything can be done in their cloud of choice, with their programming language of choice, or it can be done directly in Bluemix.
While the Watson APIs are certainly easy to use and integrate into applications, successful applications still require good up-front designs. Because the power of Watson comes from the system's ability to learn, learning from good content and feedback from users and domain experts, you need to carefully consider and decide on the data that Watson will use to learn from and to provide responses back to users.
Guiding users through your application is often key to making a successful cognitive application. Watson has powerful machine learning capabilities that are best exercised in clear information domains, with clear queries that produce clear responses. For example, suppose a system pulls wearable sensor data from a user and compares that data, along with the user's specific diet, exercise, and demographics, to national healthcare data, and we use the Watson Question and Answer API to let users ask the system a variety of specific health and fitness questions. "Is my blood pressure too high?" is a good question for the Watson QA API if the application is designed to contextualize it: for example, a screen that shows recent blood pressure data, which the system compares against aggregated data. The system might then respond that the user's blood pressure and other collected metrics are well within national trends for their specific demographics. The same question might be a bad one to ask, however, if it was not contextualized through good application design. Another major consideration when you design applications for Watson is making sure that you have good content for the system to work with.
Data is still key
Because Watson uses machine learning to derive relationships and find relevant information, having good data that is related to your problem domain is critical to using Watson services. Watson services can pull data from a large variety of data sources, both structured and unstructured. In many cases, you need to prepare your data before you ingest it into a Watson Content Store by acquiring, cleansing, aggregating, and validating that data.
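A minimal sketch of that cleansing and validation step, using hypothetical field names and plausibility thresholds rather than any documented Watson ingestion schema:

```python
def cleanse_readings(raw_readings):
    """Filter and normalize raw heart-rate records before ingestion.

    Each record is a dict; the field names and valid range used here
    are illustrative assumptions for a wearable heart-rate feed.
    """
    clean = []
    for rec in raw_readings:
        bpm = rec.get("heart_rate_bpm")
        # Reject missing or physiologically implausible values.
        if bpm is None or not (25 <= bpm <= 250):
            continue
        clean.append({
            "timestamp": rec["timestamp"],
            "heart_rate_bpm": float(bpm),  # normalize to one numeric type
        })
    return clean
```

Validation like this happens before the data reaches a Watson Content Store, so the system learns from consistent, trustworthy records rather than raw sensor noise.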
After you have selected and prepared the content that you want to use, you need to look at tuning Watson to use that data more effectively by using Watson to enrich the data. IBM offers a set of tools that focus on having a human domain expert work with Watson to develop question and answer pairs and to train the system on its responses. This tuning helps Watson get better at finding the best response to a user's questions from its knowledge base. Watson can be further trained by its users, who can vote on or rank responses by how useful they are; Watson then tunes its responses based on all of this information. You can accomplish this fine-tuning of your content by ensuring that the alpha and beta releases of your application allow users to respond, thereby tuning the system with real users.
In this article, I presented some reasons why you should use Watson in your wearable offering. I also explored the process of designing a Watson cognitive application.
In the next article in this series, I will look at how to start building a Watson application for a hypothetical wearable offering. I'll explain in greater detail how to integrate Watson into a real world application.
Business Analytics and the Data Complexity Matrix
Data environments are growing exponentially. IDC reports a compound annual growth rate for data of almost 50% through 2020. Not only is there more data, but there are more data sources: according to Ventana Research, 70% of organizations need to integrate more than six disparate data sources. When planning a business analytics program, different approaches are better suited to different data states.
Please read the attached whitepaper.
BYOD on the decline in healthcare organizations, survey finds
Can FHIR Spark Health Information Exchange, Interoperability?
by Jennifer Bresnick
Health information exchange in the United States may suffer from chronic fragmentation, data duplication, misinterpretation, ambiguity, and disunity, but there’s one thing nearly all providers can agree upon: data interoperability could be much, much easier if health IT infrastructures all spoke the same language.
But deciding what language that should be, not to mention implementing it across tens of thousands of systems, has been a lengthy process fraught with difficulty. From vendor mistrust to reluctance from providers to invest even more in their technologies without seeing measurable return, private industry and government organizations alike have stumbled time and again over bumps in the road to truly interoperable data exchange.
Enter FHIR, the Fast Healthcare Interoperability Resources standard from HL7 International, which has rekindled hope of a simple, accessible way to use common internet protocols to change the way healthcare views data exchange.
While this wunderkind of the data standards world may not be the magical solution to every health information exchange problem ever identified, it has been lauded as a huge step forward for interoperability by the Office of the National Coordinator, the JASON Task Force, and private consortiums including the CommonWell Health Alliance and the Argonaut Project – not to mention experts such as Micky Tripathi, CEO of the Massachusetts eHealth Collaborative and Chair of the eHI Interoperability Workgroup.
“To me, FHIR is currently the best candidate for the next step forward in health information exchange technology,” Tripathi told HealthITAnalytics.com. “We want to move away from where we are now, which is document-based exchange. Right now, interoperability in healthcare is basically just the exchange of Consolidated Clinical Document Architecture documents (C-CDAs).”
“The exchange of these XML documents has a certain value, because unlike in banking, where I just need raw data, whole documents are really important in clinical care,” he continued. “For banking, you can just tell how many dollars there are, or tell me how many units there are. Give me an account number, and I’m good to go. I don’t need a document. I don’t need as much context around the data in that transaction.”
But in clinical care, context is everything. “If you just send me some lab results or a list of allergies, that’s great,” said Tripathi. “I need those things, but you haven’t told me the story of the patient, and that’s really important for a clinician to understand. Document exchange is important, but so is that data-level exchange. Health information exchange based entirely on C-CDA XML documents doesn’t allow you to access information at a data level as well.”
That’s one place where the healthcare system has stalled, especially when it comes to clinical analytics and liberating big data from confines that limit its usefulness. C-CDAs provide a tidy way to bundle all the patient information required for meaningful use, but even the ONC admits that developing a truly standardized, simplified framework for C-CDA exchange is beyond the capabilities of the EHR Incentive Programs at this time.
In its proposal for updating the 2015 Certified EHR Technology (CEHRT) criteria, the ONC explains that there are two release versions of the C-CDA, and many health IT systems cannot read one or the other. Instead of requiring health IT module vendors to develop a single document that can be read by all systems, 2015 certified products will need to send two separate C-CDAs, one in each version, and vendors are allowed to accomplish the accompanying error tracking and data validation in any manner they choose.
Transmitting duplicate documents may get the job done, but it isn’t really what many stakeholders have in mind when they hear the word “interoperability.” C-CDA data is not easy to extract for clinical analytics or for taking the next step in population health management, and extracting it is not something the majority of providers are equipped to do.
“You can get data out of C-CDAs,” Tripathi acknowledges. “My company does a very large business in data warehousing, where we get over 500,000 C-CDA records a month. We parse those and deliver data-level healthcare analytics back to our customers. You can do it. It’s just very inefficient.”
Healthcare must move beyond temporary patches and short-term fixes that enable a certain level of health information exchange, but don’t allow health IT systems access to deeper, richer analytics and integration functionalities. FHIR is one of the protocols that present a new way of thinking about moving clinical information around.
“The next step in health information exchange is what almost every other industry has already started to do, which is developing more open APIs that allow data-level access,” states Tripathi. “These have to be based on internet conventions like the Representational State Transfer (REST), which is a much easier way to exchange information securely between two different entities. You already use it every day.”
“When you order something on Amazon, for example, look at your browser line,” he added. “If you’re logged in and you click on something, what you’ll see is a URL that says “https” and then this huge string of nonsense. That’s a query-retrieve system that’s generated in your browser and sent to Amazon, and then Amazon immediately returns the results securely. That’s essentially a single line command that can be received by all through browser technology, so I don’t need complicated interfaces. I can do it through any internet browser I want.”
“That’s really the next step forward in healthcare. We need to start taking that kind of approach as we think about interoperability instead of building these complicated interfaces that take too much time and too much money.”
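The query-retrieve pattern Tripathi describes is exactly how FHIR’s RESTful search interface works: a resource type plus URL parameters, fetched over HTTPS. A small sketch, assuming a hypothetical server base URL (the `system|code` token search follows the FHIR search convention, and LOINC 8867-4 is the standard heart-rate code):

```python
from urllib.parse import urlencode

# Hypothetical server base; real deployments publish their own base URL.
FHIR_BASE = "https://fhir.example.org/base"

def observation_query(patient_id, loinc_code):
    """Build a FHIR search URL for one patient's observations of one code.

    Follows the FHIR RESTful convention GET [base]/[resource]?[params],
    with a system|code token for the coded search parameter.
    """
    params = urlencode({
        "patient": patient_id,
        "code": "http://loinc.org|" + loinc_code,
    })
    return "%s/Observation?%s" % (FHIR_BASE, params)

# e.g. heart-rate observations (LOINC 8867-4) for patient 123
url = observation_query("123", "8867-4")
```

Any HTTP client, including a browser, can issue that request and receive the matching resources back, which is precisely the "single line command" simplicity being contrasted with custom point-to-point interfaces.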
But FHIR “isn’t a magic bullet solution,” Tripathi says, and it comes with its own set of challenges that dovetail with the ongoing, contentious debate over EHR usability. Regardless of which protocols are functioning behind the scenes, EHRs and data analytics interfaces must be user-friendly, and must enhance the provider workflow instead of reducing productivity, sapping time, and wasting energy.
FHIR works so well because it is based on the use of discrete data elements, or resources, that are sufficiently standardized. The resources require data to be entered into the system in an expected and specific manner. “FHIR tells people, in fairly specific terms, that if you want this data resource to be interoperable, you’ve got to enter the information in this standardized way,” explains Tripathi. “That means you’re left without a whole lot of options for how the data is entered by the user, but that’s a way of driving standardization.”
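To make the idea of a standardized resource concrete, here is a heart-rate reading expressed as a minimal FHIR Observation, trimmed to a few fields for illustration (a real resource carries additional metadata such as an id and effective time):

```python
import json

def heart_rate_observation(patient_id, bpm):
    """Build a minimal FHIR Observation for a heart-rate reading.

    Field names follow the FHIR Observation resource; only a handful
    of fields are shown here.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "8867-4",          # LOINC: heart rate
                "display": "Heart rate",
            }]
        },
        "subject": {"reference": "Patient/" + patient_id},
        "valueQuantity": {
            "value": bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",                # UCUM unit code
        },
    }

print(json.dumps(heart_rate_observation("123", 72), indent=2))
```

Because every conforming system expresses a heart rate with the same resource type, coding system, and unit vocabulary, the receiving system can interpret the value without a custom interface, which is what "entering the information in this standardized way" buys.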
Drop-down menus and check boxes are an excellent way to jumpstart data standardization, but these limited input fields have long been the bane of providers complaining about poorly designed EHR interfaces and a complete lack of intuitive design among available products. That doesn’t mean FHIR is going to make EHR usability worse, however. Instead, bringing more vendors into the FHIR community and making interoperability a standard feature may drive competition and innovation among vendors, who will have to distinguish themselves on something other than their ability to participate in health information exchange.
“Once you have those expectations for a particular set of things you want to do in FHIR, that’s where the EHR vendor comes in and competes with other vendors in terms of usability,” Tripathi believes. “Some may say, ‘Well, I’m going to force the user to enter everything according to a set of pull down menus. And I’m going to lock it all down so that they won’t enter any bad data.’”
“Others might say, ‘I’m going to allow the user some flexibility, and then underneath the covers I’ll do the mapping. I’ll allow a user to enter an ICD-9 code for a problem and then I’ll map it behind the scenes so the user doesn’t have to worry about that.’ The vendors will figure out which is the right balance for the right users, because we have to remember that healthcare is just like any other market when it comes to segmentation.”
“There are a lot of providers who want standardized buttons and drop-down menus. I hear them say, ‘How come my vendor hasn’t locked down the way that I do this so that the data is normalized?’ Others get tremors when they think about an interface that restricts how they enter data, and they want to have a lot of flexibility. I think you’ll see vendors who take different approaches for different types of users, and eventually they’ll develop products that will meet enough expectations and enough needs.”