Wireframes

This entry is part 30 of 30 in the series Words
A wireframe document for a person profile view

A common technique for prototyping computer screens is to use wireframes. A recent article in UXmatters discusses wireframes and asks whether program designers use wireframe prototypes as a substitute for real collaboration.

That's a good question. But I think this is a better one: is showing wireframes to people a poor substitute for figuring out what users need to do, and then designing and refining a workflow that works for them?

Wireframing, also known as paper prototyping (because it can all be done on paper, without wasting a single electron), can be an effective tool during design. But it is not a substitute for sitting down with members of each class of users, using anthropological techniques to document the tasks they accomplish as they work, using personas to guide the design of the computer-based work process for those classes of users, and then going back and using discount usability testing to refine the process.

Wireframes are good, but they are not a substitute for either collaboration or task analysis.


Suicide

This entry is part 29 of 30 in the series Words

Data mining has been a topic of interest to businesses and researchers for many decades. For physicians and other clinicians, and for those designing systems for clinicians, it has been of less interest. Yes, you can use data mining to predict the volume of patients in your ED by day and hour. Yes, you can use data mining to order supplies more intelligently. But to improve patient care? Not so much.

Yes, research using data mining can provide us with some clinical information, but such retrospective studies, especially subgroup analyses, can lead to egregious error, as summarized in a recent article in JAMA. Data mining is not a replacement for prospective, blinded, randomized studies.

For decades, designers of clinical information systems have been trying to get physicians and other clinicians to use structured data entry. Point-and-click, or in the iPad/Surface era, touch-and-swipe, is a great way to produce structured data. Computer programmers love structured data. Researchers love structured data. Business managers love structured data. The main problems are that (1) physicians and other clinicians are remarkably expensive data entry clerks, and (2) the resulting medical records are of little clinical utility. If you're trying to care for a patient, or defending a malpractice suit, and looking at a medical record, you'd rather have a single page of concise handwritten notes than 400 pages of structured data. And, especially given the structured methods of nurse charting these days, 400 pages of structured data is what you get. As noted in a previous post here, the signal-to-noise level is very low. Quantity, not quality.

There are potential advantages to structured data entry for medical encounters. If you try to write a prescription for a medication to which the patient is allergic, a screen will pop up and slap you in the face. Metaphorically speaking. This does prevent medical error. But there may be downsides, including alarm fatigue – the computer keeps alerting you to potential problems, but it's often wrong. The computer is, in Aesop's terms, crying wolf all the time.

When trying to use an app – I guess I have to say "app" instead of "application," as we live in a democratic society, and the people have voted overwhelmingly for the shorter term – for structured charting, the limited choices make it like fitting square pegs into round holes. For example, I just saw a patient who is "allergic" to Augmentin. It gives her vomiting, even if she takes it with food. This had been noted in the electronic medical record (EMR) on previous visits. She can take amoxicillin and penicillin without problems. This, too, had been noted in the EMR on previous visits. So why is the EMR so stupid that it always pops up and slaps me in the face with "NO, YOU CAN'T PRESCRIBE PEN-V-K TO THIS PATIENT, YOU IDIOT!!!!"? A good example of trying to fit the square peg of a drug sensitivity into an EMR round hole that can only deal appropriately with true allergy. Another example happened a few minutes later: a woman who was "allergic" to NSAIDs, when she simply had a mild case of von Willebrand's disease and needed to take them sparingly because of the possibility of bleeding. This is a good example of how you can – and in my opinion, should – anthropomorphize medical computer apps. Apps that are rude, crude, or stupid need to be reeducated (reprogrammed).
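To make the square-peg problem concrete, here is a minimal sketch of the distinction an EMR allergy checker could make but usually doesn't: a reaction-type field that separates true, immune-mediated allergy from mere intolerance, and a soft advisory instead of a hard stop for the latter. The data model and the cross-reactivity set are my own illustrations, not any real EMR's schema or drug database.

```python
from dataclasses import dataclass

@dataclass
class AdverseReaction:
    drug: str
    reaction_type: str  # "allergy" (immune-mediated) or "intolerance"
    note: str = ""

# Hypothetical patient record: vomiting with Augmentin is an
# intolerance, not a true allergy, and she tolerates other penicillins.
reactions = [
    AdverseReaction("augmentin", "intolerance", "vomiting; tolerates amoxicillin"),
]

# Illustrative cross-reactivity group; real systems use curated drug databases.
PENICILLINS = {"augmentin", "amoxicillin", "penicillin", "pen-v-k"}

def check_prescription(drug: str, history: list) -> str:
    """Return a graded alert instead of a one-size-fits-all hard stop."""
    drug = drug.lower()
    for r in history:
        same_class = drug in PENICILLINS and r.drug in PENICILLINS
        if r.drug == drug or same_class:
            if r.reaction_type == "allergy":
                return f"HARD STOP: documented allergy to {r.drug}"
            return f"FYI: documented intolerance to {r.drug} ({r.note})"
    return "no alert"

print(check_prescription("Pen-V-K", reactions))
# -> FYI: documented intolerance to augmentin (vomiting; tolerates amoxicillin)
```

A real system would draw its cross-reactivity groups from a curated drug knowledge base; the point of the sketch is only that carrying one extra field lets the computer stop crying wolf.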

So structured data gathering, when the data are more complex than the data structure can capture, has both pluses and minuses. What about unstructured data?

I've dictated my ED charts to a computer for decades, and now that the technology has finally improved enough, I can't imagine doing them any other way. With the improvements in speech recognition over the years, clinicians are rapidly abandoning structured charting apps (or trying to; their administrators are resisting). So we might as well resign ourselves, at least for physicians' and other clinicians' charts, to dealing with unstructured, speech-recognized text.

Or is it truly “unstructured”?

It's not structured by clicks on little radio buttons, or by text laboriously typed into little boxes on the screen. But there certainly is structure in medical reports. Whether dictated, handwritten, or typed, a medical/surgical history and physical usually has structure: an HPI, ROS, PMH, physical exam, and either an assessment and plan or medical decision-making.

Can we take advantage of this structure, and the grammatical structures normally found in language, to analyze it and make sense of it? As Bob the Builder says, "Yes we can!" It's called "reading."

Sorry for the trick question; the question should really be "can our computers take advantage of this structure?" and the answer again is "yes we can," although perhaps without the exclamation point. Can we teach our computers to read? To a limited degree, the answer is yes. It's called text mining, also known as text analytics. You take unstructured text – also known as free text – and apply natural language processing (NLP). Having an expected format, like an HPI, and a context, such as "ED dictation," helps with the NLP. The output is data. The better the NLP, the better the data; and NLP is getting much, much better with the application of AI techniques such as machine learning.
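As a toy illustration of how an expected format helps, here is a minimal sketch that splits a dictated note into its conventional sections using nothing fancier than regular expressions. The heading list is illustrative, and the sample note is adapted from the fracture example quoted in the iPhones entry below; real clinical NLP pipelines are far more robust than this.

```python
import re

# Conventional H+P section headings (illustrative, not exhaustive).
SECTIONS = ["HPI", "ROS", "PMH", "PHYSICAL EXAM", "ASSESSMENT AND PLAN"]

def split_note(text: str) -> dict:
    """Split a dictated note into sections using its expected structure."""
    pattern = r"^(%s):" % "|".join(re.escape(s) for s in SECTIONS)
    parts = re.split(pattern, text, flags=re.MULTILINE | re.IGNORECASE)
    # re.split yields [preamble, heading, body, heading, body, ...]
    it = iter(parts[1:])
    return {heading.upper(): body.strip() for heading, body in zip(it, it)}

note = """HPI: 24 y/o healthy male. Slipped on ice and landed on right hand.
ROS: Negative except as above.
PHYSICAL EXAM: Closed, angulated distal radius fracture. No other injuries.
ASSESSMENT AND PLAN: Splint now and to O.R. in a.m. for ORIF."""

for section, body in split_note(note).items():
    print(f"{section}: {body}")
```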

Maybe 15 years back, I had a bit of experience with NLP in a clinical setting. Ray Kurzweil, one of the pioneers in speech recognition, developed an application for Emergency Department charting that evolved into something called Clinical Reporter. The Clinical Reporter team had a NIST grant to develop an add-on NLP module, which we used, on an experimental basis, to code our charts. It did a pretty good job.

But now that CMS is talking about getting rid of hospital clinic visit coding, and maybe ED visit coding, by paying the same amount for all of them, maybe we don't need NLP to help code charts.

Can NLP be helpful clinically? The answer right now is a qualified “yes” and a “not quite yet.”

Can NLP help clinics that see patients on a regular basis? There are hints that it may soon be useful – useful enough that it may help save lives. An article entitled "Predicting the Risk of Suicide by Analyzing the Text of Clinical Notes" was published just five days ago in the prestigious online journal PLOS ONE. Though clinicians routinely ask psychiatric patients about thoughts of suicide, it turns out that only about a third of suicidal patients will admit to them. And identifying patients at high risk of suicide, for more intense monitoring and treatment, remains problematic for mental health professionals. Using an NLP technique called genetic programming, the authors applied word and phrase analysis to electronic medical records to assess the risk of suicide. This was a small, retrospective study, and it showed only 67–69% accuracy; however, if that is borne out in larger studies, and if it's prospectively validated, this approach might make a significant clinical difference in preventing suicide.

The study was carried out in the Veterans Administration (VA) health system, and might not be applicable to other health systems; but we won't know until we try. Speaking of trying, there is something called the Durkheim Project, a joint effort of Dartmouth and the VA that aims to extend this work from VA data to social networks. Quoting from the project website, it is named in honor of Emile Durkheim, a founding sociologist whose 1897 publication Suicide defined early text analysis for suicide risk and provided important theoretical explanations relating to societal disconnection.
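The paper's genetic-programming classifier is its own animal, but the general shape of such text classifiers is easy to show. Here is a minimal scikit-learn sketch – TF-IDF word and phrase features feeding logistic regression, not the authors' method – trained on a few fabricated toy notes:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated toy notes and labels (1 = later suicide-related outcome).
notes = [
    "patient expresses hopelessness and worthlessness, poor sleep",
    "routine follow-up, mood stable, engaged with family and work",
    "worsening agitation, gave away belongings, missed appointments",
    "no acute complaints, medications refilled, exercising regularly",
]
labels = [1, 0, 1, 0]

# TF-IDF unigram/bigram features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

new_note = ["patient reports hopelessness and withdrawing from family"]
print(model.predict_proba(new_note)[0][1])  # estimated risk score in [0, 1]
```

With real data, such a model would be trained on thousands of notes and validated prospectively; the sketch shows only the pipeline shape, not clinical performance.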

Can NLP help in real-time clinical encounters? Not quite yet. First, for NLP to help you with your diagnosis and treatment plan, it needs to analyze your history and physical exam before you formulate your diagnosis and treatment plan. Right now, how many physicians and other clinicians chart their H+P before they formulate a diagnosis and treatment plan?

For this to work, we need to be able to chart, in free text, as we go along. And, in a busy ED, sometimes you just have to defer charting because something more critical dragged you away; you want to get the patient discharged rather than have her wait another hour to be discharged. But presuming we could get NLP to work before the patient was discharged, perhaps the computer could ask you, "Did you consider Lower Slobbovian Hemorrhagic Fever?"
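Even a crude rule table over NLP-extracted findings could generate that kind of prompt. Here is a toy sketch; the findings, the rule table, and the two-hit threshold are all invented for illustration (the TTP rule anticipates the case described below):

```python
# Toy rule table mapping findings to diagnoses worth a prompt (invented data).
RULES = {
    "did you consider TTP?": {"psychosis", "petechiae", "splenomegaly"},
    "did you consider endocarditis?": {"fever", "new murmur", "petechiae"},
}

def prompts(findings: set, min_hits: int = 2) -> list:
    """Suggest a diagnosis when enough of its findings appear in the chart."""
    return [q for q, needed in RULES.items() if len(needed & findings) >= min_hits]

# Findings that an NLP pass might have extracted from the dictated note.
extracted = {"psychosis", "petechiae", "splenomegaly", "drug abuse"}
print(prompts(extracted))  # -> ['did you consider TTP?']
```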

Perhaps this is the wrong role for NLP in the acute medical setting.

Do you have an iPhone with Siri? Or an Android phone with Google Now?

Then you may be using NLP all the time. You can talk to your phone, and it will answer to the best of its ability. Remember that anthropomorphizing computer programs is good? So do it with Siri or Google Now. What kind of a person do you think you're dealing with? If my experience is any guide, you'll think it's an idiot savant: someone who knows lots of things but has no common sense and no understanding of real life or, half the time, of what you're asking about.

But this model, of consulting your NLP phone when you need help, might have promise for clinical use in the ED or clinic in the future. "Phone, I've got a 17-year-old girl with a possible history of drug abuse, with new-onset psychosis, and a petechial rash on her lower legs. She came in as a Level I trauma, but as it turns out, there wasn't really any trauma. But her spleen is big on the trauma pan-scan. Any ideas?" "Well, Keith, have you thought about TTP? Neurological manifestations are more common, but psychosis has been reported. And TTP has been reported from injecting the oral narcotic Opana. If the platelets are low when the CBC comes back, you've pretty much got the diagnosis clinched." "Thanks, phone, I was sort of thinking along those lines, but I didn't remember the association with Opana. And yes, the CBC just came back, and the platelet count is 'pending,' which is an almost sure sign it's going to be low." Phones aren't much use at performing a physical exam, but eventually they might be as useful for diagnosing zebras as they are for finding out where the Spike Jonze movie "Her" is playing. But if your phone is anything like my phone, it's a long way from the operating system in "Her."

I was going to close this essay with the phrase “stay tuned.” But there is no need to stay tuned. At some point, when you’re having a hard time with a difficult medical case, your phone will start gently offering you suggestions.


Anthropology

This entry is part 27 of 30 in the series Words
Lawrence of Arabia

When doing usability testing (see Discount Usability Testing) we tend to act like anthropologists, observing people using computers as if they were savages performing quaint native rituals.

In a post in UXmatters, Jim Ross argues that we should also use the anthropological technique of participant observation: basically, going native. Or, in other words, trying to accomplish the user’s tasks on the computer ourselves.

There are arguments against this approach.

One is: who is doing the "going native" to test the program? If it's the coder who wrote the program in the first place, it's hard to argue that this is a legitimate test, as the coder already knows the program inside and out.

Or is that true? A guy I know coded a program, and six months later, as a user, he couldn't get it to work right without a few tries. While you might think this is a rare occurrence, my personal usability analysis of many leading medical programs suggests it occurs fairly often.

In fact, you can argue that you should take the coders or tech support people, give them tasks often performed by users, and make them use the program over and over until they can get it right each time. Time them.

Carrying the argument further, the ideal person to “go native” is someone who is naïve to the program, yet has a background in usability: an independent usability analyst. I expect that usability consultants will recommend this highly (think: job security). That doesn’t mean it’s a bad idea.


iPhones

This entry is part 28 of 30 in the series Words

On May 3, Steve Stack, Chair of the American Medical Association (and an emergency physician from Lexington, KY), gave a presentation on electronic health records (EHRs) to the Centers for Medicare and Medicaid Services. The paper is worth a close read. He observes that physicians are technology early-adopters, yet Federal financial incentives were needed to get physicians and hospitals to adopt EHRs. Why? EHRs suck. (I rephrase only slightly.) He points out that EHRs are immature products. If we judge by human development, and want to use a derogatory term, we might call them retarded, invoking the original meaning of the word: slowed development compared to their peers.

Though an 18-month-old child can operate an iPhone, physicians with 7 to 10 years of post-collegiate education are brought to their knees by their EHRs.

In 2010, a quarter of physicians would not recommend their EHR to others. Two years later, over a third were "very dissatisfied" with their EHR and would not recommend it.

When an EHR is deployed in a doctor’s office or hospital, physician productivity predictably, consistently and markedly declines. Even after months of use, many physicians are unable to return to their pre-EHR level of productivity – there is a sustained negative impact resulting in the physician spending more time on clerical tasks related to the EHR and less time directly caring for patients. In a way, it ensures the physician practices at the bottom of his degree.

He gives examples of what a physician's medical note used to look like:

  • 24 y/o healthy male. Slipped on ice and landed on right hand. Closed, angulated distal radius fracture. No other injuries. Splint now and to O.R. in a.m. for ORIF.
  • 18 y/o healthy female. Fever and exudative pharyngitis for 2 days. Exam otherwise unremarkable. Strep test +. Rx. Amoxil

He goes on to talk about how malpractice litigation, billing and coding, and CMS and other insurance requirements for payment have bloated the medical record. And further, how EHR features such as templates, macros, and cut-and-paste have homogenized medical records while (given the difficulty of typing or dictating) decreasing the visit-appropriate essential information.

Seems to me that over the past 30-40 years (yes, I'm that old) the medical-chart signal-to-noise ratio has gone from 0.99 (99% of the chart is signal, that is, clinically useful information) to, at least for EHR inpatient progress notes and ED notes, 0.1 (10% signal, 90% noise).

One of his three conclusions was:

ONC [Office of the National Coordinator for Health IT] should immediately address EHR usability concerns raised by physicians and take prompt action to add usability criteria to the EHR certification process.

Bravo!


Anti-Data Pixels

This entry is part 25 of 30 in the series Words

Less is More
Mies van der Rohe

In high school English class, many of my generation were forced to study a book about writing known as “Strunk and White.” Compared to many other books we were forced to read, it had many advantages. It was short. It was to-the-point. It was full of pithy sayings, the most pithy: omit needless words.

In Cognitive Friction, we extended the idea to graphical computer user interfaces as “omit needless pixels.” In Performance, Data Pixels, Location, and Preattentive Attributes we looked at Nielsen and Tahir’s analysis of the percentage of a home page’s area devoted to different purposes; in this way, we could determine which were valid data pixels, which were not, and the ratio of data to non-data pixels.
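For concreteness, here is a back-of-the-envelope sketch of that ratio; the screen regions and their areas are invented for illustration:

```python
# Invented example: screen regions classified by purpose, areas in pixels.
regions = {
    "patient data": 310_000,
    "navigation": 120_000,
    "branding/logo": 45_000,
    "rarely used menu choices": 95_000,  # candidate anti-data pixels
    "whitespace/chrome": 230_000,
}

data = regions["patient data"]
total = sum(regions.values())
print(f"data-pixel ratio: {data / total:.2f}")  # -> 0.39
```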

In Lessons from Tufte, we read from The Visual Display of Quantitative Information:

The larger the share of a graphic’s ink devoted to data, the better (other relevant matters being equal):

Maximize the data-ink ratio, within reason.

Every bit of ink on a graphic requires a reason. And nearly always that reason should be that the ink presents new information. …

The other side of increasing the proportion of data-ink is an erasing principle:

Erase non-data-ink, within reason.

Ink that fails to depict statistical information does not have much interest to the viewer of a graphic; in fact, sometimes such nondata-ink clutters up the data…

In Menu we discussed “analysis paralysis”: the more choices on a computer screen, the harder it is to use, and the more likely a user will make a mistake; and the importance of paring down the number of choices. We may consider the area of a computer screen devoted to choices that users never or rarely use to be made up of non-data-pixels. What is worse, these supernumerary choices distract from the data pixels, and since they are worse than other non-data pixels (they distract more), we may term them anti-data-pixels.

Want to make a computer screen or web page better? First, omit anti-data pixels. In a future post, I will discuss a heuristic (fancy name for a simple rule) for determining how to do this. Next, omit non-data pixels. What is left should be pure, clean, relevant data.

Death to anti-data pixels!

  • The Elements of Style (4th Edition), by William Strunk Jr. and E. B. White
  • The Visual Display of Quantitative Information, by Edward R. Tufte
  • Homepage Usability: 50 Websites Deconstructed, by Jakob Nielsen and Marie Tahir


Giveaway

This entry is part 24 of 30 in the series Words

Dr. Vivek Reddy, a neurologist at the University of Pittsburgh Medical Center, also works on its digital records effort.

In a February 19 article in the New York Times, Julie Creswell calls the healthcare IT portion of the 2009 stimulus bill (the American Recovery and Reinvestment Act of 2009) 'a $19 billion government "giveaway"' resulting from the lobbying of the big HIS vendors. One of the quotes in her article points out the usability limitations of these big HIS systems: '"On a really good day, you might be able to call the system mediocre, but most of the time, it's lousy," said Michael Callaham, the chairman of the department of emergency medicine at the University of California, San Francisco Medical Center.'

I have to admit, I wouldn't mind giving a lot of our tax dollars to these big companies if they would only invest it in usability improvements that would save both lives and money.
