Research Article

Fluency: Evaluation of Residents with Voice Recognition Software

Jaharris A. Collier*, Gregory F. Domson, Nital P. Appelbaum, Kirstin A. Lewis, Raees Seedat, Gregory J. Golladay

*Corresponding author: Jaharris A. Collier, Department of Orthopaedic Surgery, Virginia Commonwealth University School of Medicine, PO Box 980153, Richmond VA 23298, USA

Received Date: 16 April, 2020; Accepted Date: 13 May, 2020; Published Date: 19 May, 2020

Citation: Collier JA, Domson GF, Appelbaum NP, Lewis KA, Seedat R, et al. (2020) Fluency: Evaluation of Residents with Voice Recognition Software. J Surg 5: 1308. DOI: 10.29011/2575-9760.001308

Abstract

Introduction: Electronic health records have become a core component of managing patient care. Use of dictation and voice recognition software by medical professionals for charting could facilitate faster and easier recording of medical data.

Objective: Our aim was to assess the perceived value of voice recognition software for patient encounter documentation in an orthopaedic surgery resident population, including adoption rate, record quality, workflow efficiency, and overall value.

Materials and Methods: We surveyed residents before introduction of voice recognition software and again at four and sixteen months following its implementation. The pre-survey assessed resident expectations of the voice recognition software and general attitudes regarding its usefulness in practice. The post-surveys assessed resident experiences when interacting with the tool.

Results: The expectations survey had an 88% response rate (22 out of 25) at pre-implementation; the experiences survey had an 80% response rate (20 out of 25) at four months and an 84% response rate (21 out of 25) at sixteen months post-implementation. We found that the perceived value of voice recognition software in practice exceeded initial expectations. Conversely, expectations for improvement in time spent on record keeping and in quality of records were higher than the actual experience.

Conclusion: We concluded that although residents found the voice recognition software to be valuable, they did not believe it increased their efficiency.

Keywords

Charting; Dictation; Documentation; Electronic health record; Orthopaedic residency; Voice recognition

Abbreviations

EHR: Electronic Health Record

Introduction

Clinical documentation is one of the most costly and time-intensive aspects of the electronic health record (EHR) system [1]. Not only is accurate documentation vital for patient safety, it is also required for reimbursement and is often used in litigation [2]. Since the widespread implementation of EHR systems, many developments have been made to make such systems more efficient [3]. Speech recognition technology, which automatically translates voice to text, was first used for clinical documentation in 1981 [4]. It was initially adopted by radiology departments, where quick turnaround times for documentation in the EHR were required [5]. These early systems were inelegant and inefficient, requiring users to pause between individual words [6], so other specialties did not accept speech recognition as readily. As the technology has advanced and become more widespread in everyday applications, such as smartphones and home devices, its use in EHR systems has also grown [7]. A recent study found that 90% of hospitals plan to expand their overall use of speech recognition technology [8].

There are advantages to using speech recognition technology for documentation in the EHR. Speech recognition can drastically reduce turnaround times for clinical documentation: Zick and colleagues found an average turnaround time of 3.65 minutes with speech recognition software versus 39.6 minutes for traditionally transcribed charts [9]. However, physicians must spend more time on dictation and front-end correction than is required with human transcription [10]. Reported error rates associated with speech recognition technology also vary widely. Quint and colleagues note a 22% error rate when using speech recognition, while McGurk and colleagues report a 4.8% error rate [11,12]. Wide ranges in error rates are also noted between individual physicians, with explanations ranging from pronunciation, speed, and clarity of speech to failure to proofread the documentation [13].

The implementation and success of speech recognition technology can also be tied to physician expectations of the technology. In one study of emergency room physicians, 82% of clinicians were initially optimistic regarding the use of speech recognition technology in the EHR, and 87% maintained that it was a good idea after using it for six months [14]. To date, no other group has examined the implementation and use of speech recognition technology specifically among orthopaedic surgeons or orthopaedic surgery residents. We hypothesized that speech recognition software would be widely adopted by orthopaedic surgery residents based on positive experiences and perceptions of the tool over the course of 18 months.

Materials and Methods

Orthopaedic surgery residents at an urban academic medical center were recruited to complete a pre-implementation expectations survey in February 2018 and post-implementation experiences surveys in June 2018 and June 2019, before and after incorporation of voice recognition software for chart documentation. The survey tool was based on the questionnaire developed by Alapetite, et al. [15] and later adapted by Lyons, et al. [14]. The pre-survey assessed residents' expectations of the voice recognition software and general attitudes regarding its usefulness in practice. The residents were then given a one-hour formal training session on use of the voice recognition software. Four months later, the post-survey assessed resident experiences when interacting with the tool. Sixteen months following implementation, the same post-survey was administered to assess residents' continued experiences with the software. Residents completed the survey on paper during resident-specific meetings; residents who were absent were emailed an electronic version of the survey directly after the meeting. All data were collected anonymously and analyzed at the departmental level via descriptive statistics.

Results

The expectations survey had an 88% response rate (22 out of 25), the experiences survey at four months post-implementation had an 80% response rate (20 out of 25), and the sixteen-month post-implementation survey had an 84% response rate (21 out of 25). There was wide variability in adoption rates measured at four and sixteen months post-implementation. Twenty-six percent (n=6) of residents did not use the speech recognition tool at four months, compared to 24% (n=5) at sixteen months. The remaining residents used the tool 1-20% (four months: n=5, 22%; sixteen months: n=4, 19%), 21-40% (four months: n=5, 22%; sixteen months: n=9, 43%), 41-60% (four months: n=3, 13%; sixteen months: n=3, 14%), and 61-80% (four months: n=2, 9%; sixteen months: n=0) of the time. No resident at either follow-up period used the tool more than 81% of the time, and two residents (9%) did not know how frequently they used the tool at the four-month survey.

As seen in Figure 1, the proportion of respondents who agreed that the introduction of speech recognition software was a good idea for medical record keeping increased from 95% at four months to 100% at sixteen months, compared to 91% at the initial survey. 95% of respondents agreed at the initial survey that the department head thought it was a good idea to introduce speech recognition, while 100% agreed at four months and 76% at sixteen months. Similarly, 86%, 95%, and 67% of respondents believed the faculty thought the introduction of speech recognition was a good idea at the initial, four-month, and sixteen-month surveys, respectively. When asked about their fellow residents, 41% of respondents agreed at the initial survey that their colleagues thought it was a good idea to introduce speech recognition for medical record keeping; this increased to 75% at four months and 86% at sixteen months.

As seen in Figure 2, between the four- and sixteen-month experiences surveys, the percentage of respondents who agreed that the quality of the medical record improved after the introduction of speech recognition increased from 35% to 52%, compared to the 55% of respondents who held this expectation. 27% of respondents initially thought voice recognition would improve precision (i.e., that no superfluous information is included) in the medical record, while 15% agreed at four months and 10% at sixteen months. With respect to structure (i.e., information is where it is supposed to be), 27% expected records to become more structured in the expectations survey, while 25% and 33% of respondents agreed records were more structured at four and sixteen months, respectively. With respect to completeness (i.e., that all required information is included), 50% of respondents expected increased completeness, while 55% agreed at four months and 48% at sixteen months. As seen in Figure 3, 59% of respondents expected that speech recognition would optimize the process of keeping the medical record; this increased to 60% at four months and then declined to 48% at sixteen months. Prior to implementation, 64% of respondents expected speech recognition to produce appreciable time savings for the benefit of patient care; this decreased to 50% at four months but increased to 57% at sixteen months. Lastly, respondents reported on whether speech recognition would decrease the time spent on medical records in the long term. In the expectations survey, 64% agreed that it would; however, only 35% of respondents at four months and 33% at sixteen months agreed that they would spend less time on medical records in the long term.

Table 1 presents open-text comments from the experiences surveys, revealing several factors affecting perceived technology effectiveness in practice. These included technical difficulties with the computers and dictaphones, changes in the type of information being recorded, and a preference for the traditional dictation service over the voice recognition software.

Discussion

The results of our survey showed that after both four and sixteen months of usage, residents held the opinion that voice recognition software was a valuable tool for medical documentation. 100% of respondents felt it was a good idea to introduce speech recognition after sixteen months of use, which may reflect a collective desire among residents for improvement in medical record keeping. Not only did residents hold this opinion themselves, 86% also perceived their colleagues to share the same sentiment. Interestingly, agreement dropped from 95% to 67% on whether faculty believed implementation of the tool was a good idea, and regarding the department head, a 100% agreement rate at four months dropped to 76% at sixteen months. This is likely because the departmental emphasis on voice recognition software generated at the beginning of the study had faded by sixteen months. The survey then investigated the residents' perceptions of efficiency, time spent on documenting, and precision of the records. The responses indicated that residents felt the quality of medical records increased the longer they used the software (35% at four months to 52% at sixteen months for quality improvement; 25% at four months to 33% at sixteen months for structure). Interestingly, after sixteen months of usage, residents perceived their notes to be both less precise and less complete compared to their experience at four months. This may reflect residents' impression that voice recognition increased the quality of the medical record overall within the department but negatively affected the quality of their own records while using voice recognition. Additionally, there was no retraining between four and sixteen months, which may have led to less than optimal usage of the voice recognition software.

Regardless, residents believed the tool was an asset in medical record documentation. Almost half of the residents believed voice recognition optimized the process of keeping the medical record compared to formats used in the past, and 57% believed speech recognition software produced appreciable time savings at sixteen months post-implementation. These findings illustrate an underlying appreciation of speech recognition as well as its practical implications in everyday practice. Average time spent using Fluency for medical record documentation may have decreased from four to sixteen months due to the misplacement of one of the dictation devices in the orthopaedic dictation room and dictaphones not being readily available for use, as noted in the free-text comments. These comments also pointed to a lack of available platforms for the service and slow computer network speeds while using the application. Such technical difficulties would impair one's ability not only to use Fluency, but to use it enough to overcome the initial learning curve seen with any new software. Residents' willingness to adopt voice recognition into their everyday practice outside of the study may also have been a factor (i.e., some feel more comfortable typing or using the dictation service), and residents' baseline savviness with technology may have influenced optimal use of the device.

Study limitations include a small sample size within a single surgical residency program, variation in documentation roles across training years, and varying adoption rates of the tool. Because our implementation timeline spanned multiple academic years and the surveys were anonymous, we were unable to statistically test for within-person changes in perception. In addition, the proficiency curve for this tool is unknown but is likely affected by frequency of use. Further investigation is warranted to understand the learning and adoption curve with speech recognition software and actual differences in documentation quality through chart review. Likewise, understanding how much use of a speech recognition tool is needed before users perceive improvement in efficiency would be of future interest to inform change management and roll-out strategies.

Conclusion

When comparing the experiences and expectations of the orthopaedic surgery residents, we found that the perceived value of voice recognition software in practice exceeded initial expectations. Conversely, expectations for improvement in time spent on record keeping and in quality of records were higher than respondents' actual experience. Further research is planned to investigate the impact of voice recognition software on the timeliness and quality of resident documentation.


Figure 1: Residents’ expectations and experiences of implementation of voice recognition software.



Figure 2: Residents’ expectations and experiences with regards to medical record keeping, after implementation of voice recognition software.



Figure 3: Residents’ expectations and experiences with regards to efficiency of medical record keeping, after implementation of voice recognition software.

Text Comments: Do you have any general feedback regarding your experience using Fluency to improve medical record documentation? If so, please describe.

4 Months Post

“I much prefer the phone dictation service”

“More long winded wordy HPI, physicals and plans but not necessarily better”

“Most useful for documenting in long form paragraph style text - more difficult to apply to more succinct notation (i.e. bulleted lists, abbreviations, etc). Benefit is that more information is probably included when dictating. Also tends to result in more words to convey the same information and can make notes more time consuming to read.”

“Not on all computers”

16 Months Post

“It would be beneficial if we could dictate from a phone without having to chart open in a computer (send info into patient chart without signing it)”

“The dictaphones in the team room are missing. We used to have 5 and now there is only 1 found in the team room. The app on the phone disconnects frequently that may depend on the wifi in the hospital.”

“User experience depends entirely on the computer being used. There's a 50% chance that any computer will be super “laggy” and it's not worth your time. When it works, it's great. When it doesn't it's a huge [pain]. The recognition errors are a pain. Even worse is the random formatting that the software does.”


Table 1: Open responses regarding experiences while using Fluency for medical record keeping.

References

  1. Poissant L, Pereira J, Tamblyn R, Kawasumi Y (2005) The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc.
  2. Bergeron B (2004) Voice recognition and medical transcription. Med Gen Med.
  3. Ajami S (2016) Use of speech-to-text technology for documentation by healthcare providers. Natl Med J India 29: 148-152.
  4. Leeming BW, Porter D, Jackson JD, Bleich HL, Simon M (1981) Computerized radiologic reporting with voice data-entry. Radiology 138: 585-588.
  5. Robbins AH, Horowitz DM, Srinivasan MK, Vincent ME, Shaffer K, et al. (1987) Speech-controlled generation of radiology reports. Radiology 164: 569-573.
  6. du Toit J, Hattingh R, Pitcher R (2015) The accuracy of radiology speech recognition reports in a multilingual South African teaching hospital. BMC Med Imaging 15: 8.
  7. Hodgson T, Coiera E (2016) Risks and benefits of speech recognition for clinical documentation: a systematic review. J Am Med Inform Assoc 23: e169-e179.
  8. Zhou L, Blackley SV, Kowalski L, Doan R, Acker WW, et al. (2018) Analysis of errors in dictated clinical documents assisted by speech recognition software and professional transcriptionists. JAMA Netw Open.
  9. Zick RG, Olsen J (2001) Voice recognition software versus a traditional transcription service for physician charting in the ED. Am J Emerg Med 19: 295-298.
  10. Poder TG, Fisette JF, Déry V (2018) Speech recognition for medical dictation: overview in Quebec and systematic review. J Med Syst 42: 89.
  11. Quint LE, Quint DJ, Myles JD (2008) Frequency and spectrum of errors in final radiology reports generated with automatic speech recognition technology. J Am Coll Radiol 5: 1196-1199.
  12. McGurk S, Brauer K, Macfarlane TV, Duncan KA (2008) The effect of voice recognition software on comparative error rates in radiology reports. Br J Radiol 81: 767-770.
  13. Chang CA, Strahan R, Jolley D (2011) Non-clinical errors using voice recognition dictation software for radiology reports: a retrospective audit. J Digit Imaging 24: 724-728.
  14. Lyons JP, Sanders SA, Fredrick Cesene D, Palmer C, Mihalik VL, et al. (2015) Speech recognition acceptance by physicians: a temporal replication of a survey of expectations and experiences. Health Informatics J 22: 768-778.
  15. Alapetite A, Andersen HB, Hertzum M (2009) Acceptance of speech recognition by physicians: a survey of expectations, experiences, and social influence. Int J Hum Comput Stud 67: 36-49.

© by the Authors & Gavin Publishers. This is an open access article published under the Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA 4.0). Under this license, readers may share, distribute, and download the article, even commercially, as long as the original source is properly cited.
