Fall assessment tools
Fall assessment tools are used to determine a patient’s risk for falls on admission to a unit. Many fall assessment tools have been tested, along with tools developed by health care organizations or for specific units. The tools are tested for ease of use and for their ability to accurately predict which patients are most likely to fall. The predictive validity of these tools is most commonly reported in terms of sensitivity and specificity. Sensitivity measures the proportion of actual positives that are correctly identified as having the condition being measured, in our case patient falls. Specificity measures the proportion of negatives that are correctly identified, in our case the percentage of patients at low risk for falls.
Sensitivity is the number of patients who fell and had been scored as high risk, divided by the total number of patients who fell. If, for example, during development of a tool 100 of the patients who fell had high risk scores and a total of 120 patients fell, the sensitivity would be 83%. The higher the sensitivity, the better the predictive value of the tool.
Specificity concerns the patients who are at low risk for falls and who do not fall. The number of patients with low fall risk scores who did not fall is divided by the total number of patients who did not fall. If, for example, 500 patients had low fall risk scores and 650 patients did not fall, the specificity would be 77%. Again, the higher the specificity, the better the predictive value of the tool.
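The two calculations above can be sketched as a pair of simple functions (the function names are illustrative, not from any fall-risk software):

```python
def sensitivity(high_risk_fallers, total_fallers):
    """Proportion of patients who fell that the tool had scored as high risk."""
    return high_risk_fallers / total_fallers

def specificity(low_risk_non_fallers, total_non_fallers):
    """Proportion of patients who did not fall that the tool scored as low risk."""
    return low_risk_non_fallers / total_non_fallers

# Worked examples from the text:
print(round(sensitivity(100, 120) * 100))  # 83
print(round(specificity(500, 650) * 100))  # 77
```

Note that the two measures use different denominators: sensitivity is computed only over patients who fell, specificity only over patients who did not.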
Fall assessment tools with low sensitivity or low specificity indicate fall risk inaccurately. When a tool has low sensitivity, patients who are at risk for falls may not be allocated the resources needed to prevent falls. A tool with low specificity may indicate that a patient is at risk for falls when in fact they are not, so resources are allocated unnecessarily.
Another important part of fall assessment tool development is testing for inter-rater reliability, which measures how closely two people using the tool on the same patient arrive at the same result. Good inter-rater reliability indicates that the items are clear and easily interpreted. Inter-rater reliability should be 88% or higher to be considered acceptable.
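The simplest inter-rater reliability statistic is percent agreement, sketched below; the two raters' classifications are hypothetical data, not from any published study:

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of patients on whom two raters' risk classifications match."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a) * 100

# Hypothetical example: two nurses rate the same 10 patients.
nurse_1 = ["high", "low", "low", "high", "low", "high", "low", "low", "high", "low"]
nurse_2 = ["high", "low", "low", "high", "low", "low",  "low", "low", "high", "low"]
print(percent_agreement(nurse_1, nurse_2))  # 90.0 — above the 88% threshold
```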
Although others are available, two fall assessment tools are used frequently: the Morse Fall Scale (MFS) and the Hendrich II Fall Risk Model (HFRM). The National Center for Patient Safety recommends the MFS for hospital inpatients. The MFS focuses on six items:
Each risk factor is scored and the scores are totaled; a total of 125 is possible. The MFS is divided into ranges: a score below 25 indicates low fall risk, 25 to 50 medium risk, and 51 or higher high risk for falling. The MFS has shown sensitivity scores between 72% and 88% and specificity scores between 29% and 83% (Morse et al., 1996; Eagle et al., 1999; O’Connell & Meyers, 2002; Kim, Siti, Wong, Devi & Evans, 2007). A recent test of specificity (Kim et al., 2007) reported a specificity score of only 48.3%. According to Morse, the inter-rater reliability was 96%.
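The MFS score bands described above can be written as a small classifier (a sketch; the function name is illustrative):

```python
def mfs_risk_level(total_score):
    """Map a Morse Fall Scale total (0-125) to the risk bands in the text:
    below 25 = low, 25-50 = medium, 51 or higher = high."""
    if total_score < 25:
        return "low"
    elif total_score <= 50:
        return "medium"
    else:
        return "high"

print(mfs_risk_level(15))  # low
print(mfs_risk_level(45))  # medium
print(mfs_risk_level(55))  # high
```

Because the bands meet at 25 and 51, boundary scores matter: a total of exactly 25 is medium risk and exactly 51 is high risk.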
It is recommended that the MFS be used at the time of admission, after a fall, with a significant change in health status or medication, and at discharge or transfer to a new setting.
The HFRM consists of eight independent risk factors:
The presence or absence of each risk factor is scored, with a total of 5 or higher indicating high risk for falls. In the 2003 report by Hendrich, sensitivity was 74.9% and specificity 73.9%. In a 2007 study by Kim et al., sensitivity was 70% and specificity 61.5%.
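The HFRM cutoff is a single threshold on the summed risk-factor points, which can be sketched as follows (the scores passed in are hypothetical):

```python
def hfrm_high_risk(total_score):
    """Hendrich II: a total score of 5 or higher indicates high fall risk."""
    return total_score >= 5

# Hypothetical totals from scoring a patient's risk factors:
print(hfrm_high_risk(6))  # True  — high risk, interventions indicated
print(hfrm_high_risk(3))  # False — below the high-risk cutoff
```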
©RnCeus.com