TEAM - Team Emergency Assessment Measure


Authors of Tool: 

Cooper S, Cant R, Porter J, Sellick K, Somers G, Kinsman L, Nestel D.

Key references: 

Cooper S, Cant R, Porter J, Sellick K, Somers G, Kinsman L, Nestel D. Rating medical emergency team performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation 2010; 81(4): 446-452.

Primary use / Purpose: 

Evaluation of teamwork performance in medical emergencies, e.g., by cardiac arrest or other resuscitation teams.


This research developed a valid, reliable, and feasible teamwork assessment measure for emergency resuscitation team performance. Although generic and profession-specific team performance assessment measures are available (e.g., for anaesthetics), there are no specific measures for assessing emergency resuscitation team performance.
METHODS: The instrument was developed and tested with senior nursing and medical students in the stages listed in the section below.
Conclusion: The final instrument comprises 12 items (11 specific items and 1 global rating), rated on a five-point scale and covering three categories: leadership, teamwork, and task management. In this primary study, TEAM was found to be a valid and reliable instrument and should be a useful addition to clinicians’ tool set for measuring teamwork during medical emergencies. Further evaluation of the instrument is warranted to fully determine its psychometric properties.
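As a hypothetical illustration of how such a score might be tallied, the sketch below sums 11 specific item ratings and keeps the global rating separate. The 0-4 item range is an assumption for illustration only; the published instrument should be consulted for the actual scoring rules.

```python
# Hypothetical scoring sketch for a TEAM-style instrument.
# ASSUMPTION: each of the 11 specific items is rated 0-4 (a five-point
# scale) and summed; the single global rating is reported alongside the
# total rather than added to it. These ranges are illustrative only.

def total_team_score(item_ratings, global_rating):
    """Sum the 11 specific item ratings; return (total, global_rating)."""
    if len(item_ratings) != 11:
        raise ValueError("expected 11 specific item ratings")
    for r in item_ratings:
        if not 0 <= r <= 4:
            raise ValueError("item ratings assumed to be on a 0-4 scale")
    return sum(item_ratings), global_rating

ratings = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3, 2]
total, overall = total_team_score(ratings, global_rating=4)
print(total)  # 33
```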


DEVELOPMENT: (1) An extensive review of the literature for teamwork instruments; (2) development of a draft instrument with an expert clinical team; (3) review by an international team of seven independent experts for face and content validity; (4) instrument testing on 56 video-recorded hospital and simulated resuscitation events for construct validity, internal consistency, concurrent validity, and reliability; and (5) a final set of feasibility ratings on 15 simulated ‘real time’ events.
TESTING: Following expert review, the selected items were found to have a high total content validity index of 0.96. A single ‘teamwork’ construct was identified, with an internal consistency of 0.89. Correlation between the total item score and the global rating (rho = 0.95; p &lt; 0.01) indicated concurrent validity. Interrater (kappa = 0.55) and retest reliability (kappa = 0.53) were ‘fair’, with positive feasibility ratings following ‘real time’ testing.
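Agreement statistics like the interrater kappa quoted above can be computed on one's own rating data. A minimal pure-Python sketch of Cohen's kappa for two raters follows; the ratings shown are illustrative and are not data from the study.

```python
# Cohen's kappa: chance-corrected agreement between two raters who each
# assign a categorical rating to the same set of events.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length lists of ratings."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same events")
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Illustrative ratings on a five-point (0-4) scale.
a = [0, 1, 2, 3, 4, 4, 3, 2, 1, 0]
b = [0, 1, 2, 3, 4, 3, 3, 2, 0, 0]
print(round(cohens_kappa(a, b), 2))  # 0.75
```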



PDF: final_team_tool.pdf

Other Information: 

Detailed web information on the tool is available from

Digital Object Identifier (DOI):

