BioSpace.com

PLoS | Categories: Mathematics, Mental Health, Neuroscience, Science Policy

Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners
Published: Monday, March 12, 2012
Authors: Bradford T. Ulery, R. Austin Hicklin, JoAnn Buscaglia, Maria Antonia Roberts

The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of a total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs, and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as "difficult" than for "easy" or "moderate" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs, such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.
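As an illustrative sketch (not the study's actual analysis code), the intra-examiner repeatability rates reported above can be understood as the fraction of retest decisions that match the examiner's original decision on the same image pair. The function and example decisions below are hypothetical, chosen only to show the calculation:

```python
def repeatability(original, retest):
    """Fraction of decisions repeated on retest.

    original, retest: equal-length lists of decision labels
    ("individualization", "exclusion", "inconclusive") for the
    same image pairs, in the same order.
    """
    if len(original) != len(retest) or not original:
        raise ValueError("need two equal-length, non-empty decision lists")
    # Count pairs where the retest decision matches the original one.
    repeated = sum(a == b for a, b in zip(original, retest))
    return repeated / len(original)

# Hypothetical decisions for five image pairs by one examiner:
first = ["individualization", "exclusion", "inconclusive",
         "individualization", "exclusion"]
second = ["individualization", "inconclusive", "inconclusive",
          "individualization", "exclusion"]
print(repeatability(first, second))  # 4 of 5 decisions repeated -> 0.8
```

Note that, as in the study, a changed decision here (exclusion to inconclusive) lowers repeatability without necessarily being an error; the inconclusive category absorbs most of the observed variability.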