27-07-2017, 01:40 PM
Pattern classification systems are commonly used in adversarial applications such as biometric authentication, network intrusion detection, and spam filtering, in which data can be deliberately manipulated by humans to undermine their operation. Since this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation can severely degrade their performance and, consequently, limit their practical usefulness. Extending pattern classification theory and design methods to adversarial settings is therefore a novel and highly relevant research direction, which has not yet been pursued in a systematic way. In this article we address one of the main open issues: evaluating, at design time, the security of pattern classifiers, namely the degradation of their performance under potential attacks they may incur during operation.
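As a hedged, minimal sketch of this kind of design-time security evaluation (not the article's actual framework; the classifier, data, and attack model below are illustrative assumptions), one can train a simple classifier on clean data, simulate a bounded evasion attack on the test samples of the "malicious" class, and report the resulting accuracy drop:

```python
import numpy as np

# Illustrative sketch: quantify how a nearest-centroid classifier's
# accuracy degrades under a simulated evasion attack that shifts
# malicious samples toward the benign class (all choices are assumptions).
rng = np.random.default_rng(0)

# Synthetic 2-class data: class 0 = "benign", class 1 = "malicious".
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X_train = np.vstack([X0[:100], X1[:100]])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([X0[100:], X1[100:]])
y_test = np.array([0] * 100 + [1] * 100)

# "Design phase": fit centroids on clean training data.
c0 = X_train[y_train == 0].mean(axis=0)
c1 = X_train[y_train == 1].mean(axis=0)

def predict(X):
    # Assign each sample to the nearest class centroid.
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

clean_acc = (predict(X_test) == y_test).mean()

# Simulated "operation phase" attack: the adversary perturbs each
# malicious test sample by a bounded step (eps) toward the benign
# centroid, mimicking an evasion attempt.
eps = 3.0
X_adv = X_test.copy()
mal = y_test == 1
direction = c0 - X_adv[mal]
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
X_adv[mal] += eps * direction

attacked_acc = (predict(X_adv) == y_test).mean()
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"attacked accuracy: {attacked_acc:.2f}")
```

Sweeping the attack strength `eps` and plotting accuracy against it yields a simple security-evaluation curve: a classifier whose accuracy collapses at small `eps` is more vulnerable than one that degrades gracefully.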