INTRODUCTION
The emergence of modern system-on-chip technology has led to a continuous increase in the quantity and diversity of integrated circuits. As complexity has grown, testing and calibrating for catastrophic and parametric faults in both the product development and mass-production phases have become more challenging, and now constitute a major portion of the overall cost. Although the off-chip testing process has been commercialized by means of Automatic Testing Equipment (ATE), this process is expensive and time-consuming. Recent research efforts in on-chip testing of analog mixed-signal circuits have therefore attracted much interest, particularly Built-In Self Test (BIST) techniques in which test stimulus generation and response analysis are accomplished entirely on-chip through built-in hardware. Neither an external test stimulus generator nor an external digital signal-processing unit is required, resulting in low cost and short testing time.
1.1 Faults in Integrated Circuits
Failures in electronic components are generally the effects of chemical and physical processes. Failures caused by chemical effects lead to the continuous production of defective components across whole production lines. Failures caused by physical effects, however, result in defects in individual components, involving component shorts and breaks as well as packaging defects. Consequently, faults in integrated circuits are typically modeled based on physical failures and are generally classified into two categories, i.e. parametric and catastrophic faults. Catastrophic faults are caused by random defects, for example dust particles, resulting in short and open circuits or large-scale deviations of design parameters.
Parametric faults represent variations of a parameter from its nominal value that exceed the attributed tolerance band. Such parametric faults are caused by fluctuations in the manufacturing process. Comparing the two categories, the majority of electrical failures, about 83.5%, are catastrophic faults (Rajsuman, 2000).
Catastrophic faults are commonly referred to as potential shorts and opens. In the case of shorts, two possible short circuits are bridging defects and Gate-Oxide Shorts (GOS). Bridging defects appear when two or more metal lines are electrically connected in an integrated circuit. Fig. 1.1 shows examples of bridging and GOS defects in CMOS technology presented by Ohletz (1996). Bridging resistance is important in defect detection, since the majority of bridging defects show low resistance, while many high-resistance bridging defects do not result in failure. GOS defects are short circuits between the gate electrode and the active zone through the SiO2 oxide of the device. In the majority of cases, gate-oxide defects cause reliability degradation such as changes in transistor threshold voltage and an increase in switching delay. Fig. 1.1.2 shows examples of open circuits presented by Ohletz (1996), involving a foreign particle causing a line open and a line thinning, and a contaminating particle causing 7-line opens. These open circuits correspond to unconnected or floating inputs, which are usually high-impedance.
Parametric faults generally refer to variations in component and interconnect dimensions. Physical component dimensions are relatively susceptible to process variation, such as the gate length (L) and the threshold voltage (VT) of CMOS transistors. Critical variations in interconnect dimensions, such as line spacing and metal thickness, result in different properties of the interconnect wires, such as their parasitic resistance, capacitance, and inductance values. As parametric faults typically depend on the acceptability of a parameter tolerance band, modeling of parametric faults is relatively complicated at the physical design level. In order to overcome this impasse, analysis methods generate acceptable circuit instances based on tolerance specifications (Chang and Lee, 2002). In other words, parameter variations from ±5% to ±15% of the nominal values are first injected randomly, and the distribution of output variable values, which is generally normal, is subsequently analyzed for its region of acceptability. In order to distinguish between acceptable and unacceptable instances, the region of acceptability is on the order of ±5% for the 99% confidence interval (Spink et al., 2004). Large variations in circuit parameters, such as ±10% variations in resistor and capacitor values, have been considered unacceptable, and the Circuit-Under-Test (CUT) is then classified as a defective component immediately. However, some large parameter variations may not result in any specification violation, while some small variations may still cause significant violations.
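As a rough illustration of the tolerance-band analysis described above, the following Python sketch injects random ±10% deviations into hypothetical R and C values of a first-order RC filter and classifies each instance by whether its cutoff frequency stays within a ±5% acceptability region. The component values, the deviation range, and the ±5% band are illustrative assumptions, not values taken from the cited studies.

```python
import math
import random

# Hypothetical nominal component values of a first-order RC low-pass filter.
R_NOM = 10e3    # ohms (assumed)
C_NOM = 1e-9    # farads (assumed)
F_NOM = 1.0 / (2.0 * math.pi * R_NOM * C_NOM)   # nominal cutoff frequency

ACCEPT_BAND = 0.05   # +/-5% region of acceptability (assumed)
DEVIATION = 0.10     # +/-10% injected parameter variation (assumed)

def random_instance():
    """Inject independent random deviations into R and C and return the cutoff."""
    r = R_NOM * (1.0 + random.uniform(-DEVIATION, DEVIATION))
    c = C_NOM * (1.0 + random.uniform(-DEVIATION, DEVIATION))
    return 1.0 / (2.0 * math.pi * r * c)

trials = 10000
accepted = sum(1 for _ in range(trials)
               if abs(random_instance() - F_NOM) / F_NOM <= ACCEPT_BAND)
print(f"{accepted}/{trials} instances fall inside the +/-5% acceptability region")
```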
1.2 CRITERIA FOR DESIGN
Four primary parameters must be considered in developing a BIST methodology for embedded systems; these correspond to the design parameters for on-line testing techniques discussed in an earlier chapter.
Fault coverage: This is the fraction of faults of interest that can be exposed by the test patterns produced by the pattern generator and detected by the output response monitor. In the presence of input bit-stream errors, there is a chance that the computed signature matches the golden signature and the circuit is reported as fault-free. This undesirable property is called masking or aliasing (a signature-compaction sketch illustrating aliasing follows this list).
Test set size: This is the number of test patterns produced by the test generator, and is closely linked to fault coverage: generally, large test sets imply high fault coverage.
Hardware overhead: The extra hardware required for BIST is considered to be overhead. In most embedded systems, high hardware overhead is not acceptable.
Performance overhead: This refers to the impact of BIST hardware on normal circuit performance such as its worst-case (critical) path delays. Overhead of this type is sometimes more important than hardware overhead.
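To make the aliasing notion above concrete, the following hedged Python sketch compacts a serial output bit stream into a 4-bit signature with a simple signature register and counts how many erroneous streams still map onto the golden signature. The register width, tap positions, and bit streams are arbitrary illustrative choices, not part of any cited scheme.

```python
import itertools

def compact_signature(bits, width=4, taps=(3, 0)):
    """Compact a serial bit stream into a 'width'-bit signature using a
    simple serial signature register (input XORed into the LFSR feedback).
    Width and tap positions are illustrative assumptions."""
    state = [0] * width
    for b in bits:
        feedback = b
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]   # shift the new bit in
    return tuple(state)

golden = [1, 0, 1, 1, 0, 0, 1, 0]          # assumed fault-free response
sig_golden = compact_signature(golden)

# A single-bit error always changes the signature, so it is detected.
faulty = list(golden); faulty[-1] ^= 1
print("single-bit error detected:", compact_signature(faulty) != sig_golden)

# Brute force over all 8-bit streams: count how many erroneous streams
# alias onto the golden signature (masking); expected fraction ~ 2**-4.
alias = sum(1 for bits in itertools.product([0, 1], repeat=8)
            if list(bits) != golden and compact_signature(list(bits)) == sig_golden)
print("aliasing streams:", alias, "of", 2**8 - 1)
```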
Issues for BIST
Area Overhead: Additional active area due to test controller, pattern generator, response evaluator and testing of BIST hardware.
Pin Overhead: At least one additional pin is needed to activate BIST operation. The input MUX adds extra pin overhead.
Performance overhead: Extra path delays are added due to BIST.
Yield loss increases due to increased chip area.
Design effort and time increase due to the design of BIST.
The BIST hardware complexity increases when the BIST hardware is made testable.
Benefits of BIST
It reduces testing and maintenance cost, as it requires simpler and less expensive ATE. BIST significantly reduces the cost of automatic test pattern generation (ATPG).
It reduces storage and maintenance of test patterns.
It can test many units in parallel.
It requires shorter test application times.
It can test at functional system speed. BIST can be used for non-concurrent, on-line testing of the logic and memory parts of a system
LITERATURE SURVEY
2.1 EXISTING BIST:
Comparisons between Digital and Analog On-Chip Testing
Several early on-chip testing techniques have been applied successfully in digital circuits, in which each level of abstraction is verified against the immediately preceding level (Stroud, 2006). On-chip digital testing is supported by rigorous mathematical formalisms such as Boolean expressions, high-level programming language constructs, and single stuck-at fault models. Therefore, test pattern algorithms and output response analysis in the digital domain have been developed broadly with acceptable fault coverage.
As opposed to testing in digital integrated circuits, testing in analog mixed-signal circuits is relatively complicated owing not only to the instantaneous continuous-time analog response of signal values, but also to non-linear characteristics and broad variations in circuit parameters (Milor, 1998). The test accuracy of analog mixed-signal systems also depends on the test equipment resolution as well as the accuracy of the input testing stimuli. Besides, analog and digital circuits are now developed on the same substrate, and disturbances, i.e. noise from the digital sections, may influence the function of the analog parts. These characteristics lead to difficulties in testing analog circuits, including vulnerability to performance degradation and the lack of clear standard fault models (Garcia et al., 2001).
2.1.1 Comparisons between DFT and BIST Techniques
Two commonly known on-chip testing techniques are Design-for-Test (DFT) and Built-In Self Test. DFT is a technique that reduces the difficulty of testing by adding or modifying some hardware on chip. The scan DFT methodology (Wei et al., 1997), in which the sequential storage elements allow normal operation and test modes, has become a standard DFT practice followed by industry. In normal mode, the storage elements take their stimulus from combinational logic and their response feeds back into combinational logic. In test mode, the storage elements are reconfigured as one or more shift registers, and each such configuration is known as a scan chain. The stimulus vector can be shifted serially into this scan chain. The chip is then allowed to function in normal mode and the responses for a test vector are captured in the storage elements. The response can be shifted out and compared with reference responses in order to test the chip for functional correctness. The use of DFT scan design incurs two major penalties, i.e. an area overhead due to the additional scan flip-flops and a performance overhead caused by the on-path multiplexers in the scan flip-flops. Besides, scan also has the disadvantage of greater power dissipation, as there is generally more switching activity during scan mode than in normal operation. Thus, a slow clock is commonly used for scan operation in order to reduce average power dissipation.
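The scan flow just described (shift a stimulus in, apply one normal-mode capture clock, shift the response out, compare with a reference) can be sketched in a few lines of Python. The 4-bit chain and the XOR-based combinational block below are purely illustrative assumptions, not part of any cited scheme.

```python
class ScanChain:
    """Toy model of the scan-DFT flow: shift a stimulus vector into the
    storage elements, capture the combinational response in normal mode,
    then shift the response out for comparison."""

    def __init__(self, length):
        self.flops = [0] * length

    def shift_in(self, stimulus):
        for bit in stimulus:                      # serial shift, one bit per clock
            self.flops = [bit] + self.flops[:-1]

    def capture(self, combinational_logic):
        # One normal-mode clock: the logic response is loaded into the flops.
        self.flops = combinational_logic(self.flops)

    def shift_out(self):
        out = list(self.flops)
        self.flops = [0] * len(self.flops)
        return out

# Hypothetical 4-bit combinational block: each output is the XOR of
# neighbouring inputs (purely illustrative).
def toy_logic(state):
    return [state[i] ^ state[(i + 1) % len(state)] for i in range(len(state))]

chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])          # test mode: load the stimulus
chain.capture(toy_logic)              # normal mode: capture the response
response = chain.shift_out()          # test mode: unload the response
expected = toy_logic([1, 1, 0, 1])    # serial shifting leaves the stimulus reversed
print(response == expected)           # True for a fault-free block
```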
BIST refers to techniques and circuit configurations that enable a chip to test itself internally and automatically (McCluskey, 1985; Yamani and McCluskey, 2003). In this BIST technique, test patterns are generated and test responses are analyzed completely on chip. Pattern generator logic reduces test data volume through the shifting process for easily detectable faults. The BIST technique offers various advantages over DFT testing techniques and ATE. First, the test circuitry is incorporated on chip and no external tester is required. Second, test operations can be performed at the normal clock rate. Third, the test operation can be performed after the chip has been incorporated in the system. However, two difficulties encountered in BIST techniques are area overhead and performance degradation. Incorporation of the self-testing capability requires additional hardware on-chip, which may increase the silicon area and manufacturing costs. Applying BIST techniques to analog mixed-signal circuits can be considered primarily in two approaches, i.e. functional and structural tests. Functional BIST evaluates circuit functionality and compares it against a set of functional specifications. This functional BIST requires an analog stimulus and measurement of analog outputs. Therefore, the quality of the applied input and the precision of the measured outputs are relatively important factors, depending strongly on the test setup and equipment. On the other hand, structural BIST is based on physical information of the manufactured device. Therefore, the applied test patterns can be chosen optimally and fault coverage can be evaluated based on fault models.
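On-chip pattern generation for structural BIST is commonly built around a linear feedback shift register (LFSR). The hedged Python sketch below models a small 4-bit LFSR producing pseudo-random test patterns; the register width, seed, and tap positions are illustrative choices rather than values from any cited design.

```python
def lfsr_patterns(seed=0b1001, taps=(3, 0), width=4, count=10):
    """Generate pseudo-random test patterns with a simple LFSR.
    With this shift direction, tap positions (3, 0) cycle the register
    through all 2**4 - 1 = 15 non-zero states (maximal length)."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

for pattern in lfsr_patterns():
    print(format(pattern, '04b'))   # each pattern would drive the CUT inputs once
```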
DESIGN PROCEDURE FOR PROPOSED DESIGN
BIST Techniques Based on Input Vectors
The BIST techniques based on input vectors are considered with regard to the input stimulus generator and can be classified as vector-based and vectorless techniques. The vector-based BIST techniques refer to testing techniques that process the output signals for fault information by applying test input stimuli such as a DC input stimulus, a sinusoidal input stimulus, or other non-sinusoidal signals. For DC input stimuli, the measurement of the DC signal through the use of a comparator (Venuto, 1995) yields low cost and low area overhead. A multiple DC measurement (Sasho, 1998) was also suggested in order to increase the fault coverage. Although these DC testing schemes are relatively simple and sufficient for some cases in the pre-screening process, the fault coverage is not high, since some hard-to-detect faults do not produce fault signatures through DC characteristics. Sinusoidal signals have also been used as an input vector (Current and Chu, 2001; Marcia, 2005), with which AC characteristics can be verified for circuit functionality. The use of a sinusoidal input vector offers high fault coverage, and a multiple-frequency test can be performed for functional analysis in high-precision testing. However, the implementation of high-precision sinusoidal signal generation on-chip is relatively complex, requiring a large chip area. In addition, a long test time for AC analysis is needed in order to correctly characterize the circuit. Non-sinusoidal input stimuli such as a pulse stimulus (Singh et al., 2004) and a pseudo-random input stimulus (Marzocca and Corsi, 2002) have also been presented for some analog LTI circuits in LSI systems.
On the other hand, the vectorless BIST techniques refer to testing that reconfigures the CUT in order to generate output signals automatically with no input stimuli applied. Oscillation-Based Test (OBT) has been proposed for testing different classes of analog and mixed-signal circuits (Arabi and Kaminska, 1996; Zarnik, 2000; Harzon and Sun, 2006). During test mode, the circuit is transformed into an oscillator and the frequency of oscillation is measured. Fault detection is based on the comparison of the measured oscillation frequency of the CUT with a reference value obtained from a fault-free circuit operating under the same test conditions. This OBT method is suitable for switching circuits, such as switched-capacitor circuits, that can easily be transformed through switching mechanisms. Another vectorless BIST technique is the current testing approach, which employs current sensors to detect the magnitude of the DC quiescent current (IDDQ) through a resistor connected between the CUT and the supply voltage (Rajsuman, 2000). The detected IDDQ is subsequently compared with reference currents. Although this current testing approach has successfully been applied to digital circuits and can potentially enhance fault coverage, IDDQ testing suffers from power supply variation and ground shift.
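As a minimal illustration of the oscillation-based test decision described above, the sketch below compares a measured oscillation frequency against the fault-free reference within a tolerance band; the 3% band and the example frequencies are assumed values chosen only for the example.

```python
def obt_pass(measured_hz, reference_hz, tolerance=0.03):
    """Oscillation-based test decision: the CUT is reconfigured as an
    oscillator and its measured frequency is compared with the fault-free
    reference; the 3% tolerance band is an assumed value."""
    return abs(measured_hz - reference_hz) / reference_hz <= tolerance

# Hypothetical measurements (Hz)
print(obt_pass(10_150.0, 10_000.0))   # 1.5% deviation -> pass
print(obt_pass(11_200.0, 10_000.0))   # 12% deviation  -> fault flagged (False)
```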
2.1.3 BIST Techniques Based on Test Operation Modes
The BIST techniques based on test operation modes are considered with regard to the input isolation circuitry and can be classified as off-line and on-line test methods. In off-line BIST methods, the CUT suspends normal operation and enters a test mode when the appropriate test method is applied. The off-line test operation can generally be executed either through ATE or through the use of BIST circuitry. This off-line test has been realized widely in most analog mixed-signal testing; for instance, all of the BIST techniques summarized above follow off-line test methods. In on-line BIST methods, the outputs from the CUT are tested during normal operation. This on-line test operation can be achieved by a coding scheme embedded in the circuit design. For linear analog filters, continuous checksums have been proposed by Chatterjee (1993) through a cascade of analog integrators, which generate a non-zero signal in the case of an error in the transfer function of the circuit. An analog checker employing the on-line test method verifies an operational amplifier by means of its normal input and output signals (Velasco, 1998). An on-line BIST method for an analog biquad filter based on system identification (Cota et al., 1999) was studied by comparing the observed and expected outputs concurrently through an adaptive filter in the digital domain. Nonetheless, most BIST techniques are not suitable for on-line test, since the circuit has its topology modified during the test or its input signal is controlled by the test mechanism. Therefore, the on-line test method requires the development of test strategies that continuously evaluate the operation of the circuit during normal operation. This problem of on-line monitoring becomes more important with the use of sub-micron technologies, which are more sensitive to noise and radiation effects.
2.1.4 BIST Techniques Based on Domains of Fault Analysis
The BIST techniques based on domains of fault analysis are considered with regard to the output response analyzer and can be classified as operating in either the digital or the analog domain. In the digital domain, the output characterization process initially converts analog fault signatures into digital signals using a sigma-delta A/D converter (Dufaza and His, 1996) or a voltage comparator (Czaja, 2006). Such digital signals are subsequently employed for fault detection by means of a digital comparator (Roh and Abraham, 2000) or a digital counter (Cassol et al., 2003), incorporating stored fault-free bit streams. Although characterization in the digital domain offers convenience in the comparison and storage of digital fault signatures, the implementation of A/D converters and digital counters is relatively complicated, resulting in hardware overhead and ultimately necessitating testing of the BIST hardware itself. On the other hand, the output characterization process in the analog domain generally captures fault signatures by means of a sampling process and detects faults through voltage comparison within allowable tolerance margins (Yu et al., 2004; Stroud, 2006). Characterizing the output response in the analog domain has been realized extensively in most cost-effective BIST systems, as both catastrophic and parametric faults can be detected instantaneously with low area overhead.
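A hedged sketch of the digital-domain response analysis described above: the comparator output bit stream is checked against a stored fault-free bit stream and a fault is flagged when the mismatch count exceeds an allowed threshold. The stream contents and the threshold are illustrative assumptions.

```python
def count_based_fault_check(observed_bits, golden_bits, max_mismatches=2):
    """Digital-domain response analysis: compare the comparator output
    bit stream with the stored fault-free bit stream and pass the CUT
    only if the mismatch count stays within the allowed tolerance.
    The threshold of 2 mismatches is an illustrative assumption."""
    mismatches = sum(o != g for o, g in zip(observed_bits, golden_bits))
    return mismatches <= max_mismatches

golden   = [1, 1, 0, 1, 0, 0, 1, 0]   # stored fault-free bit stream (assumed)
observed = [1, 1, 0, 0, 0, 0, 1, 0]   # one mismatch -> still within tolerance
print(count_based_fault_check(observed, golden))   # True (pass)
```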
Dissertation Developments
Motivations
With reference to the previously developed BIST techniques described above, there are three major motivations that have led to the research and development of this dissertation. Firstly, there is a constant demand for new BIST techniques, especially for recent advanced analog mixed-signal LTI systems, with low cost, low testing time, and low performance degradation. This demand for new BIST techniques has continuously attracted research activity, since BIST has only recently been introduced to the chip manufacturing industry and only a few BIST techniques have been implemented. Unlike scan DFT, which has been integrated in digital circuits, not many BIST techniques have been adopted by the manufacturing industry yet.
Secondly, there is a need for simple and compact BIST techniques for particular analog mixed-signal systems. Although a number of existing BIST techniques have demonstrated high fault coverage, the extra BIST circuitry is sometimes even more complicated than the existing CUT itself, i.e. it involves difficult operations and requires extra external hardware. Such complex existing BIST systems may not be suitable for on-chip integration, and therefore simple BIST circuits and operations are still preferable, especially for compact analog mixed-signal CUTs such as small amplifiers or filters. Lastly, there is a lack of studies on BIST circuits with calibration for some sensitive and complicated analog mixed-signal circuits such as oscillators and phase-locked loops. Since extra BIST circuits may introduce some penalties to the CUTs, applying BIST techniques to these circuit types remains difficult owing to concerns about performance degradation. In addition to these three motivations, the improvement of previously proposed BIST techniques is also important in order to extend BIST performance and functionality.
Research Objectives
The objective of this dissertation is to develop new BIST techniques and implementations for catastrophic fault detection and parametric fault calibration. The BIST techniques are expected to be versatile for each specific type of analog and mixed-signal circuit, and capable of yielding high fault coverage. In addition, this dissertation also aims to design and implement the corresponding BIST systems in CMOS technology, yielding low area overhead, low power consumption, and low performance degradation.
A Strategy for BIST Architecture
Generally, a BIST strategy for analog mixed-signal circuits can be considered in two architectures: the classical and the more recent BIST test strategy. The classical test strategy exploits a common BIST for all analog sections, which can be considered functional testing. The input signal stimuli are applied to all analog sections and a single expected output is solely employed for fault signature analysis. The BIST circuit uses its own control system, and the digital sections are scanned independently of the analog sections. Although this classical strategy can be implemented simply, the fault coverage and testability are relatively low, since different analog blocks exhibit different characteristics and functionalities. In addition, accessibility for fault localization cannot be achieved, since the test operation has to be carried out across the analog sections as a whole.
BASIC ARCHITECTURE OF BIST
In the more recent test strategy, each mixed-signal functional block is tested independently through different test techniques. These independent tests not only yield higher fault coverage, as each specific circuit is tested based on its own functions, but also offer access to each individual circuit. As mixed-signal systems have advanced, the test control operation can be achieved from the digital section. In addition, compliance between analog and digital boundary-scan modules has been researched intensively, and therefore cooperative testing between the digital and analog sections is possible.
2.2 INTRODUCTION TO HAMMING CODE
HAMMING ENCODER AND DECODER
In telecommunication, Hamming codes are a family of linear error-correcting codes that generalize the Hamming(7,4)-code, and were invented by Richard Hamming in 1950. Hamming codes can detect up to two-bit errors or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three.
In mathematical terms, Hamming codes are a class of binary linear codes. For each integer r ≥ 2 there is a code with block length n = 2^r − 1 and message length k = 2^r − r − 1. Hence the rate of Hamming codes is R = k/n = 1 − r/(2^r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2^r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the punctured Hadamard code. The parity-check matrix has the property that any two columns are pairwise linearly independent.
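The parameter formulas above can be checked numerically; the short sketch below prints (n, k) and the rate for a few values of r. Note that r = 2 gives the (3,1) repetition code discussed later and r = 3 gives the classic (7,4) code.

```python
# Quick check of the Hamming code parameters given above:
# block length n = 2**r - 1, message length k = 2**r - r - 1, rate R = k/n.
for r in range(2, 6):
    n = 2**r - 1
    k = 2**r - r - 1
    print(f"r={r}: ({n},{k}) code, rate = {k/n:.3f}")
# r=3 prints the classic (7,4) code with rate 4/7 ~= 0.571.
```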
Due to the limited redundancy that Hamming codes add to the data, they can only detect and correct errors when the error rate is low. This is the case in computer memory (ECC memory), where bit errors are extremely rare and Hamming codes are widely used. In this context, an extended Hamming code having one extra parity bit is often used. Extended Hamming codes achieve a Hamming distance of four, which allows the decoder to distinguish between when at most one one-bit error occurs and when any two-bit errors occur. In this sense, extended Hamming codes are single-error correcting and double-error detecting, abbreviated as SECDED.
History
Richard Hamming, the inventor of Hamming codes, worked at Bell Labs in the 1940s on the Bell Model V computer, an electromechanical relay-based machine with cycle times in seconds. Input was fed in on punched cards, which would invariably have read errors. During weekdays, special code would find errors and flash lights so the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job.
Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to the unreliability of the card reader. Over the next few years, he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950, he published what is now known as Hamming Code, which remains in use today in applications such as ECC memory.
Codes predating Hamming
A number of simple error-detecting codes were used before Hamming codes, but none was as effective as Hamming codes at the same space overhead.
Parity
Parity adds a single bit that indicates whether the number of ones (bit-positions with values of one) in the preceding data was even or odd. If an odd number of bits is changed in transmission, the message will change parity and the error can be detected at this point; however, the bit that changed may have been the parity bit itself. The most common convention is that a parity value of one indicates that there is an odd number of ones in the data, and a parity value of zero indicates that there is an even number of ones. If the number of bits changed is even, the check bit will be valid and the error will not be detected.
Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, a successful transmission could take a long time or may never occur. However, while the quality of parity checking is poor, since it uses only a single bit, this method results in the least overhead.
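A minimal Python sketch of the single parity bit scheme just described, showing that an odd number of flipped bits is detected (without locating the error) while an even number of flips goes unnoticed. The data word is an arbitrary example.

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of ones is even."""
    return data_bits + [sum(data_bits) % 2]

def parity_ok(word):
    """True if the received word still has an even number of ones."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 1])
print(parity_ok(word))    # True: no error

word[3] ^= 1              # single-bit error
print(parity_ok(word))    # False: detected, but the error location is unknown

word[5] ^= 1              # second error (even number of flips)
print(parity_ok(word))    # True again: the double error is missed
```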
Two-out-of-five code
A two-out-of-five code is an encoding scheme which uses five bits consisting of exactly three 0s and two 1s. This provides ten possible combinations, enough to represent the digits 0–9. This scheme can detect all single bit-errors, all odd numbered bit-errors and some even numbered bit-errors (for example the flipping of both 1-bits). However it still cannot correct for any of these errors.
Repetition
Another code in use at the time repeated every data bit multiple times in order to ensure that it was sent correctly. For instance, if the data bit to be sent is a 1, an n = 3 repetition code will send 111. If the three bits received are not identical, an error occurred during transmission. If the channel is clean enough, most of the time only one bit will change in each triple. Therefore, 001, 010, and 100 each correspond to a 0 bit, while 110, 101, and 011 correspond to a 1 bit, as though the bits count as "votes" towards what the intended bit is. A code with this ability to reconstruct the original message in the presence of errors is known as an error-correcting code. This triple repetition code is a Hamming code with m = 2, since there are two parity bits, and 2^2 − 2 − 1 = 1 data bit.
Such codes cannot correctly repair all errors, however. In our example, if the channel flips two bits and the receiver gets 001, the system will detect the error, but conclude that the original bit is 0, which is incorrect. If we increase the number of times we duplicate each bit to four, we can detect all two-bit errors but cannot correct them (the votes "tie"); at five repetitions, we can correct all two-bit errors, but not all three-bit errors.
Moreover, the repetition code is extremely inefficient, reducing throughput by three times in our original case, and the efficiency drops drastically as we increase the number of times each bit is duplicated in order to detect and correct more errors.
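The majority-vote decoding of the n = 3 repetition code can be sketched directly; the example reproduces the failure case mentioned above, where two flips within one triple decode to the wrong bit. The data word is an arbitrary example.

```python
def repeat3_encode(bits):
    """n = 3 repetition code: send every data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def repeat3_decode(received):
    """Majority vote over each triple (the 'votes' described above)."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

data = [1, 0, 1]
sent = repeat3_encode(data)            # [1,1,1, 0,0,0, 1,1,1]

sent[4] ^= 1                           # one flip in the middle triple
print(repeat3_decode(sent) == data)    # True: the single error is corrected

sent[3] ^= 1                           # a second flip in the same triple
print(repeat3_decode(sent))            # the middle bit now decodes to 1 -> wrong
```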
Hamming codes
If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a seven-bit message, there are seven possible single bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error.
Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words with seven bits, Hamming described this as an (8,7) code, with eight bits in total, of which seven are data. The repetition example would be (3,1), following the same logic. The code rate is the second number divided by the first, for our repetition example, 1/3.
Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, so one bit flip can be detected but not corrected, and any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. It can correct one-bit errors or detect but not correct two-bit errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping three bits can be detected, but not corrected. When three bits flip in the same group there can be situations where attempting to correct will produce the wrong code word. In general, a code with distance k can detect but not correct k − 1 errors.
Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the code rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data.
General algorithm
The following general algorithm generates a single-error correcting (SEC) code for any number of bits.
• Number the bits starting from 1: bit 1, 2, 3, 4, 5, etc.
• Write the bit numbers in binary: 1, 10, 11, 100, 101, etc.
• All bit positions that are powers of two (have only one 1 bit in the binary form of their position) are parity bits: 1, 2, 4, 8, etc. (1, 10, 100, 1000)
• All other bit positions, with two or more 1 bits in the binary form of their position, are data bits.
• Each data bit is included in a unique set of 2 or more parity bits, as determined by the binary form of its bit position.
Parity bit 1 covers all bit positions which have the least significant bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc.
Parity bit 2 covers all bit positions which have the second least significant bit set: bit 2 (the parity bit itself), 3, 6, 7, 10, 11, etc.
Parity bit 4 covers all bit positions which have the third least significant bit set: bits 4–7, 12–15, 20–23, etc.
Parity bit 8 covers all bit positions which have the fourth least significant bit set: bits 8–15, 24–31, 40–47, etc.
In general each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
The form of the parity is irrelevant. Even parity is simpler from the perspective of theoretical mathematics, but there is no difference in practice.
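The general algorithm above translates almost line for line into code. The hedged sketch below builds a single-error-correcting codeword by placing even-parity bits at the power-of-two positions, then corrects a corrupted codeword using the syndrome; the (7,4) example data word is an arbitrary illustration.

```python
def hamming_encode(data_bits):
    """Single-error-correcting Hamming encoder following the general
    algorithm above: positions are numbered from 1, powers of two hold
    parity bits, all other positions hold data bits, and parity bit p
    covers every position whose bitwise AND with p is non-zero."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:      # smallest r with 2**r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)             # index 0 unused; positions 1..n

    # Place the data bits at the non-power-of-two positions.
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1) != 0:     # not a power of two -> data position
            code[pos] = next(it)

    # Compute each parity bit (even parity over the positions it covers).
    for p in (1 << i for i in range(r)):
        code[p] = sum(code[pos] for pos in range(1, n + 1)
                      if pos & p and pos != p) % 2
    return code[1:]

def hamming_correct(codeword):
    """Recompute the parity checks; the syndrome (read as a binary number)
    is the position of a single erroneous bit, or 0 if the checks pass."""
    n = len(codeword)
    code = [0] + list(codeword)
    syndrome = 0
    p = 1
    while p <= n:
        if sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p
        p <<= 1
    if syndrome:
        code[syndrome] ^= 1          # flip the bit the syndrome points at
    return code[1:]

encoded = hamming_encode([1, 0, 1, 1])          # a (7,4) codeword
corrupted = list(encoded); corrupted[5] ^= 1    # flip the bit at position 6
print(hamming_correct(corrupted) == encoded)    # True: the error is corrected
```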
Hamming codes with additional parity (SECDED)
Hamming codes have a minimum distance of 3, which means that the decoder can detect and correct a single error, but it cannot distinguish a double bit error of some codeword from a single bit error of a different codeword. Thus, they can detect double-bit errors only if correction is not attempted.
To remedy this shortcoming, Hamming codes can be extended by an extra parity bit. This way, it is possible to increase the minimum distance of the Hamming code to 4, which allows the decoder to distinguish between single bit errors and two-bit errors. Thus the decoder can detect and correct a single error and at the same time detect (but not correct) a double error. If the decoder does not attempt to correct errors, it can detect up to three errors.
This extended Hamming code is popular in computer memory systems, where it is known as SECDED (abbreviated from single error correction, double error detection). Particularly popular is the (72,64) code, a truncated (127,120) Hamming code plus an additional parity bit, which has the same space overhead as a (9,8) parity code.
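The SECDED behaviour described in this subsection reduces to a small decision table over the Hamming syndrome and the extra overall parity check. The sketch below encodes only that decision logic; it assumes the syndrome and the overall parity check are computed elsewhere (for example, as in the earlier Hamming sketch plus one parity bit over the whole codeword).

```python
def secded_classify(syndrome_nonzero, overall_parity_ok):
    """Decision logic for an extended Hamming (SECDED) decoder, assuming
    one extra parity bit computed over the entire Hamming codeword:
      - syndrome == 0 and parity OK     -> no error
      - syndrome != 0 and parity fails  -> single error (correct it)
      - syndrome != 0 and parity OK     -> double error (detect only)
      - syndrome == 0 and parity fails  -> the extra parity bit itself flipped
    """
    if not syndrome_nonzero:
        return "no error" if overall_parity_ok else "parity-bit error"
    return "double error (detected)" if overall_parity_ok else "single error (correctable)"

print(secded_classify(syndrome_nonzero=True,  overall_parity_ok=False))  # single error
print(secded_classify(syndrome_nonzero=True,  overall_parity_ok=True))   # double error
print(secded_classify(syndrome_nonzero=False, overall_parity_ok=True))   # no error
```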