Wednesday, September 11, 2013

DFT Q&A - Part 25

What is the difference between the coverage reports from DFT DSM and from TetraMAX?
Ans: The DFT DSM coverage report gives an approximate test coverage estimate, while TetraMAX reports the actual test coverage achieved by the generated patterns.
What are ATPG modes?
Ans: TetraMAX offers three different ATPG modes: Basic-Scan, Fast-Sequential, and Full-Sequential.
Basic-Scan ATPG: In Basic-Scan mode, TetraMAX operates as a full-scan, combinational-only ATPG tool. To get high test coverage, the sequential elements need to be scan elements. In this mode, combinational ROMs can be used to gain coverage of the circuitry in their shadows.
Fast-Sequential ATPG: Fast-Sequential ATPG provides limited support for partial-scan designs. In this mode, multiple capture procedures are allowed between scan load and scan unload, allowing data to be propagated through nonscan sequential elements in the design such as functional latches, nonscan flip-flops, and RAMs and ROMs. However, all clock and reset signals to these nonscan elements must still be directly controllable at the primary inputs of the device. You enable the Fast-Sequential mode and specify its effort level by using the -capture_cycles option of the set atpg command.
Full-Sequential ATPG: Full-Sequential ATPG, like Fast-Sequential ATPG, supports multiple capture cycles between scan load and unload, thus increasing test coverage in partial-scan designs. Clock and reset signals to the nonscan elements do not need to be controllable at the primary inputs, and there is no specific limit on the number of capture cycles used between scan load and unload. You enable the Full-Sequential mode by using the -full_seq_atpg option of the set atpg command. The Full-Sequential mode supports an optional feature called Sequential Capture. Defining a sequential capture procedure in the STIL file lets you compose a customized capture clock sequence applied to the device during Full-Sequential ATPG. For example, you can define the clocking sequence for a two-phase latch design, where CLKP1 is followed by CLKP2. This feature is enabled by the -clock -seq_capture option of the set drc command. Otherwise, the tool creates its own sequence of clocks and other signals in order to target the as-yet-undetected faults in the design.

What is Fault Coverage?

Ans: Fault coverage measures the extent to which a design can be tested for the presence of manufacturing defects, as represented by the single stuck-at fault model. It is defined as:

Fault coverage = (number of detected faults) / (total number of faults)

Note that, unlike test coverage, the denominator includes undetectable faults.

What is test coverage?
Ans: Test coverage gives the most meaningful measure of test pattern quality and is the default coverage reported in the fault summary report. Test coverage is defined as the percentage of detected faults out of detectable faults.

What is ATPG effectiveness?
Ans: ATPG effectiveness is defined as the percentage of ATPG-resolvable faults out of the total faults.

Important Definitions:::::::::::::::

At-speed clock:  A pair of clock edges applied at the same effective cycle time as the full operating frequency of the device.

Capture clock/capture clock edge: The clock used to capture the final value resulting from the second vector at the tail of the path.

Capture vector:  The circuit state for the second of the two delay test vectors.

Critical path:  A path with little or no timing margin.

Delay path: A circuit path from a launch node to a capture node through which logic transition is propagated. A delay path typically starts at either a primary input or a flip-flop output, and ends at either a primary output or a flip-flop input.

Detection, robust (of a path delay fault): A path delay fault detected by a pattern providing a robust test for the fault.

Detection, non-robust (of a path delay fault): A path delay fault detected by a pattern providing a non-robust test for the fault.

False path: A delay path that does not affect the functionality of the circuit, either because it is impossible to propagate a transition down the path (combinationally false path) or because the design of the circuit does not make use of transitions down the path (functionally false path).
Launch clock/launch clock edge: The launch clock is the first clock pulse; the launch clock edge creates the state transition from the first vector to the second vector.

Launch vector: The launch vector sets up the initial circuit state of the delay test.

Off-path input: An input to a combinational gate that must be sensitized to allow a transition to flow along the circuit delay path.

On-path input: An input to a combinational gate along the circuit delay path through which a logic transition will flow. On-path inputs would typically be listed as nodes in the Path Delay definition file.

Path: A series of combinational gates, where the output of one gate feeds the input of the next stage.

Path delay fault: A circuit path that fails to transition in the required time period between the launch and capture clocks.

Scan clock: The clock applied to shift scan chains. Typically, this clock is applied at a frequency slower than the functional speed.

Test, non-robust: A pair of at-speed vectors that test a path delay fault; fault detection is not guaranteed, because it depends on other delays in the circuit.

Test, robust: A pair of at-speed vectors that test a path delay fault independent of other delays or delay faults in the circuit.

What is Adaptive Scan?
Ans: Adaptive scan is similar to scan at the chip boundary, but it inserts a combinational decompression structure between the chip scan pins and numerous short internal scan chains. Compressed scan input values are loaded into the adaptive scan module, which distributes them internally through an association between an external scan pin and an internal scan chain. To maximize test coverage, the association adapts to the needs of ATPG to supply the required values in the scan cells, hence the name "adaptive scan."
Adaptive scan splits the traditional scan chains into smaller segments, enabling savings in test time, while the adaptive scan module and the output compactor significantly reduce the amount of test data needed to comprehensively test the chip, lowering ATE memory requirements and leaving room to add DSM test patterns.

The following key benefits and features are associated with adaptive scan technology:
• Provides 10-50X test time and test volume reduction
• Same high test coverage and ease of use as traditional scan
• No impact on design timing or physical design implementation
• 1-pass test compression synthesis flow
• Hierarchical Adaptive Scan Synthesis
• Boundary scan synthesis and compliance checking to the 1149.1 standard
What are the design guidelines for getting good fault coverage and error-free DFT?
Ans:
• Use a fully synchronous design methodology, using positive-edge clocking only if possible. Generated or gated clocks should be properly planned, documented, and collected in one module at the top level.

• For gated or derived clocks, implement a test mode that drives all the flip-flop (FF) clocks from a single test clock. Clock skew for this test clock should be properly balanced so there are no hold violations on any of the registers during both scan shift and normal mode.

• Provide proper synchronization for signals crossing clock domains, different edges of the same clock, or asynchronous inputs. Such untestable synchronization logic should be isolated in a separate module. Create patterns for this asynchronous logic separately.

• A lock-up latch should be inserted in a signal crossing a clock domain or clock edge if all the FFs are to be part of the same scan chain.

• Don’t use clocks in combinational logic or as the data/set/reset input to a FF.

• All asynchronous sets/resets should be directly controllable from the primary inputs of the chip. If there are any internally derived sets/resets, it should be possible to disable them using one or more primary inputs.

• Don’t use asynchronous sets/resets in combinational logic or as the data input to a FF.

• Avoid using registers with both set and reset functionality.

• Avoid latches in your design. If there are any latches in the design, care should be taken to make them transparent during the scan test.

• Avoid internal tri-state buses (consider a MUX architecture instead). If they cannot be avoided, implement bus control logic to ensure that one and only one driver is active on the bus during scan test (no bus conflicts, no floating buses).

• For external tri-states, bring out three signals for each tri-state pad: input, output, and enable.

• No combinational feedback loops (especially when integrating the sub-blocks).

• Avoid large logic cones; they increase ATPG runtimes exponentially.

• Avoid redundant logic (reconvergent fan-outs are a good indication of redundant logic); it creates undetectable faults in the netlist. Add transparent or observability FFs in the design to increase fault coverage.

• Use ‘logic_high’ and ‘logic_low’ port signals for all sub-blocks instead of VDD/VSS or ‘tie_1’/‘tie_0’ cells. Put ‘don’t_touch’ on these ports so that they are not removed and can be used to connect the ‘tie_off’ nets generated during synthesis. These ‘logic_high’ and ‘logic_low’ ports can be connected to a ‘tie_macro’ (these macros contain scannable FFs) at the top level, making all these nets testable.

• Default values on buses should use alternating ‘logic_high’ and ‘logic_low’ connections to increase fault coverage.

• Balance the scan chains to approximately equal length, and limit this length according to the tester memory limitation. Here is one generic formula for calculating the number of scan patterns from the tester memory and maximum scan chain length:

#scan_patterns < (tester_mem_per_chain - max_chain_length) / max_chain_length
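As a quick sanity check, the formula above can be evaluated in a few lines of Python (the tester memory and chain length below are hypothetical):

```python
def max_scan_patterns(tester_mem_per_chain, max_chain_length):
    # One extra chain load is consumed up front, so the pattern count
    # is bounded by (mem_per_chain - max_chain_length) / max_chain_length.
    return (tester_mem_per_chain - max_chain_length) // max_chain_length

# Hypothetical tester: 8 Mbits of vector memory per chain, 400-bit chains.
print(max_scan_patterns(8_000_000, 400))  # 19999
```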

• Make sure the ‘scan_enable’ signal is properly buffered so that scan tests can be run at higher frequencies.

• Use only cells (including memories) that have the appropriate DFT/ATPG models available.

• Provide debug capability for a memory failure as part of the memory test methodology.

• Disable the memory during scan shift, and provide a bypass mode to allow fault coverage of the logic around the memory I/O.

• Provide a power-down mode for the memory, analog, and other power-consuming blocks during the IDDQ test.

• Verify there are no hold-time violations on any registers in both scan shift and normal mode after test insertion.

• Plan chip-level scan issues before starting the block-level design.

• Properly review the test methodology for memory, analog, and other custom blocks. For analog IP blocks or hardened IP blocks (with no internal scan chains), provide a bypass option so the surrounding logic can be tested.

• Properly review all the different test modes of the chip, with detailed documentation of the primary pin assignments during these test modes.

• Properly review any additional test mode control logic.

Important formulas:::::::::::

Test Application Time Reduction = (Length of longest scan chain in scan mode) / (Length of longest scan chain in ScanCompression_mode)

Scan Test Data Volume = 3 * Length of longest scan chain * number of scan chains

Scan Compression Test Data Volume = Length of longest scan chain * (Number of scan inputs + 2 * Number of scan outputs)

Test Data Volume Reduction = (Scan Test Data Volume) / (Scan Compression Test Data Volume)
Why do we lose coverage when we constrain pins?
Ans: In general, whenever you constrain any pin of your device, you take away the ATPG tool's ability to toggle that pin and check it (and its effects) in both states. Sometimes constraining a pin has a negligible effect; sometimes it has far-reaching effects on fault coverage.
Why is the at-speed (LOC) pattern count higher than the stuck-at pattern count?
Ans: For a full-scan design, stuck-at ATPG generates patterns for the combinational logic blocks between scan flip-flops. This is because every scan flip-flop can be treated as a primary input and a primary output for stuck-at ATPG purposes.
However, when testing for at-speed failures, two patterns are needed to launch a transition and capture its effect into a scan flip-flop. Therefore, ATPG needs to trace back beyond one level of scan flip-flops to figure out how to get the appropriate second pattern of the two-pattern test. This gives it fewer degrees of freedom in how to assign the scan flip-flops to detect a fault, which leads to a higher pattern count.
Say some flops in my design work at a low frequency. How can we take care of these lower-frequency flops when doing at-speed testing?
Ans: It depends on whether you have independent scan clocks to control the different clock domains. If so, you can generate patterns that cover all the domains, and you just need to mask the boundaries between domains.

But that's not the normal case. Many times people will use one scan clock to drive the whole circuit - and in this case, you will need to generate patterns for each clock domain separately, while masking or black boxing all the other domains.
What needs to be taken care of during scan stitching to get good coverage?

Ans: If you are using Mentor DFTAdvisor or Synopsys DFT Compiler, cleaning up pre-stitch DRC errors and most of the warnings (especially clock warnings) will generally lead to good fault coverage.

If coverage is still low after cleaning DRC errors and warnings, there may be issues inherent to the design that cause low coverage (redundant logic, complex reconvergent fanouts, black boxes, constrained nets, etc.).
Both Mentor and Synopsys provide ways to analyze low fault coverage in their ATPG tools. Some RTL analysis tools may also be useful for finding these kinds of problems (see http://www.dftdigest.com/miscellaneous/rtl-design-for-test/ ).
How do you toggle reset to get coverage?
Ans: If the reset is asynchronous (and properly bypassed during scan), you can declare the reset pin as a clock during ATPG, and ATPG will toggle it accordingly to detect faults on the reset.
If the reset is synchronous, you can treat the reset pin as a normal data pin, and ATPG should be able to cover faults on the reset.

Be careful, however, if you run transition fault ATPG. Reset usually cannot toggle at-speed, so you may not want to declare the reset as a clock when running transition fault ATPG.
You can also try to run the patterns that toggle the reset as a clock pin at a reduced speed on the tester, if you worry about transition fault coverage on reset.
During the process of ATPG, I encountered a term called "clock PO pattern." Could someone throw some light on what these patterns are?
Ans: Clock PO patterns are special patterns meant to test primary output values when those primary outputs are connected, directly or indirectly, to one of the scan clocks (usually through combinational logic or just buffers).
What is a bus primitive and a clock PO pattern?
Ans: A bus primitive is just the DFT model of a bus: a net that has more than one driver. It's important that you constrain it during test.

A clock PO pattern is a pattern that measures a primary output that has connectivity to a clock. So if a clock signal propagates through combinational logic to a primary output (PO), an ATPG vector can be created to measure the results of that propagation.

What are the advantages and disadvantages of LBIST over scan?
Ans: Logic BIST lacks the full controllability and observability that scan provides, which may be essential for debug or coverage improvement.
I understand that there should be separate clock control circuitry to select two at-speed clock pulses from a free-running PLL for transition fault testing. What about stuck-at testing? In this case we need only one pulse from the PLL to capture the response. Will there be a signal to control this behavior of the clock control circuitry?

Ans: Well, it's not strictly necessary to have on-chip control. You can source the clock from an I/O, just as long as your ATE can handle the speed you need, and the device can distribute the clock well enough. The advantage of having on-chip control is that you can use a slow tester.

As far as the stuck-at clock goes, remember that you're using the slow external clock to shift data through the scan chains. For stuck-at, you can bypass the at-speed clock control and use only the slow external clock. Yes, you'd need a control signal. If you want to increase your flexibility and get better fault coverage, you can also design your clock control to output 2, 3, or even 4 pulses, to catch faults that are only caught by sequential patterns. I've not done it myself, but I've read about such designs.

Could I know the major difference between the transition and path delay fault models, and which of them is the industry standard?

Ans: The transition delay and path delay fault models both seek to describe defects that cause slow transitions on gate inputs or outputs. The difference is that a path delay fault is more specific: the whole path, from point A to point B, is specified. It is a more stringent case than a transition fault. To detect a transition fault, the ATPG is only required to generate a pattern that propagates the fault to a scanned flop; the path selected may not be the worst-case path.

Most people in the industry begin with transition faults, because the ATPG can generate patterns with decent coverage more easily. Then, if desired, those patterns are augmented with a much smaller set of path delay tests for paths determined to be critical (perhaps those with the least margin, as determined by the timing tool).
