Wednesday, September 11, 2013

DFT Q & A - Part 2

6. If we provide controllability and observability at the ports of the area under test, will it improve the test coverage? If yes, how much improvement (in percentage) can we see?

Ans: Yes, we can see an improvement in coverage. Think about the bounding logic that we usually add for macros...

The improvement in coverage will depend on the initial coverage without the bounding logic and on the number of faults in the design under test.
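As a rough illustration (all numbers below are hypothetical), if bounding logic makes the faults shadowed by a macro controllable and observable, the coverage gain is simply their share of the total fault count:

```python
# Hypothetical numbers, just to show how the gain scales with the initial
# coverage and the number of faults made controllable/observable.
total_faults = 50000                 # faults in the design under test
detected_without_bound = 44000       # coverage = 88% without bounding logic
shadowed_by_macro = 3000             # faults uncontrollable/unobservable around the macro

coverage_before = detected_without_bound / total_faults
# Assume the bounding logic lets ATPG detect the previously shadowed faults.
coverage_after = (detected_without_bound + shadowed_by_macro) / total_faults

print(f"before: {100*coverage_before:.1f}%  after: {100*coverage_after:.1f}%  "
      f"gain: {100*(coverage_after - coverage_before):.1f}%")
```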

7. When both transition-delay and path-delay target transition faults, why should we have two different methods to detect the same?

Ans: Yes, both transition-delay and path-delay target transition faults, but they differ in the following ways:

1) Transition-delay is related to slow-to-rise or slow-to-fall faults at a particular node, whereas path-delay is related to slow-to-rise or slow-to-fall faults along a particular path.

2) The cause of a transition-delay fault at a node is a manufacturing defect at that node (a more resistive node). The cause of a path-delay fault is a manufacturing defect distributed throughout the path (a more resistive path). Let me explain this in detail with an example.

Let us assume that a path contains some 10 nodes, and that the transition delay at each node is small enough that it does not create any timing violation in transition ATPG. In path-delay testing, however, these small delays at each node accumulate and can result in the path not meeting timing. So it is a must to have path-delay in addition to transition-delay.
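Here is a toy calculation of that accumulation effect, with purely hypothetical delay numbers:

```python
# Hypothetical numbers: each node adds a small extra delay that is invisible
# to a single transition-fault check, but the extras accumulate along the path.
clock_period_ns = 1.0
nominal_path_delay_ns = 0.90          # path delay with no defect
extra_delay_per_node_ns = 0.02        # small defect-induced delay at each node
nodes_on_path = 10

# One node's extra delay alone still meets timing ...
print(nominal_path_delay_ns + extra_delay_per_node_ns <= clock_period_ns)   # True

# ... but the accumulated extra delay over the whole path does not.
total = nominal_path_delay_ns + nodes_on_path * extra_delay_per_node_ns
print(total, total <= clock_period_ns)                                      # 1.1 False
```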

Transition-delay ATPG is similar to stuck-at ATPG, except that it attempts to detect slow-to-rise and slow-to-fall nodes rather than stuck-at-0 and stuck-at-1 nodes. A slow-to-rise fault at a node means that a transition from 0 to 1 on the node does not produce the correct result at the maximum operating speed of the design. Similarly, a slow-to-fall fault means that a transition from 1 to 0 on the node does not produce the correct result at the maximum speed of the design. The transition-delay fault model targets single-point defects.

The path-delay fault model is useful for testing and characterizing critical timing paths in the design. Path-delay fault tests exercise critical paths at speed to detect whether a path is too slow because of manufacturing defects or variations.

Path-delay fault testing targets physical defects that might affect a distributed region of a chip. For example, an incorrect field-oxide thickness could lead to slower signal propagation times, which could cause transitions along a critical path to arrive too late.

If we attained 100% coverage with the transition ATPG test, we would not need the path-delay test. But this is not the case in most of our designs, so how can we GUARANTEE that all the critical paths are targeted by transition ATPG? We can't; instead we give the critical paths to the ATPG tool and tell it to generate patterns for them. This is called path-delay.

8. What is a burn-in test? Why is it done?

Ans: Burn-in is the process by which device infant-mortality failures are accelerated through the application of temperature and stress voltages for specific periods of time. The goal of burn-in is to stress the device as much as possible, thereby accelerating the infant-mortality failures. Since scan-based test has higher coverage, scan test vectors are used to stress the logic portion of the device; this can be done either by performing device-level ATPG with externally applied test patterns or by on-chip LBIST. Memory BIST is used for the memories.

The burn-in test is thus a technique for estimating the lifetime of the chip by applying stress in the form of temperature, voltage and current.

High-toggle ATPG vectors are generated to sensitize the internal nodes of the chip so that the chip receives more stress.

A burn-in monitor pin is available on every SoC to check the internal status in the form of pulses; the chip will keep giving pulses until the device burns out.

9. Why do we have different kinds of SDFs, like PrimeTime SDF and Verilog SDF, when there is an Open Verilog International standard?

Ans: There are no different kinds of SDF such as a PrimeTime SDF or a Verilog SDF.

When a design is synthesized it produces a netlist, which is made of library cells only. The library cells have path delays (input to output), and there are also interconnect delays in the netlist. SDF means Standard Delay Format, so an SDF file basically contains the cell path delays and the interconnect delays between cells. The SDF format can be read and understood by all STA/simulation tools. Generally, (1) an SDF can be generated from synthesis (dc_shell) or STA (pt_shell); these SDFs are used for initial timing analysis and gate-level simulation. (2) Post-route tools also generate SDF files (or SPEF, from which you can generate SDF); these SDFs are used for final timing analysis and final gate-level simulation for sign-off/tapeout.

Each library cell can have max, min or typical delays, so you can generate SDFs based on these and have different SDF files for max, min or typical delay, but you cannot have a PrimeTime SDF or a Verilog SDF.
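To make the format concrete, here is a small hand-written sketch in Python: a hypothetical SDF fragment (cell, instance and delay values are invented for illustration), with the (min:typ:max) triplets extracted to show the three delay corners an SDF can carry:

```python
import re

# Minimal sketch, not generated by any specific tool: a hypothetical SDF fragment
# showing the two kinds of information an SDF carries -- cell IOPATH delays and
# INTERCONNECT (net) delays, each written as a (min:typ:max) triplet.
SDF_FRAGMENT = """
(DELAYFILE
  (SDFVERSION "3.0")
  (DESIGN "top")
  (TIMESCALE 1ns)
  (CELL (CELLTYPE "AND2X1") (INSTANCE u1)
    (DELAY (ABSOLUTE
      (IOPATH A Y (0.021:0.034:0.055) (0.019:0.031:0.050))
    ))
  )
  (CELL (CELLTYPE "top") (INSTANCE)
    (DELAY (ABSOLUTE
      (INTERCONNECT u1.Y u2.A (0.005:0.008:0.012))
    ))
  )
)
"""

# Pull out every (min:typ:max) triplet; an STA/simulation run would pick one
# corner (for example the max column for a max-delay analysis).
for tmin, ttyp, tmax in re.findall(r"\(([\d.]+):([\d.]+):([\d.]+)\)", SDF_FRAGMENT):
    print(f"min={tmin}  typ={ttyp}  max={tmax}  (ns)")
```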

10. If for a DFT production set we have both path-delay and transition patterns, which scheme should be run first?

Ans: It is always better to do path-delay first and then transition-delay.

Path-delay: the start flop and end flop are given by the user, i.e. the transition is launched through a well-defined path (more effective).

Transition: the start and end flops are decided by the tool, so the transition may happen through a shorter or a longer path (less effective).

1) Path-delay basically targets all the critical paths in the design.

2) Generate the path-delay patterns for all the critical paths in the design and write out the faults for the same.

3) Generate the transition-delay patterns by loading the faults from the path-delay run, so that the faults already detected in path-delay are not re-targeted (see the sketch below).

4) The major reason to follow the above sequence is that in transition-delay we are not sure whether a transition has really happened through the critical path.

(A) If we do the transition ATPG first, we are not sure whether the transition has happened through the critical path. In that case we may not have a pattern that catches the fault through the critical path, yet the fault may be detected through some other, shorter path, which still adds it to the detected list.

(B) If we run path-delay after step (A), we may then get a pattern for the critical path, but it leads to pattern redundancy because we already have a transition pattern for the same fault.
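Here is a minimal sketch of the bookkeeping implied by steps 2 and 3, using made-up fault names; a real flow would use the ATPG tool's own fault write/read mechanism instead:

```python
# Hypothetical transition fault list for the design.
all_transition_faults = {"u1/A slow-to-rise", "u1/A slow-to-fall",
                         "u2/Y slow-to-rise", "u3/B slow-to-fall"}

# Faults covered by the path-delay patterns on the user-specified critical paths
# (step 2: write out the faults detected by the path-delay run).
detected_by_path_delay = {"u1/A slow-to-rise", "u2/Y slow-to-rise"}

# Step 3: transition ATPG only targets what path-delay did not already cover,
# so those faults are not re-targeted and no redundant patterns are generated.
remaining_for_transition_atpg = all_transition_faults - detected_by_path_delay
print(sorted(remaining_for_transition_atpg))
```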

11. What do we mean by fault simulation in DFT terminology?

Ans: Fault simulation consists of simulating a circuit in the presence of faults (logical faults). Comparing the fault-simulation results with those of the fault-free simulation of the same circuit, simulated with the same applied test, we can determine the faults detected by that test.

There are several fault-simulation techniques, such as serial fault simulation, parallel fault simulation, deductive fault simulation, etc.
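Here is a minimal, tool-independent sketch of serial fault simulation on a toy circuit, with single stuck-at faults injected on its inputs:

```python
def good_circuit(a, b, c):
    # Fault-free behaviour of a toy circuit: Y = (A AND B) OR C
    return (a & b) | c

def faulty_circuit(a, b, c, fault):
    # Same circuit with one injected stuck-at fault on a primary input,
    # e.g. ("A", 0) forces input A to 0 regardless of the applied value.
    node, stuck = fault
    if node == "A": a = stuck
    if node == "B": b = stuck
    if node == "C": c = stuck
    return (a & b) | c

tests = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]          # applied test set
faults = [(n, v) for n in ("A", "B", "C") for v in (0, 1)]    # 6 stuck-at faults

detected = set()
for fault in faults:                      # serial: one fault simulated at a time
    for a, b, c in tests:
        if faulty_circuit(a, b, c, fault) != good_circuit(a, b, c):
            detected.add(fault)           # response differs from fault-free -> detected
            break

print(f"detected {len(detected)} of {len(faults)} faults")
```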

12. What is meant by coverage in DFT terminology?

Ans: The term coverage can be broadly classified as:

1. Fault coverage: This is the total number of detected faults (during testing) divided by the total number of faults in the design.

2. Test coverage: This is the total number of detected faults divided by the number of testable faults in the design.

Testable faults are the faults in the design that can be tested and observed by the patterns used during testing.

Please note that here the word "faults" refers to manufacturing defects, as DFT does not merely target functional faults but rather focuses on the circuit structure.

Coverage Calculations are used to measure the effectiveness of test patterns and test generation for a given set of faults.

Ex:

A basic AND gate has six faults:

SA1/SA0 on input A

SA1/SA0 on input B

SA1/SA0 on output Y

The total number of faults for a basic AND gate is therefore 6, and the coverage calculation comes down to how many of these faults we are able to detect.

Test coverage is the percentage detected of all detectable faults and gives the most meaningful measure of test-pattern quality.

Test Coverage = (DT + (NP + AP) * PT_credit) / (total faults - UD - (AN * AU_credit))

Fault coverage is the percentage detected of all faults; it gives no credit for undetectable faults.

Fault Coverage = (DT + (NP + AP) * PT_credit) / (total faults)

The fault-class abbreviations are listed below, followed by a worked example.

DT - detected

-DR - detected robustly

-DS - detected by simulation

-DI - detected by implication

PT - possibly detected

-AP - ATPG untestable, possibly detected

-NP - not analyzed, possibly detected

UD - undetectable

-UU - undetectable unused

-UT - undetectable tied

-UB - undetectable blocked

-UR - undetectable redundant

AU - ATPG untestable

-AN - ATPG untestable, not detected

ND - not detected

-NC - not controlled

-NO - not observed
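As a worked example of the two formulas above, with hypothetical fault counts and the commonly used default credits (50% for possibly-detected faults, 0 for ATPG-untestable faults):

```python
# Hypothetical fault counts for a design, with the usual default credits.
total_faults = 10000
DT = 9200          # detected
NP = 100           # not analyzed, possibly detected
AP = 50            # ATPG untestable, possibly detected
UD = 300           # undetectable (unused, tied, blocked, redundant)
AN = 200           # ATPG untestable, not detected
PT_credit = 0.5
AU_credit = 0.0

test_coverage  = (DT + (NP + AP) * PT_credit) / (total_faults - UD - AN * AU_credit)
fault_coverage = (DT + (NP + AP) * PT_credit) / total_faults

print(f"test coverage  = {100 * test_coverage:.2f}%")    # ~95.62%
print(f"fault coverage = {100 * fault_coverage:.2f}%")   # ~92.75%
```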

13. What is IDDQ Testing? Why is it done?

Ans: IDDQ Testing can detect certain types of circuit faults in CMOS circuits that are difficult or impossible to detect by other methods.

IDDQ testing, when used with standard functional or scan testing, provides an additional measure of quality assurance against defective devices.

IDDQ testing refers to the integrated-circuit (IC) testing method based upon measurement of the steady-state power-supply current. Iddq stands for quiescent Idd, or quiescent power-supply current. The majority of ICs are manufactured using complementary metal-oxide-semiconductor (CMOS) technology. In steady state, when all switching transients have settled down, a CMOS circuit dissipates almost zero static current; the leakage current in a defect-free CMOS circuit is negligible (on the order of a few nanoamperes). However, in the case of a defect such as a gate-oxide short or a short between two metal lines, a conduction path from the power supply (Vdd) to ground (Gnd) is formed, and the circuit then dissipates a significantly higher current. This faulty current is a few orders of magnitude higher than the fault-free leakage current. Thus, by monitoring the power-supply current, one may distinguish between faulty and fault-free circuits.
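As a rough sketch of how that current monitoring turns into a pass/fail decision (the current values and threshold below are hypothetical):

```python
# Hypothetical values: the quiescent supply current is measured after each
# IDDQ pattern settles and compared against a limit chosen a few orders of
# magnitude above the normal defect-free leakage.
LEAKAGE_NA = 5            # typical defect-free quiescent current, nanoamperes
THRESHOLD_NA = 1000       # pass/fail limit, well above normal leakage

measured_na = [4, 6, 5, 250000, 5]   # one reading per IDDQ strobe point

for i, idd in enumerate(measured_na):
    verdict = "FAIL (possible bridging/gate-oxide defect)" if idd > THRESHOLD_NA else "pass"
    print(f"strobe {i}: Iddq = {idd} nA -> {verdict}")
```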

Why do IDDQ Testing?

For functional testing, a tester applies a sequence of input data and captures the resulting sequence of output data. The output sequence is then compared against the expected behavior of the device. An advantage of functional testing is that it exercises the device as it would actually be used in the target application. However, this type of testing has only a limited ability to test the integrity of a device's internal nodes.

With functional testing only, an internal defect could slip by undetected.

The methodology for scan testing is that all the sequential elements of the device are connected into chains and used as primary inputs and primary outputs for testing purposes. Using automatic test-pattern generation (ATPG) techniques, you can test a much larger number of internal faults than with functional testing alone. The goal of ATPG is to set all nodes of the circuit to both 0 and 1, and to propagate any defects to nodes where they can be detected by the test equipment.

Using both functional and scan testing greatly increases your odds of finding an internal defect, but what if the defect is not controllable or cannot be observed? That is where IDDQ testing can help.

14. If one needs to do synthesis/STA with scan-replaced flip-flops (not stitched) and needs to generate timing and other reports, what should the values of the SE, SI and SO pins be, since the design is not stitched?

Ans:  We need not constrain the SE, SI and SO pins for synthesis / STA of a scan replaced but not stitched design. But we will not be able to do any test related STA.

15. Can you briefly describe the points to be considered while reordering the scan chain in physical design?

Ans: Scan-chain reordering needs to respect three important logical constraints:

1. Clock domain timing constraints

2. User specified scan segment positions

3. Minimizing clock domain traversals

Each active edge of each clock is considered to be in a separate clock domain. Both edges of a clock, and clocks with different timings, may be used to control the edge-triggered scan flip-flops of a scan chain.

In order to construct functional scan chains, two consecutive scan flip-flops A and B (A serially driving B)

1) must be clocked at the same time, or

2) B must be clocked before A.

In the first case, we say that A and B have compatible clock domains; in the second case, we say that A and B have incompatible clock domains.

The precedence relationship between scan flip-flops imposed by clock domain timings is translated at the scan segment level. Capture and launch times for a scan segment are respectively deduced from the capture time of its first scan cell (driven by its scan input) and the launch time of its last scan cell (driving its scan output). Therefore, the precedence relationship between scan segments can be established, and thus respected during scan segments reordering. User-specified scan segment positions are respected during scan reordering unless they violate clock domain timing constraints.

The last constraint, minimizing clock-domain traversals, takes priority over physical design information because we want the approach to be minimally intrusive in terms of adding synchronization latches. Only scan segments with compatible clock domains are reordered.

Reordering a set of scan segments with compatible clock domains consists of the following steps (a simplified sketch follows the list):

1. Identifying and marking the set of clusters containing the scan segments.

2. Determining the entry and exit points between which the scan segments are going to be reordered.

3. Ordering the previously identified clusters between the entry and exit points.

4. Reordering the scan segments within each of the ordered clusters.
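Here is a highly simplified, hypothetical sketch of that flow; a real tool would also enforce the precedence and user-specified-position constraints described above:

```python
from itertools import groupby

# Hypothetical scan segments: (name, clock domain, cluster id, cluster distance
# from the chain entry point). Only segments inside the same compatible-clock-domain
# group are reordered with respect to each other.
segments = [("segA", "clkA_rise", 2, 40.0),
            ("segB", "clkA_rise", 1, 10.0),
            ("segC", "clkB_rise", 3, 25.0),   # incompatible domain: group boundary
            ("segD", "clkA_rise", 1, 10.0)]

reordered = []
# Keep the clock-domain groups in their original, timing-legal order ...
for domain, group in groupby(segments, key=lambda s: s[1]):
    # ... and inside each compatible group, order the clusters from the chain
    # entry point outward, then reorder the segments cluster by cluster.
    reordered.extend(sorted(group, key=lambda s: (s[3], s[2])))

print([name for name, *_ in reordered])   # ['segB', 'segA', 'segC', 'segD']
```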
