Wednesday, September 11, 2013

DFT Q&A - Part 26

What aspects of DFT are a concern whenever there is a technology node change?

Ans: DFT changes considerably with every node; in fact, DFT is expanding even as the technology shrinks.
Some of the things to take care of are:

1. Power: This is the biggest concern nowadays. As the pressure to reduce power increases, DFT gets more complex. It is part of DFT's job to make sure that test coverage is not impacted by all of the logic added to save power: power gating, multiple power domains, and retention of logic values during power-down. Apart from that, power consumption during ATPG may also force patterns to be generated separately for each power domain/clock domain.

2. Faults: As the technology shrinks, new types of faults start playing a role. Some of the fault types the industry is working on nowadays are bridging faults, fast-to-rise/fast-to-fall faults, transient faults, and many as-yet-unknown memory faults, to name a few.

3. Programmability: The biggest ask for DFT today is programmability, be it for IO, logic, or memory tests.

4. Meeting standards: As chips get integrated with other chips on the board, it becomes inevitable to comply with the various standards in circulation (e.g., IEEE 1149.1 boundary scan, IEEE 1500 core test wrappers).

5. Reliability checks: Many old methodologies become redundant with time; e.g., with increased leakage, IDDQ testing has lost much of its effectiveness. New methodologies are regularly developed to check the reliability of circuits.

6. Starting early: Moving even pattern generation up to RTL may help. With so much to do in DFT, it is never too early to start, and many companies are working on new solutions here.

Can I get some information regarding the IO BIST?

Ans: IO BIST is the terminology normally used for high-speed I/Os with loopback circuitry on TX/RX.
Since capturing data from such high-speed I/Os is not possible on a low-cost tester, BIST logic is added that transmits data through TX and then reads back, through RX, the data looped back at the I/O level. Based on the received data it sets a pass/fail flag.
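Conceptually the check is very simple, as the little sketch below shows. This is a toy Python model, not real BIST hardware: the PRBS-7 generator, the ideal loopback, and the error-injection hook are all assumptions made purely for illustration.

# Toy model of an IO BIST loopback check (illustrative only).
# TX sends a PRBS-7 stream, the I/O loops it back to RX, and a comparator
# sets the pass/fail flag by matching received bits against expected bits.

def prbs7(seed=0x7F, nbits=64):
    """Generate a PRBS-7 bit stream (polynomial x^7 + x^6 + 1)."""
    state, bits = seed, []
    for _ in range(nbits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
        bits.append(newbit)
    return bits

def io_bist_loopback(error_positions=()):
    """Return True (pass) if the looped-back data matches what TX sent."""
    tx_stream = prbs7()
    rx_stream = [b ^ 1 if i in error_positions else b   # ideal loopback,
                 for i, b in enumerate(tx_stream)]      # optional injected errors
    return rx_stream == tx_stream

print(io_bist_loopback())                      # True  -> BIST pass
print(io_bist_loopback(error_positions={10}))  # False -> BIST fail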

A memory cell is not observable or controllable during normal scan mode. How do we test the logic around the memory, i.e., how is this coverage taken care of? Are RAM sequential patterns used for this? What is done in these patterns?

Ans: RAM sequential patterns are normally used to test very small memories for which you do not want to put in memory BIST logic. They can also be used to test the logic around memories, but normally a wrapper around the memory, in which the memory output is bypassed with the memory input signals, is used to test the logic around memories (a minimal sketch of the bypass idea follows).
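As a minimal sketch of the bypass idea (the signal names are hypothetical, and real wrappers are RTL muxes rather than Python):

# Behavioural sketch of a memory bypass wrapper (hypothetical signals).
# In test mode the mux forwards the data input straight to the output,
# bypassing the RAM, so the shadow logic on the input side becomes
# observable and the logic on the output side becomes controllable.

def wrapped_mem_output(ram_q, data_in, test_mode):
    """Mux at the memory output: functional read data vs. bypassed input."""
    return data_in if test_mode else ram_q

# Functional mode: the output follows the RAM read port.
assert wrapped_mem_output(ram_q=0xA5, data_in=0x3C, test_mode=False) == 0xA5
# Test mode: the output follows the data input, so ATPG sees through the RAM.
assert wrapped_mem_output(ram_q=0xA5, data_in=0x3C, test_mode=True) == 0x3C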

During RAM sequential patterns, you need to make sure that the memory can be enabled, read enable can be set, and valid data can be driven onto the data and address bits using scan flops. Moreover, it should also be ensured that data written into the memory does not get corrupted during the shift phase, i.e., write enable should not toggle during scan shifting.
Generating RAM sequential patterns can be tricky at times, as the user needs to define the read and write cycles properly.

As an alternative to RAM sequential patterns you can try generating clock-sequential patterns to test the shadow logic around the memories.
You can do the following things:

1. Increase the sequential depth for the design

2. Also (for FastScan) you need to define the read and write clocks for the memories.
Commands:
add read control <clock_name>
add write control <clock_name>
The <clock_name> above is the clock that is used to clock the memories during scan mode; for the exact syntax and any additional arguments, refer to the tool documentation.

Another important thing to take care of before generating clock-sequential patterns: make sure you have the ATPG (functional) models for the memories.

How do we generate patterns if we have both compressed chains (of, say, 100 flops each) and uncompressed chains (of, say, 1K flops each) in the same design?
1. Do we generate patterns for both sets of chains together?
In this case the compression benefit would be lost.
2. Or do we generate patterns for the compressed chains first and then for the uncompressed chains?
While generating patterns for the compressed chains, should the uncompressed chains be defined or not?

Ans: The preferred way of generating patterns would be as follows (a pseudo-flow sketch appears after the list):
For the final tester run (to have the minimum number of patterns, hence minimum test time):
a. Generate patterns with the compressed chains.
b. Save the fault list from that run.
c. Generate top-up patterns with the uncompressed chains just to cover the faults not covered by the run above (load the fault list from the previous run and target the AU faults only).
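The pseudo-flow below makes the ordering concrete. Every method on the atpg object (setup, run_atpg, write_faults, ...) is a hypothetical stand-in for your ATPG tool's actual commands, not a real API.

# Pseudo-flow for mixed compressed/uncompressed pattern generation.
# The 'atpg' wrapper and all of its methods are hypothetical.

def generate_mixed_mode_patterns(atpg):
    # a. Generate the bulk of the patterns in compressed mode.
    atpg.setup(mode="compressed")
    atpg.run_atpg()
    atpg.write_patterns("patterns_compressed.stil")

    # b. Save the fault list so the next run knows what is already covered.
    atpg.write_faults("after_compressed.faults")

    # c. Top-up run in uncompressed mode: reload the fault list and target
    #    only the faults the compressed run left undetected (e.g. AU faults).
    atpg.setup(mode="uncompressed")
    atpg.read_faults("after_compressed.faults")
    atpg.run_atpg(target="remaining_faults")
    atpg.write_patterns("patterns_topup.stil")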

For debugging purposes you will want to be able to generate the whole set of patterns in both modes; if anything fails on the tester, it is always easier to debug in uncompressed mode.

How does the ATPG compression tool take care of head/tail flops inserted outside the compression engine (e.g., flops residing outside the decompressor and compactor in TestKompress)?

Ans: In the case of TestKompress specifically, the following commands take care of the registers outside the chains:

set edt pins input_channel -pipeline_stages
set edt pins output_channel -pipeline_stages

I am assuming the flops inserted outside the compression engine are in the form of a shift register.

Why do we need more patterns in at-speed testing compared to DC scan? Is it because achieving the same coverage requires more patterns?

Ans: The difference in pattern count between stuck-at fault testing and at-speed fault testing exists due to the difference in the type of faults.
For stuck-at fault testing you need to set a node to the value opposite to the fault you are testing and propagate it to a flop. For transition fault testing, however, you first need to set the node to a particular value, then toggle it to the opposite value, and capture the result at a flop. Hence for stuck-at it is enough to set a node to a particular value (while setting the other nodes so that the fault can propagate to a flop), while for at-speed testing you must also make sure the data reaches the next flop within the desired clock period, so you need to toggle the node to the opposite value during capture to check the timing.
So generating patterns for at-speed testing is a little trickier than for stuck-at testing and therefore results in more patterns.
To reduce your total pattern count (transition + stuck-at) you can fault-grade your patterns (a pseudo-flow sketch follows below):
1. Take the patterns generated for transition faults and fault-simulate them to get the coverage for the stuck-at fault model.
2. Save the fault list after simulating the transition patterns against the stuck-at faults.
3. Load that fault list and generate patterns only for the remaining stuck-at faults.

The final result is a reduction in the total pattern count.
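As with the compressed/uncompressed flow above, the sketch below uses a hypothetical atpg wrapper; only the order of the steps reflects the flow just described.

# Pseudo-flow for fault-grading transition patterns against stuck-at faults.
# The 'atpg' object and all of its methods are hypothetical stand-ins.

def fault_grade_for_stuck_at(atpg):
    # 1. Fault-simulate the existing transition patterns under the
    #    stuck-at fault model to see what they already cover.
    atpg.set_fault_model("stuck-at")
    atpg.read_patterns("patterns_transition.stil")
    atpg.run_fault_simulation()

    # 2. Save the stuck-at fault list with the detection credit recorded.
    atpg.write_faults("stuck_at_after_grading.faults")

    # 3. Reload the list and generate patterns only for the leftovers.
    atpg.read_faults("stuck_at_after_grading.faults")
    atpg.run_atpg(target="remaining_faults")
    atpg.write_patterns("patterns_stuck_at_topup.stil")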

Why is the at-speed (LOC) pattern count higher than the stuck-at pattern count?
Ans: For a full-scan design, stuck-at ATPG generates patterns for the combinational logic blocks between scan flip-flops. This is because every scan flip-flop can be treated as a primary input and a primary output for stuck-at ATPG purposes.
However, when testing for at-speed failures, two patterns are needed to launch a transition and capture its effect into a scan flip-flop. The ATPG therefore needs to trace back beyond one level of scan flip-flops to figure out how to obtain the appropriate "second pattern" of the two-pattern test, which means it has fewer degrees of freedom in how to assign the scan flip-flops to detect a fault, and that leads to a higher pattern count.


LOC does indeed result in many more patterns than stuck-at. However, at-speed patterns can also be applied using a launch-on-shift (LOS) timing protocol. LOS does not suffer from the sequential ATPG drawbacks of the LOC protocol and therefore results in little to no pattern increase over stuck-at. LOS can be trickier to implement due to the need for an at-speed scan-enable signal (necessary because LOS requires going from shift mode to functional mode within one system clock cycle).


Why is the scan chain deleted before placement and reconnected after routing?
Ans: This is called scan chain reordering. It is an optimisation to ensure the scan chain is connected in the most efficient way, based on the placement of the flip-flops.

At the initial stage we do not have placement information, so we just stitch the flops register by register. After placement, however, it is possible (and in practice it generally happens) that two flops stitched together at the initial stage end up far, or very far, from each other. If we kept the scan chain stitched as it was initially, a very long, weird and inefficient scan path would result (i.e., one flop at one end of the chip and the next flop in the chain at the other corner). So before placement we remove the scan chain stitching, and once placement is done we re-stitch the whole scan chain (this re-stitching is the scan chain reordering); the tool stitches the scan flops systematically, based on their physical locations. A toy version of such a reorder is sketched below.
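A greedy nearest-neighbour pass is the simplest way to picture what the tool does; real P&R tools are more sophisticated, and the flop names and coordinates here are made up.

# Toy greedy nearest-neighbour reorder of a scan chain after placement.
# Goal: stitch each flop to its closest unvisited neighbour to keep the
# total scan routing length short. Requires Python 3.8+ (math.dist).

import math

def reorder_scan_chain(placed_flops):
    """placed_flops: dict of flop name -> (x, y). Returns a stitch order."""
    remaining = dict(placed_flops)
    # Start from the flop closest to the origin (stand-in for the scan-in pad).
    current = min(remaining, key=lambda n: math.hypot(*remaining[n]))
    order = [current]
    pos = remaining.pop(current)
    while remaining:
        current = min(remaining, key=lambda n: math.dist(pos, remaining[n]))
        order.append(current)
        pos = remaining.pop(current)
    return order

flops = {"ff_a": (0, 2), "ff_b": (9, 9), "ff_c": (1, 0), "ff_d": (8, 8)}
print(reorder_scan_chain(flops))   # ['ff_c', 'ff_a', 'ff_d', 'ff_b']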

Please explain the launch and capture paths in launch-on-capture (LOC) and launch-on-shift (LOS) at-speed modes.
Ans: For an at-speed fault there is a requirement that we launch the fault. Launching means creating a transition at the fault site; so for a slow-to-rise fault, the launch changes the value from 0 to 1. Once the fault has been launched, the changed value is captured at speed in a scan flop.

Now there are two ways to launch a fault: either while the scan chain is still being loaded, or after the scan chain load is over.

If the scan chain is still being loaded (launch-on-shift), the last shift is used to launch the fault and then, in capture mode, the transition at the fault site is captured. The thing to note here is that the second-to-last shift should put a value of 0 at the fault site, and the last shift should change the value at the fault site to 1.

In launch-on-capture, at the end of the shift the fault site will have a value of 0, and then there are two clock pulses: one to change the fault-site value to 1 (the launch) and the other to capture the transition.
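The difference is easiest to see as a clock/scan-enable sequence. The sketch below simply prints the two sequences for a slow-to-rise fault; it is illustrative only, not a simulation of real hardware.

# Clock / scan-enable sequences for LOS vs. LOC at-speed tests,
# written out for a slow-to-rise fault (fault site must go 0 -> 1).

def atspeed_sequence(protocol):
    if protocol == "LOS":
        return [
            ("shift N-1",  "scan_en=1", "fault site loaded with 0"),
            ("last shift", "scan_en=1", "launch: fault site goes 0 -> 1"),
            ("capture",    "scan_en=0", "at-speed capture of the transition"),
        ]
    if protocol == "LOC":
        return [
            ("last shift", "scan_en=1", "fault site loaded with 0"),
            ("pulse 1",    "scan_en=0", "launch: fault site goes 0 -> 1"),
            ("pulse 2",    "scan_en=0", "at-speed capture of the transition"),
        ]
    raise ValueError(f"unknown protocol: {protocol}")

for step in atspeed_sequence("LOS"):
    print(step)
# Note the LOS catch: scan_en must drop from 1 to 0 between the last shift
# and the capture within one functional cycle, hence the at-speed scan enable.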

What is the compression ratio, and how do we fix the compression ratio for a design?
Ans: The compression ratio in DFT basically quantifies the reduction in TAT and TDV:

TAT: tester application time
TDV: test data volume (the size of the patterns)

It is the reduction in these two numbers when compared to a design that has just plain scan chains and no compression technique.

The scan compression technique available in most commercial tools today is to have many short scan chains inside the core, with a limited number of top-level scan ports used for loading and unloading these chains. So you require hardware in your design to support this: a decompressor to feed the many internal chains from the limited number of top-level scan input ports, and a compactor to unload the values from the many internal scan chains onto the limited number of top-level scan outputs.

The TAT and TDV reduction is achieved by the smaller number of cycles needed to load the (shorter) internal chains. In its crudest form, the compression ratio for such a technique is

effective compression ratio = (#internal chains / #external channels) x β

where β (< 1) accounts for some pattern inflation. You need to know this β for your design, and then you can control the compression ratio via the number of internal chains.
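A back-of-the-envelope example, using the relation above. All the numbers (2000 internal chains, 20 channels, β = 0.8) are made up for illustration.

# Back-of-the-envelope effective compression ratio.
# internal_chains / external_channels gives the raw ratio; beta derates
# it for pattern inflation. All values below are assumptions.

internal_chains = 2000    # short chains inside the core
external_channels = 20    # top-level scan channel pairs
beta = 0.8                # empirical derating for pattern inflation (< 1)

raw_ratio = internal_chains / external_channels   # 100x
effective_ratio = raw_ratio * beta                # ~80x

print(f"raw compression   : {raw_ratio:.0f}x")
print(f"effective (x beta): {effective_ratio:.0f}x")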

Can we have the shift frequency be the same as the functional frequency?
Ans: The maximum shift frequency depends on the maximum peak power the chip can support.

When you are shifting data on the scan chains, all the flops in the chains toggle at the same time. The peak power requirement goes up and a lot of heat is generated; if you do not account for it, the chip can burn up.

In functional mode you will practically never have all the flops toggling at the same time, so functional switching activity is much lower than shift activity.
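A rough dynamic-power comparison makes the point; the activity factors, capacitance, voltage and frequency below are all assumed numbers, not measurements.

# Rough dynamic-power comparison: P = alpha * C * V^2 * f.
# All numbers are assumptions chosen only for illustration.

def dynamic_power(alpha, c_eff, vdd, freq):
    """Classic switching-power estimate in watts."""
    return alpha * c_eff * vdd**2 * freq

C_EFF = 2e-9     # effective switched capacitance in farads (assumed)
VDD = 0.9        # supply voltage in volts (assumed)
F_FUNC = 500e6   # functional clock frequency in Hz (assumed)

# Shift mode: nearly every flop (and the logic it drives) toggles each
# cycle, so the activity factor is far higher than in functional mode.
p_func  = dynamic_power(alpha=0.15, c_eff=C_EFF, vdd=VDD, freq=F_FUNC)
p_shift = dynamic_power(alpha=0.50, c_eff=C_EFF, vdd=VDD, freq=F_FUNC)

print(f"functional        : {p_func:.2f} W")    # ~0.12 W
print(f"shift @ func freq : {p_shift:.2f} W")   # ~0.4 W, about 3x higher
# This is why the shift clock is normally run much slower than the
# functional clock, or the chain toggling is otherwise throttled.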
 

What are the factors which decide the number of scan chains?
Ans: Factors such as availability of chip I/O pins, available tester channels, and on-chip routing congestion caused by chaining storage elements in test mode limit the number of scan
chains.

LOS is a combinational ATPG algorithm and LOC is a sequential ATPG algorithm. Why?
Ans: LOC is sequential since it is essentially a double capture: the ATPG tool needs to be able to store the state of the circuit after the last shift and the first clock pulse of the capture in order to know what to expect after the second capture clock.

LOS is essentially the same as simple stuck-at, since there is only one clock pulse during capture (the launch comes from the last shift). No second state needs to be stored by the ATPG to determine how the circuit will react after that clock pulse.

