71. What will be the test coverage of a design before creating patterns with ATPG (immediately after changing to ATPG mode)?
When you change from setup mode to ATPG mode for pattern generation, the tool performs learning analysis on the design and identifies some of the faults (classifying them as DI faults, Detected by Implication); most of these faults are located along the scan chain path. At this point the ATPG tool has generated no patterns.
My guess is that the fault coverage is basically all the stuck-at faults along the scan path, plus half of the scan-enable, scan-mode, and some clock and reset faults; basically anything that would be detected by the scan shift pattern, right?
If you write out a complete fault list before starting to create patterns and analyze it, maybe that would give you some clues.
This is typically the coverage that you would get along the scan-shift path (SI and Q pins of the flops, part of the clock, reset, scan-enable, and some of the I/O pads used for scan-in/scan-out). I have seen it be roughly between 12% and 16% depending on design size.
Also, your nomenclature indicates that you are referring to the Mentor ATPG tool. All other ATPG tools behave the same way, by the way.
It is not exactly correct to say that we will get coverage without generating any patterns (or before creating patterns). If we observe the coverage, it will be on the scan shift path as you mentioned, but the Mentor tool classifies all these faults as DI (Detected by Implication) only after generating chain patterns. So we need at least one pattern (i.e., the chain pattern) to detect these DI faults and get some coverage number.
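To make those numbers concrete, here is a minimal Python sketch of how the reported coverage is typically computed, assuming Mentor-style fault classes (detected = DS + DI, UD = undetectable) and the usual simplified formulas test coverage = detected / (total - UD) and fault coverage = detected / total (ignoring posdet credit). The class counts below are invented for illustration only.

    # Coverage right after the chain patterns: only DI (scan-shift path) faults
    # are detected so far. All counts are made-up example numbers.
    fault_classes = {
        "DS": 0,         # detected by simulation: none yet, no capture patterns
        "DI": 52000,     # detected by implication: SI/Q pins, scan-enable, clock path
        "UD": 8000,      # undetectable (unused, tied, blocked, redundant)
        "other": 340000, # faults not yet targeted
    }

    total    = sum(fault_classes.values())
    detected = fault_classes["DS"] + fault_classes["DI"]

    test_coverage  = detected / (total - fault_classes["UD"])
    fault_coverage = detected / total
    print("test coverage  = %5.2f%%" % (100.0 * test_coverage))   # ~13.3%
    print("fault coverage = %5.2f%%" % (100.0 * fault_coverage))  # 13.0%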
72. I am trying to improve coverage by appending functional vectors.
What tool are you using? I have heard the fault simulation time is almost intolerable.
Fault grading is a "grade" on the quality of the fault detection provided by functional vectors. I think you can fault-grade functional vectors against any kind of fault model, but I have done stuck-at only.
Say some flops in my design work at a low frequency. In that case, how can we take care of the lower-frequency flops when we do at-speed testing?
It depends upon whether you have independent scan clocks to control the different clock domains. If so, you can generate patterns that cover all the domains, and you just need to mask the boundaries between domains. But that's not the normal case. Many times people will use one scan clock to drive the whole circuit, and in that case you will need to generate patterns for each clock domain separately, while masking or black-boxing all the other domains.
First, you have to make sure that there is no path that launches in the lower-frequency domain and captures in the high-frequency domain. If you have one, add an observation flop and capture it in the low-frequency domain. Use a named capture procedure to launch and capture one clock domain at a time. Shift is not a problem because you may have separate scan clocks to shift out.
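As a rough illustration of masking the domain boundaries, here is a small Python sketch; the flop names, the flop_domain map, and the capture_fanin_domains map are all hypothetical stand-ins for information you would extract from the netlist and clock definitions, not part of any tool flow.

    # Hypothetical example data: which domain clocks each scan flop, and which
    # domains drive logic into its D input.
    flop_domain = {
        "u_core/ff_a":   "clk_fast",
        "u_core/ff_b":   "clk_fast",
        "u_periph/ff_c": "clk_slow",
    }
    capture_fanin_domains = {
        "u_core/ff_a":   {"clk_fast"},
        "u_core/ff_b":   {"clk_fast", "clk_slow"},   # cross-domain receiver
        "u_periph/ff_c": {"clk_slow"},
    }

    def flops_to_mask(capture_domain):
        """Flops pulsed in this at-speed capture window whose D input also
        depends on another clock domain: their captured values are not
        timing-reliable and should be masked (X'ed)."""
        masked = []
        for flop, fanin in capture_fanin_domains.items():
            if flop_domain[flop] != capture_domain:
                continue                      # not captured in this window
            if fanin - {capture_domain}:      # receives data across a domain boundary
                masked.append(flop)
        return masked

    print(flops_to_mask("clk_fast"))   # ['u_core/ff_b']
    print(flops_to_mask("clk_slow"))   # []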
73. Why is the at-speed (LOC) pattern count higher than the stuck-at pattern count?
For a full-scan design, stuck-at ATPG looks at generating patterns for the combinational logic blocks in between scan flip-flops. This is because every scan flip-flop can be treated as a primary input and a primary output for stuck-at ATPG purposes.
However, when testing for at-speed failures, two patterns are needed to launch a transition and capture the effect into a scan flip-flop. Therefore, ATPG needs to trace back beyond one level of scan flip-flops to figure out how to get the appropriate "second pattern" of the two-pattern test, which means it has fewer degrees of freedom in how to assign the scan flip-flops to detect a fault, and this leads to a higher pattern count.
LOC does indeed result in many more patterns than stuck-at. However, at-speed patterns can also be applied using a launch-on-shift (LOS) timing protocol. This does not suffer from the same sequential ATPG drawbacks as the LOC protocol and therefore results in little to no pattern increase over stuck-at. LOS can be trickier to implement due to the need for at-speed scan-enable signals (these are necessary because LOS requires going from shift mode to functional mode within one system clock cycle).
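Purely as a conceptual reminder of the protocol difference (the real timing comes from the test procedure file and timeplates), here is a tiny Python sketch that just prints the cycle sequences being compared:

    loc_cycles = [
        ("last shift", 1, "slow shift pulse"),
        ("launch",     0, "at-speed pulse (functional launch)"),
        ("capture",    0, "at-speed pulse"),
    ]
    los_cycles = [
        ("shift = launch", 1, "last shift pulse launches the transition"),
        ("capture",        0, "at-speed pulse (scan enable dropped at speed)"),
    ]

    for name, cycles in (("LOC", loc_cycles), ("LOS", los_cycles)):
        print(name)
        for label, scan_enable, clock in cycles:
            print("  %-16s SE=%d  clk: %s" % (label, scan_enable, clock))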
74. I am facing an issue with coverage in compression, as it is lower than in bypass mode. However, by decreasing the abort limit we are getting better results, but the same is not true when I start my initial run with the default abort limit. Any reason why this is happening in compression alone?
I do know that increasing the abort limit increases runtime as well as pattern count.
It seems to me that fault coverage with compression will always be less than in bypass mode: in bypass mode there is no masking at the compactor. Decreasing the abort limit to get better coverage doesn't seem right though, but I'm not an expert at what goes on behind the scenes of ATPG tools. Is that what you meant by 'better results'?
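To show what the abort limit actually controls, here is a toy Python sketch; it is not any vendor's algorithm, and the helper names are made up. The search for a test gives up on a fault after abort_limit failed attempts and classifies it as aborted (AB), whereas exhausting the whole search space without finding a test proves the fault redundant/undetectable.

    from itertools import product

    def classify_fault(detects, num_inputs, abort_limit):
        """Brute-force stand-in for ATPG justification: 'DT' if some input
        vector detects the fault, 'AB' if we give up after abort_limit failed
        attempts, 'UD' if the whole space is exhausted with no test."""
        failed = 0
        for vec in product([0, 1], repeat=num_inputs):
            if detects(vec):
                return "DT"
            failed += 1
            if failed >= abort_limit:
                return "AB"          # aborted: limit reached before a decision
        return "UD"                  # exhausted: no test exists, fault is redundant

    easy_fault      = lambda v: v[0] == 1   # detected by any vector with input 0 high
    redundant_fault = lambda v: False       # no vector ever detects it

    print(classify_fault(easy_fault,      2, abort_limit=4))   # DT
    print(classify_fault(redundant_fault, 2, abort_limit=2))   # AB (gave up early)
    print(classify_fault(redundant_fault, 2, abort_limit=10))  # UD (proved redundant)

In this toy model, raising the abort limit can only move faults out of the AB bucket at the cost of more search time, which matches the intuition above that a lower abort limit improving coverage is surprising.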
75. What is the difference between redundant, undetectable, and untestable faults?
How will an ATPG tool identify a fault as redundant, untestable, or undetectable?
How to analyze redundant faults?
Write out the fault list of each fault class separately and save it in a file. Then load back the class you want to analyze (UD, UR, AU, etc.). There is a command option in almost all the tools to analyze a fault that will show the schematic of the fault site. You can use the graphical interface options to proceed further; the syntax depends on which tool you are using.
For example:
analyze fault /path of fault site/ -stuck 0|1
Now you can analyze as many faults as you want. It is better to have hierarchical/modular coverage numbers so that you know which block is giving lower coverage and can analyze those fault sites accordingly.
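To complement the per-class fault lists, here is a small Python sketch of the hierarchical/modular coverage idea. It assumes a simple whitespace-separated fault-list format of <stuck-at value> <class code> <pin pathname> and a hypothetical file name; adjust the parsing to whatever format your tool actually writes.

    from collections import Counter, defaultdict

    DETECTED = {"DS", "DI"}   # classes counted as detected

    def per_block_coverage(fault_list_path, depth=2):
        """Group faults by the first `depth` levels of the pin pathname and
        report detected/total per block, so the weakest blocks stand out."""
        per_block = defaultdict(Counter)
        with open(fault_list_path) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 3 or fields[0].startswith("//"):
                    continue                  # skip comments / malformed lines
                fclass, pin = fields[1], fields[2]
                block = "/".join(pin.strip("/").split("/")[:depth])
                per_block[block][fclass] += 1
        for block, counts in sorted(per_block.items()):
            total = sum(counts.values())
            det = sum(counts[c] for c in DETECTED)
            print("%-40s %6.2f%%  (%d/%d)" % (block, 100.0 * det / total, det, total))

    per_block_coverage("faults_all.flt")      # hypothetical file name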