
A Methodology for Timing Model Characterization for Statistical Static Timing Analysis



Zhuo Feng
Department of ECE
Texas A&M University
College Station, TX 77843
fengzhuo@ece.tamu.edu
Peng Li
Department of ECE
Texas A&M University
College Station, TX 77843
pli@neo.tamu.edu
ABSTRACT
While the increasing need for addressing process variability in sub-90nm VLSI technologies has sparked a large body of statistical timing and optimization research, the realization of these techniques heavily depends on the availability of timing models that feed the statistical timing analysis engine. To target this critical but less explored territory, in this paper we present numerical and statistical modeling techniques that are suitable for the underlying timing model characterization infrastructure of statistical timing analysis. Our techniques are centered around the understanding that while widening process variability calls for accurate non-first-order timing models, their deployment requires well-controlled characterization techniques to cope with complexity and scalability. We present a methodology by which timing variabilities in interconnects and nonlinear gates are translated efficiently into quadratic timing models suitable for accurate statistical timing analysis. Specific parameter reduction techniques are developed to control the characterization cost, which is a function of the number of variation sources. The proposed techniques are extensively demonstrated in the context of logic stage timing characterization involving interactions between logic gates and interconnects.
1. INTRODUCTION
Process fluctuations in nano-scale IC technologies impose increasing demands on accurately predicting the resultant performance variations. In timing analysis, various statistical static timing analysis (SSTA) algorithms [1, 2, 3, 4, 5] have been proposed to compute the statistical variations of timing performance due to the underlying process parameters. To accurately capture large-range parametric variations, second-order SSTA algorithms [3, 4] have also been proposed, which are based upon more accurate quadratic timing models. By exploiting a statistical timing engine, statistical optimization techniques have also been proposed [6, 7, 8].
For any of the above statistical timing analysis or optimization tasks, an efficient characterization methodology that can provide parameterized interconnect and gate timing models and facilitate efficient logic stage delay calculation is critical. Increasingly large process variations are expected for upcoming VLSI technologies; e.g., ITRS predicts that Vt variations have already reached more than 30% at the 65nm technology node [9]. Hence, we propose to characterize quadratic timing models that can capture large-range process variations accurately. However, the extraction of quadratic timing models for all the gates and interconnects

This work was supported in part by SRC under contract 2005-TJ-1298.
can be extremely time consuming when considering numerous inter-/intra-die variation sources. For example, if there are n variation sources to be considered, direct quadratic modeling will require a minimum of O(n^2) simulation runs to sample the process parameter space. The minimum quadratic complexity of this approach makes statistical timing model characterization much more expensive than the nominal characterization flow and significantly hinders the overall efficiency of statistical circuit analysis and optimization.
In this paper, we propose a methodology for characterizing non-first-order timing models at a well-controlled cost by efficiently exploiting statistical parameter dimension reduction techniques. Based upon the parameter dimension reduction method for variational interconnect modeling via efficient parametric moment computation in [10], a general timing characterization flow for nonlinear logic gates and logic stage delays is presented. The rest of this paper is organized as follows. In Section 3, we propose two parameter dimension reduction techniques, namely an iterative reduced-rank regression (RRR) based parameter reduction algorithm and a new K-moment based algorithm, specifically for statistical timing characterization applications. In Section 4, an efficient quadratic timing model characterization flow is demonstrated to significantly reduce the total number of simulation samples needed, thus facilitating the desired near-linear complexity. Extensive numerical results are shown in Section 5, where the tradeoff between efficiency and accuracy is also analyzed.
2. BACKGROUND
To be able to apply static timing analysis to compute the full-chip timing, the signal delays (slews) of each logic stage must be characterized in a delay calculator. Traditionally, efficient delay calculation has been done by combining reduced-order interconnect models, e.g. produced by the AWE algorithm [11], with a gate delay model [12]. In this case, the delay associated with each timing arc in the timing graph, such as the delay between the input and output pins of the driver and the delay from the output pin of the driver to the input pin of a receiver, must be characterized for static timing analysis. To capture the important timing variability in statistical static timing analysis, delay calculation must be done in a way that produces parameterized timing models. To enable parameterized characterization, parameterized interconnect and gate models are needed [13, 14, 10, 15].
In this paper, we assume that proper parameterized interconnect and gate models are available. Our focus is to efficiently characterize a parameterized quadratic timing model for each timing arc in the following form such that accurate statistical static timing analysis can be enabled [4, 3]:

f(X) = X^T A X + β^T X + d,  (1)

where d is the nominal value, X is a vector containing the n local or global process variables, and β ∈ R^n and A ∈ R^{n×n} represent the first- and second-order parameter dependencies of the quadratic model. Our goal is to extract such a timing model at a well-controlled cost in the presence of a large number of variations, so as to provide a practical timing characterization for SSTA. The key to realizing the most efficient quadratic model characterization is to adopt suitable parameter dimension reduction techniques that provide smart guidance for data sampling. All the process variation parameters considered in the following sections are assumed to be Gaussian.
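To illustrate how a model of the form (1) is consumed, the following sketch (ours, not the authors' code) evaluates a quadratic timing model by Monte Carlo over standardized Gaussian process parameters; the coefficient values A, b, and d0 are made up for illustration. For zero-mean unit-variance Gaussian X, the mean delay equals d + trace(A), which the sample mean should approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic timing model f(X) = X^T A X + b^T X + d0 (Eq. (1)),
# with n = 4 Gaussian process variables; A, b, d0 are illustrative values.
n = 4
A = 0.01 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)               # symmetrize the second-order coefficient matrix
b = 0.1 * rng.standard_normal(n)  # first-order sensitivities
d0 = 1.0                          # nominal delay (e.g. in ns)

def delay(X):
    """Evaluate the quadratic timing model for one parameter vector X."""
    return X @ A @ X + b @ X + d0

# Monte Carlo over standardized Gaussian process parameters
X_mc = rng.standard_normal((10000, n))
delays = np.array([delay(x) for x in X_mc])

# For a quadratic form in standard normals, E[f(X)] = d0 + trace(A)
assert abs(delays.mean() - (d0 + np.trace(A))) < 0.05
```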
3. NONLINEAR PARAMETER DIMENSION REDUCTIONS WITH LINEAR MAPPINGS
In this section, as a preliminary step of quadratic model characterization, two parameter reduction methods are proposed for accurately capturing the nonlinear effects in the multivariate performance space while maintaining simple linear mappings from the original parameters. First, an iteration-based nonlinear RRR algorithm for handling moderate variations and the corresponding nonlinear effects is introduced. Then a closed-form, moment-based dimension reduction algorithm is developed to facilitate more challenging cases where variation ranges are large.
3.1 Iteration-based nonlinear RRR
3.1.1 Algorithm
Denote by Y ∈ R^m and X ∈ R^n the response and predictor vectors, respectively. Ideally, we want to find low-rank approximations of the regression matrices Ã_r1 ∈ R^{m×r}, Ã_r2 ∈ R^{m×r²}, and B̃_r ∈ R^{r×n} such that the error of the following reduced-rank regression model is minimized:

Y ≈ Ã_r1 (B̃_r X) + Ã_r2 [(B̃_r X) ⊗ (B̃_r X)],  (2)

where ⊗ denotes the Kronecker product. However, it turns out that an optimal model in the form of (2) cannot be derived directly by any existing technique. Thus we propose an iterative procedure for finding the approximate reduced set of parameters under a quadratic RRR model in Algorithm 1.
The initial step of this algorithm is to perform a one-time first-order sensitivity analysis and apply the traditional linear RRR to find initial guesses for the Ã_r1^(0) and B̃_r^(0) matrices. Then the dimension of the reduced parameter set, say r, can be found, and O(r²) samples S_r in the Z̃^(0) space are subsequently generated. After transforming S_r into the original parameter space via the mapping S_f = T̃^(0) S_r, where T̃^(0) is the pseudo-inverse of B̃_r^(0), we simulate the samples in S_f. The k-th iteration of this algorithm can then be depicted as follows: a nonlinear regression algorithm is adopted to find the coefficient matrix Ã_r2^(k) for the quadratic portion Y_quad^(k) of Y with respect to the quadratic portion of the reduced parameter set Z̃^(k−1) = B̃_r^(k−1) X; the updated linear portion Y_lin^(k) of Y is given by Y_lin^(k) = Y − Ã_r2^(k) [Z̃^(k−1) ⊗ Z̃^(k−1)]; another traditional linear RRR is conducted to obtain the updated Ã_r1^(k) and B̃_r^(k) based on the linear portion Y_lin^(k) and X. In our experiments, it has been observed that an optimal B̃_r matrix can be obtained after only two or three iterations. Thus this approach is rather efficient for coping with moderate nonlinear effects while maintaining a simple linear mapping between X and Z.
3.1.2 Application in delay characterization
Algorithm 1 Iteration-based nonlinear RRR Algorithm
Input: Standardized response vector Y, predictor vector X, the error tolerance ε_0 and the maximum number of iterations N_max.
Output: The Ã_r1, Ã_r2 and B̃_r matrices in (2).
1: Do first-order sensitivity analysis and perform linear RRR to find the initial Ã_r1^(0) and B̃_r^(0);
2: Detect the dimension r of the reduced parameter set Z̃^(0) = B̃_r^(0) X and generate O(r²) samples S_r in the Z̃^(0) space;
3: Use the pseudo-inverse of B̃_r^(0) to transform the samples S_r into full parameter space samples S_f by S_f = T̃^(0) S_r;
4: Simulate the samples S_f and go through the following steps with the simulation data to update the B̃_r matrix; set k = 1;
5: while (ε^(k) ≥ ε_0) and (k ≤ N_max) do
6:   Set Y_quad^(k) ← Y − Ã_r1^(k−1) B̃_r^(k−1) X;
7:   Do nonlinear regression for Y_quad^(k) with respect to the reduced parameter set Z̃^(k−1) = B̃_r^(k−1) X, which gives the Ã_r2^(k) that best satisfies Y_quad^(k) ≈ Ã_r2^(k) [Z̃^(k−1) ⊗ Z̃^(k−1)];
8:   Set Ỹ_quad^(k) ← Ã_r2^(k) [Z̃^(k−1) ⊗ Z̃^(k−1)];
9:   Set Y_lin^(k) ← Y − Ỹ_quad^(k);
10:  Do linear RRR for Y_lin^(k) and X to get the updated Ã_r1^(k) and B̃_r^(k) matrices;
11:  Set ε^(k+1) ← ||B̃_r^(k) − B̃_r^(k−1)||; set k ← k + 1;
12: end while
13: Return Ã_r1, Ã_r2, B̃_r and the number of iterations k.
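Algorithm 1 above can be sketched numerically. The following is a minimal, self-contained illustration (ours, not the authors' implementation): it builds a synthetic response with a known low-rank quadratic structure as a stand-in for circuit simulation data, uses a truncated-SVD form of linear RRR for the initial guess, and then alternates the quadratic-regression and linear-RRR steps. All sizes and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 4, 8, 2            # responses, full parameters, reduced dimension
N = 800                      # samples (stand-ins for circuit simulations)

# Synthetic ground truth with low-rank quadratic structure (illustration only)
B_true, _ = np.linalg.qr(rng.standard_normal((n, r)))
B_true = B_true.T                                   # r x n, orthonormal rows
A1_true = rng.standard_normal((m, r))
A2_true = 0.3 * rng.standard_normal((m, r * r))
X = rng.standard_normal((n, N))

def kron_cols(Z):
    """Column-wise Kronecker product Z (x) Z: row i*r+j holds z_i * z_j."""
    return np.einsum('ik,jk->ijk', Z, Z).reshape(Z.shape[0] ** 2, Z.shape[1])

Y = A1_true @ (B_true @ X) + A2_true @ kron_cols(B_true @ X)

def linear_rrr(Y, X, r):
    """Rank-r reduced-rank regression via truncated SVD of the OLS map."""
    U, s, Vt = np.linalg.svd(Y @ np.linalg.pinv(X), full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

Ar1, Br = linear_rrr(Y, X, r)                       # step 1: initial guess
lin_only_err = np.linalg.norm(Y - Ar1 @ (Br @ X)) / np.linalg.norm(Y)
for _ in range(4):                                  # iterate steps 6-11
    Z = Br @ X
    Ar2 = (Y - Ar1 @ Z) @ np.linalg.pinv(kron_cols(Z))   # quadratic regression
    Ylin = Y - Ar2 @ kron_cols(Z)                        # updated linear portion
    Ar1, Br = linear_rrr(Ylin, X, r)

Z = Br @ X
Ar2 = (Y - Ar1 @ Z) @ np.linalg.pinv(kron_cols(Z))  # refit quad part w/ final Br
rel_err = np.linalg.norm(Y - Ar1 @ Z - Ar2 @ kron_cols(Z)) / np.linalg.norm(Y)
assert rel_err < lin_only_err    # the iteration should beat plain linear RRR
```

The initial linear RRR sees the quadratic portion of Y only as noise; the iterations peel that portion off so that the refitted B̃_r aligns with the true reduced subspace.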
Before characterizing the parameterized timing model, first-order sensitivity analysis (e.g. adjoint sensitivity analysis [16]) is performed, and the closed-form formulas for linear RRR [10] are used to find the initial guess of the B̃_r matrix. Subsequently, based upon the dimension of the reduced parameter set, we simulate a few more samples to feed into the iteration-based RRR algorithm, which updates the B̃_r matrix. Assuming the original parameters have been reduced to r parameters by Z = B̃_r X, only O(n + r²) simulation samples are required to fit the new quadratic model.
3.1.3 Complexity
The main computational cost of the iterative dimension reduction algorithm is attributable to the O(n + r²) simulations for sample data collection and the O(r^6) fitting cost for the quadratic model. On the other hand, the full parameter model requires O(n²) simulation samples and thus has an O(n^6) fitting cost. The iterative RRR procedure can therefore significantly reduce the required sample size as well as the fitting cost compared with direct quadratic modeling in the original parameter space.
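The sample-count gap can be made concrete with a small back-of-the-envelope calculation; the helper names below are ours, and the counts are illustrative lower bounds (coefficient counts) rather than exact simulator budgets.

```python
# A full quadratic model in n parameters has 1 + n + n(n+1)/2 coefficients,
# so fitting it needs at least that many simulations; the reduced flow needs
# roughly n + r^2 samples (n sensitivity runs plus O(r^2) fitting runs).
def full_quadratic_samples(n):
    return 1 + n + n * (n + 1) // 2

def reduced_samples(n, r):
    return n + r * r

n, r = 100, 4
assert full_quadratic_samples(n) == 5151
assert reduced_samples(n, r) == 116
# roughly a 44x reduction in simulation count for these illustrative sizes
```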
3.2 Moment based dimension reduction
As IC technologies further scale down, more challenging nonlinear effects of process variations in the performance space have to be captured. A competent candidate, the recently developed moment-based dimension reduction technique [17], is investigated in this work. Denote the standardized performance and predictor vectors by Y and X, respectively. This approach involves finding the central k-th moment dimension reduction subspace (DRS) for multivariate regression using marginal moments. It aims to provide an r × n mapping B_r, r < n, such that the random vector B_r X contains all the information about Y that is available from E(Y|X), Var(Y|X), ..., M^(k)(Y|X), where M^(k)(Y|X) for k ≥ 2 is the centered k-th conditional moment. For typical quadratic modeling problems, the first two conditional moments of Y given X are to be preserved by the new
Algorithm 2 K-moment dimension reduction Algorithm
Input: Quadratic forms of the standardized response vector Y ∈ R^m in terms of the standardized predictor vector X ∈ R^n: Y_i = X^T A_i X + β_i^T X + d_i for i = 1 : m, where A_i = [a_{p,q}] ∈ R^{n×n} and β_i ∈ R^n.
Output: The dimension reduction matrix B_r.
1: Set M_1 = E[X Y^T] = [β_1 β_2 ... β_m];
2: for i, j = 1 : m do
3:   Set γ_{i,j} ← 2(A_i β_j + A_j β_i) + β_i d_j + β_j d_i;
4: end for
5: Set M_2 = E[X Y^T ⊗ Y^T] = [γ_{1,1} ... γ_{i,j} ... γ_{m,m}];
6: Set K21c ← [M_1 M_2];
7: Set B_r = U_r, which includes the r left singular vectors u_1, ..., u_r of the matrix K21c that correspond to the r largest singular values;
8: Return the transform matrix B_r.
predictor vector B_r X. Following the theoretical introduction in [17], two matrices, K21c and K22c, are of particular interest:

K21c = [ E(X Y^T), E(X Y^T ⊗ Y^T) ],
K22c = E(X X^T ⊗ Y^T).  (3)

Denote the central 1st and 2nd moment dimension reduction subspaces (DRS) by Sub^(1)_{Y|X} and Sub^(2)_{Y|X}, which are spanned by the conditional mean and conditional variance. It has been shown in [18, 17] that the subspace Sub(K21c) spanned by K21c and the subspace Sub(K22c) spanned by K22c satisfy:

Sub(K21c) ⊆ Sub^(2)_{Y|X},  Sub(K22c) ⊆ Sub^(1)_{Y|X}.  (4)

It may be of interest to note that the matrix K22c is the extension of the Principal Hessian Direction (PHD) [19] to multivariate response regression. It has been demonstrated that in practical applications, K21c can better detect linear trends or odd functions (cross terms), while K22c is better at revealing symmetric trends or even functions (second-order self terms).
Unfortunately, the original theoretical work in [18, 17] does not assume a known model (mappings between the responses and the predictors) but relies on sample-based estimators to evaluate K21c and K22c, which is prohibitive for practical circuit applications. In this work, we derive closed-form formulas to compute the dimension reduction subspace (DRS) based upon a given quadratic response model. That is, if a quadratic model relating circuit performances to the process variables is given, parameter reduction can be performed rather efficiently without numerous samplings. The detailed algorithm is depicted in Algorithm 2. Due to the scope of this paper, the formula for computing K22c is not shown; it is more complex than that of K21c and is less useful in typical circuit applications.

We have conducted extensive experiments on various digital circuit examples, concluding that K21c can find a fairly accurate dimension reduction subspace (DRS) even when quite large process variations (e.g. 30% Vth variation) exist. A limitation of this dimension reduction algorithm is that the quadratic model has to be known in advance.
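For standardized Gaussian predictors, the moment matrices used by Algorithm 2 admit closed forms. The sketch below (our illustration, not the authors' code) builds M_1 and M_2 directly from a given quadratic model using the Gaussian identity E[(X^T A X) X X^T] = trace(A) I + 2A, takes the leading left singular vectors of K21c as the reduction basis, and cross-checks the closed forms against Monte Carlo estimates; all model coefficients are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 3, 6, 2

# Hypothetical standardized quadratic responses Y_i = X^T A_i X + b_i^T X + d_i
# with X ~ N(0, I); choosing d_i = -trace(A_i) makes E[Y_i] = 0 (standardized).
As = [0.2 * (M + M.T) / 2 for M in rng.standard_normal((m, n, n))]
bs = rng.standard_normal((m, n))
ds = np.array([-np.trace(A) for A in As])

# Closed-form moment matrices: no sampling needed once the model is known.
# E[X Y_i] = b_i, and E[X Y_i Y_j] follows from E[(X^T A X) X X^T] = tr(A) I + 2A.
M1 = bs.T                                               # n x m
cols = []
for i in range(m):
    for j in range(m):
        cols.append((np.trace(As[i]) * np.eye(n) + 2 * As[i]) @ bs[j]
                    + (np.trace(As[j]) * np.eye(n) + 2 * As[j]) @ bs[i]
                    + bs[i] * ds[j] + bs[j] * ds[i])
M2 = np.array(cols).T                                   # n x m^2
K21c = np.hstack([M1, M2])

# Reduction basis: the r leading left singular vectors of K21c
U, s, Vt = np.linalg.svd(K21c)
Br = U[:, :r].T

# Cross-check the closed forms against Monte Carlo estimates
X = rng.standard_normal((100000, n))
Y = np.array([np.einsum('ki,ij,kj->k', X, As[i], X) + X @ bs[i] + ds[i]
              for i in range(m)]).T
M1_mc = X.T @ Y / len(X)
M2_mc = np.column_stack([X.T @ (Y[:, i] * Y[:, j]) / len(X)
                         for i in range(m) for j in range(m)])
assert np.max(np.abs(M1_mc - M1)) < 0.08
assert np.max(np.abs(M2_mc - M2)) < 0.5
```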
4. QUADRATIC TIMING MODEL
CHARACTERIZATION FOR SSTA
A comprehensive timing model extraction flow is described in this section. Both interconnect and driver timing models capturing process variations are efficiently extracted using the dimension reduction techniques described in Section 3.

Assume the parameter reduction for the linear interconnect circuits has been performed using the closed-form formulas in [10]. Knowing the linear mappings between the (large) original set of parameters (say the p_i's) and a reduced set of new interconnect parameters (say Z_int), a parameterized interconnect model can be efficiently generated based on this small set of new parameters Z_int.

Next, the quadratic timing model characterization for the stage starts. Here the goal is to compute a quadratic expression in the process parameters for each timing arc. The process parameters considered here include variation variables contributed by the gates and the interconnect, the latter of which have already been reduced to Z_int. Unlike linear interconnects, where transfer function moments can be efficiently computed to aid parameter reduction, parameter dimension reduction for the characterization of the entire logic stage becomes more complex. To well control the cost, we follow the algorithms in Section 3 to iteratively perform parameter reduction and subsequently build the quadratic timing model:
1. Find the reduced parameter set Z_int for the linear interconnect circuit using linear RRR;
2. Generate parameterized reduced-order interconnect models in the reduced parameters Z_int;
3. Combine a parameterized driver model and the reduced interconnect model to perform first-order (adjoint) sensitivity analysis of the complete stage under the combined driver and interconnect parameter set (say X_gat and Z_int, denoted X_t) to find the initial guess of the reduced parameter set Z_t = B_r X_t as well as its dimension, say r;
4. Generate O(r²) samples in the initial reduced parameter space (Z_t) and transform them back to original parameter samples for simulation;
5. Update the B_r matrix with Algorithm 1 and repeat until convergence;
6. Extract the quadratic timing models in the reduced parameter set via response surface modeling (RSM).
Once the quadratic timing model in the reduced parameter space is obtained as

f(Z) = Z^T A Z + β^T Z + d,  (5)

where β ∈ R^r and A ∈ R^{r×r} include the first- and second-order coefficients, the quadratic model for the full parameter set is given by

f(X) = X^T (B_r^T A B_r) X + (β^T B_r) X + d.  (6)

Note that in the last step of the above procedure, RSM is applied to produce the quadratic models. This step is far less expensive than before since the model extraction is based on the reduced parameter set. During the above steps, we can apply the iterative nonlinear RRR algorithm (for moderate variations) or the moment-based algorithm (for highly nonlinear modeling) to perform parameter reduction.
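The change of variables in (6) is straightforward to verify numerically: evaluating the reduced model at Z = B_r X must agree with the expanded full-parameter model evaluated at X. A small sketch with made-up coefficient values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 12, 3

# Hypothetical reduced-space quadratic timing model f(Z) = Z^T A Z + b^T Z + d
# (Eq. (5)) and its expansion to the full parameter space via Z = Br X (Eq. (6)).
Br = rng.standard_normal((r, n))
A = rng.standard_normal((r, r))
A = 0.5 * (A + A.T)
b = rng.standard_normal(r)
d = 0.8

A_full = Br.T @ A @ Br        # full-space second-order coefficients (Eq. (6))
b_full = Br.T @ b             # full-space first-order coefficients
for _ in range(5):
    X = rng.standard_normal(n)
    Z = Br @ X
    f_reduced = Z @ A @ Z + b @ Z + d
    f_full = X @ A_full @ X + b_full @ X + d
    assert abs(f_reduced - f_full) < 1e-8   # both forms give the same delay
```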
5. EXPERIMENTAL RESULTS
In this section, we first compare various dimension reduction algorithms on two ISCAS benchmark circuits, namely ISCAS85 C17 and ISCAS89 S27. Different magnitudes of threshold voltage variations (3σ = 15%-30% Vth variations) are considered. Finally, we show the results of several extracted timing models for some interconnects combined with drivers extracted from realistic routed circuits
Figure 1: Relative errors of the linear RRR (left) and iteration-based RRR (right) (C17: 3σ = 15% Vth variation). [Histograms: relative errors (%) vs. counts.]
Figure 2: Relative errors of the iteration-based RRR (left) and K-moment method (right) (C17: 3σ = 30% Vth variation). [Histograms: relative errors (%) vs. counts.]
[20]. For the interconnects, 3σ = 30% geometrical variations(1) (in wire width and thickness) are considered using distance-based spatial correlation models. For the driver driving the interconnect, an independent random threshold voltage (Vth) variation is considered for each transistor. All the drivers are implemented using TSMC 0.18um technology.
5.1 Digital circuit parameter reduction
In this section, we compare four dimension reduction techniques on two digital circuits. Each transistor's threshold voltage is considered as an independent Gaussian variable with 15%-30% 3σ Vth variations. Fig. 1 and Fig. 2 show the distributions of the relative delay errors introduced by parameter reduction for the C17 circuit, indicating that for a typical variation range, the iteration-based RRR method can significantly improve the accuracy of parameter reduction, while the moment-based method is more accurate for handling large variations.
The composition of the reduced parameter set (the mapping coefficients in B_r) obtained by each method is plotted in Fig. 3. It is observed that for a moderate variation range, the compositions found by the iteration-based RRR and the moment-based algorithm are quite similar, indicating that the iterative RRR can detect better dimension reduction directions (DRS) than the simple linear RRR approach.

Next, we perform 1K Monte Carlo simulations on the S27 circuit, using the reduced parameter sets computed by each of the three dimension reduction methods. The probability distributions of the relative slew errors are shown in Fig. 4. More statistics of relative delay/slew errors are
(1) Interconnect variations are set larger than typical values to demonstrate the capability and accuracy of this algorithm.
Figure 3: Composition of the new parameters (C17: 3σ = 15% Vth variation). [Bar plot: magnitude of mapping coefficients for transistor Vth 1-24; legend: Lin. RRR, Iter. RRR, Mom.]
Figure 4: Relative slew errors of the moment-based method, linear RRR and iteration-based RRR (S27: 3σ = 30% Vth variation). [PDF vs. relative error; legend: 3p Mom., 2p RRR, 2p iter. RRR.]
depicted in Table 1. Evidently, even for very large variation magnitudes, the moment-based dimension reduction method can still provide very accurate dimension reduction.
5.2 Timing model characterization
Finally, we demonstrate the results of quadratic timing model characterization on several routed nets of the ISCAS85 benchmark circuits. In each case, both driver and interconnect are included in the stage delay and slew characterizations. The threshold voltage of each transistor in the driver is considered, while spatially correlated geometrical variations (wire width and thickness) are taken into account for the interconnects.
In Table 2, we compare the accuracies (relative delay/slew errors) of the quadratic and linear timing models extracted
Table 1: Relative delay/slew errors of four dimension reduction algorithms on 1K simulations of ISCAS89 S27 (3σ = 30% Vth variation).

# of Redu. Para. | Alg.      | Avg. Err. | Max. Err.
2                | RRR       | 0.7/1.1   | 3.9/7.8
2                | Iter. RRR | 0.5/0.9   | 3.4/7.6
3                | PHD       | 3.2/2.1   | 13.6/10.8
3                | K-Mom.    | 0.4/0.3   | 3.4/2.3
Table 2: Results of ISCAS85 benchmark circuits (200 random simulations).

Nets                             | c1355:N886 | c432:N360  | c499:N445  | c1908:N1188 | c3540:N2158
# of Sinks                       | 12         | 9          | 12         | 16          | 16
# of Transistors                 | 4          | 2          | 12         | 4           | 4
# of Full Inter. Paras.          | 20         | 20         | 20         | 20          | 20
# of Redu. Inter. Paras.         | 5          | 6          | 5          | 7           | 7
# of Final Redu. Paras.          | 4          | 5          | 4          | 5           | 5
Avg. Err. via Iter. RRR (Q.M.)   | 0.49/0.83  | 0.66/0.71  | 0.84/0.92  | 0.51/0.53   | 0.67/0.73
Max. Err. via Iter. RRR (Q.M.)   | 1.46/2.71  | 1.89/3.05  | 1.93/2.16  | 1.21/2.51   | 1.06/2.58
Avg. Err. via Lin. RRR (Q.M.)    | 0.94/1.51  | 1.28/1.97  | 1.02/2.12  | 1.07/1.83   | 1.41/1.98
Max. Err. via Lin. RRR (Q.M.)    | 3.02/5.14  | 3.59/4.12  | 3.63/3.86  | 2.16/2.74   | 2.45/3.01
Avg. Err. via Iter. RRR (L.M.)   | 1.03/2.15  | 1.21/2.20  | 1.37/2.84  | 0.93/1.85   | 1.49/2.35
Max. Err. via Iter. RRR (L.M.)   | 4.86/11.05 | 5.10/12.34 | 4.78/10.05 | 4.80/11.45  | 4.24/9.22
# of Simu. w/ Dim. Redu.         | 40         | 50         | 40         | 50          | 50
# of Simu. w/o Dim. Redu.        | 400        | 400        | 900        | 400         | 400
Extr. T. Speedup (Redu. vs Full) | 10X        | 8X         | 22X        | 8X          | 8X
using the iterative RRR and the linear RRR algorithms. The model extraction speedups brought by parameter reduction are also shown. # of Sinks, Transistors, Full Inter. Paras., Redu. Inter. Paras. and Final Redu. Paras. represent the numbers of sinks, transistors in the circuit, original geometrical interconnect parameters, reduced interconnect parameters, and final reduced parameters (for the driver and interconnects), respectively. Q.M. denotes the quadratic timing model, while L.M. denotes the linear timing model. # of Simu. w/ Dim. Redu. gives the number of simulations required for building a quadratic model, while # of Simu. w/o Dim. Redu. gives the simulation samples required without parameter dimension reduction. Extr. T. Speedup (Redu. vs Full) gives the model extraction time reduction achieved by adopting the dimension reduction techniques.
From the table we find that the quadratic timing models over the reduced parameters computed by the iteration-based approach maintain the best accuracy. As observed, the linear timing model may lead to more than 10% relative error, while the maximum relative errors given by the quadratic timing models are mostly less than 5%. As shown in the table, the overall model extraction cost has been significantly reduced by adopting the dimension reduction techniques, since the most time-consuming part, hundreds of sample simulations in the original high-dimensional parameter space, is avoided.

We would like to emphasize that the parameter dimension reduction technique not only dramatically simplifies the model extraction flow, but also reduces the circuit analysis and optimization cost, in which only a few of the most important combinations of the original parameters need to be considered.
6. CONCLUSIONS
In this work, a methodology for quadratic timing model characterization for statistical static timing analysis (SSTA) is proposed. By adopting powerful parameter dimension reduction techniques, timing model extraction can be performed in the reduced parameter space, thus providing significant reductions in the simulation samples required for constructing accurate quadratic timing models. Extensive experiments conducted on various types of realistic circuits show very accurate results.
7. REFERENCES
[1] H. Chang and S. Sapatnekar. Statistical timing analysis considering spatial correlations using a single PERT-like traversal. In Proc. IEEE/ACM ICCAD, pages 621–625, November 2003.
[2] C. Visweswariah, K. Ravindran, K. Kalafala, S. Walker, and S. Narayan. First-order incremental block-based statistical timing analysis. In Proc. IEEE/ACM DAC, pages 331–336, June 2004.
[3] H. Chang, V. Zolotov, S. Narayan, and C. Visweswariah. Parameterized block-based statistical timing analysis with non-Gaussian parameters, nonlinear delay functions. In Proc. IEEE/ACM DAC, pages 71–76, June 2005.
[4] Y. Zhan, A. Strojwas, X. Li, and L. Pileggi. Correlation-aware statistical timing analysis with non-Gaussian delay distributions. In Proc. IEEE/ACM DAC, pages 77–82, June 2005.
[5] L. Zhang, W. Chen, Y. Hu, A. Gubner, and C. Chen. Correlation-preserved non-Gaussian statistical timing analysis with quadratic timing model. In Proc. IEEE/ACM DAC, pages 83–88, June 2005.
[6] M. Mani, A. Devgan, and M. Orshansky. An efficient algorithm for statistical minimization of total power under timing yield constraints. In Proc. IEEE/ACM DAC, pages 309–314, June 2005.
[7] J. Singh, V. Nookala, Z. Luo, and S. Sapatnekar. Robust gate sizing by geometric programming. In Proc. IEEE/ACM Design Automation Conf., pages 315–320, June 2005.
[8] M. R. Guthaus, N. Venkateswaran, C. Visweswariah, and V. Zolotov. Gate sizing using incremental parameterized statistical timing analysis. In Proc. IEEE/ACM ICCAD, pages 1029–1036, November 2005.
[9] International technology roadmap for semiconductors, 2006 update. http://www.itrs.net, 2006.
[10] Z. Feng and P. Li. Performance-oriented statistical parameter reduction of parameterized systems via reduced rank regression. In Proc. IEEE/ACM ICCAD, pages 868–875, November 2006.
[11] L. Pillage and R. Rohrer. Asymptotic waveform evaluation for timing analysis. IEEE Trans. Computer-Aided Design, 9(4):352–366, Apr. 1990.
[12] F. Dartu, N. Menezes, J. Qian, and L. Pillage. A gate-delay model for high-speed CMOS circuits. In Proc. IEEE/ACM DAC, pages 576–580, June 1994.
[13] L. Daniel, O. Siong, L. Chay, K. Lee, and J. White. A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models. IEEE Trans. Computer-Aided Design, 23(5):678–693, May 2004.
[14] X. Li, P. Li, and L. Pileggi. Parameterized interconnect order reduction with explicit-and-implicit multi-parameter moment matching for inter/intra-die variations. In Proc. IEEE/ACM ICCAD, pages 806–812, 2005.
[15] I. Keller, K. Tseng, and N. Verghese. A robust cell-level crosstalk delay change analysis. In Proc. IEEE/ACM ICCAD, pages 147–154, November 2004.
[16] S. W. Director and R. A. Rohrer. The generalized adjoint network and network sensitivities. IEEE Trans. on Circuits and Systems, 16(3):318–323, Aug. 1969.
[17] X. Yin and E. Bura. Moment based dimension reduction for multivariate response regression. Journal of Statistical Planning and Inference, 130(10):3675–3688, October 2006.
[18] X. Yin and R. D. Cook. Dimension reduction for the conditional k-th moment in regression. Journal of the Royal Statistical Society B, 64(Part 2):159–175, 2002.
[19] K. C. Li. On principal Hessian directions for data visualization and dimension reduction: another application of Stein's lemma. J. Amer. Stat. Assoc., 87(420):1025–1039, December 1992.
[20] http://dropzone.tamu.edu/ xiang/iscas.html.