TY - GEN
T1 - Local Lipschitz Constant Computation of ReLU-FNNs
T2 - 2024 European Control Conference, ECC 2024
AU - Ebihara, Yoshio
AU - Dai, Xin
AU - Yuno, Tsuyoshi
AU - Magron, Victor
AU - Peaucelle, Dimitri
AU - Tarbouriech, Sophie
N1 - Publisher Copyright:
© 2024 EUCA.
PY - 2024
Y1 - 2024
N2 - This paper is concerned with the computation of the local Lipschitz constant of feedforward neural networks (FNNs) with activation functions being rectified linear units (ReLUs). The local Lipschitz constant of an FNN around a target input is a reasonable measure for quantitatively evaluating its reliability. By following a standard procedure using multipliers that capture the behavior of ReLUs, we first reduce the problem of computing an upper bound on the local Lipschitz constant to a semidefinite programming problem (SDP). Here we newly introduce copositive multipliers to capture the ReLU behavior accurately. Then, by considering the dual of the SDP for the upper bound computation, we derive a viable test to conclude the exactness of the computed upper bound. However, these SDPs are intractable for practical FNNs with hundreds of ReLUs. To address this issue, we further propose a method to construct a reduced-order model whose input-output property is identical to the original FNN over a neighborhood of the target input. We finally illustrate the effectiveness of the model reduction and exactness verification methods with numerical examples of practical FNNs.
AB - This paper is concerned with the computation of the local Lipschitz constant of feedforward neural networks (FNNs) with activation functions being rectified linear units (ReLUs). The local Lipschitz constant of an FNN around a target input is a reasonable measure for quantitatively evaluating its reliability. By following a standard procedure using multipliers that capture the behavior of ReLUs, we first reduce the problem of computing an upper bound on the local Lipschitz constant to a semidefinite programming problem (SDP). Here we newly introduce copositive multipliers to capture the ReLU behavior accurately. Then, by considering the dual of the SDP for the upper bound computation, we derive a viable test to conclude the exactness of the computed upper bound. However, these SDPs are intractable for practical FNNs with hundreds of ReLUs. To address this issue, we further propose a method to construct a reduced-order model whose input-output property is identical to the original FNN over a neighborhood of the target input. We finally illustrate the effectiveness of the model reduction and exactness verification methods with numerical examples of practical FNNs.
KW - copositive multiplier
KW - exactness verification
KW - feedforward neural networks (FNNs)
KW - local Lipschitz constant
KW - model reduction
KW - rectified linear units (ReLUs)
KW - upper bound
UR - http://www.scopus.com/inward/record.url?scp=85200592182&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85200592182&partnerID=8YFLogxK
U2 - 10.23919/ECC64448.2024.10590974
DO - 10.23919/ECC64448.2024.10590974
M3 - Conference contribution
AN - SCOPUS:85200592182
T3 - 2024 European Control Conference, ECC 2024
SP - 2506
EP - 2511
BT - 2024 European Control Conference, ECC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 25 June 2024 through 28 June 2024
ER -