In mathematics, the progressive-iterative approximation method is an iterative method of data fitting with a clear geometric meaning.[1] Given a set of data points to be fitted, the method obtains a series of fitting curves (or surfaces) by iteratively updating the control points, and the limit curve (surface) interpolates or approximates the given data points.[2] It avoids solving a linear system of equations directly and allows flexibility in adding constraints during the iterative process.[3] Therefore, it has been widely used in geometric design and related fields.[2]
The study of the iterative method with geometric meaning can be traced back to the work of scholars such as Dongxu Qi and Carl de Boor in the 1970s.[4][5] In 1975, Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves,[4] and in 1979, de Boor independently proposed this algorithm.[5] In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property.[3] Later, in 2005, Lin et al. proved that the curves and surfaces with normalized and totally positive basis all have this property and named it progressive iterative approximation (PIA).[1] In 2007, Maekawa et al. changed the algebraic distance in PIA to geometric distance and named it geometric interpolation (GI).[6] In 2008, Cheng et al. extended it to subdivision surfaces and named the method progressive interpolation (PI).[7] Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meanings, they are collectively referred to as geometric iterative methods (GIM).[2]
PIA has since been extended to several common curve and surface representations in geometric design,[8] including NURBS curves and surfaces,[9] T-spline surfaces,[10] and implicit curves and surfaces.[11]
Generally, progressive-iterative approximation (PIA) methods can be divided into interpolation and approximation schemes.[2] In interpolation algorithms, the number of control points equals the number of data points; in approximation algorithms, the number of control points can be less than the number of data points. There are also representative variants such as local-PIA,[12] implicit-PIA,[11] and fairing-PIA,[13] as well as the isogeometric least-squares progressive-iterative approximation (IG-LSPIA),[14] which is specialized for solving isogeometric analysis problems.[15]
In interpolation algorithms of PIA,[1][3][9][16] every data point is used as a control point. To describe the PIA iteration format uniformly for different forms of curves and surfaces, the following formula is used:
$$\mathbf{P}(\mathbf{t})=\sum_{i=1}^{n}\mathbf{P}_{i}B_{i}(\mathbf{t}).$$
For example, the basis functions $B_{i}$ may be Bernstein or B-spline basis functions, in which case $\mathbf{P}(\mathbf{t})$ is a Bézier or B-spline curve (surface).
Additionally, this can be applied to NURBS curves and surfaces, T-spline surfaces, and triangular Bernstein–Bézier surfaces.[18]
Given an ordered data set $\mathbf{Q}_{i}$ with parameters $t_{i}$ satisfying $t_{1}<t_{2}<\cdots<t_{n}$ for $i=1,2,\cdots,n$, the initial fitting curve is[1]
$$\mathbf{P}^{(0)}(t)=\sum_{i=1}^{n}\mathbf{P}_{i}^{(0)}B_{i}(t),$$
where the initial control points $\mathbf{P}_{i}^{(0)}$ can be chosen freely. Suppose that after the $k$th iteration, the $k$th fitting curve
$$\mathbf{P}^{(k)}(t)=\sum_{i=1}^{n}\mathbf{P}_{i}^{(k)}B_{i}(t)$$
has been generated.
To construct the $(k+1)$st curve, we first calculate the difference vectors
$$\mathbf{\Delta}_{i}^{(k)}=\mathbf{Q}_{i}-\mathbf{P}^{(k)}(t_{i}),\quad i=1,2,\cdots,n,$$
and use them to update the control points by
$$\mathbf{P}_{i}^{(k+1)}=\mathbf{P}_{i}^{(k)}+\mathbf{\Delta}_{i}^{(k)},$$
which leads to the $(k+1)$st fitting curve
$$\mathbf{P}^{(k+1)}(t)=\sum_{i=1}^{n}\mathbf{P}_{i}^{(k+1)}B_{i}(t).$$
In this way, we obtain a sequence of curves $\mathbf{P}^{(\alpha)}(t)$, $\alpha=0,1,2,\cdots$, which converges to a limit curve that interpolates the given data points,[1][9] i.e.,
$$\lim_{\alpha\to\infty}\mathbf{P}^{(\alpha)}(t_{i})=\mathbf{Q}_{i},\quad i=1,2,\cdots,n.$$
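The interpolation iteration is only a few lines of code. The following is a minimal sketch for a planar cubic B-spline curve using SciPy's `BSpline`; the quarter-circle data, uniform parameters, clamped knot vector, and stopping tolerance are illustrative choices, not part of the method itself.

```python
# Minimal PIA interpolation sketch for a cubic B-spline curve (illustrative).
import numpy as np
from scipy.interpolate import BSpline

# Data points Q_i sampled on a quarter circle, with uniform parameters t_i.
Q = np.array([[np.cos(a), np.sin(a)] for a in np.linspace(0.0, np.pi / 2, 8)])
n, deg = len(Q), 3
t = np.linspace(0.0, 1.0, n)                        # t_1 < t_2 < ... < t_n

# Clamped knot vector, so len(knots) = n + deg + 1.
knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))

P = Q.copy()                                        # P^(0): start from the data
for _ in range(500):
    curve = BSpline(knots, P, deg)                  # P^(k)(t) = sum_i P_i^(k) B_i(t)
    delta = Q - curve(t)                            # Delta_i^(k) = Q_i - P^(k)(t_i)
    P = P + delta                                   # P_i^(k+1) = P_i^(k) + Delta_i^(k)
    if np.linalg.norm(delta, axis=1).max() < 1e-10:
        break

print(np.linalg.norm(Q - BSpline(knots, P, deg)(t), axis=1).max())
```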
For the B-spline curve and surface fitting problem, Deng and Lin proposed least-squares progressive-iterative approximation (LSPIA),[10][19] which allows the number of control points to be less than the number of data points and is therefore well suited to large-scale data fitting problems.[10]
Assume there are $m$ data points and $n$ control points, where $n\leq m$. As above, the $k$th fitting curve is
$$\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t).$$
To generate the $(k+1)$st fitting curve, first compute the difference vectors for the data points[10][19]
$$\boldsymbol{\delta}_{i}^{(k)}=\mathbf{Q}_{i}-\mathbf{P}^{(k)}(t_{i}),\quad i=1,2,\cdots,m,$$
and then the difference vectors for the control points
$$\mathbf{\Delta}_{j}^{(k)}=\frac{\sum_{i\in I_{j}}c_{i}B_{j}(t_{i})\boldsymbol{\delta}_{i}^{(k)}}{\sum_{i\in I_{j}}c_{i}B_{j}(t_{i})},\quad j=1,2,\cdots,n,$$
where $I_{j}$ is the index set of the data points in the $j$th group, i.e., those whose parameters fall in the local support of the $j$th basis function, so that $B_{j}(t_{i})\neq 0$. The $c_{i}$ are weights that guarantee the convergence of the algorithm, usually taken as $c_{i}=1$, $i\in I_{j}$.
Finally, the control points of the $(k+1)$st curve are updated by
$$\mathbf{P}_{j}^{(k+1)}=\mathbf{P}_{j}^{(k)}+\mathbf{\Delta}_{j}^{(k)},$$
leading to the $(k+1)$st fitting curve $\mathbf{P}^{(k+1)}(t)$. In this way, we obtain a sequence of curves whose limit is the least-squares fit to the given data points.[10][19]
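A minimal sketch of this LSPIA iteration for a planar cubic B-spline, with all weights $c_{i}=1$ so that the update is exactly the weighted average above, might look as follows. The data, parameterization, and knot vector are illustrative assumptions, and `BSpline.design_matrix` requires SciPy 1.8 or newer.

```python
# Minimal LSPIA sketch: m = 100 data points, n = 12 control points (illustrative).
import numpy as np
from scipy.interpolate import BSpline

m, n, deg = 100, 12, 3
t = np.linspace(0.0, 1.0, m)                        # parameters t_i
Q = np.column_stack([t, np.sin(2 * np.pi * t)])     # data points Q_i

knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))
B = BSpline.design_matrix(t, knots, deg).toarray()  # B[i, j] = B_j(t_i)

P = Q[np.linspace(0, m - 1, n, dtype=int)].copy()   # initial control points
for _ in range(500):
    delta = Q - B @ P                               # delta_i^(k), one per data point
    Delta = (B.T @ delta) / B.sum(axis=0)[:, None]  # weighted average per B_j
    P += Delta                                      # P_j^(k+1) = P_j^(k) + Delta_j^(k)

print(np.linalg.norm(Q - B @ P, axis=1).max())      # least-squares fitting error
```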
In the local-PIA method,[12] the control points are divided into active and fixed control points, whose index sets are denoted by $I=\{i_{1},i_{2},\cdots,i_{I}\}$ and $J=\{j_{1},j_{2},\cdots,j_{J}\}$, respectively. Assume that the $k$th fitting curve is $\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t)$, where the fixed control points satisfy
$$\mathbf{P}_{j}^{(k)}=\mathbf{P}_{j}^{(0)},\quad j\in J,\quad k=0,1,2,\cdots.$$
Then, on the one hand, the difference vectors $\mathbf{\Delta}_{h}^{(k+1)}$ corresponding to the fixed control points satisfy
$$\begin{aligned}\mathbf{\Delta}_{h}^{(k+1)}&=\mathbf{Q}_{h}-\sum_{j=1}^{n}\mathbf{P}_{j}^{(k+1)}B_{j}(t_{h})\\&=\mathbf{Q}_{h}-\sum_{j\in J}\mathbf{P}_{j}^{(k+1)}B_{j}(t_{h})-\sum_{i\in I}\left(\mathbf{P}_{i}^{(k)}+\mathbf{\Delta}_{i}^{(k)}\right)B_{i}(t_{h})\\&=\mathbf{Q}_{h}-\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t_{h})-\sum_{i\in I}\mathbf{\Delta}_{i}^{(k)}B_{i}(t_{h})\\&=\mathbf{\Delta}_{h}^{(k)}-\sum_{i\in I}\mathbf{\Delta}_{i}^{(k)}B_{i}(t_{h}),\quad h\in J.\end{aligned}$$
On the other hand, the difference vectors $\mathbf{\Delta}_{l}^{(k+1)}$ corresponding to the active control points satisfy
$$\begin{aligned}\mathbf{\Delta}_{l}^{(k+1)}&=\mathbf{Q}_{l}-\sum_{j=1}^{n}\mathbf{P}_{j}^{(k+1)}B_{j}(t_{l})\\&=\mathbf{Q}_{l}-\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t_{l})-\sum_{i\in I}\mathbf{\Delta}_{i}^{(k)}B_{i}(t_{l})\\&=\mathbf{\Delta}_{l}^{(k)}-\sum_{i\in I}\mathbf{\Delta}_{i}^{(k)}B_{i}(t_{l})\\&=-\mathbf{\Delta}_{i_{1}}^{(k)}B_{i_{1}}(t_{l})-\mathbf{\Delta}_{i_{2}}^{(k)}B_{i_{2}}(t_{l})-\cdots+\left(1-B_{l}(t_{l})\right)\mathbf{\Delta}_{l}^{(k)}-\cdots-\mathbf{\Delta}_{i_{I}}^{(k)}B_{i_{I}}(t_{l}),\quad l\in I.\end{aligned}$$
Arranging the difference vectors into a one-dimensional sequence
$$\mathbf{D}^{(k+1)}=\left[\mathbf{\Delta}_{j_{1}}^{(k+1)},\cdots,\mathbf{\Delta}_{j_{J}}^{(k+1)},\mathbf{\Delta}_{i_{1}}^{(k+1)},\cdots,\mathbf{\Delta}_{i_{I}}^{(k+1)}\right]^{T},\quad k=0,1,2,\cdots,$$
the local iteration format in matrix form is
$$\mathbf{D}^{(k+1)}=\mathbf{T}\mathbf{D}^{(k)},\quad k=0,1,2,\cdots,$$
where $\mathbf{T}$ is the iteration matrix
$$\mathbf{T}=\begin{bmatrix}\mathbf{E}_{J}&-\mathbf{B}_{1}\\\mathbf{0}&\mathbf{E}_{I}-\mathbf{B}_{2}\end{bmatrix},$$
in which $\mathbf{E}_{J}$ and $\mathbf{E}_{I}$ are identity matrices and
$$\mathbf{B}_{1}=\begin{bmatrix}B_{i_{1}}(t_{j_{1}})&B_{i_{2}}(t_{j_{1}})&\cdots&B_{i_{I}}(t_{j_{1}})\\B_{i_{1}}(t_{j_{2}})&B_{i_{2}}(t_{j_{2}})&\cdots&B_{i_{I}}(t_{j_{2}})\\\vdots&\vdots&&\vdots\\B_{i_{1}}(t_{j_{J}})&B_{i_{2}}(t_{j_{J}})&\cdots&B_{i_{I}}(t_{j_{J}})\end{bmatrix},\quad\mathbf{B}_{2}=\begin{bmatrix}B_{i_{1}}(t_{i_{1}})&B_{i_{2}}(t_{i_{1}})&\cdots&B_{i_{I}}(t_{i_{1}})\\B_{i_{1}}(t_{i_{2}})&B_{i_{2}}(t_{i_{2}})&\cdots&B_{i_{I}}(t_{i_{2}})\\\vdots&\vdots&&\vdots\\B_{i_{1}}(t_{i_{I}})&B_{i_{2}}(t_{i_{I}})&\cdots&B_{i_{I}}(t_{i_{I}})\end{bmatrix}.$$
This local iteration format converges and can be extended to blending surfaces[12] and subdivision surfaces.[20]
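In code, local-PIA amounts to restricting the ordinary PIA update to the active index set. The sketch below applies the update directly to the control points rather than iterating the difference-vector form above; the choice of active indices and all sizes are illustrative assumptions.

```python
# Minimal local-PIA sketch: only control points in the active set are updated.
import numpy as np
from scipy.interpolate import BSpline

n, deg = 12, 3
knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))
t = np.linspace(0.0, 1.0, n)
Q = np.column_stack([t, np.sin(2 * np.pi * t)])     # one data point per parameter
B = BSpline.design_matrix(t, knots, deg).toarray()  # B[i, j] = B_j(t_i)

active = np.arange(4, 8)                            # index set I; the rest is fixed
P = Q.copy()                                        # P^(0)
for _ in range(300):
    delta = Q - B @ P                               # all difference vectors
    P[active] += delta[active]                      # update active points only

print(np.abs((Q - B @ P)[active]).max())            # -> 0 on the active set
```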
The PIA format for implicit curve and surface reconstruction is as follows.[11] Given an ordered point cloud $\{\mathbf{Q}_{i}\}_{i=1}^{n}$ and unit normal vectors $\{\mathbf{n}_{i}\}_{i=1}^{n}$ at the data points, we want to reconstruct an implicit curve from the given point cloud. To avoid the trivial solution, offset points $\{\mathbf{Q}_{l}\}_{l=n+1}^{2n}$ are added to the point cloud.[11] They are obtained by offsetting each point a distance $\sigma$ along its unit normal vector:
$$\mathbf{Q}_{l}=\mathbf{Q}_{i}+\sigma\mathbf{n}_{i},\quad l=n+i,\quad i=1,2,\cdots,n.$$
Let $\epsilon$ be the prescribed value of the implicit function at the offset points:
$$f(\mathbf{Q}_{l})=\epsilon,\quad l=n+1,n+2,\cdots,2n.$$
Let the implicit curve after the $\alpha$th iteration be
$$f^{(\alpha)}(x,y)=\sum_{i=1}^{N_{u}}\sum_{j=1}^{N_{v}}C_{ij}^{(\alpha)}B_{i}(x)B_{j}(y),$$
where $C_{ij}^{(\alpha)}$ are the control coefficients.
Define the difference vectors of the data points as[11]
$$\begin{aligned}\boldsymbol{\delta}_{k}^{(\alpha)}&=0-f^{(\alpha)}(x_{k},y_{k}),\quad k=1,2,\cdots,n,\\\boldsymbol{\delta}_{l}^{(\alpha)}&=\epsilon-f^{(\alpha)}(x_{l},y_{l}),\quad l=n+1,n+2,\cdots,2n.\end{aligned}$$
Next, calculate the difference vectors of the control coefficients
$$\boldsymbol{\Delta}_{ij}^{(\alpha)}=\mu\sum_{k=1}^{2n}B_{i}(x_{k})B_{j}(y_{k})\boldsymbol{\delta}_{k}^{(\alpha)},\quad i=1,2,\cdots,N_{u},\quad j=1,2,\cdots,N_{v},$$
where $\mu$ is the convergence coefficient. The new control coefficients are then
$$C_{ij}^{(\alpha+1)}=C_{ij}^{(\alpha)}+\boldsymbol{\Delta}_{ij}^{(\alpha)},$$
leading to the new algebraic B-spline function
$$f^{(\alpha+1)}(x,y)=\sum_{i=1}^{N_{u}}\sum_{j=1}^{N_{v}}C_{ij}^{(\alpha+1)}B_{i}(x)B_{j}(y).$$
This procedure is iterated to generate a sequence of algebraic B-spline functions $\{f^{(\alpha)}(x,y),\ \alpha=0,1,2,\cdots\}$. When the initial control coefficients are $C_{ij}^{(0)}=0$, the sequence converges to the solution of a constrained minimization problem.[11]
Assume that the implicit surface generated after the $\alpha$th iteration is
$$f^{(\alpha)}(x,y,z)=\sum_{i=1}^{N_{u}}\sum_{j=1}^{N_{v}}\sum_{k=1}^{N_{w}}C_{ijk}^{(\alpha)}B_{i}(x)B_{j}(y)B_{k}(z);$$
the iteration format is similar to that of the curve case.[11][21]
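A minimal sketch of implicit-PIA for the curve case, reconstructing a circle, is given below. The knot grid, offset distance $\sigma$, offset value $\epsilon$, and convergence coefficient $\mu$ are illustrative assumptions; $\mu$ is taken small enough that the residual decreases as the iterations proceed.

```python
# Minimal implicit-PIA sketch: fit an algebraic tensor-product B-spline f(x, y)
# with f ~ 0 on a sampled unit circle and f ~ eps on offset points (illustrative).
import numpy as np
from scipy.interpolate import BSpline

deg, Nu, Nv = 3, 10, 10
kx = np.concatenate(([-1.2] * deg, np.linspace(-1.2, 1.2, Nu - deg + 1), [1.2] * deg))
ky = kx.copy()

theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
Q = np.column_stack([np.cos(theta), np.sin(theta)])       # point cloud Q_i
sigma, eps = 0.1, 0.1
pts = np.vstack([Q, Q + sigma * Q])                       # circle normals n_i = Q_i
target = np.r_[np.zeros(len(Q)), np.full(len(Q), eps)]    # f = 0 resp. f = eps

basis = lambda kn, N, x: np.column_stack(
    [BSpline(kn, np.eye(N)[i], deg)(x) for i in range(N)])
Bx, By = basis(kx, Nu, pts[:, 0]), basis(ky, Nv, pts[:, 1])

C = np.zeros((Nu, Nv))                                    # C^(0) = 0
mu = 1.0 / len(pts)                                       # small, safe step size
for _ in range(2000):
    delta = target - np.einsum('ki,ij,kj->k', Bx, C, By)  # delta_k^(alpha)
    C += mu * np.einsum('ki,kj,k->ij', Bx, By, delta)     # Delta_ij^(alpha)

print(np.abs(delta).max())                                # residual on all points
```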
To develop fairing-PIA, we first define the functionals[13]
$$\mathcal{F}_{r,j}(f)=\int_{t_{1}}^{t_{m}}B_{r,j}(t)\,f\,dt,\quad j=1,2,\cdots,n,\quad r=1,2,3,$$
where $B_{r,j}(t)$ denotes the $r$th derivative of the basis function $B_{j}(t)$[8] (e.g., a B-spline basis function).
Let the curve after the $k$th iteration be
$$\mathbf{P}^{[k]}(t)=\sum_{j=1}^{n}B_{j}(t)\mathbf{P}_{j}^{[k]},\quad t\in[t_{1},t_{m}].$$
To construct the new curve $\mathbf{P}^{[k+1]}(t)$, we first calculate the $(k+1)$st difference vectors for the data points[13]
$$\mathbf{d}_{i}^{[k]}=\mathbf{Q}_{i}-\mathbf{P}^{[k]}(t_{i}),\quad i=1,2,\cdots,m.$$
Then, the fitting difference vectors and the fairing vectors for the control points are calculated by[13]
$$\begin{aligned}\boldsymbol{\delta}_{j}^{[k]}&=\sum_{h\in I_{j}}B_{j}(t_{h})\mathbf{d}_{h}^{[k]},\quad j=1,2,\cdots,n,\\\boldsymbol{\eta}_{j}^{[k]}&=\sum_{l=1}^{n}\mathcal{F}_{r,l}\left(B_{r,j}(t)\right)\mathbf{P}_{l}^{[k]},\quad j=1,2,\cdots,n.\end{aligned}$$
Finally, the control points of the $(k+1)$st curve are produced by[13]
$$\mathbf{P}_{j}^{[k+1]}=\mathbf{P}_{j}^{[k]}+\mu_{j}\left[(1-\omega_{j})\boldsymbol{\delta}_{j}^{[k]}-\omega_{j}\boldsymbol{\eta}_{j}^{[k]}\right],\quad j=1,2,\cdots,n,$$
where $\mu_{j}$ is a normalization weight and $\omega_{j}$ is a smoothing weight corresponding to the $j$th control point. The smoothing weights can be used to adjust the smoothness of each region individually, which gives the method great flexibility: the larger the smoothing weight, the smoother the generated curve. The new curve is
$$\mathbf{P}^{[k+1]}(t)=\sum_{j=1}^{n}B_{j}(t)\mathbf{P}_{j}^{[k+1]},\quad t\in[t_{1},t_{m}].$$
In this way, we obtain a sequence of curves $\{\mathbf{P}^{[k]}(t),\ k=1,2,3,\cdots\}$. When all smoothing weights are equal ($\omega_{j}=\omega$), the sequence converges to the solution of the conventional fairing method based on energy minimization.[13] Fairing-PIA can be extended to the surface case in the same way.
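The sketch below runs a fairing-PIA loop with $r=2$ on a noisy planar curve. The Gram matrix $\mathcal{F}_{r,l}(B_{r,j})$ is approximated by Gauss-Legendre quadrature, and the normalization weights $\mu_{j}$ are a heuristic diagonal scaling here, not the choice from the cited paper; the smoothing weight $\omega$ and all sizes are likewise illustrative.

```python
# Minimal fairing-PIA sketch (r = 2, curvature-style fairing), illustrative only.
import numpy as np
from scipy.interpolate import BSpline

m, n, deg = 60, 10, 3
t = np.linspace(0.0, 1.0, m)
Q = np.column_stack([t, np.sin(2 * np.pi * t) + 0.05 * np.cos(40 * t)])  # noisy data

knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))
B = BSpline.design_matrix(t, knots, deg).toarray()        # B[i, j] = B_j(t_i)

xg, wg = np.polynomial.legendre.leggauss(64)              # quadrature on [-1, 1]
xg, wg = 0.5 * (xg + 1.0), 0.5 * wg                       # mapped to [0, 1]
D2 = np.column_stack([BSpline(knots, np.eye(n)[j], deg).derivative(2)(xg)
                      for j in range(n)])                 # B_j''(x_g)
F = (D2 * wg[:, None]).T @ D2                             # F[l, j] = int B_l'' B_j''

P = Q[np.linspace(0, m - 1, n, dtype=int)].copy()         # P^[0]
omega = 1e-4                                              # smoothing weight
mu = 1.0 / ((1 - omega) * B.sum(axis=0) + omega * np.abs(F).sum(axis=0))  # heuristic
for _ in range(300):
    d = Q - B @ P                                         # d_i^[k]
    delta = B.T @ d                                       # fitting vectors
    eta = F @ P                                           # fairing vectors
    P += mu[:, None] * ((1 - omega) * delta - omega * eta)

print(np.linalg.norm(Q - B @ P, axis=1).max())  # residual reflects the smoothing trade-off
```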
Isogeometric least-squares progressive-iterative approximation (IG-LSPIA)[14] solves a boundary value problem[15]
$$\begin{cases}\mathcal{L}u=f,&\text{in }\Omega,\\\mathcal{G}u=g,&\text{on }\partial\Omega,\end{cases}$$
where $u:\Omega\to\mathbb{R}$ is the unknown solution, $\mathcal{L}$ is the differential operator, $\mathcal{G}$ is the boundary operator, and $f$ and $g$ are given continuous functions. In the isogeometric analysis method, NURBS basis functions[8] are used as shape functions to compute a numerical solution of this boundary value problem.[15] The same basis functions are applied to represent the numerical solution $u_{h}$ and the geometric mapping $G$:
$$u_{h}(\hat{\tau})=\sum_{j=1}^{n}R_{j}(\hat{\tau})u_{j},\qquad G(\hat{\tau})=\sum_{j=1}^{n}R_{j}(\hat{\tau})P_{j},$$
where $R_{j}(\hat{\tau})$ denotes the $j$th NURBS basis function and $u_{j}$ is a control coefficient. Substituting the collocation points[22] $\hat{\tau}_{i}$, $i=1,2,\ldots,m$, into the strong form of the PDE yields the discretized problem[22]
$$\begin{cases}\mathcal{L}u_{h}(\hat{\tau}_{i})=f(G(\hat{\tau}_{i})),&i\in\mathcal{I_{L}},\\\mathcal{G}u_{h}(\hat{\tau}_{j})=g(G(\hat{\tau}_{j})),&j\in\mathcal{I_{G}},\end{cases}$$
where $\mathcal{I_{L}}$ and $\mathcal{I_{G}}$ denote the index sets of the internal and boundary collocation points, respectively.
Arranging the control coefficients $u_{j}$ of the numerical solution $u_{h}(\hat{\tau})$ into a one-dimensional column vector $\mathbf{U}=[u_{1},u_{2},\ldots,u_{n}]^{T}$, the discretized problem can be reformulated in matrix form as
$$\mathbf{A}\mathbf{U}=\mathbf{b},$$
where $\mathbf{A}$ is the collocation matrix and $\mathbf{b}$ is the load vector.
Assume that the discretized load values $\{b_{i}\}_{i=1}^{m}$ are the data points to be fitted. Given an initial guess of the control coefficients $\{u_{j}^{(0)}\}_{j=1}^{n}$, $n<m$, we obtain an initial blending function[14]
$$U^{(0)}(\hat{\tau})=\sum_{j=1}^{n}A_{j}(\hat{\tau})u_{j}^{(0)},\quad\hat{\tau}\in[\hat{\tau}_{1},\hat{\tau}_{m}],$$
where $A_{j}(\hat{\tau})$, $j=1,2,\cdots,n$, is the combination of derivatives of different orders of the NURBS basis functions, determined by the operators $\mathcal{L}$ and $\mathcal{G}$:
$$A_{j}(\hat{\tau})=\begin{cases}\mathcal{L}R_{j}(\hat{\tau}),&\hat{\tau}\ \text{in}\ \Omega_{p}^{in},\\\mathcal{G}R_{j}(\hat{\tau}),&\hat{\tau}\ \text{in}\ \Omega_{p}^{bd},\end{cases}\qquad j=1,2,\cdots,n,$$
where $\Omega_{p}^{in}$ and $\Omega_{p}^{bd}$ indicate the interior and boundary of the parameter domain, respectively. Each $A_{j}(\hat{\tau})$ corresponds to the $j$th control coefficient. Let $J_{in}$ and $J_{bd}$ be the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, assume that the boundary control coefficients have been obtained by strong or weak imposition and are fixed, i.e.,
$$u_{j}^{(k)}=u_{j}^{*},\quad j\in J_{bd},\quad k=0,1,2,\cdots.$$
Let the $k$th blending function, generated after the $k$th iteration of IG-LSPIA,[14] be
$$U^{(k)}(\hat{\tau})=\sum_{j=1}^{n}A_{j}(\hat{\tau})u_{j}^{(k)},\quad\hat{\tau}\in[\hat{\tau}_{1},\hat{\tau}_{m}].$$
Then, the difference vectors for the collocation points (DCP) in the $(k+1)$st iteration are obtained using
$$\begin{aligned}\boldsymbol{\delta}_{i}^{(k)}&=b_{i}-\sum_{j=1}^{n}A_{j}(\hat{\tau}_{i})u_{j}^{(k)}\\&=b_{i}-\sum_{j\in J_{bd}}A_{j}(\hat{\tau}_{i})u_{j}^{(k)}-\sum_{j\in J_{in}}A_{j}(\hat{\tau}_{i})u_{j}^{(k)},\quad i=1,2,\ldots,m.\end{aligned}$$
Moreover, group all load values whose parameters fall in the local support of the $j$th blending function, i.e., $A_{j}(\hat{\tau}_{i})\neq 0$, into the $j$th group, corresponding to the $j$th control coefficient, and denote the index set of the $j$th group by $I_{j}$. Lastly, the differences for the control coefficients (DCC) are constructed as[14]
$$d_{j}^{(k)}=\mu\sum_{h\in I_{j}}A_{j}(\hat{\tau}_{h})\boldsymbol{\delta}_{h}^{(k)},\quad j=1,2,\ldots,n,$$
where $\mu$ is a normalization weight that guarantees the convergence of the algorithm.
Thus, the new control coefficients are updated via
$$u_{j}^{(k+1)}=u_{j}^{(k)}+d_{j}^{(k)},\quad j=1,2,\ldots,n,$$
and the $(k+1)$st blending function is
$$U^{(k+1)}(\hat{\tau})=\sum_{j=1}^{n}A_{j}(\hat{\tau})u_{j}^{(k+1)}.$$
The iteration is performed until the desired fitting precision is reached, producing a sequence of blending functions
$$\left\{U^{(k)}(\hat{\tau}),\ k=0,1,\ldots\right\}.$$
IG-LSPIA converges to the solution of a constrained least-squares collocation problem.[14]
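The following is a minimal sketch of the IG-LSPIA iteration for the 1D model problem $-u''=\pi^{2}\sin(\pi x)$ with homogeneous boundary conditions, using B-spline rather than NURBS shape functions. For simplicity the boundary coefficients are iterated together with the interior ones instead of being fixed, and all discretization choices are illustrative; because the un-preconditioned collocation matrix is ill conditioned, many iterations are needed.

```python
# Minimal IG-LSPIA sketch for -u'' = pi^2 sin(pi x), u(0) = u(1) = 0 (illustrative).
import numpy as np
from scipy.interpolate import BSpline

n, deg = 20, 3
knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))
tau = np.linspace(0.0, 1.0, 3 * n)                    # collocation points, m > n

R = BSpline.design_matrix(tau, knots, deg).toarray()  # R_j(tau_i)
R2 = np.column_stack([BSpline(knots, np.eye(n)[j], deg).derivative(2)(tau)
                      for j in range(n)])             # R_j''(tau_i)
A = -R2                                               # A_j = L R_j in the interior
A[0], A[-1] = R[0], R[-1]                             # A_j = G R_j on the boundary
b = np.pi ** 2 * np.sin(np.pi * tau)                  # load values f(G(tau_i))
b[0] = b[-1] = 0.0                                    # boundary data g

u = np.zeros(n)                                       # u^(0)
mu = 1.9 / np.linalg.eigvalsh(A.T @ A).max()          # 0 < mu < 2 / lambda_0
for _ in range(20000):
    delta = b - A @ u                                 # DCP
    u += mu * (A.T @ delta)                           # DCC update: u^(k+1)

print(np.abs(A @ u - b).max())                        # collocation residual
```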
Let n be the number of control points and m be the number of data points.
If $n=m$, the PIA iteration in matrix form is[1]
$$\begin{aligned}\mathbf{P}^{(\alpha+1)}&=\mathbf{P}^{(\alpha)}+\mathbf{\Delta}^{(\alpha)}\\&=\mathbf{P}^{(\alpha)}+\mathbf{Q}-\mathbf{B}\mathbf{P}^{(\alpha)}\\&=\left(\mathbf{I}-\mathbf{B}\right)\mathbf{P}^{(\alpha)}+\mathbf{Q},\end{aligned}$$
where
$$\mathbf{Q}=\left[\mathbf{Q}_{1},\mathbf{Q}_{2},\cdots,\mathbf{Q}_{m}\right]^{T},\quad\mathbf{P}^{(\alpha)}=\left[\mathbf{P}_{1}^{(\alpha)},\mathbf{P}_{2}^{(\alpha)},\cdots,\mathbf{P}_{n}^{(\alpha)}\right]^{T},\quad\mathbf{\Delta}^{(\alpha)}=\left[\mathbf{\Delta}_{1}^{(\alpha)},\mathbf{\Delta}_{2}^{(\alpha)},\cdots,\mathbf{\Delta}_{n}^{(\alpha)}\right]^{T},$$
$$\mathbf{B}=\begin{bmatrix}B_{1}(t_{1})&B_{2}(t_{1})&\cdots&B_{n}(t_{1})\\B_{1}(t_{2})&B_{2}(t_{2})&\cdots&B_{n}(t_{2})\\\vdots&\vdots&\ddots&\vdots\\B_{1}(t_{m})&B_{2}(t_{m})&\cdots&B_{n}(t_{m})\end{bmatrix}.$$
The convergence of PIA is determined by the spectral properties of the collocation matrix: if the spectral radius of the iteration matrix $\mathbf{I}-\mathbf{B}$ is less than $1$, then PIA is convergent. It has been shown that the PIA methods are convergent for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull-Clark, Doo-Sabin).[2]
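This spectral-radius condition can be checked numerically. The snippet below verifies, for an illustrative clamped cubic B-spline collocation matrix with uniform parameters (an assumption, not taken from the cited sources), that $\rho(\mathbf{I}-\mathbf{B})<1$.

```python
# Numerical check that rho(I - B) < 1 for a cubic B-spline collocation matrix.
import numpy as np
from scipy.interpolate import BSpline

n, deg = 15, 3
knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, n - deg + 1), [1.0] * deg))
t = np.linspace(0.0, 1.0, n)                        # one parameter per control point
B = BSpline.design_matrix(t, knots, deg).toarray()  # B[i, j] = B_j(t_i), here n = m
rho = np.abs(np.linalg.eigvals(np.eye(n) - B)).max()
print(rho)                                          # expected: strictly below 1
```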
If $n<m$, the LSPIA iteration in matrix form is[10][19]
$$\begin{aligned}\mathbf{P}^{(\alpha+1)}&=\mathbf{P}^{(\alpha)}+\mu\mathbf{B}^{T}\mathbf{\Delta}^{(\alpha)}\\&=\mathbf{P}^{(\alpha)}+\mu\mathbf{B}^{T}\left(\mathbf{Q}-\mathbf{B}\mathbf{P}^{(\alpha)}\right)\\&=\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)\mathbf{P}^{(\alpha)}+\mu\mathbf{B}^{T}\mathbf{Q}.\end{aligned}$$
When the matrix $\mathbf{B}^{T}\mathbf{B}$ is nonsingular, the following results hold:[23]
Lemma. If $0<\mu<\frac{2}{\lambda_{0}}$, where $\lambda_{0}$ is the largest eigenvalue of the matrix $\mathbf{B}^{T}\mathbf{B}$, then the eigenvalues of $\mu\mathbf{B}^{T}\mathbf{B}$ are real and satisfy $0<\lambda(\mu\mathbf{B}^{T}\mathbf{B})<2$.
Proof. Since $\mathbf{B}^{T}\mathbf{B}$ is symmetric positive semi-definite and nonsingular, it is positive definite, so its eigenvalues are real and positive; with $\mu>0$, this gives $\lambda(\mu\mathbf{B}^{T}\mathbf{B})>0$. Moreover,
$$\lambda(\mu\mathbf{B}^{T}\mathbf{B})=\mu\lambda(\mathbf{B}^{T}\mathbf{B})<\frac{2\lambda(\mathbf{B}^{T}\mathbf{B})}{\lambda_{0}}\leq 2.$$
In summary, $0<\lambda(\mu\mathbf{B}^{T}\mathbf{B})<2$.
Theorem. If $0<\mu<\frac{2}{\lambda_{0}}$, then LSPIA converges, and it converges to the least-squares fitting result for the given data points.[10][19]
Proof. From the matrix form of the iteration, we obtain
$$\begin{aligned}\mathbf{P}^{(\alpha+1)}&=\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)\mathbf{P}^{(\alpha)}+\mu\mathbf{B}^{T}\mathbf{Q}\\&=\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)\left[\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)\mathbf{P}^{(\alpha-1)}+\mu\mathbf{B}^{T}\mathbf{Q}\right]+\mu\mathbf{B}^{T}\mathbf{Q}\\&=\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{2}\mathbf{P}^{(\alpha-1)}+\sum_{i=0}^{1}\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{i}\mu\mathbf{B}^{T}\mathbf{Q}\\&=\cdots\\&=\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{\alpha+1}\mathbf{P}^{(0)}+\sum_{i=0}^{\alpha}\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{i}\mu\mathbf{B}^{T}\mathbf{Q}.\end{aligned}$$
By the Lemma, the spectral radius of the matrix $\mu\mathbf{B}^{T}\mathbf{B}$ satisfies $0<\rho(\mu\mathbf{B}^{T}\mathbf{B})<2$, and thus the spectral radius of the iteration matrix satisfies $0<\rho(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B})<1$. Hence, as $\alpha\to\infty$,
$$\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{\alpha+1}\to\mathbf{0},\qquad\sum_{i=0}^{\alpha}\left(\mathbf{I}-\mu\mathbf{B}^{T}\mathbf{B}\right)^{i}\to\frac{1}{\mu}\left(\mathbf{B}^{T}\mathbf{B}\right)^{-1}.$$
As a result,
$$\mathbf{P}^{(\infty)}=\left(\mathbf{B}^{T}\mathbf{B}\right)^{-1}\mathbf{B}^{T}\mathbf{Q},$$
i.e., $\mathbf{B}^{T}\mathbf{B}\mathbf{P}^{(\infty)}=\mathbf{B}^{T}\mathbf{Q}$, which is exactly the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for the given data points.
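The theorem can be illustrated numerically: with $0<\mu<2/\lambda_{0}$, the matrix iteration drives $\mathbf{P}^{(\alpha)}$ to the normal-equation solution. The random fitting problem below is purely an assumption for the check.

```python
# Numerical check: the LSPIA matrix iteration approaches (B^T B)^{-1} B^T Q.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 8
B = rng.random((m, n))                        # stand-in collocation matrix
Q = rng.random((m, 2))                        # data points in the plane
mu = 1.9 / np.linalg.eigvalsh(B.T @ B).max()  # satisfies mu < 2 / lambda_0

P = np.zeros((n, 2))
for _ in range(20000):
    P = P + mu * B.T @ (Q - B @ P)            # LSPIA update

P_ls = np.linalg.solve(B.T @ B, B.T @ Q)      # normal-equation solution
print(np.abs(P - P_ls).max())                 # should be near machine precision
```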
Lin et al. showed that LSPIA converges even when the iteration matrix is singular.[18]
Since PIA has a clear geometric meaning, constraints can easily be integrated into the iterations. PIA has been widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis.
For implicit curve and surface reconstruction, PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm.[11]
To approximate an offset curve, data points are first sampled on the original curve. Then, an initial polynomial or rational approximation of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method.[33]
Given a triangular mesh model as input, the algorithm first constructs the initial hexahedral mesh and then extracts the quadrilateral mesh of the surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm can guarantee the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero.[34]
First, the image data are converted into a one-dimensional sequence by a Hilbert scan. Then, these data points are fitted by LSPIA to generate a Hilbert curve. Finally, the Hilbert curve is sampled, and the compressed image can be reconstructed. This method preserves the neighborhood information of pixels well.[35]
Given a data point set, we first define the fairing functional and calculate the fitting difference vectors and the fairing vectors of the control points; the control points are then adjusted with the fairing weights. Following these steps, fairing curves and surfaces can be generated iteratively. Because the method has ample fairing parameters, it can achieve either global or local fairing, and the knot vectors, fairing weights, or data parameterization can be adjusted flexibly after each round of iteration. The traditional energy-minimization method is a special case of this method, namely when the smoothing weights are all equal.[13]
The discretized load values are regarded as the set of data points, and the combination of the basis functions and their derivatives is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to how well the blending function fits the load values. In addition, the average time per iteration step depends only on the number of data points (i.e., collocation points), not on the number of control coefficients.[14]