# Generalized eigenvalue problem in Julia

A is overwritten by its inverse. Note that Supper will not be equal to Slower unless A is itself symmetric (e.g. if A == A'). The lengths of dl and du must be one less than the length of d. Returns a Tridiagonal array based on (abstract) matrix A, using its first lower diagonal, main diagonal, and first upper diagonal. jpvt must have length greater than or equal to n if A is an (m x n) matrix. See also normalize and vecnorm. A is the LU factorization from getrf!, with ipiv the pivoting information. Computes generalized eigenvalues d of A and B using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively. tau must have length greater than or equal to the smallest dimension of A. Compute the pivoted QR factorization of A, AP = QR, using BLAS level 3. The storage layout for A is described in the reference BLAS module, level-2 BLAS, at http://www.netlib.org/lapack/explore-html/. Computes the Schur factorization of the matrix A. A Range giving the indices of the kth diagonal of the matrix M. The kth diagonal of a matrix, as a vector. A lazy-view wrapper of an AbstractArray, taking the elementwise complex conjugate. For matrices or vectors $A$ and $B$, calculates $A / Bᴴ$. If ritzvec = false, the left and right singular vectors will be empty. If A is symmetric or Hermitian, its eigendecomposition (eigfact) is used; if A is triangular, an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). Returns A and jpvt, modified in-place, and tau, which stores the elementary reflectors. Solves the equation A * X = B where A is a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal. Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix. 
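The tridiagonal solve described above (dl on the subdiagonal, d on the diagonal, du on the superdiagonal) can be sketched with Julia's Tridiagonal type; the numeric values here are illustrative:

```julia
# Build a tridiagonal matrix from its three diagonals.
# dl and du must be one element shorter than d.
dl = [1.0, 2.0]
d  = [4.0, 5.0, 6.0]
du = [7.0, 8.0]
A = Tridiagonal(dl, d, du)

# Backslash dispatches to a specialized tridiagonal solver for A * x = b.
b = [1.0, 2.0, 3.0]
x = A \ b
```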
For a $M \times N$ matrix A, U is $M \times M$ for a full SVD (thin=false) and $M \times \min(M, N)$ for a thin SVD. Returns the updated b. Returns the updated y. Sparse matrices generalized eigenvalue problem. If order = B, eigenvalues are ordered within a block. The possibilities are: Dot function for two complex vectors consisting of n elements of array X with stride incx and n elements of array Y with stride incy. 13.2, "Accessing Accuracy in Lanczos Problems", pp. The return value can be reused for efficient solving of multiple systems. dA determines if the diagonal values are read or are assumed to be all ones. Returns the updated B. If uplo = U the upper Cholesky decomposition of A was computed. Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. ... Compute the eigenvalue decomposition of A and return an Eigen object. Returns a matrix M whose columns are the generalized eigenvectors of A and B. Reorders the Schur factorization of a real matrix A = Z*T*Z' according to the logical array select, returning the reordered matrices T and Z as well as the vector of eigenvalues λ. Only the uplo triangle of A is used. The type doesn't have a size and can therefore be multiplied with matrices of arbitrary size as long as i2<=size(A,2) for G*A or i2<=size(A,1) for A*G'. u0: Initial guess for the first left Krylov vector. Overwrite b with the solution to A*x = b or one of the other two variants determined by tA and ul. If howmny = A, all eigenvectors are found. For matrices or vectors $A$ and $B$, calculates $Aᴴ$ \ $B$. Returns the solution to A*X = alpha*B or one of the other three variants determined by side and tA. J R Bunch and L Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Mathematics of Computation 31:137 (1977), 163-179. If uplo = U, the upper half of A is stored. 
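The full-versus-thin SVD dimensions above can be checked on a small rectangular matrix; svdvals returns the singular values in descending order. The matrix is illustrative:

```julia
# A 3×2 matrix whose nonzero singular values are easy to read off.
A = [1.0 0.0;
     0.0 3.0;
     0.0 0.0]
s = svdvals(A)   # singular values, sorted in descending order
```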
If job = S, the columns of (thin) U and the rows of (thin) V' are computed and returned separately. This type is usually constructed (and unwrapped) via the conj function (or related ctranspose), but currently this is the default behavior for RowVector only. If range = A, all the eigenvalues are found. Returns T, Q, and reordered eigenvalues in w. Reorders the vectors of a generalized Schur decomposition. It differs from a 1×n-sized matrix by the fact that its transpose returns a vector and the inner product v1.'v2 returns a scalar. Returns X scaled by a for the first n elements of array X with stride incx. ipiv is the pivot vector from the triangular factorization. A Ritz value $θ$ is considered converged when its associated residual is less than or equal to the product of tol and $max(ɛ^{2/3}, |θ|)$, where ɛ = eps(real(eltype(A)))/2 is LAPACK's machine epsilon. Computes the LDLt factorization of a positive-definite tridiagonal matrix with D as diagonal and E as off-diagonal. If normtype = I, the condition number is found in the infinity norm. Matrix trace. This format should not be confused with the older WY representation [Bischof1987]. Instead of returning a new vector as qr(v::AbstractVector), this function mutates the input vector v in place. Finds the LU factorization of a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal. Compute a convenient factorization of A, based upon the type of the input matrix. tau stores the elementary reflectors. Compute the matrix exponential of A, defined by $e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}$. If uplo = L, the lower triangles of A and B are used. D and E are overwritten and returned. Returns the vector or matrix X, overwriting B in-place. It may have length n (the second dimension of A), or 0. svd: An SVD object containing the left singular vectors, the requested values, and the right singular vectors. alpha and beta are scalars. 
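The symmetric tridiagonal routines mentioned above (dv as diagonal, ev as off-diagonal) can be exercised through the SymTridiagonal type; the diagonals below are illustrative:

```julia
# ev must be one element shorter than dv.
dv = [2.0, 2.0, 2.0]
ev = [-1.0, -1.0]
T = SymTridiagonal(dv, ev)
vals = eigvals(T)   # ascending for symmetric tridiagonal problems
```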
Looks like we don't currently wrap the ARPACK function to compute the generalized eigenproblem. Returns A*x where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. Returns A, vs containing the Schur vectors, and w, containing the eigenvalues. ev's length must be one less than the length of dv. Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A and symmetric positive-definite matrix B. B is overwritten by the solution X. See the note below. If range = A, all the eigenvalues are found. Those BLAS functions that overwrite one of the input arrays have names ending in '!'. The i-th element of outer specifies the number of times that a slice along the i-th dimension of A should be repeated. bkfact! is the same as bkfact, but saves space by overwriting the input A, instead of creating a copy. Converts a matrix A to Hessenberg form. Solves the equation A * X = B for a Hermitian matrix A using the results of sytrf!. For general matrices, the complex Schur form (schur) is computed and the triangular algorithm is used on the triangular factor. For any iterable container A (including arrays of any dimension) of numbers (or any element type for which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the corresponding length. SLEPc is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. If B is provided, the generalized eigen-problem is solved. 
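The generalized problem A x = λ B x that this thread is about can be illustrated with the dense-path eigvals(A, B); eigs(A, B; nev = k) is the iterative ARPACK counterpart under discussion (in Base for Julia 0.6, in the Arpack.jl package later). The matrices here are illustrative:

```julia
A = [2.0 0.0;
     0.0 3.0]
B = [2.0 0.0;
     0.0 1.0]          # symmetric positive definite
λ = eigvals(A, B)      # generalized eigenvalues of the pencil (A, B)
```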
Only the ul triangle of A is used. For matrices or vectors $A$ and $B$, calculates $Aᴴ / B$. Multiplication operator. Solves A * X = B (trans = N), A.' * X = B (trans = T), or Aᴴ * X = B (trans = C). Only the ul triangle of A is used. For matrices or vectors $A$ and $B$, calculates $Aᴴ⋅B$. The Golub-Ye algorithm is an algorithm for solving Hermitian (symmetric) generalized eigenvalue problems A x = λ B x with positive definite B, without the need for inverting B. We recommend that users _always_ specify a value for tol which suits their specific needs. Constructs an upper (isupper=true) or lower (isupper=false) bidiagonal matrix using the given diagonal (dv) and off-diagonal (ev) vectors. Computes the Schur factorization of the matrix A. dA determines if the diagonal values are read or are assumed to be all ones. Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. The flop rate of the entire parallel computer is returned. tau contains scalars which parameterize the elementary reflectors of the factorization. Computes the generalized eigenvalues of A and B. doi:10.1137/110852553, Awad H. Al-Mohy, Nicholas J. Higham and Samuel D. Relton, "Computing the Fréchet derivative of the matrix logarithm and estimating the condition number", SIAM Journal on Scientific Computing, 35(4), 2013, C394-C410. Due to perturbation theory for eigenvalue problems, there is a family of continuous functions $\{g_i(\lambda)\}$, defined by the eigenvalues of (1.1b), where $\lambda$ is the eigenvalue of a generalized eigenvalue problem (GEP). If false, omit the singular vectors. Compute the QL factorization of A, A = QL. Dot product of two vectors consisting of n elements of array X with stride incx and n elements of array Y with stride incy. C is overwritten. Shouldn't be too difficult. Only the ul triangle of A is used. 
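The strided BLAS dot product described above has a plain-vector form in Julia, dot(x, y); the vectors below are illustrative:

```julia
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
d = dot(x, y)   # sum of x[i] * y[i]
```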
Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. iblock_in specifies the submatrices corresponding to the eigenvalues in w_in. Returns A, containing the bidiagonal matrix B; d, containing the diagonal elements of B; e, containing the off-diagonal elements of B; tauq, containing the elementary reflectors representing Q; and taup, containing the elementary reflectors representing P. Compute the LQ factorization of A, A = LQ. nsv: Number of singular values. Returns the singular values of A in descending order. (a) λ is an eigenvalue of (A, B) if and only if 1/λ is an eigenvalue of (B, A). Returns the upper triangle of M starting from the kth superdiagonal, overwriting M in the process. A: Linear operator whose singular values are desired. Compute the Hessenberg decomposition of A and return a Hessenberg object. To include the effects of permutation, it is typically preferable to extract "combined" factors like PtL = F[:PtL] (the equivalent of P'*L) and LtP = F[:UP] (the equivalent of L'*P). uplo indicates which triangle of matrix A to reference. The individual components of the factorization F can be accessed by indexing: F[:L]*F[:U] == (F[:Rs] . (and similar with different combinations of arguments). For row vectors, return the $q$-norm of A, which is equivalent to the p-norm with value p = q/(q-1). Only the ul triangle of A is used. The multiplication occurs in-place on b. Matrix operations involving transpositions operations like A' \ B are converted by the Julia parser into calls to specially named functions like Ac_ldiv_B. qrfact! tol: parameter defining the relative tolerance for convergence of Ritz values (eigenvalue estimates). If uplo = L the lower Cholesky decomposition of A is computed. Upper triangle of a matrix, overwriting M in the process. Returns alpha*A*x. If compq = N the Schur vectors are not modified. 
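The triangle-extraction helpers mentioned above (upper or lower triangle starting from the kth diagonal) look like this on a small illustrative matrix:

```julia
M = [1 2 3;
     4 5 6;
     7 8 9]
U1 = triu(M, 1)    # upper triangle starting from the 1st superdiagonal
L1 = tril(M, -1)   # lower triangle starting from the 1st subdiagonal
```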
Returns w, a unit vector in the direction of v (this is a mutation of v), and r, the norm of v. Compute the QR factorization of the matrix A: an orthogonal (or unitary if A is complex-valued) matrix Q, and an upper triangular matrix R such that $A = QR$. Although the residual is almost zero, so are the eigenvectors in many cases: Also, the values in d are not eigenvalues: I think the origin of this is the same as the origin of the bugs reported in the comments of #24668. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. Computes the inverse of a Hermitian matrix A using the results of sytrf!. Returns the solution to A*x = b or one of the other two variants determined by tA and ul. If jobvl = N, the left eigenvectors of A aren't computed. This function requires LAPACK 3.6.0. If $A$ is an m×n matrix, then $A = QR$, where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. A is overwritten with its inverse. Possible values of the which keyword of eigs include:

- :LM: eigenvalues of largest magnitude (default)
- :LI: eigenvalues of largest imaginary part (nonsymmetric or complex A only)
- :SI: eigenvalues of smallest imaginary part (nonsymmetric or complex A only)
- :BE: compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end (real symmetric A only)

For any iterable containers x and y (including arrays of any dimension) of numbers (or any element type for which dot is defined), compute the Euclidean dot product (the sum of dot(x[i],y[i])) as if they were vectors. For matrices or vectors $A$ and $B$, calculates $Aᵀ / Bᵀ$. 
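The QR factorization described above can be sketched as follows; destructuring `Q, R = qr(A)` works both in the 0.6-era API (which returns a tuple) and later (where the factorization object destructures). The matrix is illustrative:

```julia
A = [1.0 2.0;
     3.0 4.0]
Q, R = qr(A)    # Q orthogonal/unitary, R upper triangular, A == Q*R
```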
L is not extended with zeros if the full Q is requested. If jobu = N, no columns of U are computed. Downdate a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] - v*v') but the computation of CC only uses O(n^2) operations. Returns the lower triangle of M starting from the kth superdiagonal, overwriting M in the process. Same as eigvals, but saves space by overwriting the input A, instead of creating a copy. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. If transa = T, A is transposed. The optimal choice of tol varies both with the value of M and the intended application of the pseudoinverse. Reorders the Generalized Schur factorization of a matrix pair (A, B) = (Q*S*Z', Q*T*Z') according to the logical array select and returns the matrices S, T, Q, Z and vectors α and β. tau contains scalars which parameterize the elementary reflectors of the factorization. Returns the updated y. Returns the eigenvalues of A. Computed by solving the left-division N = M \ I. Computes the Moore-Penrose pseudoinverse. If sense = E,B, the right and left eigenvectors must be computed. A is overwritten by its inverse and returned. Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization. norm(v, p) == 1. Compute the singular value decomposition (SVD) of A and return an SVD object. I encountered a strange issue when solving the generalized eigenvalue problem in Julia 0.6 (version on macOS, but also on 0.5 and 0.6 in JuliaBox). 
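The Moore-Penrose pseudoinverse mentioned above (with its tol-dependent rank cutoff) can be sketched on a small rank-deficient-shaped matrix; the values are illustrative:

```julia
A = [1.0 0.0;
     0.0 2.0;
     0.0 0.0]
P = pinv(A)     # 2×3 Moore-Penrose pseudoinverse
```
A useful sanity check is the defining identity A*P*A == A.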
If uplo = L, the lower triangle of A is used. If the perm argument is nonempty, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering). operator (or related ctranspose or ' operator). which = 'LM': Eigenvalues with largest magnitude (eigs, eigsh), that is, largest eigenvalues in the euclidean norm of complex numbers.. which = 'SM': Eigenvalues with smallest … Default: 1000. ncv: Maximum size of the Krylov subspace, see eigs (there called nev). Explicitly finds Q, the orthogonal/unitary matrix from gehrd!. If uplo = L, the lower triangle of A is used. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. The default is true for both options. It will short-circuit as soon as it can rule out symmetry/triangular structure. If balanc = S, A is scaled but not permuted. Solves the equation A * X = B for a symmetric matrix A using the results of sytrf!. Only the uplo triangle of C is updated. For matrices or vectors $A$ and $B$, calculates $Aᴴ$ \ $Bᴴ$. If fact = F and equed = C or B the elements of C must all be positive. lufact! If uplo = L the lower Cholesky decomposition of A was computed. Note that the LAPACK API provided by Julia can and will change in the future. Compute the RQ factorization of A, A = RQ. If fact = F and equed = R or B the elements of R must all be positive. vl is the lower bound of the interval to search for eigenvalues, and vu is the upper bound. ipiv contains pivoting information about the factorization. abstol can be set as a tolerance for convergence. Matrix inverse. Compute the QR factorization of A, A = QR. Update a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] + v*v') but the computation of CC only uses O(n^2) operations. Only the uplo triangle of A is used. Compute the Cholesky factorization of a dense symmetric positive definite matrix A and return a Cholesky factorization. Returns X. 
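The dense symmetric positive definite path above (cholfact in the 0.6-era API) is what backslash uses for such matrices; a minimal sketch with illustrative values:

```julia
A = [4.0 1.0;
     1.0 3.0]          # symmetric positive definite
@assert isposdef(A)    # verified via an internal Cholesky attempt
b = [1.0, 2.0]
x = A \ b              # solve the SPD system
```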
tau must have length greater than or equal to the smallest dimension of A. Compute the QL factorization of A, A = QL. Explicitly finds the matrix Q of a RQ factorization after calling gerqf! Computes the inverse of A, using its LU factorization found by getrf!. No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions. Sums the diagonal elements of M. Log of matrix determinant. Valid values for p are 1, 2 (default), or Inf. If A has no negative real eigenvalues, compute the principal matrix square root of A, that is the unique matrix $X$ with eigenvalues having positive real part such that $X^2 = A$. Scale an array A by a scalar b overwriting A in-place. If A has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible. When running in parallel, only 1 BLAS thread is used. Computes the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F[:values] and the eigenvectors in the columns of the matrix F[:vectors]. jpvt is an integer vector of length n corresponding to the permutation $P$. Computes the eigensystem for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. Concatenate matrices block-diagonally. Compute the blocked QR factorization of A, A = QR. For other arrays, the ConjArray constructor can be used directly. If balanc = N, no balancing is performed. job can be one of N (A will not be permuted or scaled), P (A will only be permuted), S (A will only be scaled), or B (A will be both permuted and scaled). Only the ul triangle of A is used. Also, return values in Python are complex numbers; in Julia … A may be represented as a subtype of AbstractArray, e.g., a sparse matrix, or any other type supporting the four methods size(A), eltype(A), A * vector, and A' * vector. The eigenvalues are returned in w and the eigenvectors in Z. 
Computes the eigenvectors for a symmetric tridiagonal matrix with dv as diagonal and ev_in as off-diagonal. The input factorization C is updated in place such that on exit C == CC. The individual components of the factorization F can be accessed by indexing with a symbol: F[:p]: the permutation vector of the pivot (QRPivoted only), F[:P]: the permutation matrix of the pivot (QRPivoted only). If order = E, they are ordered across all the blocks. Compute the Cholesky ($LL'$) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. A (non-zero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies the linear equation $Av = \lambda v$, where λ is a scalar, termed the eigenvalue corresponding to v. That is, the eigenvectors are the vectors that the linear transformation A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. Finds the solution to A * X = B for Hermitian matrix A. Searches for the minimum norm/least squares solution. If jobu = U, the orthogonal/unitary matrix U is computed. Initial support is now in place. Explicitly finds the matrix Q of a QL factorization after calling geqlf! Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. svdfact! is the same as svdfact, but modifies the arguments A and B in-place, instead of making copies. The sigma and which keywords interact: the description of eigenvalues searched for by which do not necessarily refer to the eigenvalue problem $Av = Bv\lambda$, but rather the linear operator constructed by the specification of the iteration mode implied by sigma. A is overwritten by its Schur form. 
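The eigenvector definition above, A v = λ v, can be checked directly on a small symmetric matrix (illustrative values):

```julia
A = [2.0 1.0;
     1.0 2.0]
λ = eigvals(Symmetric(A))   # ascending for symmetric matrices
V = eigvecs(Symmetric(A))   # columns are the corresponding eigenvectors
# Each column satisfies A * V[:, i] ≈ λ[i] * V[:, i].
```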
F[:D2] is a P-by-(K+L) matrix whose top right L-by-L block is diagonal. The eigenvalues of A are returned in the vector λ. Reorders the Schur factorization F of a matrix A = Z*T*Z' according to the logical array select, returning the reordered factorization F object. For example: A=factorize(A); x=A\b; y=A\C. alpha is a scalar. The following functions are available for PivotedCholesky objects: size, \, inv, det, and rank. Computes eigenvalues d of A using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively. Methods for complex arrays only. If jobvr = N, the right eigenvectors of A aren't computed. C is overwritten. Åke Björck and Sven Hammarling, "A Schur method for the square root of a matrix", Linear Algebra and its Applications, 52-53, 1983, 127-140. doi:10.1016/0024-3795(83)80010-X. When p=1, the matrix norm is the maximum absolute column sum of A: $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$, with $a_{ij}$ the entries of $A$, and $m$ and $n$ its dimensions. The complete list of supported factors is :L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP. Returns the singular values in d, and if compq = P, the compact singular vectors in iq. Awad H. Al-Mohy and Nicholas J. Higham, "Improved inverse scaling and squaring algorithms for the matrix logarithm", SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. If sense = N, no reciprocal condition numbers are computed. The triangular Cholesky factor can be obtained from the factorization F with: F[:L] and F[:U]. select determines which eigenvalues are in the cluster. If uplo = L, it is lower triangular. Returns the solution in B and the effective rank of A in rnk. The first dimension of T sets the block size and it must be between 1 and n. 
The second dimension of T must equal the smallest dimension of A. Recursively computes the blocked QR factorization of A, A = QR. Generalized eigensystems. If side = L, the left eigenvectors are computed. As this library only supports sparse matrices with Float64 or Complex128 elements, lufact converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate. such that $v_i$ is the $i$th column of $V$, and $\tau_i$ is the $i$th diagonal element of $T$. Computes the inverse of positive-definite matrix A after calling potrf! By default, the value of tol is the largest dimension of M multiplied by the eps of the eltype of M. Compute the p-norm of a vector or the operator norm of a matrix A, defaulting to the 2-norm. ncv: Number of Krylov vectors used in the computation; should satisfy nev+1 <= ncv <= n for real symmetric problems and nev+2 <= ncv <= n for other problems, where n is the size of the input matrices A and B. Computes the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F[:values] and the generalized eigenvectors in the columns of the matrix F[:vectors]. Returns alpha*A*x where A is a symmetric band matrix of order size(A,2) with k super-diagonals stored in the argument A. In Julia they are numbered from 1. If jobv = V the orthogonal/unitary matrix V is computed. The singular values in S are sorted in descending order. If info = 0, the factorization succeeded. I'd like to compute what in MATLAB would be, (and I need both eigenvalues and relative eigenfunctions). eig is a wrapper around eigfact, extracting all parts of the factorization to a tuple; where possible, using eigfact is recommended. Only the ul triangle of A is used. This type is usually constructed (and unwrapped) via the transpose function or .' 
No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions. Returns A*B or the other three variants according to tA and tB. Left division operator: multiplication of y by the inverse of x on the left. dA determines if the diagonal values are read or are assumed to be all ones. Same as schurfact but uses the input argument as workspace. The specified value of tol should be positive; otherwise, it is ignored and $ɛ$ is used instead. The matrix $Q$ is stored as a sequence of Householder reflectors $v_i$ and coefficients $\tau_i$ where: The upper triangular part contains the elements of $R$, that is R = triu(F.factors) for a QR object F. The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix V = eye(m,n) + tril(F.factors,-1). Kronecker tensor product of two vectors or two matrices. The following keyword arguments are supported: nev: Number of eigenvalues. Compute the cross product of two 3-vectors. * C (trans = T), Q' * C (trans = C) for side = L or the equivalent right-sided multiplication for side = R using Q from a RQ factorization of A computed using gerqf!. If uplo = L, the lower half is stored. An InexactError exception is thrown if the factorization produces a number not representable by the element type of A, e.g. Construct a Symmetric view of the upper (if uplo = :U) or lower (if uplo = :L) triangle of the matrix A. svdfact! Computes the Bunch-Kaufman factorization of a Hermitian matrix A. This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. peakflops computes the peak flop rate of the computer by using double precision gemm!. ritzvec: Returns the Ritz vectors v (eigenvectors) if true, v0: starting vector from which to start the iterations. Perform simple linear regression using Ordinary Least Squares. Returns the uplo triangle of alpha*A*A' or alpha*A'*A, according to trans. 
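The Kronecker tensor product and the 3-vector cross product mentioned above can be sketched together; the matrices and vectors are illustrative:

```julia
A = [1 2;
     3 4]
B = [0 1;
     1 0]
K = kron(A, B)    # 4×4 block matrix [A[1,1]*B A[1,2]*B; A[2,1]*B A[2,2]*B]

c = cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # right-handed cross product
```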
The i-th element of inner specifies the number of times that the individual entries of the i-th dimension of A should be repeated. Returns A, modified in-place, ipiv, the pivoting information, and an info code which indicates success (info = 0), a singular value in U (info = i, in which case U[i,i] is singular), or an error code (info < 0). Returns A, the pivots piv, the rank of A, and an info code. Conjugate transpose array src and store the result in the preallocated array dest, which should have a size corresponding to (size(src,2),size(src,1)). If uplo = U, the upper half of A is stored. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. Uses the output of gerqf!. Returns the solution X; equed, which is an output if fact is not N, and describes the equilibration that was performed; R, the row equilibration diagonal; C, the column equilibration diagonal; B, which may be overwritten with its equilibrated form diagm(R)*B (if trans = N and equed = R,B) or diagm(C)*B (if trans = T,C and equed = C,B); rcond, the reciprocal condition number of A after equilibrating; ferr, the forward error bound for each solution vector in X; berr, the backward error bound for each solution vector in X; and work, the reciprocal pivot growth factor.