December 01, 2018

Quantum BLAS: Basic Linear Algebra Subprograms with quantum computers

In this post we are going to see how well quantum computers can perform linear algebraic operations: matrix multiplication, matrix inversion, and projection of a vector onto a subspace. As usual, we suppose that we have access to the matrix and the vectors via a QRAM: a classical data structure that allows us to build quantum states proportional to the matrices and vectors we need to operate on.

Historically, this line of algorithms for doing linear algebra started with the well-known HHL algorithm: the algorithm that builds a quantum state proportional to the solution of a sparse and well-conditioned system of equations. While HHL wasn't originally based on QRAM, it relies on similar techniques to extract the eigenvalues of the matrix (namely Hamiltonian simulation).

A couple of years ago, we progressed substantially in our ability to perform quantum linear algebraic operations when we learned how to write (coherently) the singular values of a matrix in a quantum register (Kerenidis & Prakash, 2017). The idea of the algorithm was the following: thanks to QRAM queries, it is possible to encode the singular values of the matrix in the phase of our quantum state, from which they can then be recovered (i.e. written in a quantum register) as usual, using the Quantum Fourier Transform. Unfortunately, this approach has a significant limitation: the running time is linear in the inverse of the precision of the estimate (the running time of the QFT is $O(\log n / \epsilon)$). Only recently, two new works overcame this limitation with algorithms whose dependence on the error is only logarithmic. The papers are based on qubitization (Hao Low & Chuang, 2016) and block encoding (Chakraborty, Gilyén, & Jeffery, 2018). In this page I want to give you an intuition of how quantum linear algebra works, so we will see the simplest (but non-optimal) algorithm, based on singular value estimation, and then we will just state the latest theorems, which represent the current state of the art in quantum linear algebra. First, let me introduce the following theorem.

SVE Theorem (Singular Value Estimation)

Let $M \in \mathbb{R}^{m \times n}$ be a matrix with singular value decomposition $M =\sum_{i \in \left[n\right]} \sigma_i u_i v_i^T$ stored in QRAM. Let $\varepsilon > 0$ be the precision parameter. There is an algorithm with running time $O(polylog(mn)/\varepsilon)$ that performs the mapping $\sum_{i} \alpha_i \ket{v_i} \to \sum_{i}\alpha_i \ket{v_i} \ket{\tilde{\sigma}_i}$, where $\tilde{\sigma}_i \in \sigma_i \pm \varepsilon||M||_F $ for all $i$, with probability at least $1-1/poly(n)$.
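As a warm-up, here is a classical (numpy) illustration of the map SVE performs: the input vector is decomposed in the basis of right singular vectors of $M$, and each amplitude gets paired with the corresponding singular value. This is only a sketch of the transformation, not of the quantum procedure itself:

```python
import numpy as np

# Classical sketch of the SVE map: decompose a unit vector x in the
# right-singular-vector basis of M and pair each amplitude alpha_i with
# sigma_i (the content of the singular-value register in the theorem).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
x /= np.linalg.norm(x)          # |x> is a unit vector

U, sigmas, Vt = np.linalg.svd(M, full_matrices=False)
alphas = Vt @ x                 # alpha_i = <v_i|x>

for a, s in zip(alphas, sigmas):
    print(f"amplitude {a:+.3f} paired with singular value {s:.3f}")
```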

Let’s see a simple algorithm.

Let $M := \sum_{i}^d \sigma_iu_iv_i^T$. We also assume that $\norm{M}_2 \leq 1$ and that the smallest singular value is at least $1/\kappa$. Note that by satisfying these conditions, the singular values of the matrix are rescaled to lie between $1/\kappa$ and $1$. The first assumption can easily be satisfied by estimating the biggest singular value iteratively (for nonnegative matrices this is the Perron-Frobenius root), for instance with the classical power method, and dividing the matrix by the estimate. If we want to use a quantum algorithm, a polylogarithmic procedure can be found in (Kerenidis & Prakash, 2017).
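For concreteness, a minimal classical sketch of this rescaling, using the power method on $M^TM$ to estimate the top singular value (variable names are my own, not from the paper):

```python
import numpy as np

# Sketch: estimate the largest singular value of M with the classical
# power method on M^T M, then divide M by it so that the spectrum of the
# rescaled matrix lies in [1/kappa, 1].
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))

v = rng.standard_normal(5)
for _ in range(300):                 # power iteration on M^T M
    v = M.T @ (M @ v)
    v /= np.linalg.norm(v)
sigma_max = np.linalg.norm(M @ v)    # estimate of the biggest singular value

M_rescaled = M / sigma_max           # now ||M_rescaled||_2 is (approximately) 1
```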

Quantum Linear Algebra Algorithms

  • Require:
    • QRAM access to the rows of $M \in \mathbb{R}^{n \times d}$ with SVD $\sum_{i} \sigma_iu_iv_i^T$, and to $\ket{x} \in \mathbb{R}^{d}$.
    • A threshold $\theta$ and an error parameter $\delta > 0$ for doing matrix projection,
    • Error $\varepsilon > 0$ for doing matrix multiplication and inversion.
  • Ensure:
    • Our quantum register is in state $ \ket{Mx}$, $\ket{M^{-1}x}$, or $\ket{M^{+}_{\theta, \delta}M_{\theta, \delta}x}$.
  1. Query the QRAM to obtain the state: $\ket{x} = \sum_{i} \alpha_i\ket{v_i} $

  2. Perform SVE on $M$ with precision $\epsilon$ for matrix multiplication and inversion, and with precision $\frac{\delta}{2}\frac{\theta}{ \left\lVert M \right\rVert_F}$ for matrix projection. This step creates the state: $ \sum_{i} \alpha_i\ket{v_i}\ket{\tilde{\sigma}_i} $

  3. Perform a controlled operation on an ancilla qubit:

    • for matrix multiplication: $ \sum_{i}^d \alpha_i\ket{v_i}\ket{\tilde{\sigma_i}}\Big(\tilde{\sigma}_i\ket{0} + \sqrt{1 - \tilde{\sigma}^2_i}\ket{1}\Big) $
    • for matrix inversion: $ \sum_{i}^d \alpha_i\ket{v_i}\ket{\tilde{\sigma}_i}\Big(\frac{1}{\kappa\tilde{\sigma}_i}\ket{0} + \sqrt{1 - \frac{1}{\kappa^2\tilde{\sigma}^2_i}}\ket{1}\Big) $
    • for projecting onto the subspace spanned by the singular vectors whose singular values are smaller than $\theta$, map $\ket{\tilde{\sigma}_j}\ket{0} \to \ket{\tilde{\sigma}_j}\ket{0}$ if $\tilde{\sigma}_j < \theta + \frac{\delta}{2}\theta$ and to $\ket{\tilde{\sigma}_j}\ket{1}$ otherwise.
  4. Then, uncompute the register holding the estimates of the singular values. You are left with one of the following states:

    • for matrix multiplication: $ \sum_{i}^d \alpha_i\ket{v_i}\Big(\tilde{\sigma}_i\ket{0} + \sqrt{1 - \tilde{\sigma}^2_i}\ket{1}\Big) $

    • for matrix inversion: $ \sum_{i}^d \alpha_i\ket{v_i}\Big(\frac{1}{\kappa\tilde{\sigma}_i}\ket{0} + \sqrt{1 - \frac{1}{\kappa^2\tilde{\sigma}^2_i}}\ket{1}\Big) $

    • for projection in a subspace: $ \sum_{i \in S} \alpha_i \ket{v_i}\ket{0} + \sum_{i \in \bar{S}} \alpha_i\ket{v_i}\ket{1}$, where $S$ is the set of $i$'s such that $\sigma_i \leq \theta$, possibly together with some $i$'s such that $\sigma_i \in (\theta, (1+\delta)\theta ]$, and $\bar{S}$ is the complement of $S$ in $[d]$.

  5. Perform amplitude amplification on the $\ket{0}$ state of the ancilla qubit, with a unitary $U$ implementing steps 1 to 4, to obtain respectively: $\ket{Mx}$, $\ket{M^{-1}x}$, or $\ket{M^{+}_{\theta, \delta}M_{\theta, \delta} x}$.
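To make the five steps concrete, here is a classical numpy emulation (my own sketch: exact singular values stand in for SVE, and $M$ is taken symmetric positive definite so that left and right singular vectors coincide). Post-selecting the $\ket{0}$ branch and renormalizing plays the role of amplitude amplification:

```python
import numpy as np

# Classical emulation of the algorithm above, assuming exact SVE and a
# symmetric positive definite M (so u_i = v_i).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
M = A.T @ A
M /= np.linalg.svd(M, compute_uv=False)[0]   # rescale so that sigma_max = 1
x = rng.standard_normal(4)
x /= np.linalg.norm(x)

U, sig, Vt = np.linalg.svd(M)
alphas = Vt @ x                              # step 1: |x> = sum_i alpha_i |v_i>
kappa = sig[0] / sig[-1]

# step 3: amplitudes of the |0> branch of the ancilla
mult_0 = alphas * sig                        # matrix multiplication
inv_0 = alphas / (kappa * sig)               # matrix inversion, 1/(kappa sigma_i) <= 1
theta = np.median(sig)
proj_0 = alphas * (sig <= theta)             # projection onto sigma_i <= theta

# step 5: post-select |0> and renormalize (the role of amplitude amplification)
state_Mx = Vt.T @ mult_0 / np.linalg.norm(mult_0)
state_Minv_x = Vt.T @ inv_0 / np.linalg.norm(inv_0)
state_proj_x = Vt.T @ proj_0 / np.linalg.norm(proj_0)
```

On a quantum computer the renormalization is not free: amplitude amplification costs roughly the inverse of the norm of the $\ket{0}$ branch, which is where the norm ratios in the running times come from.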

(Original) Matrix Algebra (Kerenidis & Prakash, 2017)

Let $M := \sum_{i} \sigma_i u_iv_i^T$ such that $||M||_F \leq 1$ be stored in QRAM. There is an algorithm that returns

  • $\ket{M_{\theta, \delta}^+M_{\theta, \delta}x}$ in expected time $\tilde{O}\left(\frac{\norm{M}_F \norm{x}}{\delta\theta \, {\lVert M_{\theta, \delta}M^{+}_{\theta, \delta} x \rVert }} \right)$
  • $\ket{Mx}$ or $\ket{M^{-1}x}$ in expected time $\tilde{O}(\kappa^2(M)\mu(M)/\epsilon)$ with probability at least $1-1/poly(d)$.

The function $\mu(M)$ is defined as $ \min_{p \in [0,1]} \left( \norm{M}_F, \sqrt{s_{2p}(M) s_{2(1-p)}(M^T)} \right)$, while $s_p(M)$ is defined as $ \max_{i \in [m]} \norm{m(i)}_p^p $ (the maximum $\ell_p$ norm, raised to the $p$-th power, among the rows of the matrix).
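Since the definition of $\mu$ is a bit dense, here is a small numpy sketch (the helper names are mine, and the minimum over $p$ is taken on a finite grid for illustration):

```python
import numpy as np

def s_q(M, q):
    # s_q(M) = max_i ||m(i)||_q^q: the largest sum of |entries|^q over the rows
    return np.max(np.sum(np.abs(M) ** q, axis=1))

def mu(M, grid=np.linspace(0.0, 1.0, 101)):
    # mu(M) = min( ||M||_F, min_p sqrt(s_{2p}(M) * s_{2(1-p)}(M^T)) )
    best = min(np.sqrt(s_q(M, 2 * p) * s_q(M.T, 2 * (1 - p))) for p in grid)
    return min(np.linalg.norm(M), best)

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(mu(M))   # 3.0: for this symmetric M, the maximum l1 norm of a row
```

Here $\sqrt{s_1(M)s_1(M^T)}$ at $p = 1/2$ equals the maximum $\ell_1$ row norm ($3$), which beats $\norm{M}_F = \sqrt{10}$.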

The optimal value of $p$ depends on the matrix under consideration, but for symmetric matrices $\mu(M)$ is at most the maximum $\ell_1$ norm of the row vectors. For the projection, given that $\delta \in (0,1)$, the threshold we consider lies in the interval $\left[\theta, (1+\delta)\theta \right]$: that is, we project onto the space spanned by the singular vectors whose corresponding singular values are smaller than $\theta$, together with some subset of singular vectors whose corresponding singular values are slightly bigger than $\theta$. Note that the projection algorithm does not depend on the condition number of the matrix.

As anticipated before, the biggest issue of the SVE approach is that it relies on phase estimation in order to write the singular values of the matrix in a register. This implies a linear dependence on the inverse of the precision of the estimate, and therefore on all our algorithms! We are greedy, and we want to improve this dependence too. This work has been done, and we report here the resulting theorems.

(Optimized) Matrix Algebra

Let $M := \sum_{i} \sigma_iu_iv_i^T \in \mathbb{R}^{d \times d}$ such that $\norm{M}_2=1$, and a vector $x \in \mathbb{R}^d$ stored in QRAM. There exist quantum algorithms that with probability at least $1-1/poly(d)$ return

  • a state $\ket{z}$ such that $\norm{\ket{z} - \ket{Mx}} \leq \epsilon$ in time $\tilde{O}(\kappa(M)\mu(M)\log(1/\epsilon))$
  • a state $\ket{z}$ such that $\norm{\ket{z} - \ket{M^{-1}x}} \leq \epsilon$ in time $\tilde{O}(\kappa(M)\mu(M)\log(1/\epsilon))$
  • a state $\ket{M_{\leq \theta, \delta}^+M_{\leq \theta, \delta}x}$ in time $\tilde{O}(\frac{ \mu(M) \norm{x}}{\delta \theta \norm{M^{+}_{\leq \theta, \delta}M_{\leq \theta, \delta}x}})$

One can also get estimates of the norms with multiplicative error $\eta$ by increasing the running time by a factor $1/\eta$.

Another important advantage of the new methods is that they provide easy ways to manipulate sums or products of matrices (Gilyén, Su, Low, & Wiebe, 2018)(Chakraborty, Gilyén, & Jeffery, 2018).

Matrix algebra on products of matrices

Let $M_1, M_2 \in \mathbb{R}^{d \times d}$ such that $||M_1||_2= ||M_2||_2=1$, $M=M_1M_2$, and a vector $x \in \mathbb{R}^d$ stored in QRAM. There exist quantum algorithms that with probability at least $1-1/poly(d)$ return

  • a state $\ket{z}$ such that $\norm{\ket{z} - \ket{Mx}} \leq \epsilon$ in time $\tilde{O}(\kappa(M)(\mu(M_1)+\mu(M_2))\log(1/\epsilon))$
  • a state $\ket{z}$ such that $\norm{\ket{z}-\ket{M^{-1}x}} \leq \epsilon$ in time $\tilde{O}(\kappa(M)(\mu(M_1)+\mu(M_2))\log(1/\epsilon)) $
  • a state $\ket{M_{\leq \theta, \delta}^+M_{\leq \theta, \delta}x}$ in time $\tilde{O}(\frac{ (\mu(M_1)+\mu(M_2)) \norm{x}}{\delta \theta \norm{M^{+}_{\leq \theta, \delta}M_{\leq \theta, \delta}x}})$

One can also get estimates of the norms with multiplicative error $\eta$ by increasing the running time by a factor $1/\eta$. (Gilyén, Su, Low, & Wiebe, 2018)(Chakraborty, Gilyén, & Jeffery, 2018).
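The classical analogue of this flexibility is that one never needs to form $M = M_1 M_2$ explicitly: multiplication applies the factors in sequence, and inversion solves against them in reverse order. A minimal numpy sketch:

```python
import numpy as np

# Operating on the product M = M1 @ M2 without ever materializing it.
rng = np.random.default_rng(3)
M1 = rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

y_mult = M1 @ (M2 @ x)                                # M x, factor by factor
y_inv = np.linalg.solve(M2, np.linalg.solve(M1, x))   # M^{-1} x = M2^{-1} M1^{-1} x
```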

As a note, I don't know whether qubitization makes it possible to perform all the operations that are possible by accessing the singular values of the matrix in a quantum register.

References

  1. Kerenidis, I., & Prakash, A. (2017). Quantum Gradient Descent for Linear Systems and Least Squares. ArXiv:1704.04992.
  2. Hao Low, G., & Chuang, I. L. (2016). Hamiltonian Simulation by Qubitization. ArXiv:1610.06546.
  3. Chakraborty, S., Gilyén, A., & Jeffery, S. (2018). The power of block-encoded matrix powers: improved regression techniques via faster Hamiltonian simulation. ArXiv:1804.01973.
  4. Gilyén, A., Su, Y., Low, G. H., & Wiebe, N. (2018). Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. ArXiv:1806.01838.
