Jekyll2018-04-15T22:59:01+02:00https://luongo.pro/returnlambdaThe level of achievement that you have in anything, is a reflection of how well you were able to focus on it (Steve Vai)
Gather Statistics For Your Qram2018-04-15T00:00:00+02:002018-04-15T00:00:00+02:00https://luongo.pro/2018/04/15/Gather-statistics-for-your-QRAM<p>Generally, with the term QRAM people refer to an oracle, or more
generically to a unitary, that is called with the purpose of creating
a state in a quantum circuit. This state represents some (classical)
data that you want to process later in your algorithm. More formally, a
QRAM allows you to perform operations like
$\ket{i}\ket{0} \to \ket{i}\ket{x_i}$ for $x_i \in \mathbb{R}$ and some
$i \in [n]$. This model can also be used to create states proportional to
classical vectors, allowing us to perform queries
$\ket{i}\ket{0} \to \ket{i}\ket{x(i)}$ for $x(i) \in \mathbb{R}^d$ and
some $i \in [n]$.</p>
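<p>As a toy illustration of such a query (assuming one-bit data $x_i$; the function name <code>qram_oracle</code> is made up for this sketch), the oracle can be written as a permutation matrix acting as $\ket{i}\ket{b} \to \ket{i}\ket{b \oplus x_i}$:</p>

```python
import numpy as np

def qram_oracle(x):
    """Toy XOR-style oracle: U|i>|b> = |i>|b XOR x_i> for a list of bits x."""
    n = len(x)
    U = np.zeros((2 * n, 2 * n))
    for i, xi in enumerate(x):
        for b in (0, 1):
            # basis index 2*i + b encodes |i>|b>
            U[2 * i + (b ^ xi), 2 * i + b] = 1.0
    return U
```

<p>Queried on a superposition over the index register, a single call returns every $x_i$ entangled with its index, which is where the quantum advantage of this access model comes from.</p>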
<p>Querying the QRAM is assumed to be efficient: the running time is
expected to be polylogarithmic in the matrix dimensions, but it might be
polynomial in other parameters. As an example, in the QRAM described in
Kerenidis and Prakash (2017), Kerenidis and Prakash (2016), and Prakash
(2014), the authors store a matrix decomposition such that the running
time of a query might depend on the Frobenius norm, or on a parametrized
function specific to their implementation. In this model, the best
parametrization of the decomposition might depend on the dataset. This
means that in practice you might need to estimate these parameters, and
therefore I’ve decided to write a library for this. Specifically, given
a matrix $A$ to store in QRAM, you have to find the value
$p \in \left(0, 1 \right)$ that minimizes the
function <script type="math/tex">\mu_p(A) = \sqrt{ s_{2p}(A) s_{2(1-p)}(A^T)}</script> where
$s_p(A) := \max_{i \in [m]} \lVert a_i \rVert_p^p $ is the maximum $\ell_p$ norm,
raised to the power $p$, of the row vectors $a_i$ of $A$.</p>
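<p>A naive way to carry out this minimization with numpy (a grid-search sketch under names I made up, not the actual qramutils implementation):</p>

```python
import numpy as np

def s_p(A, p):
    """s_p(A): maximum l_p norm, raised to the power p, over the rows of A."""
    return np.max(np.sum(np.abs(A) ** p, axis=1))

def mu(A, p):
    """mu_p(A) = sqrt( s_{2p}(A) * s_{2(1-p)}(A^T) )."""
    return np.sqrt(s_p(A, 2 * p) * s_p(A.T, 2 * (1 - p)))

def best_p(A, grid=None):
    """Naive grid search for the p in (0, 1) minimizing mu_p(A)."""
    grid = np.linspace(0.05, 0.95, 91) if grid is None else grid
    values = [mu(A, p) for p in grid]
    i = int(np.argmin(values))
    return grid[i], values[i]
```

<p>For well-behaved matrices one expects the minimized $\mu_p(A)$ to be at most the Frobenius norm, which is the other normalization the library reports.</p>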
<p>The need to estimate parameters of a dataset also arises with other
models of access to the data. For instance, algorithms such as HHL use
Hamiltonian simulation, an access model that makes the complexity of the
algorithm depend on the sparsity.</p>
<p>So far, qramutils analyzes a given numpy matrix and extracts the following
parameters:</p>
<ul>
<li>
<p>The sparsity.</p>
</li>
<li>
<p>The condition number.</p>
</li>
<li>
<p>The Frobenius norm (of the rescaled matrix such that
$0< \sigma_i < 1$).</p>
</li>
<li>
<p>The best parameter $p$ for the matrix decomposition described above.</p>
</li>
<li>
<p>Some boring and common plotting.</p>
</li>
</ul>
<p><a href="https://github.com/Scinawa/qramutils">Here</a> you can find the
repository.</p>
<p>This code can be improved in many directions! For instance, I’d like
to integrate in the library the code for plotting the parameters for
various PCA dimensions and/or degrees of polynomial expansion, add
options for dataset normalization and scaling, and maybe expand the types of
accepted input data, and so on.</p>
<p>Ideally, for other kinds of matrices there might be other matrix
decompositions available, and therefore a need to estimate other
parameters in the future. This is where I’ll add the code for that. :)</p>
<p>This is an example of usage on the MNIST dataset:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>$ pipenv run python3 examples/mnist_QRAM.py --help
usage: mnist_QRAM.py [-h] [--db DB] [--generateplot] [--analize]
[--pca-dim PCADIM] [--polyexp POLYEXP]
[--loglevel {DEBUG,INFO}]
Analyze a dataset and model QRAM parameters
optional arguments:
-h, --help show this help message and exit
--db DB path of the mnist database
--generateplot run experiment with various dimension
--analize Run all the analysis of the matrix
--pca-dim PCADIM pca dimension
--polyexp POLYEXP degree of polynomial expansion
--loglevel {DEBUG,INFO}
set log level
</code></pre>
</div>
<p>This is the output, assuming you have a folder called data that holds
the MNIST dataset.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>pipenv run python3 examples/mnist_QRAM.py --db data --analize --loglevel INFO
04-01 22:23 INFO Calculating parameters for default configuration: PCA dim 39, polyexp 2
04-01 22:24 INFO Matrix dimension (60000, 819)
04-01 22:24 INFO Sparsity (0=dense 1=empty): 0.0
04-01 22:24 INFO The Frobenius norm: 4.6413604982930385
04-01 22:26 INFO best p 0.8501000000000001
04-01 22:26 INFO Best p value: 0.8501000000000001
04-01 22:26 INFO The \mu value is: 4.6413604982930385
04-01 22:26 INFO Qubits needed to index+data register: 26.
</code></pre>
</div>
<p>If you want to use the library in your source code:</p>
<div class="highlighter-rouge"><pre class="highlight"><code># assumes `import logging` has run, the qramutils package is imported,
# and X is a numpy matrix
libq = qramutils.QramUtils(X, logging_handler=logging)
logging.info("Matrix dimension {}".format(X.shape))
sparsity = libq.sparsity()
logging.info("Sparsity (0=dense 1=empty): {}".format(sparsity))
frob_norm = libq.frobenius()
logging.info("The Frobenius norm: {}".format(frob_norm))
best_p, min_sqrt_p = libq.find_p()
logging.info("Best p value: {}".format(best_p))
logging.info("The \\mu value is: {}".format(min(frob_norm, min_sqrt_p)))
qubits_used = libq.find_qubits()
logging.info("Qubits needed to index+data register: {} ".format(qubits_used))
</code></pre>
</div>
<p>To install, you just need to do the following:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>pipenv run python3 setup.py sdist
</code></pre>
</div>
<p>And then, your package will be ready to be installed as:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>pipenv install dist/qramutils-0.1.0.tar.gz
</code></pre>
</div>
<div id="refs" class="references">
<div id="ref-kerenidis2016quantum">
Kerenidis, Iordanis, and Anupam Prakash. 2016. “Quantum Recommendation
Systems.” *ArXiv Preprint ArXiv:1603.08675*.
</div>
<div id="ref-kerenidis2017quantum">
———. 2017. “Quantum Gradient Descent for Linear Systems and Least
Squares.” *ArXiv Preprint ArXiv:1704.04992*.
</div>
<div id="ref-prakash2014quantum">
Prakash, Anupam. 2014. *Quantum Algorithms for Linear Algebra and
Machine Learning*. University of California, Berkeley.
</div>
</div>Failed Attempt To Reverse Swap Test2018-04-15T00:00:00+02:002018-04-15T00:00:00+02:00https://luongo.pro/2018/04/15/Failed-attempt-to-reverse-swap-test<p>This post was born from an attempt to find a reversible circuit for
computing the swap test: a circuit used to compute the inner product of
two quantum states. This circuit was originally proposed for solving the
state distinguishability problem, but, as you can imagine, it is widely used in
quantum machine learning too. Before starting, let’s note one thing. A
reversible circuit for the swap test implies that we are able to
recreate the two input states. Conceptually, this should be impossible
because of the no-cloning theorem. With a very neat observation we can
realize that we are not even able to preserve one of the states.</p>
<p>There is no unitary operator $U$ acting on $\ket{x}\ket{y}$ that allows you to
estimate the scalar product $\braket{x|y}$ between two states $\ket{x}, \ket{y}$
using only one copy of $\ket{x}$.</p>
<p>Proof by contradiction. Assume such a unitary exists. Then it would be possible to
estimate the scalar product between $\ket{x}$ and all the basis states
$\ket{i}$ (basically doing tomography of the state). This is a way of
recovering a classical description of $\ket{x}$. Knowing $\ket{x}$, we
could recreate as many copies of it as we want. Therefore, we
could use this procedure to clone a state, which is forbidden by the
no-cloning theorem.</p>
<p>Let’s see what happens if we try to reverse it.</p>
<p><img src="/assets/reverse_swap.png" alt="image" /></p>
<p>It is good to know that the circuit in the figure above is
inspired by the proof that $BPP \subseteq BQP$. The idea is the following:
after a swap test, and before doing any measurement on the ancilla
qubit, we apply a CNOT onto a second ancillary qubit, and then execute the
inverse of the swap test. Since the swap test is a self-inverse operator, this
simply means that we apply the swap test twice. Let’s start the
calculations from the state after the CNOT on the second ancilla qubit.</p>
<script type="math/tex; mode=display">\frac{1}{2} \Big[ \left( \ket{ab} + \ket{ba} \right)\ket{00} + \left( \ket{ab} - \ket{ba} \right)\ket{11} \Big] \xrightarrow{\text{H}}</script>
<script type="math/tex; mode=display">\frac{1}{2} \Big[ \left( \ket{ab} + \ket{ba} \right)\ket{+0} + \left( \ket{ab} - \ket{ba} \right)\ket{-1} \Big] \xrightarrow{\text{SWAP}}</script>
<script type="math/tex; mode=display">\frac{1}{2} \left[
\frac{1}{\sqrt{2}} \Big[ \Big( \ket{ab} + \ket{ba} \Big) \ket{0} + \Big( \ket{ab} + \ket{ba} \Big) \ket{1} \Big] \ket{0} + \frac{1}{\sqrt{2}} \Big[ \Big( \ket{ab} - \ket{ba} \Big) \ket{0} - \Big( \ket{ba} - \ket{ab} \Big) \ket{1} \Big] \ket{1}
\right] =</script>
<script type="math/tex; mode=display">\frac{1}{2} \left[
\left( \ket{ab} + \ket{ba} \right)\ket{+} \ket{0} +
\left( \ket{ab} - \ket{ba} \right)\ket{+} \ket{1}
\right] \xrightarrow{\text{H}}</script>
<script type="math/tex; mode=display">\frac{1}{2} \left[
\left( \ket{ab} + \ket{ba} \right)\ket{0} \ket{0} +
\left( \ket{ab} - \ket{ba} \right)\ket{0} \ket{1}
\right].</script>
<p><script type="math/tex">p(\ket{0}) = \frac{1}{4}\Big( 2 + 2 \braket{ab|ba}\Big) = \frac{ 1+ \braket{ab|ba}}{2} = \frac{ 1+ |\braket{a|b}|^2}{2}</script>
and therefore $p(\ket{1}) = \frac{ 1- |\braket{a|b}|^2}{2}$, as in the
original swap test. So the result is the same but, as in the original
swap test, the registers end up entangled; therefore we haven’t
reversed our swap.</p>
<p>Here I have applied the rules:</p>
<ul>
<li>
<p>$ (A\otimes B)^{\dagger} = A^{\dagger} \otimes B^{\dagger} $</p>
</li>
<li>
<p>$ \left( \bra{\phi} \otimes \bra{\psi} \right) \left( \ket{\phi'} \otimes \ket{\psi'} \right) = \braket{\phi|\phi'} \braket{\psi|\psi'}$</p>
</li>
</ul>
<p>You may have noticed that this circuit is very similar to the circuit you
obtain if you perform amplitude amplification (Brassard et al. 2000) on
the swap test. The swap test circuit is the algorithm $A$ that produces
states with a certain probability distribution, and the CNOT is the
unitary $U_f$ that recognizes the “good” states among the bad
ones. By setting the second ancilla qubit to $\ket{+}$ we would be
able to write on the phase of our state some useful information to
recover later on with a QFT. That’s very cool, since amplitude
amplification allows us to decrease quadratically the computational
complexity of the algorithm with respect to the error in the estimation
of the amplitude of the ancilla qubit.</p>
<div id="refs" class="references">
<div id="ref-brassard2002quantum">
Brassard, Gilles, Peter Høyer, Michele Mosca, and Alain Tapp. 2000.
“Quantum Amplitude Amplification and Estimation.” *ArXiv Preprint
Quant-Ph/0005055*.
</div>
</div>Hamiltonian Simulation2018-02-18T00:00:00+01:002018-02-18T00:00:00+01:00https://luongo.pro/2018/02/18/Hamiltonian-simulation<p>These are my notes on Childs (n.d.).</p>
<h1 id="introduction">Introduction</h1>
<p>The only possible way to start a chapter on Hamiltonian simulation is
with the work of Feynman, who had the first intuition on the
power of quantum mechanics for simulating physics with computers. We
know that the Hamiltonian dynamics of a closed quantum system, whether
its evolution changes with time or not, is given by the
Schr<span>ö</span>dinger equation:</p>
<script type="math/tex; mode=display">i\hbar \frac{d}{dt}\ket{\psi(t)} = H(t)\ket{\psi(t)}</script>
<p>Given the initial conditions of the system (i.e. $\ket{\psi(0)} $ ), it
is possible to know the state of the system at time
$t$: $\ket{\psi(t)} = e^{-i H t}\ket{\psi(0)}$ (for a time-independent $H$).</p>
<p>As you can imagine, classical computers are expected to struggle to
simulate the system and obtain $ \ket{\psi(t)}$: this equation
describes the dynamics of any quantum system, and we don’t think (hope
:D ) classical computers can simulate that efficiently. But we know that
quantum computers can help by “copying” the dynamics of another quantum
system. Why should you care?</p>
<p>Imagine you are a quantum machine learning scientist: you have just
found a new mapping between an optimization problem and a Hamiltonian
dynamics, and you want to use a quantum computer to perform the
optimization (Otterbach et al. 2017). You expect a quantum computer to
run the Hamiltonian simulation for you, and then to sample useful
information from the resulting quantum state. This result might be fed
back into your classical algorithm to perform an ML-related task, in a
virtuous cycle of hybrid quantum-classical computation.</p>
<p>Or imagine that you are a chemist, and you have developed a
hypothesis for the Hamiltonian dynamics of a chemical compound. Now you
want to run some experiments to see if the formula behaves according to
the measurements. Or maybe you are testing properties of complex
compounds you don’t want to synthesize. We can formulate the problem of
Hamiltonian simulation (HS) in this way:</p>
<p><span>Hamiltonian simulation problem</span>: Given a state
$\ket{\psi(0)}$ and a Hamiltonian $H$, obtain a state $\ket{\tilde{\psi}(t)}$
such that $|\ket{\psi(t)} - \ket{\tilde{\psi}(t)}| < \varepsilon$, where
$\ket{\psi(t)}:=e^{-iHt}\ket{\psi(0)}$, for some norm
(usually the trace norm).</p>
<p>Which leads us to the definition of efficiently simulable Hamiltonian:</p>
<p><span>Efficient Hamiltonian simulation</span>: Given a state
$\ket{\psi(0)}$ and a Hamiltonian $H$ acting on $n$ qubits, we say $H$
can be efficiently simulated if,
$\forall t \geq 0, \forall \varepsilon \geq 0$, there is a quantum
circuit $U$ such that $||U - e^{-iHt} || < \varepsilon$ using a number
of gates that is polynomial in $n,t, 1/\varepsilon$.</p>
<p>In the following, we suppose to have a quantum computer and quantum
access to the Hamiltonian $H$. The importance of this problem might not
be immediately clear to a computer scientist. But if we think that every
quantum circuit is described by a Hamiltonian dynamics, being able to
simulate a Hamiltonian is like being able to have virtual machines in
our computer. (This example actually comes from a talk at IHP by Toby
Cubitt!) Remember that there is a theorem saying that, for a
Hamiltonian simulation problem, the number of gates is $\Omega(t)$;
this result goes under the name of the No fast-forwarding Theorem. <br>
But concretely, what does it mean to simulate the Hamiltonian of a
physical system? Let’s take the Hamiltonian of a particle in a
potential: <script type="math/tex">H = \frac{p^2}{2m} + V(x)</script> We want to know the position of
the particle at time $t$, and therefore we have to compute
$e^{-iHt}\ket{\psi(0)}$.</p>
<h2 id="some-hamiltonians-we-know-to-simulate-efficiently">Some Hamiltonians we know how to simulate efficiently</h2>
<ul>
<li>
<p>Hamiltonians that represent the dynamics of a quantum circuit (more
formally, where you only admit local interactions between a constant
number of qubits). This result is due to the famous
Solovay-Kitaev theorem, which says that there exists an efficient
compiler between an architecture that uses a set of universal gates $\mathbb{S_1}$
and another quantum computer that uses a set of universal gates
$\mathbb{S_2}$.</p>
</li>
<li>
<p>If the Hamiltonian can be efficiently applied for a basis, then also
$UHU$ can be efficiently applied. Proof:
$e^{-iUHU^\dagger t} = Ue{-iH t}U^\dagger $.</p>
</li>
<li>
<p>If $H$ is diagonal in the computational basis and we can efficiently
compute $d(a) := \braket{a|H|a}$ for every basis element $a$. By linearity:
<script type="math/tex">\ket{a,0} \to \ket{a, d(a)} \to e^{-itd(a)}\ket{a,d(a)} \to e^{-itd(a)}\ket{a,0} = \left(e^{-itH}\ket{a}\right)\ket{0}</script></p>
<p>(In general: if we know how to calculate the eigenvalues, we can
apply the Hamiltonian efficiently.)</p>
</li>
<li>
<p>The sum of two efficiently simulable Hamiltonians is efficiently
simulable using the Lie product formula
<script type="math/tex">e^{-i (H_1 + H_2) t} = \lim_{m \to \infty} \left( e^{-i H_1 t/m}\, e^{-i H_2 t/m} \right)^m</script>
We choose $m$ such that
<script type="math/tex">\left\lVert e^{-i (H_1 + H_2) t} - \left( e^{-i H_1 t/m}\, e^{-i H_2 t/m} \right)^m \right\rVert \leq \varepsilon</script>
and this gives $m=O((\nu t)^2/\varepsilon)$ with
$\nu=\max\{ \lVert H_1 \rVert, \lVert H_2 \rVert \}$. Using higher-order approximations it is
possible to reduce the dependency on $t$ to $O(t^{1+\delta})$ for a
chosen $\delta$. (wtf!)</p>
</li>
<li>
<p>These facts can be used to show that the sum of polynomially many
efficiently simulable Hamiltonians is efficiently simulable.</p>
</li>
<li>
<p>The commutator $[H_1, H_2]$ of two efficiently simulable Hamiltonians
can be simulated efficiently because:
<script type="math/tex">e^{-i[H_1, H_2]t} = \lim_{m\to \infty} \left(e^{-iH_1\sqrt{t/m}}\, e^{-iH_2\sqrt{t/m}}\, e^{iH_1\sqrt{t/m}}\, e^{iH_2\sqrt{t/m}}\right)^m</script>
which we believe without having any idea of how to check it. :/</p>
</li>
<li>
<p>If the Hamiltonian is sparse, it can be efficiently simulated. The
idea is to precompute an edge-coloring of the graph represented by
the adjacency matrix of the sparse Hamiltonian. (For each $H$ you
can consider a graph $G=(V, E)$ whose adjacency matrix $A$
has $a_{ij}=1$ iff $H_{ij} \neq 0$.)</p>
</li>
</ul>
<p>Recalling the example of a particle in a potential: the kinetic term
<script type="math/tex">\frac{p^2}{2m}</script> is diagonal in the Fourier basis (and we know how to
do a QFT), and the potential $V(x)$ is diagonal in the computational
basis, thus this Hamiltonian is easy to simulate.</p>
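<p>The Lie product formula can be checked numerically on a toy two-term Hamiltonian. Below is a sketch in numpy (the helper names <code>expmi</code> and <code>trotter</code> are mine), using exact matrix exponentials via eigendecomposition as the ground truth:</p>

```python
import numpy as np

def expmi(H, t):
    """e^{-iHt} for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def trotter(H1, H2, t, m):
    """First-order Lie-Trotter approximation of e^{-i(H1+H2)t}."""
    step = expmi(H1, t / m) @ expmi(H2, t / m)
    return np.linalg.matrix_power(step, m)

# toy example: H1 = Pauli X, H2 = Pauli Z (they do not commute)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
exact = expmi(X + Z, 1.0)
err = lambda m: np.linalg.norm(trotter(X, Z, 1.0, m) - exact, 2)
```

<p>The error shrinks roughly as $O(t^2/m)$, matching the bound quoted above: increasing $m$ by a factor of 10 reduces the error by about the same factor.</p>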
<p>Exercise/open problem: do we know any algorithm that might benefit from the
efficient simulation of $[H_1, H_2]$? Childs (n.d.) claims he
is not aware of any algorithm that uses it.</p>
<div id="refs" class="references">
<div id="ref-childs">
Childs, Andrew. n.d. “Lecture Notes in Quantum Algorithmics.”
</div>
<div id="ref-otterbach2017unsupervised">
Otterbach, JS, R Manenti, N Alidoust, A Bestwick, M Block, B Bloom, S
Caldwell, et al. 2017. “Unsupervised Machine Learning on a Hybrid
Quantum Computer.” *ArXiv Preprint ArXiv:1712.05771*.
</div>
</div>Storing Data In A Quantum Computer2018-02-03T00:00:00+01:002018-02-03T00:00:00+01:00https://luongo.pro/2018/02/03/Storing-data-in-a-quantum-computer<p>We are going to see what it means to store/represent data on a
quantum computer. It is very important to know how, since knowing the
most common ways of encoding data in a quantum computer might pave
the way for the intuition needed to solve new problems. Let me quote an
article from 2015, Schuld, Sinayskiy, and Petruccione (2015): <em>In order to
use the strengths of quantum mechanics without being confined by
classical ideas of data encoding, finding “genuinely quantum” ways of
representing and extracting information could become vital for the
future of quantum machine learning</em>. Usually we store information in a
classical data structure, and then assume to have quantum access to it.
In general, this quantum access consists of a query: an operation
$U\ket{i}\ket{0}\to \ket{i}\ket{\psi_i}$, where the first register is called the
index register, and the second is a target register that holds the
information you requested. To get an intuition of what the previous
sentence means, I borrow an example that I stole from a
YouTube video of Seth Lloyd. Imagine that you have a source of photons -
which represents your query register - and you send one towards a CD. Due
to wave-particle duality, you are actually hitting your CD with a
“thing” that is no longer located deterministically as a single
particle in space, but behaves as a wave. When the wave hits the
surface of the CD, it picks up the information stored in the little
holes of $0$s and $1$s, and gets reflected carrying this information.
This wave represents the output of your query. (Sure, we assume the
interaction between the wave and the CD does not make the wave function
collapse.)
Let’s start. As good computer scientists, let’s organize what we know how
to do by data types.</p>
<h1 id="scalars">Scalars</h1>
<h2 id="integer-mathbbz">Integer: $\mathbb{Z}$</h2>
<p>Let’s start with the simplest “type” of data: the integers. Let
$m \in \mathbb{N}$. We take the binary expansion of $m$, and set the
qubits of our computer to the binary digits of the number. As an example,
if your number’s binary expansion is $0100\cdots0111$ we can create the
state
$\ket{x} = \ket{0}\otimes \ket{1} \ket{0} \ket{0} \cdots \ket{0} \ket{1} \ket{1} \ket{1}$.
Formally, given $m$ with binary digits $m_0 \cdots m_{n-1}$:</p>
<script type="math/tex; mode=display">\ket{m} = \bigotimes_{i=0}^{n-1} \ket{m_i}</script>
<p>Using superpositions of states like these we might create things like
$\frac{1}{\sqrt{2}} (\ket{5}+\ket{9})$ or more involved convex
combinations of states.
The time needed to create this state is linear in the number of
bits/qubits. It might be used to get a speedup in the number of queries to
an oracle, or in general
where you aim at getting a speedup in oracle complexity using amplitude
amplification and similar techniques. For negative integers, we might just use one
more qubit for the sign. (Don’t be tempted into saying that
$\ket{3}+\ket{3}=\ket{6}$. It’s not!)</p>
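<p>A classical sketch of this basis-state encoding (the helper name <code>basis_state</code> is mine; a real circuit would prepare these states with $X$ gates on the appropriate qubits):</p>

```python
import numpy as np

def basis_state(m, n_qubits):
    """Basis-state encoding: integer m -> the computational basis vector |m>."""
    assert 0 <= m < 2 ** n_qubits
    state = np.zeros(2 ** n_qubits)
    state[m] = 1.0
    return state

# the superposition (|5> + |9>) / sqrt(2) mentioned above, on 4 qubits
psi = (basis_state(5, 4) + basis_state(9, 4)) / np.sqrt(2)
```

<p>Note that adding the two state vectors is a superposition, not integer addition: the amplitudes live on indices 5 and 9, not on index 14.</p>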
<h2 id="rational-mathbbq">Rational: $\mathbb{Q}$</h2>
<p>As far as I know, in quantum computation / quantum machine learning,
there are some register with rational numbers, usually as $n$-bit
approximation of a reals between $0$ and $1$. In that case, just take
the binary expansion and use the previous encoding.</p>
<h2 id="reals-mathbbr">Reals: $\mathbb{R}$</h2>
<p>As before, if the number is between $0$ and $1$, use the previous
encoding. It’s pretty rare to store just a single number in
$\mathbb{R}$, and usually real numbers are encoded into amplitudes and
used when dealing with vectors in $\mathbb{R}^n$.</p>
<h1 id="vectors">Vectors</h1>
<h2 id="binary-vectors-01n">Binary vectors: $\{0,1\}^n$</h2>
<p>Let $\vec{b} \in \{0,1\}^n$. As for the encoding used for the integers:</p>
<script type="math/tex; mode=display">\ket{b} = \bigotimes_{i=0}^{n-1} \ket{b_i}</script>
<p>As an example, suppose you want to encode the vector
$[1,0,1,0,1,0] \in \{0,1\}^6$, which is $42$ in decimal. This will
correspond to the $42$nd basis vector of the Hilbert space where our qubits
evolve. In some sense, we are not fully using the $\mathbb{C}^{2^{n}}$
Hilbert space: we are only mapping a binary vector to a (canonical)
basis vector. As a consequence, distances between points in the new space are
different.
We can imagine some other encodings. For instance, we can map a $0$ into
$1$ and a $1$ into $-1$ (even if I don’t know how it might be used nor how
to build it):
<script type="math/tex">\ket{v} = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (-1)^{b_i} \ket{i}</script></p>
<h2 id="real-vectors-mathbbrn">Real vectors: $\mathbb{R}^n$</h2>
<p>Maybe you are used to seeing Greek letters inside a ket to represent
generic quantum states, and Latin letters to represent quantum states that
use the binary expansion to hold classical data. The following is a very
common encoding in quantum machine learning. For a vector
$\vec{x} \in \mathbb{R}^{2^n}$, we can build:</p>
<script type="math/tex; mode=display">\ket{x} = \frac{1}{\lVert\vec{x}\rVert}\sum_{i=0}^{N-1}\vec{x}_i\ket{i} = \lVert\vec{x}\rVert^{-1}\vec{x}</script>
<p>Note that to span a space of dimension $N=2^n$, you just need $n = \log_2(N)$
qubits: we encode each component of the classical vector in the
amplitudes of a state vector. Ideally, we know from Grover and Rudolph
(2002) how to create quantum states that correspond to vectors of data
(i.e. “efficiently integrable probability distributions”). We still miss an
important ingredient. This encoding might not be enough if you have to
manipulate “many” vectors, as in some sense what you are creating is a
vector with unit norm. What if we want to build a superposition of
two vectors? Well, one might expect to be able to create a state
$\frac{1}{\sqrt{N}} \sum_{i} \ket{x_i}$, but there’s a problem. Imagine
doing it with just two vectors: $x_1 = [-1, -1, -1]$ and
$x_2 = [1,1,1]$. Well, their (uniform) linear combination is the vector
$[0,0,0]$. What does this mean? That to make a unit vector out of
it, we need an exceptionally small normalizing factor. Usually this kind
of superposition is obtained as the result of a measurement on an
ancilla qubit, with a success probability proportional to
the norm of the resulting vector. Therefore, to build this state we’re
gonna need an intolerable number of trials and errors. This problem can
be amended by adjoining an ancilla register, as we see now.</p>
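<p>A classical sketch of amplitude encoding, together with the cancellation problem just described (<code>amplitude_encode</code> is my own name for this sketch):</p>

```python
import numpy as np

def amplitude_encode(x):
    """Amplitude encoding: |x> = x / ||x||, using ceil(log2(len(x))) qubits."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    assert norm > 0, "the zero vector cannot be normalized into a quantum state"
    return x / norm

# the problem with naive superpositions: opposite vectors cancel out
x1 = np.array([-1.0, -1.0, -1.0])
x2 = np.array([1.0, 1.0, 1.0])
combo = x1 + x2   # the zero vector: no normalizing factor exists
```

<p>Any procedure that post-selects on producing the normalized <code>combo</code> would succeed with vanishing probability, which is exactly why the ancilla register of the next section is needed.</p>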
<h1 id="matrices">Matrices</h1>
<p>Imagine storing your vectors in the rows of a matrix. Let
$X \in \mathbb{R}^{n \times d}$ be a matrix of $n$ vectors with $d$
components. We will encode them using $\log(n)+\log(d)$ qubits as the
state:</p>
<script type="math/tex; mode=display">\frac{1}{\sqrt{\sum_{i=0}^n {\left \lVert x(i) \right \rVert}^2 }} \sum_{i=0}^n {\left \lVert x(i) \right \rVert}\ket{i}\ket{x(i)}</script>
<p>Or, put another way:</p>
<script type="math/tex; mode=display">\frac{1}{\sqrt{\sum_{i=0}^n {\left \lVert x(i) \right \rVert}^2}} \sum_{i,j} X_{ij}\ket{i}\ket{j}</script>
<p>The problem is how to build this state. We are going to need a very
specific oracle (which we call QRAM, even if there is ambiguity in the
literature on this term). A QRAM gives us access to two things: the norms of
the rows of a matrix and the rows themselves. Calling the two oracles
combined, we can do the following mapping:</p>
<script type="math/tex; mode=display">\frac{1}{\sqrt{n}}\sum_{i=0}^{n} \ket{i} \ket{0} \to \frac{1}{\sqrt{\sum_{i=0}^n {\left \lVert x(i) \right \rVert}^2}}\sum_{i=0}^n {\left \lVert x(i) \right \rVert}\ket{i}\ket{x(i)}</script>
<p>Basically, we use the superposition in the first register to select the
rows of the matrix that we want, and after the query we have them in the
second register. A QRAM is a tree-like classical data structure that
offers quantum access, in an oracular way, to the data stored in it.
You can think of a QRAM as a circuit that encodes your matrix. Note that,
using this nice encoding, the ratios between the distances of
vectors are the same as in the original space. Also note that once the
state is created, the only way to recover $x$ from $\ket{x}$ is to do
quantum tomography (i.e. destroying the state with measurements). The
cost (in terms of time and space) of creating this data structure is a
little more than linear, $O(nd \log (nd))$, but it pays off by giving an
access time for a query that is $O(\log(nd))$. (An example of a QRAM can be
found in Kerenidis and Prakash (2017), and will obviously be covered in
this blog in the next posts.) Yes, I know: the physical implementation of
QRAM might be difficult, but I have faith in the experimental
physicists. :)</p>
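<p>The amplitudes of this matrix state can be sketched classically (hypothetical helper names; the actual QRAM is a data structure, not a numpy call):</p>

```python
import numpy as np

def matrix_state(X):
    """Amplitudes of (1/||X||_F) * sum_ij X_ij |i>|j>, flattened row-major."""
    X = np.asarray(X, dtype=float)
    return (X / np.linalg.norm(X)).ravel()

def row_probabilities(X):
    """Probability of reading |i> in the index register: ||x(i)||^2 / ||X||_F^2."""
    X = np.asarray(X, dtype=float)
    return np.sum(X ** 2, axis=1) / np.linalg.norm(X) ** 2
```

<p>Measuring the index register of this state samples row $i$ with probability proportional to its squared norm, which is the behavior the norm-weighted encoding above describes.</p>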
<h1 id="graphs">Graphs</h1>
<p>For specific problems we can even change the computational model (i.e.
no more gates on wires used to describe computation). For instance,
given a graph $G=(V,E)$ we can encode it as a state $\ket{G}$ such that:
<script type="math/tex">K_G^v\ket{G} = \ket{G} \quad \forall v \in V</script> where
$K_G^v = X_v\prod_{u \in N(v)}Z_u $, and $X_u$ and $Z_u$ are the Pauli
operators on $u$. The way to picture this encoding is the following. Take as many
qubits in state $\ket{+}$ as there are nodes in the graph, and apply a controlled-$Z$
between qubits representing adjacent nodes. There are some
algorithms that use this state as input, for instance Zhao,
Pérez-Delgado, and Fitzsimons (2016), where they even extend this
definition.</p>
<h1 id="conclusions">Conclusions</h1>
<p>The precision that we can use for specifying the amplitudes of a quantum
state might be limited in practice by the precision of our quantum
computer in manipulating quantum states (i.e. by developments in
quantum metrology and sensing). Techniques that require a certain
precision in the amplitudes of a state might suffer from the initial technical
limitations of the hardware. As a parallel, think of what happened
with CPUs, where we went from 16 to 32 and now 64 bits of precision.</p>
<div id="refs" class="references">
<div id="ref-Grover2002">
Grover, Lov, and Terry Rudolph. 2002. “Creating superpositions that
correspond to efficiently integrable probability distributions.”
</div>
<div id="ref-kerenidis2017quantum">
Kerenidis, Iordanis, and Anupam Prakash. 2017. “Quantum Gradient Descent
for Linear Systems and Least Squares.” *ArXiv Preprint
ArXiv:1704.04992*.
</div>
<div id="ref-schuld2015introduction">
Schuld, Maria, Ilya Sinayskiy, and Francesco Petruccione. 2015. “An
Introduction to Quantum Machine Learning.” *Contemporary Physics* 56
(2). Taylor & Francis: 172–85.
</div>
<div id="ref-zhao2016fast">
Zhao, Liming, Carlos A Pérez-Delgado, and Joseph F Fitzsimons. 2016.
“Fast Graph Operations in Quantum Computation.” *Physical Review A* 93
(3). APS: 032314.
</div>
</div>Swap Test For Distances2018-01-29T00:00:00+01:002018-01-29T00:00:00+01:00https://luongo.pro/2018/01/29/Swap-test-for-distances<h1 id="intro-to-swap-test">Intro to swap test</h1>
<p>What is known as <em>swap test</em> is a simple but powerful circuit used to
measure the “proximity” of two quantum states (the cosine distance in
machine learning). It consists of a controlled swap operation surrounded
by two Hadamard gates on the controlling qubit. Repeated measurements of
the ancilla qubit allow us to estimate the probability of reading $0$
or $1$, which in turn allows us to estimate $\braket{\psi|\phi}$.
Let’s see the circuit:</p>
<p><img src="/assets/swap_distances/swap_test.png" alt="image" /></p>
<p>It is simple to check that the state at the end of the execution of the
circuit is the following:</p>
<script type="math/tex; mode=display">\frac{1}{2}\Big[\ket{\psi} \ket{\phi} + \ket{\phi} \ket{\psi} \Big]\ket{0} +\frac{1}{2}\Big[\ket{\psi} \ket{\phi} - \ket{\phi} \ket{\psi} \Big] \ket{1}</script>
<p>Thus, the probability of reading a $0$ in the ancilla qubit is:
<script type="math/tex">P (\ket{0}) = \left( \frac{1+|\braket{\psi|\phi}|^2}{2} \right)</script> And
the probability of reading a $1$ in the ancilla qubit is:
<script type="math/tex">P (\ket{1}) = \left( \frac{1-|\braket{\psi|\phi}|^2}{2} \right)</script></p>
<p>This means that if the two states are completely orthogonal, we will
measure an equal number of zeros and ones. On the other hand, if
$\ket{\psi} = \ket{\phi}$, then the probability of reading
$\ket{1}$ in the ancilla qubit is $0$. Repeating this operation a
certain number of times allows us to estimate the inner product between
$\ket{\psi}$ and $\ket{\phi}$. Unfortunately, at each measurement we
irrevocably destroy the states, and we need to recreate them in order to
perform the swap test again. This is not much of a problem if we have
an efficient way of creating $\ket{\psi}$ and $\ket{\phi}$. We can
informally state what the swap test achieves with the following theorem.</p>
<p>[Swap test for inner products] Suppose you have access to unitaries
$U_\psi$ and $U_\phi$ that allow you to create $\ket{\psi}$ and
$\ket{\phi}$, each of them requiring time $T(U_\psi)$ and $T(U_\phi)$.
Then, there is a circuit that allows us to estimate the inner product
between the two states $\ket{\psi},\ket{\phi}$ to error $\varepsilon$ with
$O((T(U_\psi)+T(U_\phi))\varepsilon^{-2})$ operations.</p>
<p>The correctness of the circuit was shown before. This is the analysis of
the running time. We recognize in the measurement on the ancilla qubit a
random variable $X$ with a Bernoulli distribution with
$p=(1+|\braket{\psi|\phi}|^2)/2$ and variance $p(1-p)$. The number of
repetitions necessary to estimate the expected value $\bar{p}$
of $X$ with error $\epsilon$ is bounded by the Chernoff bound.</p>
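The sampling argument can be sketched numerically (plain numpy, no circuit simulation; the states and the number of repetitions are illustrative): each run of the swap test is a Bernoulli trial with parameter $p$, and averaging the outcomes recovers $|\braket{\psi|\phi}|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d=2):
    """A random pure state as a complex unit vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi, phi = random_state(), random_state()

# The swap test reads 0 on the ancilla with probability (1 + |<psi|phi>|^2)/2.
overlap_sq = abs(np.vdot(psi, phi)) ** 2
p0 = (1 + overlap_sq) / 2

# Each run is a Bernoulli trial with parameter p0; averaging n outcomes
# estimates p0, and hence |<psi|phi>|^2, to additive error O(1/sqrt(n)).
n = 100_000
zeros_fraction = (rng.random(n) < p0).mean()
estimate = 2 * zeros_fraction - 1

print(overlap_sq, estimate)
```

The gap between `estimate` and `overlap_sq` shrinks as $1/\sqrt{n}$, which is exactly why $O(\varepsilon^{-2})$ repetitions appear in the theorem above.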
<h1 id="swap-test-for-distance-between-vector-and-center-of-a-cluster">Swap test for distance between vector and center of a cluster</h1>
<p>Now we are going to see how to use the swap test to calculate the
distance between two vectors. This section is entirely based on the work
of Lloyd, Mohseni, and Rebentrost (2013). There, they explain how to use
this subroutine to do cluster assignment and many other interesting
things in quantum machine learning. This was one of the first papers I
read in quantum machine learning, and I really wanted to understand
everything, so I tried to do the calculations myself. I think I have
found some typos in the original paper, so here you will find what I
think is the correct version. At the bottom of this post you will find
the calculations. In the following section we will assume that we are
given access to two unitaries $U : \ket{i}\ket{0} \to \ket{i}\ket{v_i}$
and $V : \ket{i}\ket{0} \to \ket{i}\ket{|v_i|} $.\
Let’s recall the relation between the inner product and the distance of
$\vec{u}, \vec{v} \in \mathbb{R}^n$. The inner product between two
vectors is $\braket{ v, u } = \sum_{i} v_i u_i $, and the norm of a
vector is $ |v|= \sqrt{\langle v, v \rangle} $. Therefore, the distance
can be rewritten as:</p>
<p>
<script type="math/tex; mode=display">|u-v| = \sqrt{ \langle u-v, u-v \rangle } = \sqrt{\sum_{i} (u_i-v_i)^2 } = \sqrt{ |u|^2 + |v|^2 - 2 \langle u, v \rangle }</script>
</p>
<p>By setting $ Z = |u|^2 + |v|^2 $ it follows that:
<script type="math/tex; mode=display">|u-v|^2 = Z \left( 1 - \frac{ 2 \langle u, v \rangle }{Z} \right).</script></p>
<p>As you may have guessed, to find the distance $|v-u|$ we will repeat the
swap circuit the necessary number of times. The problem now is to find
the right states.\
We first start by creating
$|\psi \rangle = \frac{1}{\sqrt{2}} \Big( \ket{0}\ket{u} + \ket{1}\ket{v} \Big)$
by querying the QRAM in $O(log(N))$ time, where $N$ is the dimension of the
Hilbert space (the length of the data vectors).\
Then we proceed by creating
$|\phi\rangle = \frac{1}{\sqrt{Z}} \Big( |\vec{u}||0\rangle + |\vec{v}||1\rangle \Big) $
and estimating $Z=|\vec{u}|^2 + |\vec{v}|^2$. Remember that for two
vectors $Z$ is easy to calculate, while in the case of the distance
between a vector and the center of a cluster,
$Z=|\vec{u}|^2+\sum_{i \in V} |\vec{v_i}|^2$. In this case, calculating
$Z$ naively scales linearly with the number of elements in the cluster, and we
don’t want that.</p>
<p>To create $\ket{\phi}$ and estimate $Z$, we have to start from another,
simpler-to-build state $\ket{\phi^-}$ and make it evolve into $\ket{\phi}$. To
do so, we apply the following Hamiltonian for an amount
of time $t$ such that $t|\vec{v}|, t|\vec{u}| \ll 1 $:
<script type="math/tex">H = \Big( |\vec{u}|\ket{0}\bra{0}+|\vec{v}|\ket{1}\bra{1} \Big) \otimes \sigma_x</script>
<script type="math/tex">\ket{\phi^-} = \ket{-}\ket{0}</script></p>
<p>The evolution $e^{-iHt} \ket{\phi^-}$ for small $t$ will give us the
following state:
<script type="math/tex">\Big( \frac{\cos(|\vec{u}|t)}{\sqrt{2}}\ket{0} - \frac{\cos(|\vec{v}|t)}{\sqrt{2}}\ket{1} \Big) \ket{0} - \Big( \frac{i \sin(|\vec{u}|t)}{\sqrt{2}}\ket{0} - \frac{i \sin(|\vec{v}|t)}{\sqrt{2}}\ket{1} \Big) \ket{1}</script></p>
<p>Reading the ancilla qubit in the second register, we read $1$
with the following probability, given by the small-angle approximation of
the $\sin$ function:</p>
<script type="math/tex; mode=display">P(1) = \Big\lvert \frac{\sin(|\vec{u}|t)}{\sqrt{2}} \Big\rvert^2 + \Big\lvert \frac{\sin(|\vec{v}|t)}{\sqrt{2}} \Big\rvert^2 \approx \Big\lvert\frac{|\vec{u}|t}{\sqrt{2}}\Big\rvert^2 + \Big\lvert \frac{|\vec{v}|t}{\sqrt{2}} \Big\rvert^2 = \frac{1}{2} \Big( |\vec{u}|^2t^2 + |\vec{v}|^2t^2 \Big) = Zt^2/2</script>
<p>Now we are almost ready to use the swap circuit. Note that our two
quantum registers have different dimensions, so we cannot swap them
entirely. What we can do instead is swap the qubit of $\ket{\phi}$
with the index qubit of $\ket{\psi}$. The probability of reading $1$ is:</p>
<script type="math/tex; mode=display">\begin{split}
p(1) = \frac{2|\vec{u}|^2 + 2|\vec{v}|^2 - 4\langle u, v \rangle}{8Z}
\end{split}</script>
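As a sanity check on this formula, here is a small numpy sketch (the vectors are made up for illustration) verifying that $4Z\,p(1)$ recovers the squared distance $|u-v|^2$:

```python
import numpy as np

# Hypothetical data vectors u and v.
u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, -1.0, 2.0])

Z = u @ u + v @ v

# Probability of reading 1 derived above:
# p(1) = (2|u|^2 + 2|v|^2 - 4<u,v>) / (8Z),
# hence the squared distance is recovered as |u - v|^2 = 4 Z p(1).
p1 = (2 * (u @ u) + 2 * (v @ v) - 4 * (u @ v)) / (8 * Z)
dist_sq = 4 * Z * p1

print(np.isclose(dist_sq, np.linalg.norm(u - v) ** 2))  # True
```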
<h1 id="conclusion">Conclusion</h1>
<p>We saw how to use a simple circuit to estimate quantities like the inner product
and the distance between two quantum vectors. We have assumed that we have
an efficient way of creating the states we are using, and we didn’t go
deep into explaining how. Given an $\epsilon > 0$, you can repeat the
previous circuit $O(\epsilon^{-2})$ times to reach the desired precision.
Note the following: while calculating the value of $Z$ for two
vectors is easy, estimating it for the distance between a
vector and the center of a cluster takes time linear in the number of
elements in the superposition. Note that we can use amplitude estimation
to reduce the dependency on the error to $O(\epsilon^{-1})$.</p>
<p>For the records:</p>
<p><span>Chernoff Bounds</span> Let $X = \sum_{i=0}^n X_i$, where $X_i = 1$
with probability $p_i$ and $X_i=0$ with probability $1-p_i$, and all $X_i$
are independent. Let $\mu = E[X] = \sum_{i=0}^n p_i$. Then:</p>
<ul>
<li>
<p>$P(X \geq (1+\delta)\mu) \leq e^{-\frac{\delta^2}{2+\delta}\mu} $
for all $\delta > 0$</p>
</li>
<li>
<p>$P(X \leq (1-\delta)\mu) \leq e^{-\frac{\delta^2}{2}\mu}$ for all
$0 < \delta < 1$</p>
</li>
</ul>
<p><span>Chebyshev</span> Let $X$ be a random variable with $E[X] = \mu$ and
$Var[X]=\sigma^2$. For all $t > 0$:</p>
<script type="math/tex; mode=display">P(|X - \mu| > t\sigma) \leq 1/t^2</script>
<p>Substituting $t = k/\sigma$, we get the equivalent version that
we use to bound the error:
<script type="math/tex">P(|X - \mu| \geq k) \leq \frac{\sigma^2}{k^2}</script></p>
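To see how this bound translates into a sample count, here is a small illustrative sketch (the parameter values are arbitrary): with $n \geq \sigma^2/(\delta k^2)$ Bernoulli samples, the empirical mean lands within $k$ of $p$ except with probability at most $\delta$.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.7               # the Bernoulli parameter we want to estimate
sigma2 = p * (1 - p)  # variance of a single trial
k = 0.01              # target additive error
delta = 0.05          # tolerated failure probability

# Chebyshev: P(|mean - p| >= k) <= sigma^2 / (n k^2),
# so n >= sigma^2 / (delta k^2) samples suffice.
n = int(np.ceil(sigma2 / (delta * k ** 2)))

mean = (rng.random(n) < p).mean()
print(n, abs(mean - p) < k)
```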
<h2 id="calculations">Calculations</h2>
<p>It’s time now to prove that our claim is true and to show the
calculations. After all the previous passages, this is the initial state:</p>
<script type="math/tex; mode=display">\ket{0}\Big( \frac{1}{\sqrt{Z}} \left( |\vec{u}|\ket{0} + |\vec{v}|\ket{1} \right) \otimes \frac{1}{\sqrt{2}} (\ket{0}\ket{u} + \ket{1}\ket{v} ) \Big)</script>
<p>We apply a Hadamard on the leftmost ancilla register:</p>
<script type="math/tex; mode=display">\frac{1}{2\sqrt{Z}} \left[ \ket{0} \Big( \left( |\vec{u}|\ket{0} + |\vec{v}|\ket{1} \right) \otimes (\ket{0}\ket{u} + \ket{1}\ket{v} ) \Big) +
\ket{1} \Big( \left( |\vec{u}|\ket{0} + |\vec{v}|\ket{1} \right) \otimes (\ket{0}\ket{u} + \ket{1}\ket{v} ) \Big) \right] =</script>
<script type="math/tex; mode=display">\begin{split}
= \frac{1}{2\sqrt{Z}} \Big[ \ket{0} \Big( |u|\ket{00u} + |u|\ket{01v} + |v|\ket{10u} + |v|\ket{11v} \Big) \\
+ \ket{1} \Big( |u|\ket{00u} + |u|\ket{01v} + |v|\ket{10u} + |v| \ket{11v}
\Big) \Big]
\end{split}</script>
<p>Controlled on the ancilla being $1$, we swap the second and the third
register:</p>
<script type="math/tex; mode=display">\begin{split}
= \frac{1}{2\sqrt{Z}} \Big[ \ket{0} \Big( |u|\ket{00u} + |u|\ket{01v} + |v|\ket{10u} + |v|\ket{11v} \Big) \\
+ \ket{1} \Big( |u|\ket{00u} + |u|\ket{10v} + |v|\ket{01u} + |v| \ket{11v} \Big) \Big]
\end{split}</script>
<p>Now we apply the Hadamard on the ancilla qubit again:</p>
<script type="math/tex; mode=display">\begin{split}
= \frac{1}{2^{3/2}\sqrt{Z}} \Big[
|u|\ket{000u} + |u|\ket{001v} + |v|\ket{010u} + |v|\ket{011v} \\
+|u|\ket{100u} + |u|\ket{101v} + |v|\ket{110u} + |v|\ket{111v} \\
+|u|\ket{000u} + |u|\ket{010v} + |v|\ket{001u} + |v|\ket{011v} \\
-|u|\ket{100u} - |u|\ket{110v} - |v|\ket{101u} - |v|\ket{111v}
\Big]
\end{split}</script>
<p>And now we check the probability of reading $\ket{1}$ in the ancilla.</p>
<script type="math/tex; mode=display">\begin{split}
p(1) = \frac{1}{2^{3}Z} \Big( |u|\bra{01v} + |v|\bra{10u} - |u|\bra{10v} - |v|\bra{01u} \Big)\\
\Big( |u|\ket{01v} + |v|\ket{10u} - |u|\ket{10v} - |v|\ket{01u} \Big)
\end{split}</script>
<script type="math/tex; mode=display">\begin{split}
p(1) = \frac{2|u|^2 + 2|v|^2 - 4\langle u,v \rangle}{8Z}
\end{split}</script>
<p>Thanks to IK and AG who checked :)</p>
<div id="refs" class="references">
<div id="ref-Lloyd2013QuantumLearning">
Lloyd, Seth, Masoud Mohseni, and Patrick Rebentrost. 2013. “Quantum
algorithms for supervised and unsupervised machine learning.” *ArXiv*
1307.0411 (July): 1–11. <http://arxiv.org/abs/1307.0411>.
</div>
</div>scinawaIntro to swap testSpace Estimation Of Hhl2017-12-27T00:00:00+01:002017-12-27T00:00:00+01:00https://luongo.pro/2017/12/27/Space-estimation-of-HHL<p>Let’s imagine that we are given a quantum computer with 100 logical
qubits, and let’s also assume that we have high gate fidelity (i.e.
applying a gate won’t introduce major errors in our computation). This
means that we can run all the algorithms that we want. An idyllic
situation like this probably won’t happen in the near future (let’s say
5 years). Even if we now have the first prototypes of quantum computers
with the first dozens of qubits, those qubits are not stable, and
therefore the computation we can do is pretty limited: in fact, these
prototypes aren’t able to perform error-free computations (there’s no
error correction yet), and the computation won’t be as “long” as we
want: we will be able to apply a limited number of gates before the
system decoheres.</p>
<p>The question is the following: can we compete with a classical computer
in solving linear systems of equations? Can we use such a device to run the HHL
algorithm? Let’s recall that for HHL we need a 1-qubit register for the
ancilla, a register for the output of the phase estimation in the
Hamiltonian simulation (that will store the superposition of the
eigenvalues), and the rest of the qubits can store the input register. We
assume to have logical qubits in our comparison.</p>
<p>We’ll see what happens when we change the precision of the floating point
operations: 32 and 64 bits. The sparsity of the matrix is supposed to be
small. Since we want to be as close as possible to real cases, let’s
take a famous example of a matrix considered sparse: the product-user
matrix that websites like Amazon or Netflix use to run recommendation
algorithms. Rows represent the users of the service, while columns are
the products. The rows are empty except where a user purchased a
specific product or watched a particular movie. Let’s say that an
educated guess for the sparsity of the matrix is $100$.</p>
<p><img src="/assets/HHL_resource_estimation/space_resource_estimation.png" alt="image" /></p>
<p>The upper horizontal line is an estimate of the space in TB of the
hard-disks for the whole Google (Cirrusinsight, n.d.) (13 EB), while the
lower one is an estimate for the storage need to store the images of
Google Maps (Mesarina, n.d.) (43 PB).</p>
<p>Let’s do an example to show what the software is plotting for $100$
qubits. In HHL we need an ancilla qubit, so we are left with 99 qubits. To get
$64$ bit precision, we need to allocate 64 qubits: this is the phase
estimation of the Hamiltonian simulation step. So now we are left with
just $35$ qubits. With $35$ qubits we can span a Hilbert space of
dimension $2^{35}$: this allows us to encode a vector of data with the
same number of components. Suppose our vector of known values holds $64$
bit floating point numbers: classically, storing this amount of
data requires $2^{35} \times 64$ bits, which is about $0.27$ TB (terabytes).
Adding the cost of storing a $2^{35} \times 2^{35}$ matrix with
sparsity $100$, we get roughly $27$ TB.</p>
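The arithmetic of this example can be sketched in a few lines (a rough estimate in decimal units; the constants mirror the example above, not the actual plotting code):

```python
# Back-of-the-envelope numbers behind the 100-qubit example above.
BITS = 64                                # floating-point precision
total_qubits = 100
input_qubits = total_qubits - 1 - BITS   # 1 ancilla + 64 for phase estimation
dim = 2 ** input_qubits                  # 2**35 vector components

vector_tb = dim * BITS / 8 / 1e12        # classical cost of the known-values vector
sparsity = 100
matrix_tb = dim * sparsity * BITS / 8 / 1e12  # nonzero entries of the sparse matrix

print(round(vector_tb, 3), round(matrix_tb, 1))  # ~0.275 TB and ~27.5 TB
```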
<p>Remember that each component of the vector will be encoded as a
probability amplitude in our quantum register. This implies that our
precision in manipulating a qubit needs to grow, along with the number of
qubits and the fidelity of the gates. Here we just focused on the
computational capabilities of a small quantum computer with respect to
the HHL algorithm. Don’t forget that for HHL we will need to store the
matrix to invert just as in the classical case (in the form of QRAM or another
oracle). <a href="https://github.com/Scinawa/space_estimation_hhl">Here</a> is the
code for generating the plot.</p>
<div id="refs" class="references">
<div id="ref-exagoogle">
Cirrusinsight. n.d. “How Much Data Does Google Store?”
</div>
<div id="ref-gmaps">
Mesarina, Malena. n.d. “How Much Storage Space Do You Need for Google
Maps?”
</div>
</div>scinawaRewriting Swap Test2017-12-04T00:00:00+01:002017-12-04T00:00:00+01:00https://luongo.pro/2017/12/04/Rewriting-Swap-Test<h1 id="rewriting-the-swap-test">Rewriting the swap test</h1>
<p>Some weeks ago I stumbled upon a nice paper by Garcia-Escartin and
Chamorro-Posada (2013). There, they show the equivalence between a
widely used circuit in quantum information, called the swap test, and a
phenomenon that goes under the name of Hong-Ou-Mandel effect. In their
work, they rewrote the circuit of the swap test using fewer gates and
no ancilla qubit. I find this fact pretty interesting, and I think
it’s worth sharing with you. It gives us a new interpretation of the
swap test that is possible to prove with very simple gate
manipulations. More specifically, here we show the equivalence between the
swap test and the circuit we use to measure in the Bell basis (a
Hadamard on the first qubit and a CNOT) - the same circuit used to create an EPR
pair. Here we will work with single-qubit registers, but the result can
be extended to registers with multiple qubits. Assuming we work with one-qubit
registers, let’s recall the original circuit of the swap test:
<img src="/assets/rewriting_swap_test/Fig1.PNG" alt="image" /></p>
<p>The probability of reading $1$ is $ \frac{1 - |\braket{a | b}|^2 }{2} $ and
the probability of reading $0$ is $ \frac{1 + |\braket{a | b}|^2 }{2} $.</p>
<p>The probability of reading $1$ in the ancilla qubit of the original swap
test is equal to the probability of reading $11$ in the modified version
of the swap test in Figure 7.</p>
<p>Here is the proof. We start by rewriting the swap test as a series of
controlled-not operations. Note that $x \oplus x = 0 $ and that
$x \oplus 0 = x$. It’s very simple to show that the swap gate can be
replaced by a series of three CNOTs:
<script type="math/tex">\ket{x}\ket{y} \to \ket{x}\ket{x \oplus y} \to \ket{x \oplus (x \oplus y)}\ket{x \oplus y} = \ket{y}\ket{x \oplus y} \to \ket{y}\ket{(x \oplus y) \oplus y} = \ket{y}\ket{x}</script></p>
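This chain can be checked mechanically; here is a small numpy sketch (gate matrices written out by hand) verifying that the three CNOTs multiply out to a SWAP:

```python
import numpy as np

# Two-qubit gates on basis |xy> in the order 00, 01, 10, 11.
CNOT_xy = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])  # x controls y
CNOT_yx = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]])  # y controls x
SWAP    = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])

# |x,y> -> |x, x+y> -> |y, x+y> -> |y, x>: three CNOTs equal a SWAP.
print(np.array_equal(CNOT_xy @ CNOT_yx @ CNOT_xy, SWAP))  # True
```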
<p><img src="/assets/rewriting_swap_test/Fig2.PNG" alt="image" /></p>
<p>We know that a NOT gate is just a $Z$ gate on another rotation axis. The
rotation axis can easily be changed by two surrounding Hadamard gates.</p>
<p><img src="/assets/rewriting_swap_test/Fig3.PNG" alt="image" /></p>
<p>The CCZ gate is agnostic with respect to which qubits are targets and
which are controls, so we can put the $Z$ rotation on any of the qubits.</p>
<p><img src="/assets/rewriting_swap_test/Fig4.PNG" alt="image" /></p>
<p>In this circuit we note that some of the gates we are applying are
actually useless for the measurement on the ancilla qubit, and we can
remove them from the circuit.</p>
<p><img src="/assets/rewriting_swap_test/Fig5.PNG" alt="image" /></p>
<p>Again, we use the equivalence between the $X$ gate and the $HZH$ sequence.</p>
<p><img src="/assets/rewriting_swap_test/Fig6.PNG" alt="image" /></p>
<p>We note that we can remove the ancilla qubit and measure the other
two qubits instead. This is possible thanks to the principle of deferred
measurement. The probability of reading $1$ in the ancilla qubit is
equivalent to the probability of reading $1$ in both the qubits $\ket{a}$
and $\ket{b}$. We don’t mind measuring the two qubits, since after
measuring the ancilla qubit we cannot reuse $\ket{a}$ and $\ket{b}$
anyway. So here we get the final circuit.</p>
<p><img src="/assets/rewriting_swap_test/Fig7.PNG" alt="image" /></p>
<p>This equivalence might be useful when we need to optimize a circuit and
we have to reduce both the number of gates and the number of ancilla
qubits. This result also gives us a nice intuition on the behavior of the swap
test when the two qubits are entangled. But beware of remote hacking! Be
sure to run the swap test only on non-entangled data, otherwise you
might get unexpected results! Take for example the Bell states: one
of the four Bell basis states for 2 qubits,
$\frac{\ket{01} + \ket{10}}{\sqrt{2}}$, will always pass the test, which is pretty
counterintuitive, since its first and second qubit are always
different. Therefore, we should use the swap test only with
non-entangled data! Entanglement, along with its usefulness in quantum
protocols and computation, brings many troubles. Indeed, it’s because of
entanglement that bit commitment is not possible using quantum
resources, and it’s because of entanglement that we can attack position-based
encryption schemes. And that’s it. Hope you enjoyed it as much as
I did. To extend this equivalence to multi-qubit registers, look at the
paper!</p>
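Here is a small numpy sketch of this counterintuitive behavior, assuming the Bell-measurement ordering (CNOT, then a Hadamard on the control) for the ancilla-free circuit of Figure 7:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])  # a controls b

def p11(state):
    """Probability of reading 11 in the ancilla-free test:
    CNOT(a->b), then H on a, then measure both qubits."""
    out = np.kron(H, I2) @ (CNOT @ state)
    return abs(out[3]) ** 2          # index 3 = |11>

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
triplet = np.array([0, 1,  1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

print(p11(singlet), p11(triplet))   # ~1.0 and ~0.0
```

The symmetric state passes the test every time even though its two qubits never agree, while the antisymmetric singlet always reads "different".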
<div id="refs" class="references">
<div id="ref-garcia2013swap">
Garcia-Escartin, Juan Carlos, and Pedro Chamorro-Posada. 2013. “Swap
Test and Hong-Ou-Mandel Effect Are Equivalent.” *Physical Review A* 87
(5). APS: 052330.
</div>
</div>scinawaRewriting the swap testThe Hhl Algorithm2017-11-21T00:00:00+01:002017-11-21T00:00:00+01:00https://luongo.pro/2017/11/21/The-HHL-Algorithm<h1 id="the-hhl-algorithm">The HHL algorithm</h1>
<p>A linear system of $N$ equations can be represented in matrix form as
$A\vec{x}=\vec{b}$. Its solution is defined as $\vec{x}=A^{-1}\vec{b}$.
This tells us that if we want to get the solution vector $\vec{x}$, we
should be able to invert the matrix $A$. Classically, inverting a matrix
can be done in polynomial time, usually with algorithms that scale
between the square and the cube of the dimension of the system. HHL is a
quantum algorithm that allows us to create a quantum state proportional to
the solution, $\ket{A^{-1}\vec{b}}$, in time $polylog(N)$. This gives
us an exponential speedup with respect to classical algorithms, but it
introduces a time dependency on other factors, such as the sparsity
of the matrix or its condition number.\
Let’s recall some notions from linear algebra. Given a Hermitian matrix,
we are also given a<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> set of eigenvectors ${ \vec{\varphi_i} }$
which happen to form a basis for the space. We can thus express the vector
$\vec{b}$ as a linear combination of eigenvectors:
$\vec{b}=\sum_{j}\beta_j\vec{\varphi_j}$. The solution of the system is
therefore $\vec{x} = \sum_j \beta_j \lambda_j^{-1} \vec{\varphi}_j$. The
idea of the quantum algorithm is to rely on this observation and get
the state:
<script type="math/tex">\ket{x} = \sum_{j} \beta_j \lambda_j^{-1} \ket{ \varphi_j} = \ket{A^{-1}\vec{b}}</script></p>
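The spectral recipe above is easy to check classically; here is a small numpy sketch (random well-conditioned matrix, names illustrative) of $x = \sum_j \beta_j \lambda_j^{-1} \varphi_j$:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random well-conditioned symmetric system Ax = b.
M = rng.normal(size=(4, 4))
A = M + M.T + 8 * np.eye(4)   # diagonal shift keeps A safely invertible
b = rng.normal(size=4)

# Classical analogue of the recipe above: expand b in the eigenbasis of A
# (beta_j = <phi_j, b>) and rescale each coefficient by 1/lambda_j.
lam, Phi = np.linalg.eigh(A)  # columns of Phi are the eigenvectors phi_j
beta = Phi.T @ b
x_spectral = Phi @ (beta / lam)

print(np.allclose(x_spectral, np.linalg.solve(A, b)))  # True
```

HHL performs this same rescaling, but on amplitudes rather than on explicitly stored coefficients.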
<p>If our matrix $A$ is not Hermitian, we can associate to it a Hermitian
matrix: <script type="math/tex">% <![CDATA[
A' =
\begin{bmatrix}
0 & A \\
A^T & 0
\end{bmatrix} %]]></script> In this form, multiplying by a vector $x$ can be done
as: $(Ax, 0) = A’ (0, x)$.\
A few initial remarks now: creating a quantum state
$\ket{x} \in \mathbb{R}^{2^n}$ (where $n$ is the number of qubits) does
not mean that we are “solving” the linear system of equations, as we
don’t have classical access to the solution. Indeed, doing quantum
tomography to recover $\ket{x}$ would cost us $O(N)$, forcing us to
lose the exponential speedup. We are just getting as output a
normalized version of $\vec{x}$, that is, $\ket{x}$. Indeed, there are a
few assumptions that are better stated explicitly Aaronson (2015):</p>
<ol>
<li>
<p>On the input vector we have the following restrictions:</p>
<ol>
<li>
<p>there must be a fast way of getting $|b\rangle$ from the
classical input vector $b$. If $b$ is made of $N$ components,
then we will need $log_2(N)$ qubits to express
$|b\rangle = \sum_{i=0}^{N-1} b_i |i\rangle$. Practically, we
assume the existence of QRAM: an operator
$B: |0\rangle \to |b\rangle$. I hope to write more about
this soon.</p>
</li>
<li>
<p>The initial vector $b$ should be relatively uniform, otherwise
it will contradict the impossibility of an exponential speedup
for black-box quantum search Aaronson (2015).</p>
</li>
</ol>
</li>
<li>
<p>The matrix $A$ should be $s$-sparse on rows (or other
efficiently-simulable kind of Hamiltonians). This is needed because
Hamiltonians with a sparse matrix can be efficiently simulated, and
the amount of time needed to simulate them grows linearly with the
sparsity $s$.</p>
</li>
<li>
<p>The condition number
$\kappa = \frac{\lambda_{max}} { \lambda_{min}}$ should be
low, i.e. the matrix should be robustly invertible, because the
asymptotic complexity grows linearly with $\kappa$. The singular values
of $A$ should lie between $1/\kappa$ and $1$.</p>
</li>
<li>
<p>We are not interested in the values of $x$ itself (i.e. the
probability amplitudes of $\ket{x}$), but just in a measurement in a
basis of choice: $ \braket{x|M|x} $.</p>
</li>
</ol>
<p>There are many enhancements of HHL that can handle rectangular
matrices, over-determined and under-determined systems of equations, and
dense matrices, with further speedups obtained by applying amplitude
amplification techniques.</p>
<h2 id="first-step-hamiltonian-simulation-and-phase-estimation">First step: Hamiltonian simulation and phase estimation</h2>
<p>The first step of HHL is the eigenvalue estimation of the unitary matrix
associated to the linear transformation: $U_A=e^{iA}$. Remember that
given a Hermitian matrix $A$, there is a correspondence between its
(real) eigenvalues and the eigenvalues of the unitary matrix $U=e^{iA}$:
for each eigenvalue $\lambda_i$ of $A$ there is an eigenvalue
$e^{i\lambda_i}$ of $U$. Hamiltonian simulation is a procedure that, given a
time $t$ and a Hamiltonian $H$, allows us to apply the unitary
evolution $U$ associated to $H$ to a given state $|\psi\rangle$ for
time $t$:</p>
<script type="math/tex; mode=display">U|\psi(0)\rangle = e^{-iHt}|\psi(0)\rangle = |\psi(t)\rangle</script>
<p>Controlled applications of $U$ allow us to write the eigenvalues of $U$
in the phase of our quantum computer. As usual, we use an index register
$\sum_{k} |k\rangle$ in uniform superposition and use
this register to apply the controlled $U$. Let’s call $K$
the chosen precision for phase estimation, with $K \propto O(1/\varepsilon)$.
That allows us to build the following mapping:</p>
<script type="math/tex; mode=display">\sum_{k \in [K]} \sum_{j \in [N]} |k\rangle \beta_j|\varphi_j\rangle \to \sum_{k \in [K] } \sum_{j \in [N]} e^{2\pi i k \lambda_j/N}|k\rangle\beta_j|\varphi_j\rangle</script>
<p>Using the QFT$^{-1}$ we perform phase estimation as usual:
<script type="math/tex">\sum_{k \in [K]}\sum_{j \in [N]} e^{2\pi i k \lambda_j/N}|k\rangle\beta_j|\varphi_j\rangle \to \sum_{j \in [N]}\beta_{j}|\varphi_j\rangle|\tilde{\lambda_j}\rangle</script></p>
<p>The idea is to use quantum phase estimation to calculate
eigenvalues and eigenvectors of the Hermitian operator associated to the
matrix $A$. We need Hamiltonian simulation in order to encode
the eigenvalues efficiently as phases. Phase estimation is applied
next, writing an approximation of the eigenvalues in a register:
$|\tilde{\lambda_j}\rangle$.</p>
<h2 id="second-step-controlled-rotations">Second step: controlled rotations</h2>
<p>The second step is where the magic happens. Note that multiplying each
eigenvector by the inverse of its eigenvalue is not a unitary
transformation, so we have to find some trick. The problem is solved
by introducing the right non-linear operation: a measurement on an
ancilla qubit. We adjoin a single-qubit register, and we perform an
operation controlled on the eigenvalues estimated in the previous step.
Here, $C=\lambda_{min}$:</p>
<script type="math/tex; mode=display">|\lambda_j\rangle|0\rangle \to |\lambda_j\rangle\otimes \left( \sqrt{1-\Big(\frac{C}{\lambda_j}\Big)^2}|0\rangle + \frac{C}{\lambda_j}|1\rangle \right)^A</script>
<p>Measuring the $A$ register projects the rest of the state onto the
subspace consistent with our observation. That’s a neat trick that allows
us to “move” a value inside a ket to the “outer world” in a
meaningful way. To get rid of the register with the eigenvalues
$|\lambda_j\rangle$, we run eigenvalue estimation in reverse, and we are
left with the state
<script type="math/tex">\sum_{j} \beta_j C\lambda_j^{-1} |\varphi_j\rangle \ket{1} +\ket{G}\ket{0} \propto C A^{-1}\ket{b} \ket{1} + \ket{G}\ket{0} = C\ket{x}\ket{1} +\ket{G}\ket{0}.</script></p>
<p>Now we measure the ancilla qubit.</p>
<ol>
<li>
<p>If we observe $|1\rangle$, the new state of the system is
<script type="math/tex">\sum_{j}\beta_jC\lambda_j^{-1}|\varphi_j\rangle \propto |x\rangle</script>
The probability of observing $|1\rangle$ is
$ p(|1\rangle) =C^2 |||x\rangle||^2$. In this step we have
introduced a dependency on the condition number:
<script type="math/tex">p(|1\rangle) = \sum_{j} \Big|\frac{\beta_jC}{\lambda_j}\Big|^2 \geq \Big|\frac{C}{\lambda_{max}}\Big|^2= \frac{1}{\kappa^2}</script></p>
</li>
<li>
<p>If we observe $|0\rangle$, we start again from step 1.</p>
</li>
</ol>
<p>It is possible to use amplitude amplification on $|1\rangle$ to
quadratically improve this dependency, from $\kappa^{2}$ to $\kappa$.</p>
<h1 id="complexity-analysis">Complexity analysis</h1>
<p>The cost of eigenvalue estimation with error less than $\epsilon$ is
$O(log(N)s^{2} + log(1/\varepsilon))$. To read $|1\rangle$ in the second
step, we should repeat the algorithm $O(\kappa^2)$ times. I extend a
little bit <strong>???</strong>, which lists the complexity of all the versions of the
quantum algorithm for solving linear systems of equations.</p>
<ul>
<li>
<p>In the original version, Aram W. Harrow, Hassidim, and Lloyd (2009), we
have: <script type="math/tex">O(\kappa^{2}log(N)s^{2} / \epsilon)</script>
<script type="math/tex">\tilde{O}(\kappa T_B + log (N) s^2 \kappa^2 T_A / \varepsilon)</script></p>
</li>
<li>
<p>In Ambainis (2010), a technique called variable time
amplitude amplification was used to decrease the running time, by
applying a particular flavor of the Grover algorithm to reduce
the dependency on the condition number from $\kappa^2$ to $\kappa$.
<script type="math/tex">\tilde{O}(\kappa T_B + log(N)s^2 \kappa T_A / \varepsilon^3)</script>
This result builds upon previous work of Childs, Kothari, and
Somma (2015) on precision in simulating sparse Hamiltonians. I
think results might have changed since Hao Low and Chuang (2016),
the last result I am aware of on Hamiltonian simulation.</p>
</li>
<li>
<p>In Childs, Kothari, and Somma (2015) they improved the dependency on
the error, going from a polynomial dependency to a polylogarithmic one. The idea
is to avoid the QFT, whose dependency on the error cannot be improved.</p>
</li>
<li>
<p>In Wossnig, Zhao, and Prakash (2017), instead of using Hamiltonian
simulation, they use singular value estimation: a procedure that uses
QRAM to create a register proportional to the superposition of the
singular values of a matrix. Their complexity is
$O(\kappa^2\sqrt{N}\times polylog(N)/\varepsilon)$ for dense
matrices, and $O(\kappa^2||A||_F \times polylog(N)/\varepsilon)$ in general.
This idea is based on the result of Kerenidis and Prakash (2016).</p>
</li>
</ul>
<h1 id="conclusions">Conclusions</h1>
<p>This was a pretty significant result in quantum algorithmics, and it made
possible many of the first results in quantum machine learning. For
instance, this algorithm can be used to perform least-squares
estimation of a model Aram W Harrow (2014). You are given a matrix
$A \in R^{n \times p}$ with $n \geq p$, and $b \in \mathbb{R}^n$. You
are asked to find the best solution $x$:
<script type="math/tex">\operatorname*{arg\,min}_{x \in \mathbb{R}^p} ||Ax -b||.</script> To relate
linear systems of equations and function minimization, consider the
function $f(x) = \frac{1}{2}x^TAx-x^Tb $ for a symmetric $A$. Its gradient is
$\nabla f(x) = Ax-b $, therefore finding the minimum of the
function (with the additional hypotheses common in ML, such as convexity,
smoothness, etc.) reduces to solving a linear system of equations. This
is well known in machine learning, and some qML algorithms started to
use this subroutine soon after it was created: for instance in data
fitting, Wiebe, Braun, and Lloyd (2012), Rebentrost, Mohseni, and Lloyd
(2014), and other things like differential equations, Berry (2014). For a
useful answer on Stack Overflow, refer to Vega (n.d.).\
It’s worth adding that in the original paper, they use a particular
initial state for the control register, in order to minimize some error
studied in the paper’s appendix. Many useful pieces of information can be found
in: Melkebeek (2010), Aram W. Harrow, Hassidim, and Lloyd (2009), and
Lloyd (n.d.).</p>
<div id="refs" class="references">
<div id="ref-Aaronson2015ReadPrint">
Aaronson, Scott. 2015. “Read the fine print.” *Nature Physics* 11 (4):
291–93. doi:[10.1038/nphys3272](https://doi.org/10.1038/nphys3272).
</div>
<div id="ref-Ambainis2010VariableEquations">
Ambainis, Andris. 2010. “Variable time amplitude amplification and a
faster quantum algorithm for solving systems of linear equations.”
</div>
<div id="ref-berry2014high">
Berry, Dominic W. 2014. “High-Order Quantum Algorithm for Solving Linear
Differential Equations.” *Journal of Physics A: Mathematical and
Theoretical* 47 (10). IOP Publishing: 105301.
</div>
<div id="ref-Childs2015QuantumPrecision">
Childs, Andrew M, Robin Kothari, and Rolando D Somma. 2015. “Quantum
linear systems algorithm with exponentially improved dependence on
precision.”
</div>
<div id="ref-HaoLow2016HamiltonianQubitization">
Hao Low, Guang, and Isaac L Chuang. 2016. “Hamiltonian Simulation by
Qubitization.”
</div>
<div id="ref-Harrow2014ReviewEquations">
Harrow, Aram W. 2014. “Review of Quantum Algorithms for Systems of
Linear Equations,” December, 2–4. <http://arxiv.org/abs/1501.00008>.
</div>
<div id="ref-Harrow2009QuantumEquations">
Harrow, Aram W., Avinatan Hassidim, and Seth Lloyd. 2009. “Quantum
Algorithm for Linear Systems of Equations.” *Physical Review Letters*
103 (15): 150502.
doi:[10.1103/PhysRevLett.103.150502](https://doi.org/10.1103/PhysRevLett.103.150502).
</div>
<div id="ref-Kerenidis2016QuantumSystems">
Kerenidis, Iordanis, and Anupam Prakash. 2016. “Quantum Recommendation
Systems.”
</div>
<div id="ref-lloydyoutube">
Lloyd, Seth. n.d. “Quantum Algorithm for Solving Linear Equations.”
</div>
<div id="ref-Melkebeek2010Lecture12Equations">
Melkebeek, Dieter Van. 2010. “Lecture 12: Order Finding,
‘Solving’ Linear Equations,” 1–4.
[http://pages.cs.wisc.edu/~dieter/Courses/2010f-CS880/Scribes/12/lecture12.pdf](http://pages.cs.wisc.edu/~dieter/Courses/2010f-CS880/Scribes/12/lecture12.pdf).
</div>
</div>
<div id="ref-Rebentrost2014QuantumClassification">
Rebentrost, Patrick, Masoud Mohseni, and Seth Lloyd. 2014. “Quantum
support vector machine for big data classification.” *Physical Review
Letters*.
doi:[10.1103/PhysRevLett.113.130503](https://doi.org/10.1103/PhysRevLett.113.130503).
</div>
<div id="ref-otherHHLuse">
Vega, Juan Bermejo. n.d. “Applications of Hhl’s Algorithm for Solving
Linear Equations.”
</div>
<div id="ref-Wiebe2012QuantumFitting">
Wiebe, Nathan, Daniel Braun, and Seth Lloyd. 2012. “Quantum Algorithm
for Data Fitting.” *Physical Review Letters* 109 (5): 050505.
doi:[10.1103/PhysRevLett.109.050505](https://doi.org/10.1103/PhysRevLett.109.050505).
</div>
<div id="ref-wossnig2017quantum">
Wossnig, Leonard, Zhikuan Zhao, and Anupam Prakash. 2017. “A Quantum
Linear System Algorithm for Dense Matrices.” *ArXiv Preprint
ArXiv:1704.06174*.
</div>
</div>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>very convenient - since they are orthogonal. Generalized
eigenvectors are not orthogonal <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>scinawaThe HHL algorithmTransavia is not recommended for travelling musicians2017-01-06T16:16:00+01:002017-01-06T16:16:00+01:00https://luongo.pro/2017/01/06/transavia-not-recommended<p>A couple of days ago I traveled with my guitar from VCE to ORY.
For the safety of my dear guitar, <em>much care was taken to find a trustworthy airline that allows carrying guitars as hand luggage</em>.</p>
<p>I made sure that my case was well <em>within</em> the allowed dimensions for musical instruments, as <a href="http://service-en.transavia.com/app/answers/detail/a_id/454/~/is-it-possible-to-take-a-musical-instrument-along-with-me-into-the-cabin%3F">stated on their website</a>.</p>
<p>While checking in my other luggage, I asked for confirmation that my guitar would be taken as my hand luggage. The lady at the check-in desk told me that since my guitar has a hard case, she wasn’t 100% sure I could bring it into the cabin.</p>
<p>Unfortunately, the website does <em>not</em> say <em>anything</em> about the kind of case your guitar should have.</p>
<p>Thanks to a delay of 30 minutes, I asked on Transavia’s Twitter profile for some clarification about the behaviour of the crew. I received only useless information back:</p>
<p><img src="/assets/twitter_transavia.png" alt="alt text" title="Useful conversation on twitter" /></p>
<p>After I had stated clearly that I was not going to take any responsibility for any damage that might be caused, a gentleman of the ground crew - to whom I owe a lot of gratitude - spoke with the cabin crew and the person in charge of luggage, and I was allowed to place my guitar in a service closet.</p>
<p>Long story short, this time I was lucky. But if you are a musician, I wouldn’t recommend flying with Transavia: you cannot rely on the kindness of flight attendants. If you do, always ask for confirmation at check-in that your instrument will be taken as hand luggage, and always be prepared for the worst (like double-packing your guitar with bubble wrap).</p>
<p>It would be desirable for Transavia to start using social media in a more meaningful way, and to have a clear policy on hand luggage (or better-trained flight attendants).</p>
<p>Anyway, this was the condition of my other checked-in luggage after the trip. Not bad for the first flight of my new suitcase.</p>
<p><img src="/assets/baggage.png" alt="alt text" title="My first trip with my new suitcase" /></p>scinawaA couple of days ago I traveled with my guitar from VCE to ORY. For the safety of my dear guitar, much care was taken in order to find a trustworthy airline that allow to carry guitars as hand luggages.My i3 configuration for Qubes-OS2017-01-06T16:16:00+01:002017-01-06T16:16:00+01:00https://luongo.pro/2017/01/06/my-i3-config<p>Here you can find some useful tips on the configuration of i3 (a window manager) and its integration with Qubes.
This post is mostly a reference for random walkers on Google who happened to search the right keywords, and for me, since I tend to forget quickly how I achieved a certain configuration. My config is edited for:</p>
<ol>
<li>Toggle/untoggle keyboard backlight with $mod+Shift+n</li>
<li>Exec i3lock with $mod+Shift+b</li>
<li>Open a terminal in qubes appvm with $mod+t</li>
<li>Up & Down volume keys (keyboard dependent)</li>
<li>I can name my workspaces with $mod+y. If the name starts with a number, then I can treat them as if they were just numbered, and switch between workspaces and move containers as before. Last but not least, I have enabled shortcuts to move between previously used workspaces.</li>
</ol>
<p>Ready?</p>
<div class="highlighter-rouge"><pre class="highlight"><code>set $mod Mod1
# Font for window titles. Will also be used by the bar unless a different font
# is used in the bar {} block below.
# This font is widely installed, provides lots of unicode glyphs, right-to-left
# text rendering and scalability on retina/hidpi displays (thanks to pango).
font pango:DejaVu Sans Mono 10
# Before i3 v4.8, we used to recommend this one as the default:
# font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1
# The font above is very space-efficient, that is, it looks good, sharp and
# clear in small sizes. However, its unicode glyph coverage is limited, the old
# X core fonts rendering does not support right-to-left and this being a bitmap
# font, it doesn’t scale on retina/hidpi displays.
# Use Mouse+$mod to drag floating windows to their wanted position
floating_modifier $mod
# start a terminal in the domain of the currently active window
bindsym $mod+Return exec qubes-i3-sensible-terminal
# kill focused window
bindsym $mod+Shift+q kill
# start dmenu (a program launcher)
bindsym $mod+d exec --no-startup-id i3-dmenu-desktop --dmenu="dmenu -nb #d2d2d2 -nf #000000 -sb #63a0ff"
# change focus
bindsym $mod+j focus left
bindsym $mod+k focus down
bindsym $mod+l focus up
bindsym $mod+ograve focus right
bindsym $mod+t exec "qvm-run Home gnome-terminal"
# alternatively, you can use the cursor keys:
bindsym $mod+Left focus left
bindsym $mod+Down focus down
bindsym $mod+Up focus up
bindsym $mod+Right focus right
# move focused window
bindsym $mod+Shift+j move left
bindsym $mod+Shift+k move down
bindsym $mod+Shift+l move up
bindsym $mod+Shift+ograve move right
# alternatively, you can use the cursor keys:
bindsym $mod+Shift+Left move left
bindsym $mod+Shift+Down move down
bindsym $mod+Shift+Up move up
bindsym $mod+Shift+Right move right
# split in horizontal orientation
bindsym $mod+h split h
# split in vertical orientation
bindsym $mod+v split v
# enter fullscreen mode for the focused container
bindsym $mod+f fullscreen
# change container layout (stacked, tabbed, toggle split)
bindsym $mod+s layout stacking
bindsym $mod+w layout tabbed
bindsym $mod+e layout toggle split
# toggle tiling / floating
bindsym $mod+Shift+space floating toggle
# change focus between tiling / floating windows
bindsym $mod+space focus mode_toggle
# focus the parent container
bindsym $mod+a focus parent
# focus the child container
#bindsym $mod+d focus child
# switch to workspace
bindsym $mod+1 workspace number 1
bindsym $mod+2 workspace number 2
bindsym $mod+3 workspace number 3
bindsym $mod+4 workspace number 4
bindsym $mod+5 workspace number 5
bindsym $mod+6 workspace number 6
bindsym $mod+7 workspace number 7
bindsym $mod+8 workspace number 8
bindsym $mod+9 workspace number 9
bindsym $mod+0 workspace number 10
# move focused container to workspace
bindsym $mod+Shift+1 move container to workspace number 1
bindsym $mod+Shift+2 move container to workspace number 2
bindsym $mod+Shift+3 move container to workspace number 3
bindsym $mod+Shift+4 move container to workspace number 4
bindsym $mod+Shift+5 move container to workspace number 5
bindsym $mod+Shift+6 move container to workspace number 6
bindsym $mod+Shift+7 move container to workspace number 7
bindsym $mod+Shift+8 move container to workspace number 8
bindsym $mod+Shift+9 move container to workspace number 9
bindsym $mod+Shift+0 move container to workspace number 10
bindsym $mod+y exec i3-input -F 'rename workspace to "%s"' -P ' New name for this workspace'
# reload the configuration file
bindsym $mod+Shift+c reload
# restart i3 inplace (preserves your layout/session, can be used to upgrade i3)
bindsym $mod+Shift+r restart
# exit i3 (logs you out of your X session)
bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'"
# resize window (you can also use the mouse for that)
mode "resize" {
# These bindings trigger as soon as you enter the resize mode
# Pressing left will shrink the window’s width.
# Pressing right will grow the window’s width.
# Pressing up will shrink the window’s height.
# Pressing down will grow the window’s height.
bindsym j resize shrink width 10 px or 10 ppt
bindsym k resize grow height 10 px or 10 ppt
bindsym l resize shrink height 10 px or 10 ppt
bindsym ograve resize grow width 10 px or 10 ppt
# same bindings, but for the arrow keys
bindsym Left resize shrink width 10 px or 10 ppt
bindsym Down resize grow height 10 px or 10 ppt
bindsym Up resize shrink height 10 px or 10 ppt
bindsym Right resize grow width 10 px or 10 ppt
# back to normal: Enter or Escape
bindsym Return mode "default"
bindsym Escape mode "default"
}
bindsym $mod+r mode "resize"
# Start i3bar to display a workspace bar (plus the system information i3status
# finds out, if available)
bar {
status_command qubes-i3status
font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1
colors {
background #d2d2d2
statusline #000000
#class #border #backgr #text
focused_workspace #4c7899 #63a0ff #000000
active_workspace #333333 #5f676a #ffffff
inactive_workspace #222222 #333333 #888888
urgent_workspace #BD2727 #E79E27 #000000
}
}
# Use a screen locker
exec --no-startup-id "xautolock -detectsleep -time 6 -locker 'i3lock -d -c 008000' -notify 30 -notifier \"notify-send -t 2000 'Locking screen in 30 seconds'\""
# Make sure all xdg autostart entries are started, this is (among other things)
# necessary to make sure transient vm's come up
exec --no-startup-id qubes-i3-xdg-autostart
# bindsym XF86AudioRaiseVolume exec "amixer -q sset Master,0 1+ unmute"
# bindsym XF86AudioLowerVolume exec "amixer -q sset Master,0 1- unmute"
#bindsym $mod+p exec "amixer -q sset Master,0 1+ unmute"
bindsym $mod+m exec "amixer -q sset Master,0 1- unmute"
bindsym XF86AudioMute exec "amixer -q sset Master,0 toggle"
bindsym $mod+Shift+n exec "sudo bash /home/scinawa/.i3/keybacklight.sh"
bindsym $mod+Shift+b exec "i3lock -c 045347"
bindsym $mod+n workspace next
bindsym $mod+p workspace prev
</code></pre>
</div>
<h3 id="bash-script-for-backlight-of-keyboard">Bash script for backlight of keyboard.</h3>
<p>The bash script I’m calling with $mod+Shift+n is the following, which is copied from [1]:</p>
<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
<span class="nv">STATUS</span><span class="o">=</span><span class="sb">`</span>xset -q | grep <span class="s2">"LED"</span> | awk <span class="s1">'{print $10}'</span><span class="sb">`</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="k">${</span><span class="nv">STATUS</span><span class="k">}</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"00000000"</span> <span class="o">]</span>
<span class="k">then
</span>xset led 3
<span class="k">else
</span>xset -led 3
<span class="k">fi
</span><span class="nb">exit </span>0
</code></pre>
</div>
<h3 id="creating-shortcut-wise-calls-to-browser-profiles">Creating shortcut-wise calls to browser profiles</h3>
<p>I find it convenient to have browser-specific shortcuts in my i3 menu, from which I can launch specific Firefox profiles.
In this way, I can call specific browser profiles (banking, social, cloud stuff, etc.) with a name that is faster to type than $appvm-firefox$ followed by switching profiles.</p>
<p>This is what I did recently to get a fast shortcut for my web app of calendar I’m currently using:</p>
<p>In dom0:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>vim /home/`whoami`/.local/share/applications/$APPVM-name.desktop
</code></pre>
</div>
<p>Edit it and specify:</p>
<ol>
<li>A mnemonic name which is also fast to type, like “CAL”.</li>
<li>The proper link to your AppVM menu file: something like <code class="highlighter-rouge">/usr/share/applications/calendar.desktop</code></li>
</ol>
<p>In the AppVM where you want to launch the application:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>cd /usr/share/application/
cp firefox.desktop calendar.desktop
vim calendar.desktop
</code></pre>
</div>
<p>Scroll down and modify the exec line like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>X-Desktop-File-Install-Version=0.22
[Desktop Action new-window]
Name=Open a New Window
Exec=firefox -P calendar %u
[Desktop Action new-private-window]
Name=Open a New Private Window
Exec=firefox --private-window %u
</code></pre>
</div>
<p>When I type “CAL” in the i3 menu, I open a specific Firefox profile with the homepage I have specified. Voilà!</p>
<h3 id="references">References</h3>
<p>[1] https://m.reddit.com/r/i3wm/comments/3sumks/cant_get_scrolllock_and_mod_key_to_work/</p>scinawaHere you can find some useful tips on the configuration of i3 (a windows manager) and it’s integration with qubes. This post is mostly a reference for random walkers on google who happend to search the right keyword, and me, since I tend to forget quickly how I achieved certain configuration which is edited for: