By: Alvaro Ballon / Xanadu
PennyLane is an open-source quantum software development framework.
Step 0: Importing PennyLane
import pennylane as qml
from pennylane import numpy as np
Step 1: Choosing your quantum device

PennyLane offers several simulator devices:

default.qubit: Run-of-the-mill device that works with pure qubit states.
default.mixed: Device that works with mixed states.
lightning.qubit: Faster simulator, with a backend coded in C++ for efficiency.
default.gaussian: For continuous-variable systems.

We will focus on default.qubit, since we'll be doing very simple simulations. Let's define a device with two qubits (wires).
dev = qml.device('default.qubit', wires=2)
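For reference, the other simulators listed above are created the same way. A quick sketch (which devices are available can depend on your PennyLane installation; lightning.qubit, for instance, comes with the pennylane-lightning package):

dev_mixed = qml.device('default.mixed', wires=2)        # density-matrix (mixed-state) simulator
dev_lightning = qml.device('lightning.qubit', wires=2)  # C++-backed state-vector simulator
dev_gaussian = qml.device('default.gaussian', wires=2)  # continuous-variable (Gaussian) simulator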
Step 2: Building your quantum circuit
We will define a function quantum_circuit, which will implement the quantum circuit shown below:
def quantum_circuit():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.PauliX(wires=1)
However, if we call quantum_circuit(), nothing happens! This is not surprising since we are not returning
anything. A quantum circuit is just a list of successive operations.
Step 3: Put device and circuit together into a QNode
There are two reasons our circuit doesn't work. The first is that it doesn't have a device to run on; it's just an abstract, platonic circuit. When we assign a device to it, it becomes a QNode. We do this using a decorator:
@qml.qnode(dev)  # Remember that we defined this device before!
def quantum_node():
    quantum_circuit()
    return ...  # something!
The second reason: unlike quantum circuits, QNodes need to return something. That is, we need to measure!
@qml.qnode(dev)
def quantum_node():
    quantum_circuit()
    return qml.probs(wires=0)

quantum_node()
tensor([0.5, 0.5], requires_grad=True)
This is telling us that the probability of measuring $\vert 0 \rangle$ is 0.5, and so is the probability of measuring $\vert 1 \rangle$. All measurements are done in the computational basis! Let's try other measurements!
@qml.qnode(dev)
def quantum_node():
    quantum_circuit()
    return qml.probs(wires=[0, 1])

quantum_node()
tensor([0. , 0.5, 0.5, 0. ], requires_grad=True)
@qml.qnode(dev)
def quantum_node():
    quantum_circuit()
    return qml.expval(qml.PauliX(0)), qml.expval(qml.PauliY(0)), qml.expval(qml.PauliZ(0))

quantum_node()
tensor([0., 0., 0.], requires_grad=True)
@qml.qnode(dev)
def quantum_node():
    quantum_circuit()
    return qml.state()

quantum_node()
tensor([0. +0.j, 0.70710678+0.j, 0.70710678+0.j, 0. +0.j], requires_grad=True)
@qml.qnode(dev)
def quantum_node():
    quantum_circuit()
    return qml.density_matrix(wires=[0])

quantum_node()
tensor([[0.5+0.j, 0. +0.j],
        [0. +0.j, 0.5+0.j]], requires_grad=True)
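The reduced state of qubit 0 is maximally mixed: qubit 0 is entangled with qubit 1, so tracing out qubit 1 destroys all coherence. We can confirm this with a quick numpy check of the purity $\mathrm{Tr}(\rho^2)$, which is 1 for a pure state and 1/2 for a maximally mixed qubit:

rho = quantum_node()
print(np.real(np.trace(rho @ rho)))  # 0.5, i.e. maximally mixed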
Putting it all together
dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def quantum_node():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.PauliX(wires=1)
    return qml.state()

quantum_node()
tensor([0. +0.j, 0.70710678+0.j, 0.70710678+0.j, 0. +0.j], requires_grad=True)
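We can also print a text drawing of the QNode to confirm its structure; qml.draw takes a QNode and returns a function that produces the drawing as a string:

print(qml.draw(quantum_node)())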
Try to build the following circuit! Check out the PennyLane documentation to get the commands for any gates that you need!
dev = qml.device('default.qubit', wires=...)

# Feel free to define any helper quantum circuits here #

@qml.qnode(...)
def quantum_node(params):
    """params is an array containing the parameters"""
    ### YOUR CODE HERE ###
    return ...
Once you're done, return the expectation value of the PauliZ observable on the first wire for $\alpha = 0.1$, $\beta = 0.4$, $\gamma = 0.3$, $\theta = 0.4$, and $\phi = 0.5$.
Solution:
dev = qml.device('default.qubit', wires=3)

def quantum_circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[2], wires=1)
    qml.CNOT(wires=[1, 2])
    qml.RY(params[3], wires=2)
    qml.RY(params[4], wires=0)

@qml.qnode(dev)
def quantum_node(params):
    quantum_circuit(params)
    return qml.expval(qml.PauliZ(0))

quantum_node([0.1, 0.4, 0.3, 0.4, 0.5])
tensor(0.8042689, requires_grad=True)
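As before, we can print a drawing of this circuit to see what we built (helpful since the target diagram isn't reproduced in these notes):

print(qml.draw(quantum_node)([0.1, 0.4, 0.3, 0.4, 0.5]))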
We just built a parametrized circuit. Let's ask the following question: for which parameters $\alpha$, $\beta$, $\gamma$, $\theta$, and $\phi$ is the expectation value of the PauliZ observable on the first wire minimized? And what is that minimum value?
Step 1: Define the function that you want to minimize. This is called a "cost function".
def cost_function(params):
    return quantum_node(params)
Step 2: Set the initial parameters. These are usually randomized, but in some cases the choice determines which local minimum we converge to.
init_params = np.array([0.1, 0.4, 0.3, 0.4, 0.5], requires_grad=True)
cost_function(init_params)
tensor(0.8042689, requires_grad=True)
Step 3: Define an optimizer of your choice. We'll use a simple gradient descent optimizer.
opt = qml.GradientDescentOptimizer(stepsize=0.4)
steps = 100
params = init_params

for i in range(steps):
    # update the circuit parameters
    params = opt.step(cost_function, params)
    if (i + 1) % 20 == 0:
        print("Cost after step {:5d}: {: .7f}".format(i + 1, cost_function(params)))

print("Optimized parameters: {}".format(params.round(2)))
Cost after step    20: -0.9999826
Cost after step    40: -1.0000000
Cost after step    60: -1.0000000
Cost after step    80: -1.0000000
Cost after step   100: -1.0000000
Optimized parameters: [0.   0.   0.3  0.4  3.14]
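As a quick sanity check, we can plug the rounded parameters back in. With the first two rotation angles at 0, wire 0 stays in $\vert 0 \rangle$ until the final RY, and an angle of $\pi$ there flips it to $\vert 1 \rangle$, giving the minimum expectation value of -1:

print(cost_function(np.array([0.0, 0.0, 0.3, 0.4, np.pi], requires_grad=False)))  # approximately -1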
Kinda boring! Let's do something more fun: VQE for the transverse-field Ising model!
It turns out that, for three qubits, all we need is the same circuit, which prepares the most general ground state (taking symmetries into account). The only difference is that the cost function will be the expectation value of the Hamiltonian:
$H = -\sum_{i=0}^{n-1} Z_{i}\otimes Z_{i+1} - \frac{1}{2}\sum_{i=0}^{n-1} X_i$, with periodic boundary conditions (indices taken mod $n$, so the last coupling term is $Z_{n-1}\otimes Z_{0}$).
First, we need to create the Hamiltonian (for three spins)
H = - qml.PauliZ(0) @ qml.PauliZ(1) - qml.PauliZ(1) @ qml.PauliZ(2) - qml.PauliZ(2) @ qml.PauliZ(0)
H = H - 1/2 * qml.PauliX(0) - 1/2 * qml.PauliX(1) - 1/2 * qml.PauliX(2)
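Equivalently, we could build the same operator by listing the coefficients and observables explicitly; a sketch using qml.Hamiltonian (H_alt is just an alternative construction, not used below):

coeffs = [-1.0, -1.0, -1.0, -0.5, -0.5, -0.5]
obs = [
    qml.PauliZ(0) @ qml.PauliZ(1),
    qml.PauliZ(1) @ qml.PauliZ(2),
    qml.PauliZ(2) @ qml.PauliZ(0),
    qml.PauliX(0),
    qml.PauliX(1),
    qml.PauliX(2),
]
H_alt = qml.Hamiltonian(coeffs, obs)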
Minimizing the expectation value of the Hamiltonian corresponds to finding the minimum energy, i.e. the ground-state energy:
@qml.qnode(dev)
def quantum_node(params):
    quantum_circuit(params)
    return qml.expval(H)

def cost_function(params):
    return ...

init_params = np.array(..., requires_grad=True)
opt = ...
steps = ...
params = init_params

for i in range(steps):
    # update the circuit parameters
    params = opt.step(cost_function, params)
    if (i + 1) % 20 == 0:
        print("Cost after step {:5d}: {: .7f}".format(i + 1, cost_function(params)))

print("Optimized parameters: {}".format(params.round(2)))
Solution:

def cost_function(params):
    return quantum_node(params)

init_params = np.array([0.1, 0.4, 0.3, 0.4, 0.5], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
steps = 200
params = init_params

for i in range(steps):
    # update the circuit parameters
    params = opt.step(cost_function, params)
    if (i + 1) % 40 == 0:
        print("Cost after step {:5d}: {: .7f}".format(i + 1, cost_function(params)))

print("Optimized parameters: {}".format(params.round(2)))
Cost after step    40: -3.1213717
Cost after step    80: -3.1236480
Cost after step   120: -3.1239722
Cost after step   160: -3.1240167
Cost after step   200: -3.1240227
Optimized parameters: [0.   0.   0.   0.25 0.25]
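How good is this? For three qubits the Hamiltonian is only an 8x8 matrix, so we can diagonalize it exactly and compare; a sketch assuming qml.matrix is available in your PennyLane version (the smallest eigenvalue is the exact ground-state energy, which lower-bounds our variational result):

H_matrix = qml.matrix(H, wire_order=[0, 1, 2])
print(np.linalg.eigvalsh(H_matrix).min())  # exact ground-state energy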
Let's also look at the state that the optimized circuit prepares:
@qml.qnode(dev)
def quantum_node_state(params):
    quantum_circuit(params)
    return qml.state()

quantum_node_state([0, 0, 0, 0.25, 0.25]).round(2)
tensor([0.98+0.j, 0.12+0.j, 0. +0.j, 0. +0.j, 0.12+0.j, 0.02+0.j, 0. +0.j, 0. +0.j], requires_grad=True)
Let's use what we learned to solve a silly classification problem. We are given arrays of length 4 whose elements are 0 or 1. We want to classify them according to their parity (even or odd number of 1s).
Extra step: Encode the data. Here we use basis encoding on four qubits: each bit of the list is mapped onto one qubit.
dev = qml.device("default.qubit", wires=4)
def statepreparation(x):
qml.BasisState(x, wires=[0, 1, 2, 3])
E.g., [0,1,0,1] is encoded in the state $\vert 0101 \rangle$.
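As a sanity check (check_encoding is a throwaway helper, not part of the classifier), the probability vector should put all of its weight on index 5, which is 0101 in binary:

@qml.qnode(dev)
def check_encoding(x):
    statepreparation(x)
    return qml.probs(wires=[0, 1, 2, 3])

print(check_encoding(np.array([0, 1, 0, 1])))  # 1.0 at index 5, 0 elsewhere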
We write the basic quantum layer of our classifier.
def layer(W):
    qml.Rot(W[0, 0], W[0, 1], W[0, 2], wires=0)
    qml.Rot(W[1, 0], W[1, 1], W[1, 2], wires=1)
    qml.Rot(W[2, 0], W[2, 1], W[2, 2], wires=2)
    qml.Rot(W[3, 0], W[3, 1], W[3, 2], wires=3)
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    qml.CNOT(wires=[2, 3])
    qml.CNOT(wires=[3, 0])
Here $W$ is the matrix
$ W = \left( \begin{array}{ccc} \theta_0 & \phi_0 & \omega_0 \\ \theta_1 & \phi_1 & \omega_1 \\ \theta_2 & \phi_2 & \omega_2 \\ \theta_3 & \phi_3 & \omega_3 \end{array} \right)$
The circuit piles up a bunch of these layers:
@qml.qnode(dev, interface="autograd")
def circuit(weights, x):
    statepreparation(x)
    for W in weights:
        layer(W)
    return qml.expval(qml.PauliZ(0))
We add a bias as well:
def variational_classifier(weights, bias, x):
    return circuit(weights, x) + bias
Define our square loss
def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
        loss = loss + (l - p) ** 2
    loss = loss / len(labels)
    return loss
And our model accuracy
def accuracy(labels, predictions):
    score = 0
    for l, p in zip(labels, predictions):
        if abs(l - p) < 1e-5:
            score = score + 1
    score = score / len(labels)
    return score
And with this we have our cost function
def cost(weights, bias, X, Y):
    predictions = [variational_classifier(weights, bias, x) for x in X]
    return square_loss(Y, predictions)
Now we load some data
data = np.loadtxt("variational_classifier/data.txt")
X = np.array(data[:, :-1], requires_grad=False)
Y = np.array(data[:, -1], requires_grad=False)
Y = Y * 2 - np.ones(len(Y))  # shift label from {0, 1} to {-1, 1}

for i in range(5):
    print("X = {}, Y = {: d}".format(X[i], int(Y[i])))

print("...")
X = [0. 0. 0. 0.], Y = -1
X = [0. 0. 0. 1.], Y =  1
X = [0. 0. 1. 0.], Y =  1
X = [0. 0. 1. 1.], Y = -1
X = [0. 1. 0. 0.], Y =  1
...
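If you don't have the data file handy (it ships with PennyLane's variational classifier demo), the printed rows suggest it is just the full 4-bit parity truth table, so you can generate an equivalent dataset yourself; a sketch under that assumption:

import itertools

bits = list(itertools.product([0, 1], repeat=4))
X = np.array(bits, requires_grad=False)
Y = np.array([2 * (sum(b) % 2) - 1 for b in bits], requires_grad=False)  # parity labels in {-1, 1}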
Generate some random initial weights
np.random.seed(0)
num_qubits = 4
num_layers = 2
weights_init = 0.01 * np.random.randn(num_layers, num_qubits, 3, requires_grad=True)
bias_init = np.array(0.0, requires_grad=True)
print(weights_init, bias_init)
[[[ 0.01764052  0.00400157  0.00978738]
  [ 0.02240893  0.01867558 -0.00977278]
  [ 0.00950088 -0.00151357 -0.00103219]
  [ 0.00410599  0.00144044  0.01454274]]

 [[ 0.00761038  0.00121675  0.00443863]
  [ 0.00333674  0.01494079 -0.00205158]
  [ 0.00313068 -0.00854096 -0.0255299 ]
  [ 0.00653619  0.00864436 -0.00742165]]] 0.0
Run the optimizer
opt = qml.optimize.NesterovMomentumOptimizer(0.5)
batch_size = 5

weights = weights_init
bias = bias_init
for it in range(10):
    # Update the weights by one optimizer step
    batch_index = np.random.randint(0, len(X), (batch_size,))
    X_batch = X[batch_index]
    Y_batch = Y[batch_index]
    weights, bias, _, _ = opt.step(cost, weights, bias, X_batch, Y_batch)

    # Compute accuracy
    predictions = [np.sign(variational_classifier(weights, bias, x)) for x in X]
    acc = accuracy(Y, predictions)

    print(
        "Iter: {:5d} | Cost: {:0.7f} | Accuracy: {:0.7f} ".format(
            it + 1, cost(weights, bias, X, Y), acc
        )
    )
Iter:     1 | Cost: 3.4355534 | Accuracy: 0.5000000
Iter:     2 | Cost: 1.9717733 | Accuracy: 0.5000000
Iter:     3 | Cost: 1.8182812 | Accuracy: 0.5000000
Iter:     4 | Cost: 1.5042404 | Accuracy: 0.5000000
Iter:     5 | Cost: 1.1477739 | Accuracy: 0.5000000
Iter:     6 | Cost: 1.2734990 | Accuracy: 0.6250000
Iter:     7 | Cost: 0.8290628 | Accuracy: 0.5000000
Iter:     8 | Cost: 0.3226183 | Accuracy: 1.0000000
Iter:     9 | Cost: 0.1436206 | Accuracy: 1.0000000
Iter:    10 | Cost: 0.2982810 | Accuracy: 1.0000000
Running the loop again, with the weights reset to their initial values but a fresh sequence of random batches, gives a different optimization trajectory:

weights = weights_init
bias = bias_init
for it in range(10):
    # Update the weights by one optimizer step
    batch_index = np.random.randint(0, len(X), (batch_size,))
    X_batch = X[batch_index]
    Y_batch = Y[batch_index]
    weights, bias, _, _ = opt.step(cost, weights, bias, X_batch, Y_batch)

    # Compute accuracy
    predictions = [np.sign(variational_classifier(weights, bias, x)) for x in X]
    acc = accuracy(Y, predictions)

    print(
        "Iter: {:5d} | Cost: {:0.7f} | Accuracy: {:0.7f} ".format(
            it + 1, cost(weights, bias, X, Y), acc
        )
    )
Iter:     1 | Cost: 1.9818724 | Accuracy: 0.5000000
Iter:     2 | Cost: 1.2707072 | Accuracy: 0.5000000
Iter:     3 | Cost: 1.2892865 | Accuracy: 0.5000000
Iter:     4 | Cost: 1.3330569 | Accuracy: 0.5000000
Iter:     5 | Cost: 1.7284673 | Accuracy: 0.5000000
Iter:     6 | Cost: 1.2516154 | Accuracy: 0.5000000
Iter:     7 | Cost: 0.9170268 | Accuracy: 0.7500000
Iter:     8 | Cost: 0.5192060 | Accuracy: 0.7500000
Iter:     9 | Cost: 0.5106025 | Accuracy: 0.7500000
Iter:    10 | Cost: 0.3721737 | Accuracy: 1.0000000
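Finally, once the classifier is trained we can query it on any bit string. For example, [1, 1, 0, 1] contains an odd number of 1s, so the prediction should come out as +1 (a quick usage sketch; x_new is an input we made up):

x_new = np.array([1, 1, 0, 1], requires_grad=False)
print(np.sign(variational_classifier(weights, bias, x_new)))  # expected: 1.0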