The author's programming experience was gained entirely in the pre-quantum 'classical' computing era but, like many others from that era, he wonders how programming a quantum computer might differ from traditional programming. Microsoft have recently made available a quantum computer simulator and the quantum programming language Q#. The following high-level overview is based on a brief look at the documentation available at https://docs.microsoft.com/en-us/quantum/?view=qsharp-preview; for a far more detailed treatment, the documentation itself should be consulted.

Quantum computers promise to be blazingly fast, and the documentation suggests they will always be used in tandem with classical von Neumann computers, being called upon to carry out complex calculations exceedingly quickly. A reasonable mental model is a multi-quantum-processor environment under the control of code written in a traditional programming language running on a classical computer.

Quantum computing is a revolutionary step forward from the classic von Neumann computing architecture, which is built entirely on 'bits' that can take the value 0 or 1. Quantum computers instead implement 'qubits' which, as well as taking the values 0 and 1, can take values that represent a combination of the two; this is sometimes described as taking both values at once. The documentation states:

"While a bit, or binary digit, can have value either 0 or 1, a qubit can have a value that is either of these or a quantum superposition of 0 and 1. The state of a single qubit can be described by a two-dimensional column vector of unit norm, that is, the magnitude squared of its entries must sum to 1. This vector, called the quantum state vector, holds all the information needed to describe the one-qubit quantum system just as a single bit holds all of the information needed to describe the state of a binary variable."
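To make the quoted description concrete, here is the author's own illustration (not taken from the documentation) of a single-qubit state vector and the unit-norm condition:

```latex
% A single-qubit state is a two-dimensional column vector of unit norm:
\[
  |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle
  \;=\; \begin{pmatrix} \alpha \\ \beta \end{pmatrix},
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% For example, the equal superposition
\[
  |+\rangle \;=\; \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
% satisfies the norm condition (1/2 + 1/2 = 1) and, when measured,
% yields 0 or 1 with probability 1/2 each.
```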

A single qubit is apparently of little interest on its own; only by combining numbers of qubits can the full performance promise of quantum computing be realised. The state of such a combination will of course be complex and capable of taking many values, and apparently not all of them will be predictable. Not only that, but the data held in qubits is extremely fragile, so error processing will have a large role to play.
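The growth in complexity when qubits are combined can be sketched as follows (again the author's illustration): the state of n qubits is a vector of 2^n complex amplitudes, so for two qubits:

```latex
% A two-qubit state requires four amplitudes, one per classical outcome:
\[
  |\psi\rangle \;=\; \alpha_{00}|00\rangle + \alpha_{01}|01\rangle
                   + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle,
  \qquad \sum_{i,j} |\alpha_{ij}|^2 = 1 .
\]
% In general n qubits require a state vector of 2^n amplitudes,
% which is where the performance promise (and the complexity) comes from.
```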

So how does Q# let you use qubits? At one level it is a recognisable programming language and has the classic elements of all structured programs: sequential code blocks (enclosed in {}); repetition (for and repeat/until loops); conditionals (if, elif, else); and a version of try/catch error handling (fail). The basic building blocks of a Q# program are operations (much like functions in a C program), which will typically be accessing qubits. There is of course a primitive Q# data type, Qubit, as well as other novel data types such as Pauli. Qubits are described as below:

"The Qubit type represents a quantum bit or qubit. Qubits are opaque to the user; the only operation possible with them, other than passing them to another operation, is to test for identity (equality). Ultimately, actions on Qubits are implemented by calling operations in the Q# standard library."
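As a sketch of how these pieces fit together, here is a small operation written in the preview-era Q# syntax, in the style of the QDK's introductory samples. The namespace and operation name are the author's own; M (measure) and X (bit-flip) are standard-library operations:

```qsharp
namespace Quantum.Overview {
    open Microsoft.Quantum.Primitive;

    // Force a qubit into a desired classical state (Zero or One).
    // Note the point made in the quote above: the only way to
    // inspect a qubit is to pass it to a library operation, here M.
    operation SetToDesired (desired : Result, q : Qubit) : ()
    {
        body
        {
            let current = M(q);   // measurement collapses the qubit to Zero or One
            if (current != desired)
            {
                X(q);             // standard-library bit-flip gate
            }
        }
    }
}
```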

What qubit actions are available in the Q# standard library? This is the meat of quantum programming and requires careful study and advanced knowledge, but suffice it to say that the actions include Quantum Fourier Transforms, Amplitude Measurement, Iterative Phase Estimation, Oracles ("Here the term oracle refers to a blackbox quantum subroutine that acts upon a set of qubits and returns the answer as a phase.") and plenty of other actions.
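Even the most basic gates come from the standard library. The sketch below (author's own, in preview-era syntax) uses the Hadamard gate H to put a qubit into equal superposition and then measures it repeatedly, illustrating the probabilistic results mentioned earlier; over many iterations roughly half the measurements should return One:

```qsharp
namespace Quantum.Overview {
    open Microsoft.Quantum.Primitive;

    // Repeatedly prepare an equal superposition and measure it,
    // counting how often the result One is observed.
    operation CountOnes (count : Int) : Int
    {
        body
        {
            mutable ones = 0;
            using (qs = Qubit[1])
            {
                for (i in 1..count)
                {
                    H(qs[0]);              // standard-library Hadamard gate
                    if (M(qs[0]) == One)   // measurement outcome is probabilistic
                    {
                        set ones = ones + 1;
                    }
                }
                ResetAll(qs);              // qubits must be returned in the Zero state
            }
            return ones;
        }
    }
}
```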

Taking all this into account, it seems that using a quantum programming language such as Q# will be very much a case of the devil being in the details, and understanding those details will require a real understanding of the quantum world.
