By David Warne, a former QCIF eResearch Analyst at QUT.

MATLAB is a powerful programming environment for scientific applications. It is highly optimised for matrix and vector mathematical operations, and writing code in a way that leverages these operations is known as vectorising code. There are many reasons why a particular piece of MATLAB code may run slowly, but a lack of vectorisation is one of the most common causes I see.
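To illustrate what vectorising looks like in practice, here is a simple sketch (the variable names are just for illustration): computing the sum of squares of a vector with an explicit loop, versus the equivalent one-line vectorised expression.

```matlab
x = rand(1,100000);      % some example data

% Loop version: accumulate element by element
s = 0;
for i = 1:numel(x)
    s = s + x(i)^2;
end

% Vectorised version: element-wise square, then in-built sum
s = sum(x.^2);
```

Both produce the same result, but the vectorised form hands the whole computation to MATLAB's optimised internals rather than interpreting the loop body one iteration at a time.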

The differences between vectorised and non-vectorised code can be substantial. Consider a somewhat contrived example of matrix multiplication; a direct MATLAB implementation would be:

```matlab
function [C] = slowMatMul(A,B)
[N,M] = size(A);
[Q,P] = size(B);
if M ~= Q
    error('Matrix dimension mismatch!');
end
C = zeros(N,P);
for j = 1:P
    for i = 1:N
        for k = 1:M
            C(i,j) = C(i,j) + A(i,k)*B(k,j);
        end
    end
end
```

Now, compare the runtime of the above code to MATLAB’s in-built matrix multiply for two 1000 x 1000 matrices (using a two-year-old Core i7 laptop):

```matlab
>> tic; C = slowMatMul(A,B); toc;
Elapsed time is 9.948649 seconds.
>> tic; C = A*B; toc;
Elapsed time is 0.039372 seconds.
```

The in-built matrix multiply is around 250x faster than the direct MATLAB implementation! There are many reasons for this (for one, MATLAB's in-built matrix operations call highly optimised, multithreaded linear algebra libraries), but for now, let's just leave it at that.

Admittedly, this example is a bit artificial, but the point is to emphasise the potential computational gain that can be achieved by ensuring your code is as vectorised as possible. This may require a little more coding effort, but the results are well worth it.
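As a more practical before-and-after sketch (again, the variable names are illustrative): a loop applying element-wise arithmetic can usually be replaced by the element-wise operators `.^`, `.*`, and `./` together with MATLAB's in-built functions, which accept whole arrays.

```matlab
x = linspace(0, 2*pi, 100000);

% Non-vectorised: apply the formula one element at a time
y = zeros(size(x));
for i = 1:numel(x)
    y(i) = sin(x(i))^2 + cos(x(i));
end

% Vectorised equivalent: in-built functions operate on the whole array
y = sin(x).^2 + cos(x);
```

Note the `.^` (element-wise power) rather than `^` (matrix power); confusing the two is a common source of bugs when vectorising loop-based code.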

For more details and examples, see MATLAB's documentation on vectorisation.