12 April, 2014

# Speed measurement of repeating a row multiple times

While working on a fast implementation of the Hough transform, I needed to repeat a row vector multiple times at a given step. Since there are several ways to accomplish this, it was worth measuring the runtime of the different approaches and choosing the best one.

Suppose we have a row vector containing the values from 1 to 180.

theta = 1 : 180;


The task is to repeat this vector n times to obtain the following matrix M:

M = [1 2 3 ... 180
     1 2 3 ... 180
     . . . ... ...
     1 2 3 ... 180]


### Four methods to repeat a row-vector in MATLAB

The classic way is to use the repmat function:

M = repmat(theta, [n, 1]);


Another method – often referred to as Tony's trick – is to index the row multiple times:

% example of the functionality
% A = [1 2 3];
% M = A([1; 1; 1], :);
% M = [1 2 3;
%      1 2 3;
%      1 2 3]
M = theta(ones(n, 1), :);


The same result can also be achieved with a simple matrix multiplication:

% example of the functionality
% A = [1 2 3];
% M = [1; 1; 1] * A;
% M = [1 2 3;
%      1 2 3;
%      1 2 3]
M = ones(n, 1) * theta;


Another solution uses the bsxfun function:

M = bsxfun(@times, ones(n, 1), theta);


If you are unfamiliar with this function, please have a look at the end of this post, which contains a detailed example.
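In short, bsxfun applies the given binary function element-wise and expands singleton dimensions of its inputs to a common size. A minimal sketch of this expansion, using a 3-element vector instead of theta:

% the 3x1 column of ones and the 1x3 row A are expanded
% to a common 3x3 size, then multiplied element-wise
A = [1 2 3];
M = bsxfun(@times, ones(3, 1), A);
% M = [1 2 3;
%      1 2 3;
%      1 2 3]

Because each element of the result is the product of a one and the corresponding element of A, the expansion itself performs the repetition.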

### Time measurement

The following MATLAB code was used to measure the runtime of the different approaches:

theta = 1 : 180;

% the numbers of rows to test, i.e. how many times the row-vector is repeated
rows = [10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000, 25000, 50000, 100000];

time = zeros(length(rows), 4);  % a matrix to contain the run-times
runs = 100;                     % 100 runs of each method for averaging
for i = 1 : length(rows)
    % the bsxfun approach
    for j = 1 : runs
        clear M;
        tic;
        M = bsxfun(@times, ones(rows(i), 1), theta);
        time(i, 1) = time(i, 1) + toc;
    end
    time(i, 1) = time(i, 1) / runs;

    % multiple indexing of the row (Tony's trick)
    for j = 1 : runs
        clear M;
        tic;
        M = theta(ones(rows(i), 1), :);
        time(i, 2) = time(i, 2) + toc;
    end
    time(i, 2) = time(i, 2) / runs;

    % the classic repmat function
    for j = 1 : runs
        clear M;
        tic;
        M = repmat(theta, [rows(i), 1]);
        time(i, 3) = time(i, 3) + toc;
    end
    time(i, 3) = time(i, 3) / runs;

    % matrix multiplication
    for j = 1 : runs
        clear M;
        tic;
        M = ones(rows(i), 1) * theta;
        time(i, 4) = time(i, 4) + toc;
    end
    time(i, 4) = time(i, 4) / runs;
end
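As a side note on methodology: on newer MATLAB releases (R2013b and later), the built-in timeit function handles warm-up and averaging automatically, which can give more stable numbers than a manual tic/toc loop. A minimal sketch, assuming timeit is available on your release:

n = 1000;  % an example row count
t_bsxfun = timeit(@() bsxfun(@times, ones(n, 1), theta));
t_tony   = timeit(@() theta(ones(n, 1), :));
t_repmat = timeit(@() repmat(theta, [n, 1]));
t_mult   = timeit(@() ones(n, 1) * theta);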


### Results

The following figure visualizes the relative run-times: at each measurement point the fastest method has the value 1.0, while the other values show how much slower each approach is compared to the fastest one.

There is an interesting trend: when the number of rows is below a hundred, Tony's trick is the clear winner, especially for small values. As the number of rows grows, bsxfun climbs to first place.

The reason is that bsxfun is well optimized for memory handling. As the arrays grow larger, memory management becomes more important, giving it a noticeable speedup over the other methods.

### An important note

Before using any of the methods described above, it is worth measuring the run-times on your own computer. As Yair Altman pointed out in his comment, the results depend heavily on the system and the circumstances. Also have a look at his great blog, which reveals undocumented MATLAB functionality.