In order to see how the averaging filter is implemented, we will use the same example as before. We will follow these steps:
>> obj = VideoReader('inter.avi');
>> vid = read(obj);
>> grayVid = uint8(zeros(size(vid,1), size(vid,2), size(vid,4)));
>> for i = 1:size(vid,4), grayVid(:,:,i) = rgb2gray(vid(:,:,:,i)); end
>> avFilt = ones(3,3,3); % Make a 3x3x3 matrix full of ones
>> avFilt = avFilt/numel(avFilt); % Make all elements equal to 1/27
Then, we perform the convolution of grayVid and avFilt:
>> filteredVid = convn(double(grayVid), avFilt, 'same'); % Apply 3-D convolution (convn needs floating-point input; 'same' keeps the original size)
>> filteredVid = uint8(filteredVid); % Convert back to uint8
>> subplot(1,2,1), imshow(grayVid(:,:,15)), title('Original frame')
>> subplot(1,2,2), imshow(filteredVid(:,:,15)), title('Filtered frame')
This was another way to implement spatiotemporal smoothing using an averaging filter. The steps were quite simple and similar to the processes discussed in Chapter 5, 2-Dimensional Image Filtering. The first step was to prepare our video for the process by converting it to grayscale, frame by frame. Once this was done, we created the filter for the averaging process: a 3 x 3 x 3 filter with all of its values equal to 1/(3*3*3) = 1/27. Next, we applied n-dimensional convolution for n = 3. The result of the convolution was converted back to uint8, and then one of its frames was displayed next to the respective original frame for qualitative evaluation purposes.
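The arithmetic of the pipeline above can be cross-checked outside MATLAB. Here is a minimal sketch in Python with NumPy; the array names and the small random stand-in for the grayscale video are assumptions made for illustration, not part of the book's code. Because the averaging kernel is symmetric, convolution equals correlation, so the 'same'-size result can be built from 27 shifted, weighted copies of a zero-padded volume:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for grayVid: 8x8 pixels, 6 frames
grayVid = rng.integers(0, 256, size=(8, 8, 6)).astype(np.uint8)

# 3x3x3 averaging filter, every element equal to 1/27
avFilt = np.ones((3, 3, 3)) / 27.0

# 'same'-size 3-D convolution via zero padding, like convn(..., 'same')
pad = np.pad(grayVid.astype(float), 1)  # one-voxel zero border on every side
filtered = np.zeros(grayVid.shape)
for dr in range(3):
    for dc in range(3):
        for df in range(3):
            # Accumulate each weighted, shifted copy of the padded volume
            filtered += avFilt[dr, dc, df] * pad[dr:dr+8, dc:dc+8, df:df+6]
filtered = np.clip(filtered, 0, 255).astype(np.uint8)

print(filtered.shape)  # → (8, 8, 6), same size as the input
```

Each interior voxel of `filtered` ends up being the mean of its 3x3x3 spatiotemporal neighborhood, which is exactly what the MATLAB averaging filter computes.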
Q1. Which of the following are true?
Using convn for averaging instead of making a triple nested for loop leads to faster processing speed.
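The quiz's claim can be illustrated with a quick experiment. This Python/NumPy sketch (the array sizes and names are hypothetical) computes the same 3x3x3 moving average twice: once from just 27 vectorized shifted sums, in the spirit of a vectorized convn, and once with an explicit loop over every output voxel. Both produce the same numbers, but the vectorized version sweeps the whole array only 27 times instead of once per voxel, which is why it is far faster on real videos:

```python
import numpy as np

rng = np.random.default_rng(1)
vid = rng.random((20, 20, 10))  # hypothetical small video volume

k = np.ones((3, 3, 3)) / 27.0

# Vectorized: 27 shifted, weighted whole-array sums ('valid' region only)
vec = np.zeros((18, 18, 8))
for dr in range(3):
    for dc in range(3):
        for df in range(3):
            vec += k[dr, dc, df] * vid[dr:dr+18, dc:dc+18, df:df+8]

# Naive: explicit nested loops visiting every output voxel one at a time
naive = np.zeros((18, 18, 8))
for i in range(18):
    for j in range(18):
        for f in range(8):
            naive[i, j, f] = vid[i:i+3, j:j+3, f:f+3].mean()

print(np.allclose(vec, naive))  # → True
```

Identical results, so the only difference between the two approaches is speed, in agreement with the statement in Q1.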