MOVING_AVERAGE(X,F) smooths the vector data X with a boxcar window of size 2F+1, i.e., by averaging each element with the F elements to its right and the F elements to its left. The extreme elements are also averaged, but with fewer data points, so the output keeps the same length as the input. The method is really fast.
MOVING_AVERAGE2(X,M,N) smooths the matrix X with a boxcar window of size (2M+1)x(2N+1), i.e., by averaging each element with the surrounding elements that fit in that box centered on it. This one is also really fast. The elements at the edges are averaged too, but the corners are left intact.
NANMOVING_AVERAGE(X,F) or NANMOVING_AVERAGE(X,F,1) accepts NaN elements in the vector X; the latter also interpolates those NaN elements surrounded by numeric elements.
NANMOVING_AVERAGE2(X,M,N) or NANMOVING_AVERAGE2(X,M,N,1) accepts NaN elements in the matrix X; the latter also interpolates those NaN elements surrounded by numeric elements.
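The M-files themselves are MATLAB, but the boxcar idea translates directly. Here is a minimal NumPy sketch of the 1D case, assuming the behavior described above (shrinking windows at the edges, NaNs ignored inside each window); function names are illustrative, not the submission's code:

```python
import numpy as np

def moving_average(x, f):
    """Centered boxcar average of width 2*f+1; edge elements are
    averaged over the (shorter) part of the window that fits."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - f), min(n, i + f + 1)
        out[i] = x[lo:hi].mean()
    return out

def nanmoving_average(x, f):
    """Same boxcar, but NaN elements are ignored inside each window."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        w = x[max(0, i - f):min(n, i + f + 1)]
        w = w[~np.isnan(w)]          # drop NaNs before averaging
        out[i] = w.mean() if w.size else np.nan
    return out
```

For example, `moving_average([1, 2, 3, 4, 5], 1)` gives `[1.5, 2, 3, 4, 4.5]`: interior elements use the full 3-point window, while the first and last use only the two points that fit.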
[ New simple GAP filling ]:
SMOOTH_MAVERAGE(X,M,N,IND) smooths only the X(IND) elements, ignoring NaNs. This can be used to eliminate GAPS in your data.
Each M-file has an example (see the screenshot).
Check below to see the CHANGES on v3.1.
Note: Looking at the two-dimensional code in MOVING_AVERAGE2.M (and RUNMEAN for some hints), somebody could easily make an N-dimensional MA. Would you?
1.0.0.0  BSD License 

1.0.0.0  Uses the CUMSUM trick, as in Jos's RUNMEAN. Eliminates subfunctions. New screenshot.

1. Fixed bugs in the 2D averaging.


1. STATISTICS toolbox independent (nanmean replaced by nan_mean). 2. Changed category from Graphics to DSP. 3. Minor changes to the submission.

English translation from Spanish. Minor changes to the submission and screenshot.

1. Fixed error in the sums through the rows! 2. New files accepting NaN elements! 3. New screenshot.
Inspired: filt2 2D geospatial data filter, mvaverage, ndnanfilter.m
Micke Malmström
sixie yu
AJIl Kottayil
I would like to get a moving average function that deals with NaN values.
Eric
Carlos,
I like your function moving_average, very easy to use. I have data which has small to large time gaps and I don't want to filter across the gaps. I could break the vector up at each gap but that would mean work. Any suggestion on how to handle something like this?
almog shalom
Very useful for me (mainly for plotting)
Peter
KuoHsien
Dear all, I'm dealing with gap filling on weather measurements, where the NaNs should be filled based on a time window of several days (i.e., the neighboring hours over several days).
For example, one NaN at 5pm would be replaced by the mean value of the neighboring hours on several neighboring days (let's say 4, 5, and 6pm of the 5 neighboring days).
Here is the skeleton of the problem I'd like to deal with:
values = rand(1,1000)';
fake_NaN = floor(rand(1,300)'*1000);
values(fake_NaN) = NaN;
for i = 1:length(values)
n = 24 * i * (1:5)
having_nan_index = find(isnan(values))
new_values = nanmean(values(having_nan_index * n1:having_nan_index*n+1))
:
:
Something like that
:
:
If you have any solutions or advices, please feel free to let me know. Thanks, Michael
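One way to sketch the hourly-neighborhood fill Michael describes, assuming an hourly series (one sample per hour); this is a hedged NumPy illustration, and the function name and defaults are hypothetical, not part of the submission:

```python
import numpy as np

def fill_by_hourly_neighborhood(values, n_days=5, n_hours=1):
    """Replace each NaN in an hourly series by the mean of the values
    at the same hour +/- n_hours on the n_days nearest days on either
    side (24 samples per day assumed)."""
    v = np.asarray(values, dtype=float)
    filled = v.copy()
    for i in np.flatnonzero(np.isnan(v)):
        neigh = []
        for d in range(-n_days, n_days + 1):
            if d == 0:
                continue                    # skip the day of the gap itself
            for h in range(-n_hours, n_hours + 1):
                j = i + 24 * d + h          # same hour +/- h, d days away
                if 0 <= j < len(v) and not np.isnan(v[j]):
                    neigh.append(v[j])
        if neigh:
            filled[i] = np.mean(neigh)      # leave the NaN if no neighbors
    return filled
```

Near the ends of the series the window is simply truncated to the samples that exist, so early and late gaps use fewer neighbors.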
hajer gharsallaoui
Hi Carlos, I have sent you an email about the difficulties in programming the recursive moving average. Do you have any idea about this issue? Any help, please? Thanks in advance
It's adequate for postprocessing of spectroscopic signals; quite useful indeed.
Come on, Aslak, you are searching for FLEAS instead of BUGS... But, ok, USERS: BEWARE of OUTLIERS and LARGE MEANS when using the CUMSUM runmean method! Rather use: NDNANFILTER :)
Carlos Vargas
Hi Carlos. I meant it as constructive criticism. The point is that the error is unnecessary, easily avoidable, and there is no speed advantage. You are right that in this specific example the error is not very big (although you make the mistake of comparing it to the mean rather than the standard deviation). However, just because the error is small in that case does not mean it is for all series. Try, for example, this:
m=3;
n=100000;
x=randn(n,1); x(1)=1e100;
The problem is that the outlier gives rise to huge errors in the whole smoothed series, and not just within the window.
I forgot the rating for Aslak's example (1 star): he is comparing errors of precisions 1e-9 = 0.000000001 and 1e-13 ~ eps. Yes, 10,000 times greater, but for values with a mean of 1,000, which is 1,000,000,000,000 times greater than the larger error. That is a great little error, isn't it?
Yes, Aslak, you are right, for that reason I left commented the normal recursive way ('s' in your example).
The problem with cumsum is that the last values become incredibly large and, I guess, 'rounding' problems appear (eliminating the large mean, 1000, the error is reduced). But this is only one method, and the user can choose whichever he likes more.
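To illustrate the point about eliminating the large mean before the cumsum: a minimal NumPy sketch (not the submission's code; function names are illustrative) that subtracts the series mean first, which keeps the running sum small and shrinks the rounding error the large-mean case exposes:

```python
import numpy as np

def movmean_cumsum(x, m):
    """Trailing moving mean of window m via the cumsum trick."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    return (c[m:] - c[:-m]) / m

def movmean_cumsum_centered(x, m):
    """Same trick, but the series mean is removed before the cumsum
    and added back afterwards, so the partial sums stay small."""
    mu = np.mean(x)
    return movmean_cumsum(x - mu, m) + mu
```

On data like `randn(n,1)+1000`, the partial sums in the plain version grow to roughly `1000*n`, so each accumulation step rounds at that large magnitude; after mean removal they stay near `sqrt(n)`, and the error drops by several orders of magnitude.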
For example, now I prefer to use my new function NDNANFILTER here at the FEx.
Cheers!
Nice code, but... The cumsum trick to calculate moving averages can result in unnecessarily large errors under certain conditions:
* The mean is very different from zero and the series is very long.
Here's a small test that illustrates the problem by using 3 different approaches of calculating the moving average. It shows that the 'cumsum' method has errors that are more than 10000 times greater than the errors from the filter method. There is no real speed difference.
m=10;
n=300000;
x=randn(n,1)+1000; % think for example atmospheric pressure.
tic
s=nan(length(x)-m+1,1);
for ii=1:length(s)
    s(ii)=mean(x(ii+(0:m-1)));
end
slowtime=toc
tic;
c=[0;cumsum(x)];
c=(c(m+1:end)-c(1:end-m))/m;
cumsumtime=toc
cumsumerror=sqrt(mean((c-s).^2))
tic;
flt=ones(m,1)/m;
f=filter2(flt,x,'valid');
filtertime=toc
filtererror=sqrt(mean((f-s).^2))
slowtime =
9.4732
cumsumtime =
0.041549
cumsumerror =
2.6456e-009
filtertime =
0.033685
filtererror =
1.4151e-013
Well commented
Checks I/O numbers and errors
Vectorized
Quick
So: nice
Found a harmless bug in MOVING_AVERAGE, line 75: an extra comma in the warning! Ha!
R K: In fact, those are limitations of this moving average, but the problem with the edges is common in filtering theory. The author provides us with an idea; and you?
BTW: that GAP filling is what I was looking for. Thanks!
There are some problems with this routine. It doesn't work for matrices smaller than the desired window size, and it doesn't work in areas where a full window cannot be applied.
Used moving_average.
It works wonderfully. Thanks.
It does it Ian. moving_average2(x,0,n) smooths the rows, and moving_average2(x,m,0) the columns.
The author.
Thanks. I would recommend adding array-operation capabilities (i.e., smooth each column or row recursively).
Simple, optimized code, thank you!!
Works exactly as described. Great find.
Thanks for this nice set of functions. Works excellent.
Thank you very much, it works excellent.
Greetings
Igor
Hello, Igor. Yes, I hadn't noticed NANMEAN was a function of the Statistics toolbox. I have already changed it to NAN_MEAN. It is a vector average ignoring NaNs. It will be updated in about a day.
I am having trouble with the function nanmean, which is not defined. Can somebody help me, please?
Thanks in advance.
Igor Milicevic
Hello Sam! Thank you for your comments. I had no problem with the example; you should get the screenshot above (without the holes). Maybe it's your MATLAB release, but the code is really simple and should work on other releases.
About the F: to get a centered average around an element, the number of elements to average should be odd, so 2F+1, and in this way F is the half-width of the window (check the Description above). Similarly for the M,N in 2D.
Hi Carlos, thank you so much for your interesting MATLAB code. I have one doubt about the window size in the 1D moving average. The size of the window is '2F+1'; can you please tell me what 'F' stands for?
Also, I have tried to get the 2D moving average to work with your supplied example, but I couldn't get it to work.
Very useful. Siyi: I think the loops in these programs are faster than any box FILTER; it's easier to use and also works with NaNs!
It is functional, but I'd still prefer FILTER over for loops.
I can't believe it was that easy, thanks!