creating a loop for a specific computation

1 view (last 30 days)
sermet OGUTCU on 31 Jul 2021
Commented: dpb on 2 Aug 2021
tCOD=readtable(full_file_name,'FileType','text', ...
all_time_second= all_time(:,4)*3600+all_time(:,5)*60+all_time(:,6); % seconds
The attached data file can be read with the code above, and the "unique_seconds" variable is created correctly. When multiple data files are involved, I use the following code:
for j=1:num_of_files
tCOD{j}=readtable(full_file_name(j,:),'FileType','text', ...
all_time_second= all_time(:,4)*3600+all_time(:,5)*60+all_time(:,6); % seconds
For multiple files, all_time_second is an n x 1 array, where n equals the number of rows of tCOD.
for k=1:num_of_files
where a holds the row count of each tCOD array (378063 and 377840 for the two different data files). I need to apply the unique(all_time_second) command to each independent n x 1 array from tCOD, as follows:
For example, if two files are read, I need unique_seconds_1 and unique_seconds_2; if three files are read, unique_seconds_1 through unique_seconds_3; and so on. How can I create a loop (or something else) that creates the unique_seconds arrays according to the number of files, without explicitly writing each unique_seconds_i?
I attached one representative data file.
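One common way to avoid numbered variables like unique_seconds_i is a cell array indexed by file number. A minimal sketch, assuming the per-file seconds vectors have already been collected into a cell array (the variable names and toy data here are hypothetical, not from the original post):

```matlab
% Sketch: collect each file's unique times into one cell array instead of
% separately named variables (unique_seconds_1, unique_seconds_2, ...).
% all_time_second_per_file is assumed to hold each file's n x 1 seconds vector.
all_time_second_per_file = {[1;2;2;3], [5;5;6]};      % toy stand-in data
num_of_files = numel(all_time_second_per_file);
unique_seconds = cell(num_of_files,1);                % preallocate the cell array
for j = 1:num_of_files
    unique_seconds{j} = unique(all_time_second_per_file{j});  % one array per file
end
```

unique_seconds{1}, unique_seconds{2}, ... then play the role of unique_seconds_1, unique_seconds_2, ... without ever naming them individually.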
8 Comments
sermet OGUTCU on 1 Aug 2021
Edited: sermet OGUTCU on 1 Aug 2021
The purpose of creating the unique times (the unique_seconds array) is just to count how many unique (distinct) times there are in the data file (2880 for the single attached file). That way I know the number of independent time sets in the data file (there are a bunch of repeated time sets in it). The code above is purely intended to do that. When multiple files are read, I need to compute the number of independent time sets for each file (for example, 2880, 2880, 2500, etc.). If there is a more convenient way to do this, I don't have to use the code above.
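If only the counts are needed, the unique arrays never have to be stored at all. A sketch using cellfun, again assuming the per-file seconds vectors sit in a cell array (names and toy data are illustrative):

```matlab
% Sketch: count the distinct times per file directly.
all_time_second_per_file = {[1;2;2;3], [5;5;6]};      % toy stand-in data
nUnique = cellfun(@(v) numel(unique(v)), all_time_second_per_file);
% nUnique(j) is then the number of independent time sets in file j
```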



dpb on 1 Aug 2021
Edited: dpb on 2 Aug 2021
OK, just making sure... Although it may be interesting, I'm still not positive it's of all that much use: to collect data at similar times of day across days, where there may not be the same number of observations, you will still need join or similar. I'd think turning it into a timetable and using retime could be worth considering as well...
But anyway, just to answer the question as asked, and to quit trying to figure out your purpose... :)
tCOD=[];                                                % an empty placeholder
d=dir(fullfile(datdir,'COD_MatchWildCardString*.CLK')); % return list of wanted files
nFiles=numel(d);                                        % how many files found
nUT=zeros(nFiles,1);                                    % preallocate unique-times count array
for j=1:nFiles
  ffn=fullfile(d(j).folder,d(j).name);                  % get fully qualified file name
  nHdr=nHeaderLines(ffn);                               % find number of header lines to skip
  tC=readtable(ffn,'FileType','text', ...               % and read the file into a temporary table
               'NumHeaderLines',nHdr);
  % create useful variable names here -- you can fill in the rest; I'm not sure of the best choices
  tC.DateTime=datetime(tC{:,3:8});                      % create the datetime variable from [Y M D H M S]
  nUT(j)=numel(unique(tC.DateTime));                    % and count the unique timestamps in this file
  tCOD=[tCOD;tC];                                       % catenate the new file to the end (*)
  % whatever else you want to do on the individual file here...
end
% whatever else you want to do on the combined file here...
The helper function to get the number of header lines could be an internal function in the main m-file, or its own m-file if it might be useful elsewhere. Or, if the number is fixed and known a priori, you can dispense with it entirely; it adds generality when the count varies between files or groups of files.
If it is fixed and known to be so for a given group of files, it can be called just once instead of inside the loop.
function nHdr=nHeaderLines(ffn)
  % return number of lines in header of .CLK file -- looks for the specific
  % string "END OF HEADER" as the last line of the header in the file...
  % will fail if the string is not found...
  fid=fopen(ffn);                                 % open file to read
  nHdr=1;                                         % initialize counter
  while ~contains(fgetl(fid),'END OF HEADER')     % look for the end-of-header string
    nHdr=nHdr+1;                                  % count this header line
  end
  fclose(fid);                                    % done with the file
end
(*) NB: Catenating the files this way may slow things down since they're pretty big; if this turns out to be a serious problem, post back... It isn't the most efficient approach, but it's the easiest to code, and if you're only doing it once it may be good enough.
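One common way to sidestep the growing-array cost mentioned in the note above is to collect each file's table in a cell and concatenate once after the loop. A sketch reusing the names from the answer (d, nFiles, and the nHeaderLines helper are assumed to exist as defined there):

```matlab
% Sketch: avoid repeated catenation by collecting tables first.
tC = cell(nFiles,1);                   % one slot per file
for j = 1:nFiles
  ffn = fullfile(d(j).folder,d(j).name);
  tC{j} = readtable(ffn,'FileType','text','NumHeaderLines',nHeaderLines(ffn));
end
tCOD = vertcat(tC{:});                 % single concatenation at the end
```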
When you get the large file built, be sure to save it in a .mat file; then you won't have to rebuild it, and loading will be much quicker.
Again, you might want to consider a timetable here, depending upon just what your next step(s) are...
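The timetable suggestion could look something like this sketch, assuming the combined table tCOD with its DateTime variable from the answer above (the hourly grid and 'mean' aggregation are illustrative choices, not from the original post):

```matlab
% Sketch: key the table on the DateTime variable, then regularize with retime.
tt = table2timetable(tCOD,'RowTimes','DateTime');   % DateTime becomes the row times
ttHourly = retime(tt,'hourly','mean');              % e.g. hourly means of the numeric variables
```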
3 Comments
dpb on 2 Aug 2021
No problem... I also just noticed (and corrected a little bit ago) that I had left out the unique call inside the numel argument; without it, you get the height of the table, not the count intended.


More Answers (0)

