How can I convert a vector aa of size 1x262144 to a 256x256 matrix?
nora elfiky
11 Dec 2016
Hi everyone, I'm new here and I need help.
The size of aa is 1x262144; I want to convert it to a 256x256 matrix.
4 Comments
Walter Roberson
3 Jan 2020
In order to calculate the frequency of a signal, you need to know either the time between samples or else the sampling rate (which is just the inverse of the time between samples.) You can then proceed to fft(), and find the peak of that (you probably want to ignore the first bin, and you probably want to look at abs()). Then convert the bin location into frequency; the first example in the fft() documentation shows you how to do that.
You might also want to consider using spectrogram()
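The recipe above (fft, take abs, skip the DC bin, convert the peak bin to Hz) can be sketched in a few lines. This is a NumPy sketch; the function name and the toy 50 Hz test signal are mine, not from the thread:

```python
import numpy as np

def dominant_frequency(y, fs):
    """Estimate the dominant frequency of a sampled signal.

    Follows the recipe above: take the FFT, look at the magnitude,
    skip the first (DC) bin, and convert the peak bin index to Hz.
    """
    n = len(y)
    mag = np.abs(np.fft.rfft(y))
    k = 1 + np.argmax(mag[1:])   # ignore bin 0 (the DC component)
    return k * fs / n            # bin index -> frequency in Hz

# a 50 Hz sine sampled at 1000 Hz
fs = 1000
t = np.arange(fs) / fs
print(dominant_frequency(np.sin(2 * np.pi * 50 * t), fs))  # -> 50.0
```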
Walter Roberson
6 Jan 2020
In your posted code, fs would be the sampling frequency. The sampled values would be in y. You would proceed to fft(y) and then analyze the results along the lines I mentioned.
Note: your y might have multiple channels, which will show up as separate columns. Be careful, as in theory the different channels can have different peak frequencies.
Accepted Answer
Isabella Osetinsky-Tzidaki
11 Dec 2016
Edited: Isabella Osetinsky-Tzidaki
11 Dec 2016
% you cannot convert directly (262144 is 4 times 256*256, not equal to it),
% but you can build 4 matrices out of your "aa" vector,
% giving an "M" array:
L=256*256;
M=nan(256,256,4); % pre-allocate
for i=1:4
    v = aa(L*(i-1)+1 : L*i);
    M(:,:,i) = reshape(v, 256, 256);
end
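For readers coming from Python, the same splitting can be sketched with NumPy. Note that order='F' is needed to match MATLAB's column-major reshape; the arange stand-in for aa is mine:

```python
import numpy as np

aa = np.arange(262144)   # stand-in for the 1x262144 vector
L = 256 * 256
# stack the four 256x256 blocks along a third axis, as in the answer above;
# order='F' mimics MATLAB's column-major reshape
M = np.stack([aa[i*L:(i+1)*L].reshape((256, 256), order='F')
              for i in range(4)], axis=2)
print(M.shape)  # -> (256, 256, 4)
```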
37 Comments
honey
20 Jan 2020
[y,fs]=audioread('filename');
x1=y(1:262144);
x1=reshape(x1,1,262144);
Can you please explain what happens in the above code,
and also its significance?
Walter Roberson
20 Jan 2020
The first line reads the given audio file. The samples are stored in columns, one column per channel. The samples are stored in the variable named y. The sampling frequency is stored in the variable fs.
The next line extracts the first 262144 samples from y into a vector. If the file had more than that many samples in the left (or only) channel then it simply extracts that many from the first column. The output in x1 is a column.
The third line reshapes the column x1 into a row x1.
honey
24 Jan 2020
Edited: Walter Roberson
24 Jan 2020
Thank you so much.
Can you please explain this pseudo-random sequence?
%% UNCORRELATED PSEUDO RANDOM SEQUENCE GENERATOR
G=Key;
%Generation of first m-sequence using generator polynomial [45]
sd1 = [0 0 0 0 1]; % Initial state of shift register
PN1 = [];          % First m-sequence
for j=1:G
    PN1 = [PN1 sd1(5)];
    if sd1(1)==sd1(4)
        temp1 = 0;
    else
        temp1 = 1;
    end
    sd1(1) = sd1(2);
    sd1(2) = sd1(3);
    sd1(3) = sd1(4);
    sd1(4) = sd1(5);
    sd1(5) = temp1;
end
sd2 = [0 0 0 0 1]; % Initial state of shift register
PN2 = [];          % Second m-sequence
for j=1:G
    PN2 = [PN2 sd2(5)];
    if sd2(1)==sd2(2)
        temp1 = 0;
    else
        temp1 = 1;
    end
    if sd2(4)==temp1
        temp2 = 0;
    else
        temp2 = 1;
    end
    if sd2(5)==temp2
        temp3 = 0;
    else
        temp3 = 1;
    end
    sd2(1) = sd2(2);
    sd2(2) = sd2(3);
    sd2(3) = sd2(4);
    sd2(4) = sd2(5);
    sd2(5) = temp3;
end
Walter Roberson
24 Jan 2020
Something about generator polynomials. That is potentially related to https://www.mathworks.com/help/comm/ref/cyclpoly.html or https://www.mathworks.com/help/comm/ref/rsgenpoly.html or https://www.mathworks.com/help/comm/ref/bchgenpoly.html
But if I were to guess, I would say that it looks like it is generating a pseudo-random stream of bits to be XOR'd with the input stream. The reason for that could potentially be encryption, but it is more likely to have to do with increasing robustness to noise, because if you have an input signal that happens to be constant, you can run into detection glitches if the signal does not change state often enough.
I have seen generator polynomials used in PHY interfaces for gigabit ethernet -- again because forcing the signal to change state gets you better detection characteristics than a signal that might be constant.
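For illustration, the posted loops implement a 5-stage Fibonacci LFSR: output the last register bit, XOR the tapped stages to form the feedback, then shift. A generic Python sketch (the function name and the 0-based tap positions are my own choices, derived from the posted comparisons):

```python
def lfsr_msequence(taps, seed, n):
    """Generate n bits from a Fibonacci LFSR.

    Mirrors the posted MATLAB loop: output the last register bit,
    XOR the tapped stages for the feedback, then shift left.
    """
    sd = list(seed)
    out = []
    for _ in range(n):
        out.append(sd[-1])
        fb = 0
        for t in taps:
            fb ^= sd[t]     # the if/else pairs in the MATLAB code are XORs
        sd = sd[1:] + [fb]
    return out

# first m-sequence: feedback from stages 1 and 4 (0-based: 0 and 3),
# initial state [0 0 0 0 1], G = Key = 7 bits, as in the code above
PN1 = lfsr_msequence([0, 3], [0, 0, 0, 0, 1], 7)
print(PN1)  # -> [1, 0, 1, 0, 1, 1, 1]
```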
Walter Roberson
24 Jan 2020
If you are still working on the task of transmitting an image through an audio stream, then the answer is "It does not matter whether the image is binary or grayscale or color". However, for the same number of rows and columns, color images contain more information than grayscale or binary, and grayscale contain more information than binary, so for the same number of rows and columns of the image, color images would need the longest audio stream, and grayscale would need a shorter audio stream (by a factor of 3), and binary would need 1/8th of the audio stream that grayscale would need.
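The factor-of-3 and factor-of-8 figures follow from the usual bit depths; a tiny sketch, assuming the common conventions of 24-bit colour, 8-bit grayscale, and 1-bit binary:

```python
# bits needed per pixel under common storage conventions
# (assumption: 24-bit RGB colour, 8-bit grayscale, 1-bit binary)
color_bits, gray_bits, binary_bits = 24, 8, 1
print(color_bits / gray_bits)    # -> 3.0  (colour needs 3x the stream of grayscale)
print(gray_bits / binary_bits)   # -> 8.0  (grayscale needs 8x the stream of binary)
```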
honey
26 Jan 2020
Edited: Walter Roberson
26 Jan 2020
[L,H,V,D] = dwt2(x,'haar','sym');
[CL,CH,CV,CD] = dwt2(H,'haar','sym');
[CA1,CA2,CA3,CA4] = dwt2(CH,'haar','sym');
can you please explain how this works
Walter Roberson
26 Jan 2020
Roughly speaking:
It has been found that for many situations, roughly 80% of the variation is due to 20% of the causes.
For example, suppose you were doing an fft() analysis of a sound. It would not be uncommon to find that 80% of the signal magnitude could be accounted for by knowing the coefficients for only 20% of the locations -- that is, if you were to zero out 80% of the fft coefficients and ifft(), the result would be "80%" similar to the original -- roughly 5:1 compression.
Furthermore, it turns out that for a lot of situations, of the remaining 20% of the detail, 80% can be accounted for by 20% of the remaining coefficients. So if you were to analyze the 20% of the signal that remained after the first round of compression, you could compress that by about 5:1 too, leaving about 20% of the 20% unaccounted for. For example, if you started with a signal of length 100, and the first round found 20 coefficients that accounted for 80% of the signal, and you then took the remaining 20% of the signal, then another 16 coefficients would account for 80% of that 20%, so 20+16 coefficients would account for 1 - 0.2*0.2 = 96% of the signal. The same 80/20 rule can be applied multiple times, each time accounting for about 4/5 of the remaining detail.
That is the sort of approach this code is taking: it does a dwt2 haar transform and extracts four bands of coefficients. It then takes one of the four bands, the Horizontal detail, and analyzes it, getting out four bands of coefficients of that. It then analyzes the details of the horizontal detail -- because you can proceed to zero out most of [CA1,CA2,CA3,CA4] and store that in reduced form, and later expand those to the original size and use them to reconstruct CH, and use that to reconstruct x. This would get you a more accurate reconstruction than if you were just to do a single dwt2 and zero out a bunch of coefficients to do compression.
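The 80/20 idea can be checked numerically. A NumPy sketch (the toy two-tone signal and the FFT thresholding are my own stand-ins for the dwt2 pipeline): keep only the 20% largest coefficients, reconstruct, and see how much of the signal survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n) / n
# a signal dominated by a few components, plus mild noise
x = np.sin(2*np.pi*5*t) + 0.3*np.sin(2*np.pi*40*t) + 0.05*rng.standard_normal(n)

X = np.fft.fft(x)
keep = int(0.2 * n)                   # keep the 20% largest coefficients
idx = np.argsort(np.abs(X))[:-keep]   # indices of the smallest 80%
X[idx] = 0                            # zero them out
x_hat = np.real(np.fft.ifft(X))

# fraction of the signal accounted for after 5:1 compression
retained = 1 - np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(round(retained, 3))   # most of the signal survives
```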
honey
27 Jan 2020
Edited: Walter Roberson
27 Jan 2020
thank you
but
[L,H,V,D] = dwt2(x,'haar','sym');
what is the significance of symmetric extension? x is a speech signal
And also, why do we convert a matrix to a cell array in image processing?
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
newCA2=mat2cell(CA2,a,b);
d=zeros(64,64);
nCA2=mat2cell(d,a,b);
Walter Roberson
27 Jan 2020
I do not know why sym and haar are used.
sym appears to refer to "half-point symmetry" and is described at https://books.google.ca/books?id=Z76N_Ab5pp8C&pg=PA273&lpg=PA273&dq=Symmetric+extension+(half+point)&source=bl&ots=qWVKcu3y8-&sig=ACfU3U2GiZqB5FES0OPGzJ07L-e1qkEk_Q&hl=en&sa=X&ved=2ahUKEwjEu9jD8aLnAhXSup4KHaacBoIQ6AEwAXoECAsQAQ#v=onepage&q=Symmetric%20extension%20(half%20point)&f=false
haar is commonly used with audio because apparently it has good time localization.
I do not know why that cell stuff is being done. The code is dividing a 64 x 64 numeric array into cells of 4 x 4, but I do not know why it would want to do that.
honey
27 Jan 2020
thank you
What is the significance of converting from a matrix to a cell array, in general, in image processing?
Walter Roberson
27 Jan 2020
The main significance is that many people do not know how to write code for processing blocks of an image at a time, so they think they have to divide the image up into cell arrays in order to process it.
The second main significance is that the code needed to process blocks of an image can sometimes end up obscuring the algorithm being used, so sometimes it is clearer to convert into blocks and process the blocks using cellfun() instead of writing loops. Notice that I said "clearer", not "more efficient" or "better".
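As an aside, the mat2cell-plus-loop pattern in the posted code has a direct array-based analogue. A NumPy sketch of splitting a 64x64 array into a 16x16 grid of 4x4 blocks and processing them all at once (the per-block mean is just an example operation, not from the thread):

```python
import numpy as np

A = np.arange(64 * 64, dtype=float).reshape(64, 64)

# split into a 16x16 grid of 4x4 blocks, analogous to mat2cell(A, a, b)
blocks = A.reshape(16, 4, 16, 4).swapaxes(1, 2)   # shape (16, 16, 4, 4)

# process every block at once, e.g. take the per-block mean
block_means = blocks.mean(axis=(2, 3))
print(block_means.shape)  # -> (16, 16)
```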
honey
3 Feb 2020
%% WATERMARK BIT ENCODING ALGORITHM IN SELECTED/MID-BAND COEFFICIENTS OF DCT TRANSFORMED BLOCKS ONLY
%% FOR BLOCK CA2
for p=1:16
for q=1:16
if w1(p,q)==0
nCA2{p,q}(1,3)=nCA2{p,q}(1,3)+alpha*PN1(1);
nCA2{p,q}(1,4)=nCA2{p,q}(1,4)+alpha*PN1(2);
nCA2{p,q}(2,2)=nCA2{p,q}(2,2)+alpha*PN1(3);
nCA2{p,q}(2,3)=nCA2{p,q}(2,3)+alpha*PN1(4);
nCA2{p,q}(3,1)=nCA2{p,q}(3,1)+alpha*PN1(5);
nCA2{p,q}(3,2)=nCA2{p,q}(3,2)+alpha*PN1(6);
nCA2{p,q}(4,1)=nCA2{p,q}(4,1)+alpha*PN1(7);
else
nCA2{p,q}(1,3)=nCA2{p,q}(1,3)+alpha*PN2(1);
nCA2{p,q}(1,4)=nCA2{p,q}(1,4)+alpha*PN2(2);
nCA2{p,q}(2,2)=nCA2{p,q}(2,2)+alpha*PN2(3);
nCA2{p,q}(2,3)=nCA2{p,q}(2,3)+alpha*PN2(4);
nCA2{p,q}(3,1)=nCA2{p,q}(3,1)+alpha*PN2(5);
nCA2{p,q}(3,2)=nCA2{p,q}(3,2)+alpha*PN2(6);
nCA2{p,q}(4,1)=nCA2{p,q}(4,1)+alpha*PN2(7);
end
end
end
Will you please explain why we take (1,3), (1,4), ..., (4,1), and what is its significance?
Here PN1 and PN2 are pseudo-noise sequences.
Walter Roberson
3 Feb 2020
Will you please explain why we take (1,3), (1,4), ..., (4,1), and what is its significance?
I do not know.
Perhaps someone studied which locations in a block most reliably produced distinguishable changes that survive idwt -> dwt -> extract
Walter Roberson
4 Feb 2020
The documentation page is https://www.mathworks.com/help/comm/ref/awgn.html . You could ask specific questions about that.
honey
6 Feb 2020
OK sir, I would like to know about the wavread, auread and audioread commands, and which one is the best among the three.
Walter Roberson
6 Feb 2020
auread() used to exist, and it was a function to read NeXT/SUN (.au) sound files.
wavread() used to exist, and it was a function to read Microsoft WAVE (.wav) sound files.
Both auread() and wavread() have been removed from MATLAB, with audioread() now handling both kinds of files.
You should not use auread() or wavread() unless you need to use MATLAB R2012a or earlier; for any later release you should use audioread() or sometimes dsp.AudioFileReader https://www.mathworks.com/help/dsp/ref/dsp.audiofilereader-system-object.html
honey
7 Feb 2020
thank you sir
Was wavwrite replaced by audiowrite in updated versions?
And will you please explain each of the disturbances below?
ok_noise =0; % select 1 for adding the particular disturbance to host audio
ok_filtering=0;
ok_cropping=0;
ok_resampling=0;
ok_requantization=0;
if ok_noise
% Additional noise
y1 = awgn(y1,10,'measured');
end
if ok_filtering
% Filtering
myfilter = ones(512,1);
myfilter = myfilter/sum(myfilter);
y1 = filter(myfilter,1,y1);
end
if ok_cropping
% Cropping
Lmin = round(L/11);
Lmax = round(5*L/11);
y1(1:1,1:Lmin) = 0;
y1(1:1,Lmax:end) = 0;
end
if ok_resampling
% Resampling
Fs_0 = Fs;
Fs_1 = round(Fs/9);
y1 = resample(y1,Fs_1,Fs_0);
y1 = resample(y1,Fs_0,Fs_1);
if length(y1)<L
y1(end+L) = 0;
end
if length(y1)>L
y1 = y1(1:L);
end
end
if ok_requantization
% Requantization
bits_new = 8;
wavwrite(y1,Fs,bits_new,'requantized_sound.wav');
y1 = audioread('requantized_sound.wav');
end
Walter Roberson
7 Feb 2020
Yes wavwrite was replaced by audiowrite.
That code only has one noise, which is standard white Gaussian noise at a signal-to-noise ratio of 10 dB.
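For readers without the Communications Toolbox, the effect of awgn(y1,10,'measured') can be sketched in NumPy. The helper name is mine; 'measured' means the signal power is estimated from the data before scaling the noise:

```python
import numpy as np

def awgn_measured(y, snr_db, rng=np.random.default_rng(0)):
    """Add white Gaussian noise at a given SNR in dB, measuring the
    signal power first, like MATLAB's awgn(y, snr_db, 'measured')."""
    p_signal = np.mean(y ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return y + rng.normal(0.0, np.sqrt(p_noise), size=y.shape)

y = np.sin(2 * np.pi * np.arange(8000) / 80)   # a toy "audio" signal
y_noisy = awgn_measured(y, 10)

# check the SNR that actually resulted
measured = 10 * np.log10(np.mean(y**2) / np.mean((y_noisy - y)**2))
print(round(measured, 1))   # close to 10 dB
```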
honey
7 Feb 2020
I understand that only awgn is considered, but I wanted to know about the remaining disturbances too and how they work. Will you please explain those?
Walter Roberson
7 Feb 2020
There are no other noises in that code.
If you are asking what other possible kinds of noises there are in the world, then that would probably take a fair amount of research and time to answer.
honey
7 Feb 2020
So those are just included for completeness; is that correct?
I didn't understand what 'r--*' is in the below code
plot(qualityFactor,ssimValues,'r--*')
can you please explain it
Walter Roberson
7 Feb 2020
I suspect that you might be confusing the code for filtering or cropping or requantization as being "noise".
r--* as a plot request says that the plot should be drawn in red ('r') and that it should use a dashed line ('--') and that each point should have a marker that is asterisk shaped ('*'). You can find more information in the documentation under the term "linespec"
honey
10 Feb 2020
Is there any difference between reshaping and amplitude modulation, when both are applied to a signal?
Walter Roberson
10 Feb 2020
Yes. In MATLAB, reshaping refers to taking the memory used to store an array and telling MATLAB to reinterpret that memory as a different set of dimensions. For example
reshape(1:12, 2, 3, 2)
uses the sequential memory used to store 1:12 and reinterprets it as 2 x 3 x 2. The values are not changed at all, and the total number of elements must be exactly the same afterwards, but now it is no longer 1 x 12 and is 2 x 3 x 2 instead.
Amplitude modulation is completely different, and refers to taking a carrier wave and imposing a signal on to it by modifying the amplitude of the carrier wave. The output is a higher frequency than the signal being carried, and the output is a vector.
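Both points can be checked numerically. A NumPy sketch (the 5 Hz message and 500 Hz carrier are arbitrary choices of mine; order='F' matches MATLAB's column-major reshape):

```python
import numpy as np

# Reshaping: same values, new dimensions
v = np.arange(1, 13)
m = v.reshape((2, 3, 2), order='F')   # MATLAB: reshape(1:12, 2, 3, 2)
print(m.shape, np.array_equal(np.sort(m.ravel()), v))  # (2, 3, 2) True

# Amplitude modulation: the message rides on a higher-frequency carrier,
# so the sample values themselves change
fs = 8000
t = np.arange(fs) / fs
message = np.sin(2 * np.pi * 5 * t)      # 5 Hz message
carrier = np.cos(2 * np.pi * 500 * t)    # 500 Hz carrier
am = (1 + 0.5 * message) * carrier       # standard AM, modulation index 0.5
print(am.shape)  # still a vector, but with entirely new values
```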
honey
16 Mar 2020
Can either of the DCT and DWT techniques be used for speech modulation for image watermarking?
Walter Roberson
16 Mar 2020
Can either of the DCT and DWT techniques be used for speech modulation for image watermarking?
Yes. For the purpose of image watermarking, it does not matter how you create the stream of data that you use to watermark. If you want to convert a famous speech into Morse Code and use that, go ahead. If you want to use a picture of a baby goat, go ahead. If you want to use the historical record of high and low temperatures for your home town, go ahead. If you want to use dct or dwt to compress some audio and use that, go ahead.
honey
16 Mar 2020
Edited: Walter Roberson
16 Mar 2020
clc
clear all
close all
warning off
%% audio input
ok_classical=0; % 1 selected 0 not-selected
ok_jazz=0;
ok_pop=1;
ok_looney=0;
if ok_classical
[y,Fs] = audioread('classical.wav');
end
if ok_jazz
[y,Fs] = audioread('jazz.wav');
end
if ok_pop
[y,Fs] = audioread('pop.wav');
end
if ok_looney
[y,Fs] = audioread('loopyMusic.wav');
end
x1=y(1:262144);
x1=reshape(x1,1,262144);
x=reshape(x1,512,512);
Key=7;
[L1, L2]=size(x1);
%% image input
img = imread('CW32.jpg'); %Get the input image
I = rgb2gray(img); % Convert to grayscale
original_img=im2bw(I); % Convert to a binary image
w1=im2bw(I); % Binary watermark to embed
%% 3 level of DWT decomposition with HAAR wavelet
[L,H,V,D]=dwt2(x,'haar','sym');
[CL,CH,CV,CD]=dwt2(H,'haar','sym');
[CA1,CA2,CA3,CA4]=dwt2(CH,'haar','sym');
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
%% DCT of sub-band block CH->CA2
newCA2=mat2cell(CA2,a,b);
d=zeros(64,64);
nCA2=mat2cell(d,a,b);
for i=1:16
for j=1:16
nCA2{i,j}=dct2(newCA2{i,j});
end
end
%% DCT of sub-band block CV->CV2
[CV1,CV2,CV3,CV4]=dwt2(CV,'haar','sym');
newCV2=mat2cell(CV2,a,b);
d=zeros(64,64);
nCV2=mat2cell(d,a,b);
for i=1:16
for j=1:16
nCV2{i,j}=dct2(newCV2{i,j});
end
end
%% DCT of sub-band block VH->VH2
[VL,VH,VV,VD]=dwt2(V,'haar','sym');
[VH1,VH2,VH3,VH4]=dwt2(VH,'haar','sym');
newVH2=mat2cell(VH2,a,b);
d=zeros(64,64);
nVH2=mat2cell(d,a,b);
for i=1:16
for j=1:16
nVH2{i,j}=dct2(newVH2{i,j});
end
end
%% dct OF SUB-band VV->VV2
[VV1,VV2,VV3,VV4]=dwt2(VV,'haar','sym');
newVV2=mat2cell(VV2,a,b);
d=zeros(64,64);
nVV2=mat2cell(d,a,b);
for i=1:16
for j=1:16
nVV2{i,j}=dct2(newVV2{i,j});
end
end
%% UNCORRELATED PSEUDO RANDOM SEQUENCE GENERATOR
G=Key;
%Generation of first m-sequence using generator polynomial [45]
sd1 =[0 0 0 0 1]; % Initial state of Shift register
PN1=[]; % First m-sequence
for j=1:G
PN1=[PN1 sd1(5)];
if sd1(1)==sd1(4)
temp1=0;
else temp1=1;
end
sd1(1)=sd1(2);
sd1(2)=sd1(3);
sd1(3)=sd1(4);
sd1(4)=sd1(5);
sd1(5)=temp1;
end
sd2 =[0 1 0 0 1]; % Initial state of Shift register
PN2=[]; % Second m-sequence
for j=1:G
PN2=[PN2 sd2(5)];
if sd2(1)==sd2(2)
temp1=0;
else temp1=1;
end
if sd2(4)==temp1
temp2=0;
else temp2=1;
end
if sd2(5)==temp2
temp3=0;
else temp3=1;
end
sd2(1)=sd2(2);
sd2(2)=sd2(3);
sd2(3)=sd2(4);
sd2(4)=sd2(5);
sd2(5)=temp3;
end
%% WATERMARKING COEFFICIENT OR WEIGHT ALPHA
alpha=0.1;
%% WATERMARK BIT ENCODING ALGORITHM IN SELECTED/MID-BAND COEFFICIENTS OF DCT TRANSFORMED BLOCKS ONLY
%% FOR BLOCK CA2
for p=1:16
for q=1:16
if w1(p,q)==0
nCA2{p,q}(1,3)=nCA2{p,q}(1,3)+alpha*PN1(1);
nCA2{p,q}(1,4)=nCA2{p,q}(1,4)+alpha*PN1(2);
nCA2{p,q}(2,2)=nCA2{p,q}(2,2)+alpha*PN1(3);
nCA2{p,q}(2,3)=nCA2{p,q}(2,3)+alpha*PN1(4);
nCA2{p,q}(3,1)=nCA2{p,q}(3,1)+alpha*PN1(5);
nCA2{p,q}(3,2)=nCA2{p,q}(3,2)+alpha*PN1(6);
nCA2{p,q}(4,1)=nCA2{p,q}(4,1)+alpha*PN1(7);
else
nCA2{p,q}(1,3)=nCA2{p,q}(1,3)+alpha*PN2(1);
nCA2{p,q}(1,4)=nCA2{p,q}(1,4)+alpha*PN2(2);
nCA2{p,q}(2,2)=nCA2{p,q}(2,2)+alpha*PN2(3);
nCA2{p,q}(2,3)=nCA2{p,q}(2,3)+alpha*PN2(4);
nCA2{p,q}(3,1)=nCA2{p,q}(3,1)+alpha*PN2(5);
nCA2{p,q}(3,2)=nCA2{p,q}(3,2)+alpha*PN2(6);
nCA2{p,q}(4,1)=nCA2{p,q}(4,1)+alpha*PN2(7);
end
end
end
for i=1:16
for j=1:16
newCA2{i,j}=idct2(nCA2{i,j});
end
end
CA2=cell2mat(newCA2);
%% FOR BLOCK CV2
for p=1:16
for q=1:16
if w1(p,16+q)==0
nCV2{p,q}(1,3)=nCV2{p,q}(1,3)+alpha*PN1(1);
nCV2{p,q}(1,4)=nCV2{p,q}(1,4)+alpha*PN1(2);
nCV2{p,q}(2,2)=nCV2{p,q}(2,2)+alpha*PN1(3);
nCV2{p,q}(2,3)=nCV2{p,q}(2,3)+alpha*PN1(4);
nCV2{p,q}(3,1)=nCV2{p,q}(3,1)+alpha*PN1(5);
nCV2{p,q}(3,2)=nCV2{p,q}(3,2)+alpha*PN1(6);
nCV2{p,q}(4,1)=nCV2{p,q}(4,1)+alpha*PN1(7);
else
nCV2{p,q}(1,3)=nCV2{p,q}(1,3)+alpha*PN2(1);
nCV2{p,q}(1,4)=nCV2{p,q}(1,4)+alpha*PN2(2);
nCV2{p,q}(2,2)=nCV2{p,q}(2,2)+alpha*PN2(3);
nCV2{p,q}(2,3)=nCV2{p,q}(2,3)+alpha*PN2(4);
nCV2{p,q}(3,1)=nCV2{p,q}(3,1)+alpha*PN2(5);
nCV2{p,q}(3,2)=nCV2{p,q}(3,2)+alpha*PN2(6);
nCV2{p,q}(4,1)=nCV2{p,q}(4,1)+alpha*PN2(7);
end
end
end
for i=1:16
for j=1:16
newCV2{i,j}=idct2(nCV2{i,j});
end
end
CV2=cell2mat(newCV2);
%% FOR BLOCK VH2
for p=1:16
for q=1:16
if w1(p+16,q)==0
nVH2{p,q}(1,3)=nVH2{p,q}(1,3)+alpha*PN1(1);
nVH2{p,q}(1,4)=nVH2{p,q}(1,4)+alpha*PN1(2);
nVH2{p,q}(2,2)=nVH2{p,q}(2,2)+alpha*PN1(3);
nVH2{p,q}(2,3)=nVH2{p,q}(2,3)+alpha*PN1(4);
nVH2{p,q}(3,1)=nVH2{p,q}(3,1)+alpha*PN1(5);
nVH2{p,q}(3,2)=nVH2{p,q}(3,2)+alpha*PN1(6);
nVH2{p,q}(4,1)=nVH2{p,q}(4,1)+alpha*PN1(7);
else
nVH2{p,q}(1,3)=nVH2{p,q}(1,3)+alpha*PN2(1);
nVH2{p,q}(1,4)=nVH2{p,q}(1,4)+alpha*PN2(2);
nVH2{p,q}(2,2)=nVH2{p,q}(2,2)+alpha*PN2(3);
nVH2{p,q}(2,3)=nVH2{p,q}(2,3)+alpha*PN2(4);
nVH2{p,q}(3,1)=nVH2{p,q}(3,1)+alpha*PN2(5);
nVH2{p,q}(3,2)=nVH2{p,q}(3,2)+alpha*PN2(6);
nVH2{p,q}(4,1)=nVH2{p,q}(4,1)+alpha*PN2(7);
end
end
end
for i=1:16
for j=1:16
newVH2{i,j}=idct2(nVH2{i,j});
end
end
VH2=cell2mat(newVH2);
%% FOR BLOCK VV2
for p=1:16
for q=1:16
if w1(p+16,q+16)==0
nVV2{p,q}(1,3)=nVV2{p,q}(1,3)+alpha*PN1(1);
nVV2{p,q}(1,4)=nVV2{p,q}(1,4)+alpha*PN1(2);
nVV2{p,q}(2,2)=nVV2{p,q}(2,2)+alpha*PN1(3);
nVV2{p,q}(2,3)=nVV2{p,q}(2,3)+alpha*PN1(4);
nVV2{p,q}(3,1)=nVV2{p,q}(3,1)+alpha*PN1(5);
nVV2{p,q}(3,2)=nVV2{p,q}(3,2)+alpha*PN1(6);
nVV2{p,q}(4,1)=nVV2{p,q}(4,1)+alpha*PN1(7);
else
nVV2{p,q}(1,3)=nVV2{p,q}(1,3)+alpha*PN2(1);
nVV2{p,q}(1,4)=nVV2{p,q}(1,4)+alpha*PN2(2);
nVV2{p,q}(2,2)=nVV2{p,q}(2,2)+alpha*PN2(3);
nVV2{p,q}(2,3)=nVV2{p,q}(2,3)+alpha*PN2(4);
nVV2{p,q}(3,1)=nVV2{p,q}(3,1)+alpha*PN2(5);
nVV2{p,q}(3,2)=nVV2{p,q}(3,2)+alpha*PN2(6);
nVV2{p,q}(4,1)=nVV2{p,q}(4,1)+alpha*PN2(7);
end
end
end
for i=1:16
for j=1:16
newVV2{i,j}=idct2(nVV2{i,j});
end
end
VV2=cell2mat(newVV2);
%% IDWT AND IDCT TO GET MODIFIED COEFFICIENTS AND FORM THE WATERMARKED AUDIO
CH=idwt2(CA1,CA2,CA3,CA4,'haar','sym');
CV=idwt2(CV1,CV2,CV3,CV4,'haar','sym');
H=idwt2(CL,CH,CV,CD,'haar','sym');
VH=idwt2(VH1,VH2,VH3,VH4,'haar','sym');
VV=idwt2(VV1,VV2,VV3,VV4,'haar','sym');
V=idwt2(VL,VH,VV,VD,'haar','sym');
newx=idwt2(L,H,V,D,'haar','sym');
y1=reshape(newx,1,512*512);
SNR=snr(y1,x1)
y2=reshape(newx,1,512*512);
L=length(y1);
%% DISTURBANCE ADDED TO THE WATERMARKED AUDIO
ok_noise =0; % select 1 for adding the particular disturbance to host audio
if ok_noise
% Additional noise
y1 = awgn(y1,10,'measured');
end
%% WATERMARK EXTRACTION ALGORITHM
newx=reshape(y1,512,512);
[A,B,C,D]=dwt2(newx,'haar','sym');
[B1,B2,B3,B4]=dwt2(B,'haar','sym');
[B21,B22,B23,B24]=dwt2(B2,'haar','sym');
[B31,B32,B33,B34]=dwt2(B3,'haar','sym');
[C1,C2,C3,C4]=dwt2(C,'haar','sym');
[C21,C22,C23,C24]=dwt2(C2,'haar','sym');
[C31,C32,C33,C34]=dwt2(C3,'haar','sym');
%% DCT OF SUB-BANDS
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
newB22=mat2cell(B22,a,b);
d=zeros(64,64);
nB22=mat2cell(d,a,b);
for i=1:16
for j=1:16
nB22{i,j}=dct2(newB22{i,j});
end
end
%% EXTRACTION OF WATERMARK BIT BY COMPARISON OF CORRELATION
for p=1:16
for q=1:16
if corr([nB22{p,q}(1,3) nB22{p,q}(1,4) nB22{p,q}(2,2) nB22{p,q}(2,3) nB22{p,q}(3,1) nB22{p,q}(3,2) nB22{p,q}(4,1)]',PN1(1:7)')>=corr([nB22{p,q}(1,3) nB22{p,q}(1,4) nB22{p,q}(2,2) nB22{p,q}(2,3) nB22{p,q}(3,1) nB22{p,q}(3,2) nB22{p,q}(4,1)]',PN2(1:7)')
w2(p,q)=0;
else
w2(p,q)=1;
end
end
end
%% DCT OF SUB-BANDS
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
newB32=mat2cell(B32,a,b);
d=zeros(64,64);
nB32=mat2cell(d,a,b);
for i=1:16
for j=1:16
nB32{i,j}=dct2(newB32{i,j});
end
end
%% EXTRACTION OF WATERMARK BIT BY COMPARISON OF CORRELATION
for p=1:16
for q=1:16
if corr([nB32{p,q}(1,3) nB32{p,q}(1,4) nB32{p,q}(2,2) nB32{p,q}(2,3) nB32{p,q}(3,1) nB32{p,q}(3,2) nB32{p,q}(4,1)]',PN1(1:7)')>=corr([nB32{p,q}(1,3) nB32{p,q}(1,4) nB32{p,q}(2,2) nB32{p,q}(2,3) nB32{p,q}(3,1) nB32{p,q}(3,2) nB32{p,q}(4,1)]',PN2(1:7)')
w2(p,q+16)=0;
else
w2(p,q+16)=1;
end
end
end
%% DCT OF SUB-BAND
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
newC22=mat2cell(C22,a,b);
d=zeros(64,64);
nC22=mat2cell(d,a,b);
for i=1:16
for j=1:16
nC22{i,j}=dct2(newC22{i,j});
end
end
%% EXTRACTION OF WATERMARK BIT BY COMPARISON OF CORRELATION
for p=1:16
for q=1:16
if corr([nC22{p,q}(1,3) nC22{p,q}(1,4) nC22{p,q}(2,2) nC22{p,q}(2,3) nC22{p,q}(3,1) nC22{p,q}(3,2) nC22{p,q}(4,1)]',PN1(1:7)')>=corr([nC22{p,q}(1,3) nC22{p,q}(1,4) nC22{p,q}(2,2) nC22{p,q}(2,3) nC22{p,q}(3,1) nC22{p,q}(3,2) nC22{p,q}(4,1)]',PN2(1:7)')
w2(p+16,q)=0;
else
w2(p+16,q)=1;
end
end
end
%% DCT OF SUB-BAND
for i=1:16
a(i)=4;
end
for i=1:16
b(i)=4;
end
newC32=mat2cell(C32,a,b);
d=zeros(64,64);
nC32=mat2cell(d,a,b);
for i=1:16
for j=1:16
nC32{i,j}=dct2(newC32{i,j});
end
end
%% EXTRACTION OF WATERMARK BIT BY COMPARISON OF CORRELATION
for p=1:16
for q=1:16
if corr([nC32{p,q}(1,3) nC32{p,q}(1,4) nC32{p,q}(2,2) nC32{p,q}(2,3) nC32{p,q}(3,1) nC32{p,q}(3,2) nC32{p,q}(4,1)]',PN1(1:7)')>=corr([nC32{p,q}(1,3) nC32{p,q}(1,4) nC32{p,q}(2,2) nC32{p,q}(2,3) nC32{p,q}(3,1) nC32{p,q}(3,2) nC32{p,q}(4,1)]',PN2(1:7)')
w2(p+16,q+16)=0;
else
w2(p+16,q+16)=1;
end
end
end
sbr=w2;
figure
imshow(logical(w2))
%% Quality check
q=max(imabsdiff(y2,reshape(x,1,512*512)))
BER=biterr(original_img,sbr)
ssimval = ssim(uint8(sbr),uint8(original_img))
ps=psnr(uint8(original_img),uint8(sbr)) % Must be greater than 35 dB
ssimValues = zeros(1,10);
qualityFactor = 10:10:100;
for i = 1:10
imwrite(I,'sbr.jpg','jpg','quality',qualityFactor(i));
ssimValues(i) = ssim(imread('sbr.jpg'),I);
end
figure
plot(qualityFactor,ssimValues,'r--*')
xlabel(' Compression Quality Factor');
ylabel(' SSIM value');
%% END %%
Can you please explain why, in this code, it is necessary to use both DCT and DWT, and also why only Haar is used in the DWT?
Walter Roberson
16 Mar 2020
can you please explain why, in this code, it is necessary to use both DCT and DWT
You should be asking the author of the code questions like that.
why only HAAR in DWT
'haar' has good time localization, able to deal with sudden changes in value, which makes it more suitable for use in audio.