Read audio data from a standard audio device in real time (32-bit Windows operating systems only)
The From Wave Device block is still supported but is likely to be obsoleted in a future release. We strongly recommend replacing this block with the From Audio Device block.
The From Wave Device block reads audio data from a standard Windows® audio device in real time. It is compatible with most popular Windows hardware, including Sound Blaster cards. (Models that contain both this block and the To Wave Device block require a duplex-capable sound card.)
The Use default audio device parameter allows the block to detect and use the system's default audio hardware. This option should be selected on systems that have a single sound device installed, or when the default sound device on a multiple-device system is the desired source. In cases when the default sound device is not the desired input source, clear Use default audio device, and select the desired device in the Audio device menu parameter.
When the audio source contains two channels (stereo), the Stereo check box should be selected. When the audio source contains a single channel (mono), the Stereo check box should be cleared. For stereo input, the block's output is an M-by-2 matrix containing one frame (M consecutive samples) of audio data from each of the two channels. For mono input, the block's output is an M-by-1 matrix containing one frame (M consecutive samples) of audio data from the mono input. The frame size, M, is specified by the Samples per frame parameter. For M=1, the output is sample based; otherwise, the output is frame based.
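The output dimensions described above can be illustrated with a short NumPy sketch. The data here is hypothetical (the block itself produces its output inside Simulink); the sketch only shows how a stream of samples is grouped into M-by-2 (stereo) or M-by-1 (mono) frames:

```python
import numpy as np

M = 8  # samples per frame (the Samples per frame parameter)

# Hypothetical interleaved stereo stream, 3 frames long: one column per channel.
stereo_stream = np.arange(3 * M * 2, dtype=np.float64).reshape(-1, 2)

# Each block output is one M-by-2 frame of stereo data.
first_stereo_frame = stereo_stream[:M, :]
print(first_stereo_frame.shape)   # (8, 2) -> M-by-2 for stereo input

# For a mono source, the output is an M-by-1 matrix instead.
mono_stream = np.arange(3 * M, dtype=np.float64).reshape(-1, 1)
first_mono_frame = mono_stream[:M, :]
print(first_mono_frame.shape)     # (8, 1) -> M-by-1 for mono input
```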
The audio data is processed in uncompressed pulse code modulation (PCM) format, and should typically be sampled at one of the standard Windows audio device rates: 8000, 11025, 22050, or 44100 Hz. You can select one of these rates from the Sample rate parameter. To specify a different rate, select the User-defined option and enter a value in the User-defined sample rate parameter.
The Sample width (bits) parameter specifies the number of bits used to represent the signal samples read by the audio device. The following settings are available:
8 — allocates 8 bits to each sample, allowing a resolution of 256 levels
16 — allocates 16 bits to each sample, allowing a resolution of 65536 levels
24 — allocates 24 bits to each sample, allowing a resolution of 16777216 levels (only for use with 24-bit audio devices)
Higher sample width settings require more memory but yield better fidelity. The output from the block is independent of the Sample width (bits) setting. The output data type is determined by the Data type parameter setting.
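The resolution figures above follow directly from the sample width: an n-bit sample can take 2^n distinct levels. A one-line check:

```python
# Number of representable levels for each Sample width (bits) setting.
for bits in (8, 16, 24):
    levels = 2 ** bits
    print(f"{bits}-bit samples -> {levels} levels")
# 8-bit -> 256, 16-bit -> 65536, 24-bit -> 16777216
```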
Since the audio device accepts real-time audio input, Simulink® software must read a continuous stream of data from the device throughout the simulation. Delays in reading data from the audio hardware can result in hardware errors or distortion of the signal. This means that the From Wave Device block must read data from the audio hardware as quickly as the hardware itself acquires the signal. However, the block often cannot match the throughput rate of the audio hardware, especially when the simulation is running from within Simulink rather than as generated code. (Simulink operations are generally slower than comparable hardware operations, and execution speed routinely varies during the simulation as the host operating system services other processes.) The block must therefore rely on a buffering strategy to ensure that signal data can be read on schedule without losing samples.
At the start of the simulation, the audio device begins writing the input data to a (hardware) buffer with a capacity of Tb seconds. The From Wave Device block immediately begins pulling the earliest samples off the buffer (first in, first out) and collecting them in length-M frames for output. As the audio device continues to append inputs to the bottom of the buffer, the From Wave Device block continues to pull inputs off the top of the buffer at the best possible rate.
The following figure shows an audio signal being acquired and output with a frame size of 8 samples. The buffer of the sound board is approaching its five-frame capacity at the instant shown, which means that the hardware is adding samples to the buffer more rapidly than the block is pulling them off. (If the signal sample rate were 8 kHz, this small buffer could hold approximately 0.005 second of data.)
When the simulation throughput rate is higher than the hardware throughput rate, the buffer remains empty throughout the simulation. If necessary, the From Wave Device block simply waits for new samples to become available on the buffer (the block does not interpolate between samples). More typically, the simulation throughput rate is lower than the hardware throughput rate, and the buffer tends to fill over the duration of the simulation.
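The buffering behavior described above can be sketched as a simple first-in, first-out queue. This is an illustrative model only, with assumed numbers (the 8 kHz rate and five-frame capacity match the figure; the 2:1 production-to-consumption ratio is invented to force an overflow):

```python
from collections import deque

Fs = 8000          # assumed sample rate (Hz)
M = 8              # samples per frame
capacity = 5 * M   # hardware buffer capacity: five frames, as in the figure

buffer = deque()
overflowed = False

# Model a device that produces samples faster than the block consumes them:
# two samples appended for every one sample pulled off.
for step in range(120):
    for _ in range(2):               # device side: append to bottom (FIFO)
        if len(buffer) == capacity:
            overflowed = True        # full buffer -> device error in practice
            break
        buffer.append(step)
    if buffer:                       # block side: pull earliest sample off top
        buffer.popleft()

print(overflowed)      # True: the producer eventually outpaces the consumer
print(capacity / Fs)   # 0.005 -> seconds of audio this buffer can hold
```

When the consumption rate instead matches or exceeds the production rate, the queue never fills and the block simply waits for new samples, as the text describes.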
When the buffer size is too small in relation to the simulation throughput rate, the buffer might fill before the entire length of signal is processed. This usually results in a device error or undesired device output. When this problem occurs, you can choose to either increase the buffer size or the simulation throughput rate:
Increase the buffer size
The Queue duration parameter specifies the duration of signal, Tb (in real-time seconds), that can be buffered in hardware during the simulation. Equivalently, this is the maximum length of time that the block's data acquisition can lag the hardware's data acquisition. The number of frames buffered is approximately

    Tb × Fs / M

where Fs is the sample rate of the signal and M is the number of samples per frame. The required buffer size for a given signal depends on the signal length, the frame size, and the speed of the simulation. Note that increasing the buffer size might increase model latency.
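As a worked example of the buffered-frame count, using assumed values for the sample rate, frame size, and queue duration:

```python
Fs = 44100   # assumed sample rate (Hz)
M = 1024     # assumed samples per frame
Tb = 0.5     # assumed Queue duration (real-time seconds)

# Approximate number of frames the hardware buffer can hold.
frames_buffered = Tb * Fs / M
print(round(frames_buffered, 1))   # about 21.5 frames
```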
Increase the simulation throughput rate
Two useful methods for improving simulation throughput rates are increasing the signal frame size and compiling the simulation into native code:
Increase frame sizes (and convert sample-based signals to frame-based signals) throughout the model to reduce the amount of block-to-block communication overhead. This can drastically increase throughput rates in many cases. However, larger frame sizes generally result in greater model latency due to initial buffering operations.
Generate executable code with Simulink Coder™. Native code runs much faster than Simulink, and should provide rates adequate for real-time audio processing.
More general ways to improve throughput rates include simplifying the model, and running the simulation on a faster PC processor. See Delay and Latency and Optimize Performance (Simulink) for other ideas on improving simulation performance.
The sample rate of the audio data to be acquired. Select one of the standard Windows rates or the User-defined option.
The (nonstandard) sample rate of the audio data to be acquired.
The number of bits used to represent each signal sample.
Specifies stereo (two-channel) inputs when selected, mono (one-channel) inputs when cleared. Stereo output is M-by-2; mono output is M-by-1.
The number of audio samples in each successive output frame, M. When the value of this parameter is 1, the block outputs a sample-based signal.
The duration of signal, Tb (in real-time seconds), that can be buffered in hardware during the simulation.
Reads audio input from the system's default audio device when selected. Clear this check box to enable the Audio device parameter and select the desired device.
The name of the audio device from which to read the audio input (the menu lists the names of the installed audio device drivers). Select Use default audio device when the system has only a single audio card installed.
The data type of the output: double-precision, single-precision, signed 16-bit integer, or unsigned 8-bit integer.
Double-precision floating point
Single-precision floating point
16-bit signed integer
8-bit unsigned integer