Inaccessible audio data

After puzzling over it for a few hours, I can’t seem to find any way to access the audio data buffer from a loaded SoundEffect.

I have an MP3 that I’ve imported via the Content Pipeline Tool. I can load it as a SoundEffect, but I’m trying to access its internal byte[] data so that I can push portions of it to a DynamicSoundEffectInstance. I see that there is a SoundBuffer member inside the loaded SoundEffect, which seems to contain exactly what I want, but it is inaccessible from my project. Can we make this public, or otherwise allow reading from this buffer? Or is there another solution that I’m overlooking entirely?

You can use reflection to get the SoundBuffer (for OpenAL) or the _dataStream (for XAudio2). Then use their respective APIs to get the audio data.
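For example, a rough sketch of the reflection approach (the `_dataStream` and `SoundBuffer` field names come from this thread; they are internal, vary between MonoGame versions and platforms, and should be verified against the source of your build):

```csharp
using System;
using System.Reflection;
using Microsoft.Xna.Framework.Audio;

static byte[] TryGetAudioData(SoundEffect effect)
{
    const BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;

    // XAudio2 backend: the raw bytes sit behind a SharpDX DataStream field.
    var streamField = typeof(SoundEffect).GetField("_dataStream", flags);
    if (streamField != null)
    {
        dynamic stream = streamField.GetValue(effect); // SharpDX.DataStream
        var bytes = new byte[(int)stream.Length];
        stream.Position = 0;
        stream.Read(bytes, 0, bytes.Length);
        return bytes;
    }

    // OpenAL backend: the SoundBuffer field wraps an OpenAL buffer handle.
    // Core OpenAL offers no straightforward buffer read-back, so from here
    // you would need to work with the OpenAL buffer APIs directly.
    var bufferField = typeof(SoundEffect).GetField("SoundBuffer", flags);
    if (bufferField != null)
    {
        object soundBuffer = bufferField.GetValue(effect);
        throw new NotSupportedException(
            "Read back via the OpenAL buffer APIs for this platform.");
    }

    return null;
}
```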

The ‘right’ way to get the raw audio data would be to create a custom Content Pipeline extension for it. You can use the built-in AudioImporter and SoundEffectProcessor, but you’ll need a custom ContentTypeWriter (it can work the same as the built-in writer for AudioContent, but needs to specify which reader to use) and a Reader that loads it into a class that simply holds the audio data.
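A minimal sketch of that writer/reader pair. The `RawAudio`, `RawAudioWriter`, and `RawAudioReader` names are invented for illustration, and the exact pipeline base-class signatures may differ between MonoGame versions:

```csharp
using System.Linq;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Audio;
using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Compiler;

// Runtime type that simply holds the decoded bytes and format info.
public class RawAudio
{
    public byte[] Data;
    public int SampleRate;
    public int Channels;
}

[ContentTypeWriter]
public class RawAudioWriter : ContentTypeWriter<AudioContent>
{
    protected internal override void Write(ContentWriter output, AudioContent value)
    {
        output.Write(value.Format.SampleRate);
        output.Write(value.Format.ChannelCount);
        output.Write(value.Data.Count);
        output.Write(value.Data.ToArray());
    }

    // Tell the runtime which reader to use for this asset.
    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        return typeof(RawAudioReader).AssemblyQualifiedName;
    }
}

public class RawAudioReader : ContentTypeReader<RawAudio>
{
    protected internal override RawAudio Read(ContentReader input, RawAudio existing)
    {
        var audio = new RawAudio
        {
            SampleRate = input.ReadInt32(),
            Channels = input.ReadInt32(),
        };
        audio.Data = input.ReadBytes(input.ReadInt32());
        return audio;
    }
}
```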

Maybe we could add a GetData API. If you’d like that you can open an issue for it on GitHub :slight_smile:

I don’t see any issue in adding GetData() and a corresponding SetData() API to SoundEffect, similar to Texture2D, pending any issues discovered with the underlying platform APIs that might block it. We would also have to add properties exposing the format details (sample rate, bit depth, channel count, etc.) that you need in order to interpret the data coming from the SoundEffect, or to prepare data being pushed into it.
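To make the proposal concrete, usage might look something like this. This is purely hypothetical, mirroring the Texture2D pattern; none of these members exist on SoundEffect today:

```csharp
// Hypothetical API sketch only: DataSize, GetData, and SetData do not
// currently exist on SoundEffect.
byte[] data = new byte[soundEffect.DataSize]; // hypothetical property
soundEffect.GetData(data);                    // hypothetical method

// ... inspect or modify the samples ...

soundEffect.SetData(data);                    // hypothetical method
```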

Not all platforms have a SoundBuffer object (that is for OpenAL only), so we cannot just expose that.

If you don’t want to play the data through SoundEffect, but instead pipe it through DynamicSoundEffectInstance, it makes sense to use a content pipeline extension to import the audio, convert it to the format you need (usually 16-bit signed PCM for DynamicSoundEffectInstance), and output it in a file format that you can easily load and process.
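Once you have the raw bytes, feeding them to a DynamicSoundEffectInstance looks roughly like this. This sketch assumes stereo 16-bit signed PCM; the `pcm` array is whatever you load from your processed content:

```csharp
using System;
using Microsoft.Xna.Framework.Audio;

// Stream a 16-bit signed PCM byte array (e.g. loaded from your processed
// content) through a DynamicSoundEffectInstance in ~100 ms chunks.
static DynamicSoundEffectInstance PlayPcm(byte[] pcm, int sampleRate = 44100)
{
    var instance = new DynamicSoundEffectInstance(sampleRate, AudioChannels.Stereo);
    int offset = 0;
    int chunkSize = sampleRate * 2 /*channels*/ * 2 /*bytes/sample*/ / 10;

    // BufferNeeded fires when the queue of submitted buffers runs low;
    // feed the instance the next slice of the PCM data each time.
    instance.BufferNeeded += (sender, args) =>
    {
        int count = Math.Min(chunkSize, pcm.Length - offset);
        if (count <= 0)
            return;
        instance.SubmitBuffer(pcm, offset, count);
        offset += count;
    };

    instance.Play();
    return instance;
}
```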


I think it would be a good idea to have such Get/SetData methods; better to have more than fewer options, in my opinion.

Now, I’ve been trying to implement a content extension as you two described, and I finally got it to work… somewhat. I ran into an obstacle, though - it only works when importing WAV files. Inspecting the code, it seems that anything else (namely MP3 and Ogg, which are my preferred formats) does not populate the byte[] Data member in the resulting AudioContent object. Granted, I’m not incredibly well versed in content extensions, but as far as I can tell, that prevents me from achieving this via content. Of course, I’d like to be proven incorrect, or at least have a solution investigated for this perceived impasse. Thanks for your guidance!