BeagleBoard/GSoC/2021 Proposal: ALSA plugin for BELA

About Student: Jakub Duchniewicz
Mentors: Giulio Moro
Code: not yet created!
Wiki: https://elinux.org/index.php?title=BeagleBoard/GSoC/2021_Proposal/GPGPU_with_GLES
GSoC: [1]

Status

Submitted for review.

About you

IRC: jduchniewicz
Github: JDuchniewicz
School: University of Turku/KTH Royal Institute of Technology
Country: Finland/Sweden/Poland
Primary language: Polish
Typical work hours: 8AM-5PM CET
Previous GSoC participation: Participating in GSoC, especially with BeagleBoard, would further develop my software and hardware skills and help me apply my current knowledge for the mutual benefit of the open source community. I originally planned to do the YOLO project, but after spending several days researching and preparing the proposal I found that it is not feasible on the current BBAI/X15.

About your project

Project name: ALSA plugin for BELA

Description

BELA (https://learn.bela.io/) is a cape designed for the BeagleBone Black which features real-time audio processing using the Xenomai co-kernel (https://source.denx.de/Xenomai/xenomai/-/wikis/home). Apart from being a hardware solution, BELA supplies its own customized Debian-based Linux distribution and a full-fledged IDE, allowing for a seamless audio development experience. BELA provides its own library for interfacing with the hardware; however, it does not provide a unified interface via ALSA, JACK or PulseAudio. Therefore, it is currently impossible to use BELA like a regular Linux audio device, and all audio I/O has to go through its API calls. As a result, any application that wants to use BELA must be patched to use its API, which increases the maintenance burden.

The main premise of this project is to enable unified access by means of an ALSA plugin. This plugin will allow user-provided functions to be tied in place of the regular system calls the alsa-lib API uses for operating on its devices. Since such a need may arise for other real-time ALSA devices as well, this plugin would be a valuable addition to the ALSA ecosystem and should be mainlineable. This way, users can call the regular ALSA API for interacting with the device and still profit from all the real-time benefits it offers.

The project will also focus on writing all the components necessary for interfacing with this plugin, such as an example userspace application and instructions on how to use the ALSA API with BELA.

Implementation

ALSA plugin

The first part, the ALSA plugin, will be modeled on existing ALSA plugins, like the file plugin (https://github.com/alsa-project/alsa-lib/blob/master/src/pcm/pcm_file.c), which allows interaction with arbitrary files as if they were regular ALSA devices. If it were not for the hardcoded syscalls in this plugin, we could simply use it for our problem, or write some workaround and ship a custom libasound along with the other BELA software. However, that solution is not ideal, as we would have to ship this library and keep it in sync with upstream. Hence, a completely new plugin which can be used in various other applications is a better-tailored solution. The plugin itself will not be limited to BELA and will open the door for other audio solutions that use custom real-time capabilities to integrate with ALSA.
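
To illustrate the core idea, the sketch below contrasts the hardcoded write() call that plugins like pcm_file.c effectively make with dispatching through a user-supplied callback. The names (custom_io_ops, custom_write) are hypothetical and only serve to show the intended substitution.

/* Illustration only, not actual alsa-lib code. */
#include <unistd.h>

/* What a plugin with hardcoded syscalls effectively does: */
static ssize_t plain_write(int fd, const void *buf, size_t bytes)
{
	return write(fd, buf, bytes);       /* always the Linux write() syscall */
}

/* What the proposed plugin would do instead: dispatch through
 * user-provided I/O functions, e.g. Xenomai's cobalt_write(). */
struct custom_io_ops {
	ssize_t (*write)(int fd, const void *buf, size_t bytes);
	ssize_t (*read)(int fd, void *buf, size_t bytes);
};

static ssize_t custom_write(const struct custom_io_ops *ops,
			    int fd, const void *buf, size_t bytes)
{
	return ops->write(fd, buf, bytes);  /* real-time capable call */
}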

BELA System description

In order to understand why we have to create a special ALSA plugin rather than a regular kernel driver for the BELA devices, we need to look closely at how BELA manages its audio data and delivers it to the user.

[Diagram: how BELA operates within the system, with the ALSA addition.]

The diagram above shows how the BELA system operates and utilizes the ARM CPU and the PRU unit. It is already extended with the ALSA functionality and shows what I plan to achieve. It can be seen that the PRU has an essential role in communicating with the various peripherals and delivering the data to the ARM CPU. Since the Linux kernel runs alongside the Xenomai co-kernel, the real-time guarantees can be met. This is especially important in systems such as this, and pairing it with the sheer power of the Linux kernel makes this project especially valuable.

The data from the peripherals is read in blocks, and because Xenomai guarantees a scheduling latency of 20 us to 80 us, this upper bound makes it a desirable solution for real-time audio applications. Whenever data is ready to be delivered (a block has been accumulated), the PRU generates an interrupt to the ARM core. Until then, the Cobalt audio thread sleeps on an ioctl syscall on the RTDM file descriptor. Upon receiving the interrupt, the audio thread wakes up as soon as possible, possibly preempting whatever the Linux kernel is currently doing.
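
A minimal sketch of this wait loop is shown below. The device path and ioctl request number are placeholders (the real BELA core code differs), but the structure is the same: the thread blocks in ioctl() until the PRU raises its interrupt, and when built against libcobalt these calls are routed to the Xenomai co-kernel.

/* Sketch only: device path and ioctl request are hypothetical. */
#include <fcntl.h>
#include <sys/ioctl.h>

#define RTDM_DEVICE   "/dev/rtdm/pru_irq"   /* assumed device name */
#define PRU_WAIT_IRQ  0                     /* placeholder request */

static void audio_loop(void)
{
	int fd = open(RTDM_DEVICE, O_RDWR);

	for (;;) {
		/* Sleep in the co-kernel until the PRU signals that a
		 * block of audio frames has been accumulated. */
		ioctl(fd, PRU_WAIT_IRQ);

		/* ... exchange one block of samples with the PRU buffers
		 * and run the audio processing callback ... */
	}
}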

Userspace App

Next, the userspace app has to be developed and tested. Because recording to and playing back from disk makes little sense here (disk I/O is usually a big hindrance in real-time applications, see http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing), I plan to test it in several simple cases and benchmark the results obtained with the original library against those obtained via the ALSA plugin. Any discrepancies should pinpoint the relevant slowdowns, and if necessary profiling software, such as ALSA's latency test, may be used.

The examples themselves will mostly focus on a simple audio pipeline: taking an audio source as input, processing it and outputting it, as sketched below. Alternatively, the system may generate audio in response to a GPIO state change, e.g. a button press. Documentation on how to interface with BELA by means of ALSA shall also be written and will later be included in BELA's educational materials.
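
As a rough illustration of what such an example could look like, here is a sketch using only the public ALSA API; the PCM name "bela" assumes the configuration entry described later, the parameters are arbitrary, and error handling is omitted.

/* Pass-through sketch: capture -> process -> playback. */
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *in, *out;
	short buf[2 * 16];                  /* one 16-frame stereo block */

	snd_pcm_open(&in,  "bela", SND_PCM_STREAM_CAPTURE, 0);
	snd_pcm_open(&out, "bela", SND_PCM_STREAM_PLAYBACK, 0);
	snd_pcm_set_params(in,  SND_PCM_FORMAT_S16_LE,
			   SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 10000);
	snd_pcm_set_params(out, SND_PCM_FORMAT_S16_LE,
			   SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 10000);

	for (;;) {
		snd_pcm_readi(in, buf, 16);     /* input one block  */
		/* ... process the samples here ... */
		snd_pcm_writei(out, buf, 16);   /* output the block */
	}
	return 0;
}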

Interfacing with the cape via ALSA

Because performance is of utmost importance, it will be measured throughout the development process to make sure that no regressions are introduced. The example programs will be written using the ALSA API or by connecting ALSA plugins together. It is important to note that the programs themselves will have to be written carefully, so that they do not incur unwanted context switches or block on synchronization primitives shared with non-real-time code; one common way to achieve this is sketched below. Otherwise, even though the API calls are made through the Xenomai co-kernel, the real-time thread can be left unable to run because the code waits on a mutex held on the Linux side.
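
A typical way to satisfy this constraint is to exchange data between the real-time and non-real-time parts through wait-free structures instead of locks. The snippet below is a generic sketch (not BELA or ALSA code) of a single-producer/single-consumer FIFO using C11 atomics; the real-time side never blocks, it simply drops data when the queue is full.

#include <stdatomic.h>
#include <stddef.h>

#define FIFO_SIZE 1024                          /* must be a power of two */

struct spsc_fifo {
	float buf[FIFO_SIZE];
	_Atomic size_t head;                    /* written by the producer */
	_Atomic size_t tail;                    /* written by the consumer */
};

static int fifo_push(struct spsc_fifo *f, float sample)
{
	size_t head = atomic_load_explicit(&f->head, memory_order_relaxed);
	size_t next = (head + 1) & (FIFO_SIZE - 1);

	if (next == atomic_load_explicit(&f->tail, memory_order_acquire))
		return 0;                       /* full: drop, never block */
	f->buf[head] = sample;
	atomic_store_explicit(&f->head, next, memory_order_release);
	return 1;
}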

These programs will interface with the virtual ALSA devices created by means of plugins and still utilize the Xenomai threads for data transfers.

Because the devices will be described in the ALSA configuration file, they will be listable with the aplay -L ALSA command, which lists all the soundcards and sound devices available on the given system.
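
As a hedged sketch of what such a configuration entry could look like: the plugin type name ("custom"), the device path and the option names below are placeholders, since the actual plugin and its parameters are yet to be designed.

# Hypothetical ~/.asoundrc entry for the BELA PCM
pcm.bela {
    type custom                  # name of the new plugin type (TBD)
    device "/dev/rtdm/pru_irq"   # placeholder for the underlying RTDM device
    hint {
        show on
        description "BELA real-time audio via the custom ALSA plugin"
    }
}

With such an entry in place, aplay -L would list the bela PCM alongside the regular soundcards, and e.g. aplay -D bela file.wav would play through the plugin.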

[Diagram: overview of the system with the custom ALSA plugin.]

The system overview is described in the above diagram.

Alternative ideas/Stretch goals

  • Analog sensors and actuators plus digital I/O could be sampled at non-audio rates. This could probably be done in the ALSA configuration files or by extending the said audio plugin to provide such functionality.

The benefit gained from this would be treating these I/Os as audio I/O or MIDI and interacting with the ALSA MIDI system.

API Overview

The API for the plugin is subject to change, but for the time being, since we are creating a plugin similar to the ALSA file plugin, we must at least provide the most important operations listed below.

static const snd_pcm_ops_t snd_pcm_custom_ops = {
	.close = snd_pcm_custom_close,
	.info = snd_pcm_generic_info,
	.hw_refine = snd_pcm_generic_hw_refine,
	.hw_params = snd_pcm_custom_hw_params,
	.hw_free = snd_pcm_custom_hw_free,
	.sw_params = snd_pcm_generic_sw_params,
	.channel_info = snd_pcm_generic_channel_info,
	.dump = snd_pcm_custom_dump,
	.nonblock = snd_pcm_generic_nonblock,
	.async = snd_pcm_generic_async,
	.mmap = snd_pcm_generic_mmap,
	.munmap = snd_pcm_generic_munmap,
	.query_chmaps = snd_pcm_generic_query_chmaps,
	.get_chmap = snd_pcm_generic_get_chmap,
	.set_chmap = snd_pcm_generic_set_chmap,
};

And probably the fast ops as well:

static const snd_pcm_fast_ops_t snd_pcm_custom_fast_ops = {
	.status = snd_pcm_custom_status,
	.state = snd_pcm_custom_state,
	.hwsync = snd_pcm_custom_hwsync,
	.delay = snd_pcm_custom_delay,
	.prepare = snd_pcm_direct_prepare,
	.reset = snd_pcm_custom_reset,
	.start = snd_pcm_custom_start,
	.drop = snd_pcm_custom_drop,
	.drain = snd_pcm_custom_drain,
	.pause = snd_pcm_custom_pause,
	.rewindable = snd_pcm_custom_rewindable,
	.rewind = snd_pcm_custom_rewind,
	.forwardable = snd_pcm_custom_forwardable,
	.forward = snd_pcm_custom_forward,
	.resume = snd_pcm_direct_resume,
	.link = NULL,
	.link_slaves = NULL,
	.unlink = NULL,
	.writei = snd_pcm_custom_writei,
	.writen = snd_pcm_custom_writen,
	.readi = snd_pcm_mmap_readi,
	.readn = snd_pcm_mmap_readn,
	.avail_update = snd_pcm_custom_avail_update,
	.mmap_commit = snd_pcm_custom_mmap_commit,
	.htimestamp = snd_pcm_custom_htimestamp,
	.poll_descriptors = snd_pcm_direct_poll_descriptors,
	.poll_descriptors_count = NULL,
	.poll_revents = snd_pcm_direct_poll_revents,
};

Hopefully we will not need to change most of these functions and can focus on writing proper handling for writei/writen and their siblings readi/readn. This is where the custom user-provided functions, such as Xenomai's cobalt_read or cobalt_write, will be invoked. Where an operation is not to be supported, NULL will be supplied.

Of course, we also need functions such as open and close, which will be used for interacting with the underlying BELA device represented as a device file. The API for the open function is presented below.

typedef struct {
	int (*open)(/* parameters have yet to be designed */);
	int (*close)(void);
	/* other important functions */
} snd_custom_ops_t;

int snd_pcm_custom_open(snd_pcm_t **pcmp, const char *name,
			const char *fname, int fd, const char *ifname, int ifd,
			snd_custom_ops_t *custom_ops,
			snd_pcm_stream_t stream);

The pcm object passed to every function (the context) contains a private_data field which will be used for storing the function pointers to the custom I/O functions. This way, these functions can be called at the appropriate places in the ALSA API.

For now we are fine with exposing RTDM file descriptors, but in the future it would be better to have the BELA API supply all the file descriptor operations, so that we only store them in the struct above and never interact with the fd directly.
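
Putting these pieces together, a rough sketch of how, for example, the writei path could delegate to the user-provided function is shown below. The snd_pcm_custom_t layout is hypothetical, it assumes snd_custom_ops_t will eventually carry a write callback, and the signature follows the snd_pcm_fast_ops_t convention used inside alsa-lib.

/* Sketch only: types and fields are placeholders. */
typedef struct {
	snd_custom_ops_t ops;   /* user-provided I/O callbacks   */
	int fd;                 /* e.g. the RTDM file descriptor */
} snd_pcm_custom_t;

static snd_pcm_sframes_t snd_pcm_custom_writei(snd_pcm_t *pcm,
					       const void *buffer,
					       snd_pcm_uframes_t size)
{
	snd_pcm_custom_t *custom = pcm->private_data;
	ssize_t bytes = snd_pcm_frames_to_bytes(pcm, size);

	/* Call the user-supplied write, e.g. Xenomai's cobalt_write(),
	 * instead of the plain Linux write() syscall. */
	ssize_t written = custom->ops.write(custom->fd, buffer, bytes);
	if (written < 0)
		return written;
	return snd_pcm_bytes_to_frames(pcm, written);
}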

Expected Performance

The performance is expected to be sustained and must not be degraded. Since we supply our own Cobalt functions to the plugin, ALSA will just tie the userspace request to the Xenomai thread, and the data will be delivered without any change in latency. Because we want to preserve the real-time guarantees and not suffer any penalties due to ALSA's internal buffering, we need to thoroughly examine ALSA's code and configure the plugin properly. Ensuring this is an important part of the project.
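
In practice this also means pinning the period and buffer sizes on the application side so that ALSA does not add buffering on top of BELA's own block size. A sketch using the standard hw_params calls follows; the 16-frame period and two periods are only illustrative values.

/* Sketch: constrain ALSA buffering to match the BELA block size. */
#include <alsa/asoundlib.h>

static int configure_pcm(snd_pcm_t *pcm)
{
	snd_pcm_hw_params_t *hw;
	snd_pcm_uframes_t period = 16;      /* e.g. one BELA block   */
	unsigned int periods = 2;           /* double buffering only */
	int dir = 0;

	snd_pcm_hw_params_alloca(&hw);
	snd_pcm_hw_params_any(pcm, hw);
	snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_channels(pcm, hw, 2);
	snd_pcm_hw_params_set_rate(pcm, hw, 44100, 0);
	snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, &dir);
	snd_pcm_hw_params_set_periods_near(pcm, hw, &periods, &dir);

	return snd_pcm_hw_params(pcm, hw);  /* apply: no extra ALSA buffering */
}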

Action Items

Writing the plugin will be the most complex part of this task, and getting it working may take at least 3 weeks.

  • set up the environment (BELA + Xenomai)
  • write the plugin scaffolding
  • write the bare minimum of the plugin that works with ALSA (without BELA yet)
  • plug in BELA's functions
  • write other necessary functions for configuring BELA via the ALSA API (sound format, sampling rate, etc.)
  • write the ALSA configuration file for BELA
  • write the userspace app scaffolding
  • write the userspace example app(s)
  • benchmark the ALSA API against the regular BELA API
  • document the API
  • write guidelines for the BELA learning section
  • record videos 1 + 2
  • write up about the process on my blog
  • polish the plugin's API

Deliverables

  • The ALSA plugin itself
  • Userspace example app and benchmarks comparing the versions with and without ALSA
  • Guidelines on using ALSA with BELA

Timeline

During 25.07.21-08.08.21 I have a summer camp from my study programme and will probably be occupied for half of the day. The camp will most likely be held online, though.

Date Milestone Action Items
13.04.21-17.05.21 Pre-work
  • Get familiar with BELA operation in terms of Xenomai
  • Read up on Xenomai and experiment with various ways of utilizing it
  • Get even more familiar with ALSA code (it is quite a big library after all)
18.05.21-07.06.21 Community Bonding
  • Familiarize myself with the community
  • Experiment with BB - don't let the magic smoke escape its confines
14.06.21 Milestone #1
  • Introductory YouTube video
  • Set up the environment (BELA + Xenomai)
  • Write first blog entry - "Part 1: Game plan"
21.06.21 Milestone #2
  • Write the plugin scaffolding
  • Write the bare minimum of the plugin that works with ALSA (without BELA yet), part 1/2
28.06.21 Milestone #3
  • Write the bare minimum of the plugin that works with ALSA (without BELA yet), part 2/2
  • Plug in BELA's functions
5.07.21 Milestone #4
  • Write the userspace app scaffolding
  • Write the userspace app(s) examples
12.07.21 Milestone #5
  • Write second blog post - "Part 2: Implementation"
  • Evaluate the mentor
  • Write the ALSA configuration file for BELA
19.07.21 Milestone #6
  • Test if the plugin works as intended
  • Write other necessary functions for configuring BELA via the ALSA API (sound format, sampling rate, etc.)
  • Work further on making the API work
26.07.21 Milestone #7
  • Summer camp, mostly collect feedback for the project
  • Less brain-intensive tasks, documentation of the API
  • Write guidelines for BELA learning section
31.07.21 Milestone #8
  • Write third blog post - "Part 3: Optimization and Benchmarks"
  • Benchmark the ALSA API and regular BELA API
07.08.21 Milestone #9
  • Polish the plugin API
  • Prepare materials for the video
  • Start writing up the summary
14.08.21 Milestone #10
  • Finish the project summary
  • Final YouTube video
24.08.21 Feedback time
  • Complete feedback form for the mentor
31.08.21 Results announced
  • Celebrate the ending and rejoice

Experience and approach

I have a strong programming background in embedded Linux and operating systems, having worked as a Junior Software Engineer at Samsung Electronics from December 2017 to March 2020. During this time I also developed a game engine (PolyEngine) in C++ and gave some talks on modern C++ as Vice-President of the Game Development Student Group "Polygon".

Apart from that, I have completed my Bachelor's degree at Warsaw University of Technology, successfully defending my thesis titled "FPGA Based Hardware Accelerator for Musical Synthesis for Linux System". In this project I created a polyphonic musical synthesizer in Verilog, capable of producing various waveforms, and deployed it on a De0 Nano SoC FPGA. Additionally, I wrote two kernel drivers; one implemented an ALSA sound device and was responsible for proper synchronization of DMA transfers.

The ALSA part proved to be very time consuming and difficult to debug, but after hours of a wild goose chase I now understand how ALSA works at a level that allows me to create kernel drivers and extend userspace plugins.

I am familiar with Deep Learning concepts and the basics of Computer Vision. During my studies at UTU I achieved maximal grades in my subjects, excelling at Navigation Systems for Robotics and Hardware Accelerators for AI.

In my professional work I often had to complete tasks under time pressure and choose the proper task scoping. Based on this experience, I believe that this project is deliverable in the mentioned time frame.

Contingency

Since I am used to tackling seemingly insurmountable challenges, I will first of all keep calm and try to come up with an alternative approach if I get stuck along the way. The internet is a vast ocean of knowledge, and time and again I have received help from benevolent strangers on reddit and other forums. Since I believe that humans solve problems best collaboratively, I will contact #beagle, #beagle-gsoc and relevant subreddits (I received tremendous help on /r/FPGA, /r/embedded and /r/askelectronics in the past).

If all else fails, I may be forced to change my approach and backtrack, but this will not be a big problem, because the knowledge won't be lost and it will only make my future approaches better. Alternatively, I can focus on documenting my progress in the form of blog posts and videos while waiting for my mentor to come back to cyberspace.

With this project there is a real risk of running into problems without obvious solutions, in which case I might need to email the ALSA maintainers, Jaroslav Kysela and Takashi Iwai, directly (or ask on the relevant Linux kernel mailing list). Because the plugin development happens in userspace, regular GDB will be enough for debugging it.

Materials

During my previous adventures with ALSA I accumulated a list of sound (no pun intended) materials:

Benefit

Having an ALSA plugin for BELA would allow interfacing with the devices in a unified manner and would remove the onus of knowing which API BELA uses for communication. Moreover, such a plugin should be mainlineable, as it allows substituting the various Linux syscalls the ALSA API makes when interfacing with a device, for example replacing the read() syscall with Xenomai's cobalt_read() so that the operation runs in real time.

currently for each different application that a user want to run, the audio backend of the application has to be changed from using ALSA/portaudio/RTAudio/jack to using the Bela API. This could be mitigated by having portaudio/RTAudio/jack wrappers (what I put in the original project idea), but I figured if it could be just an ALSA driver (perhaps an ALSA plugin, which runs all in userspace I think?) then things could be more maintainable

~ Giulio Moro

 It just occurred to me recently that the ALSA solution would be more generally applicable

~ Giulio Moro

If there is not an ALSA plugin readily available that accepts arbitrary functions for read and write, we could add one. I think that would stand some chance of getting mainlined as it would be a new, clean feature, backward compatible and without hacks.

~ Giulio Moro

I'd also want to know that it would be consistent over different releases. I'm not sure how often ALSA changes... would be super annoying to create a wrapper that suddenly stops working, I'm not sure how much of an issue this is. There's also a few things on top of ALSA that might be a better bet etc like portaudio. I'm just brainstorming here though haven't investigated

~ Leah Pillsbury

Misc

The qualification PR is available at https://github.com/jadonk/gsoc-application/pull/148.