BeagleBoard/GSoC/2022 Proposal/Bela audio driver for common audio APIs

=Bela audio driver for common audio APIs=
About Student: Marck Koothoor
Mentors: Giulio Moro
Code: not yet created!
Wiki: [N/A]
GSoC: Proposal Request

=Status=
Proposal review.

About you
IRC: Marck Koothoor
Github: https://github.com/marck3131
School: Veermata Jijabai Technological Institute (VJTI)
Country: India
Primary languages: English, Hindi, Marathi
Typical work hours: 10AM-7PM IST
Previous GSoC participation: This is my first time participating in GSoC. I'm interested in embedded systems, as I have experience working with the ESP32, ESP8266 and Arduino UNO, and I'm looking forward to working on audio drivers.

About your project
Project name: Bela audio driver for common audio APIs

Description
Bela is an open-source embedded computing platform for creating responsive, real-time interactive systems with audio and sensors. It features ultra-low latency, high-resolution sensor sampling, a convenient and powerful browser-based IDE, and a fully open-source toolchain that includes support for both low-level languages like C/C++ and popular computer music programming languages like Pure Data, SuperCollider and Csound. There are two types of Bela systems: the original Bela and the Bela Mini. Both are open-source hardware systems based on the Beagle single-board computers (Bela uses the BeagleBone Black, and Bela Mini uses the PocketBeagle). The Bela software extends the functionality of the Beagle systems by integrating audio processing and sensor connectivity in a single, high-performance package.

Goal
The main purpose of the project is to provide unified access to Bela by means of an ALSA plugin. Adding an ALSA plugin would make it easier to program Bela from more programming languages and from other audio software that uses libasound on Linux.

The project will also cover all the components necessary for interfacing with this plugin, such as an example userspace application and instructions on how to use the ALSA API with Bela.

ALSA plugin
ALSA plugins are used to create virtual devices that behave like normal hardware devices but apply extra processing to the sound stream. They are configured in the .asoundrc file. PCM plugins extend the functionality and features of PCM devices. Programs that use the PCM interface generally follow a common open/configure/transfer/close pattern; small snippets that display a PCM device's types, features and setup parameters, or that read from a PCM device and write to standard output, are helpful starting points.
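That common pattern, sketched as pseudo-code against libasound's C API (the device name, buffer and frame counts are placeholders):

```c
/* Pseudo-code: typical libasound PCM playback flow */
snd_pcm_t *pcm;
snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                   SND_PCM_ACCESS_RW_INTERLEAVED,
                   2, 44100, 1, 50000);    /* stereo, 44.1 kHz, ~50 ms latency */
while (frames_remaining)
    snd_pcm_writei(pcm, buffer, frames);  /* blocks until frames are consumed */
snd_pcm_drain(pcm);                       /* flush what is still queued */
snd_pcm_close(pcm);
```

Capture follows the same shape with SND_PCM_STREAM_CAPTURE and snd_pcm_readi.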

Why ALSA Plugins?
ALSA consists of three components:

1. A set of kernel drivers. These drivers are responsible for handling the physical sound hardware from within the Linux kernel, and have been the standard sound implementation in Linux since kernel version 2.5.

2. A kernel-level API for manipulating the ALSA devices.

3. A user-space C library for simplified access to the sound hardware from userspace applications. This library is called libasound and is required by all ALSA-capable applications.

Plugins are used to create virtual devices that behave like normal hardware devices but apply extra processing to the sound stream. Virtual devices are defined in the .asoundrc file in your home directory. A typical definition creates a new virtual device with name SOMENAME of type PLUGINTYPE that pipes its output to some other virtual or hardware device SLAVENAME.

SOMENAME can be any simple name; it's the name you'll use to refer to this device in the future. Several virtual device names are predefined, such as default and dmix. PLUGINTYPE is one of the names listed in the official ALSA documentation; examples are dmix (a plugin type as well as a predefined virtual device), jack and linear. SLAVENAME is the name of another virtual device or a string describing a hardware device. To specify the first device of the first card use "hw:0,0" (with the quotes).
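For example, a hypothetical .asoundrc entry exposing the first hardware device through the plug plugin type might look like:

```
# ~/.asoundrc -- SOMENAME is any name you choose
pcm.SOMENAME {
    type plug            # PLUGINTYPE: plug converts rate/format as needed
    slave {
        pcm "hw:0,0"     # SLAVENAME: first device of the first card
    }
}
```

An application can then open the device simply as "SOMENAME".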

To add Bela support to more programming languages, cross-platform user-space libraries like RtAudio, PortAudio and JACK can then connect to the ALSA layer.

ALSA Drivers API
The project will use some of the functions from the ALSA drivers API (see Resources) to read/write, open/close and otherwise interact with Bela.

Bela has a simple API of three functions: setup, render and cleanup. The setup function initialises hardware, allocates memory and sets up any other resources needed in render. The render function is where all of Bela's real-time processing takes place. The cleanup function handles tasks like freeing any memory that was allocated in setup.

Outline for writing ALSA plugin
Bela core cannot call into the Linux kernel, so there has to be a custom ALSA plugin that allows interaction with Bela from user space. To write the driver side, I'll follow these basic steps first. The PCM middle layer of ALSA is quite powerful, and each driver only needs to implement the low-level functions that access its hardware. The relevant headers (e.g. <sound/pcm.h>) are included to access the hw_params-related functions. A PCM instance is allocated by the snd_pcm_new function. After the PCM is created, the operators need to be set (via snd_pcm_set_ops); after setting the operators, a call pre-allocates the buffer.

When the PCM substream is opened, a PCM runtime instance is allocated and assigned to the substream. This pointer is accessible via substream->runtime and holds most of the information needed to control the PCM: the copy of the hw_params and sw_params configurations, the buffer pointers, mmap records, spinlocks, etc. The remaining steps are:
 * create the probe callback.
 * create the remove callback.
 * create a struct bela_driver structure containing the callbacks.
 * create an init function that just calls Bela_initAudio to initialise the rendering system.
 * create an exit function to call the cleanup function.
 * create the PCM interface (the open/close, hw_params, trigger and pointer callbacks).
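These driver-side steps can be sketched as follows. This is kernel-space code (it builds only against a kernel tree, not in userspace), and all bela_* names are hypothetical placeholders following the ALSA driver documentation:

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/core.h>
#include <sound/pcm.h>

static int bela_pcm_open(struct snd_pcm_substream *ss)  { return 0; }
static int bela_pcm_close(struct snd_pcm_substream *ss) { return 0; }
static snd_pcm_uframes_t bela_pcm_pointer(struct snd_pcm_substream *ss)
{
    return 0;  /* would report the current position in the ring buffer */
}

static struct snd_pcm_ops bela_pcm_ops = {
    .open    = bela_pcm_open,
    .close   = bela_pcm_close,
    .pointer = bela_pcm_pointer,
    /* .hw_params, .trigger, ... elided */
};

static int bela_probe(struct platform_device *pdev)
{
    struct snd_card *card = NULL;  /* snd_card_new() call elided */
    struct snd_pcm *pcm;
    /* 1 playback and 1 capture substream */
    int err = snd_pcm_new(card, "Bela", 0, 1, 1, &pcm);
    if (err < 0)
        return err;
    snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &bela_pcm_ops);
    return 0;
}

static int bela_remove(struct platform_device *pdev) { return 0; }

static struct platform_driver bela_driver = {
    .probe  = bela_probe,
    .remove = bela_remove,
    .driver = { .name = "bela-audio" },
};
module_platform_driver(bela_driver);  /* generates the init/exit pair */
```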

Two control callbacks are needed. The get callback is used to read the current value of the control and return it to user space. The put callback is used to write a value from user space; it should return 1 if the value changed, 0 otherwise, and a negative error code if any fatal error happens, as usual.
 * get callback
 * put callback
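A sketch of such a get/put pair for a hypothetical volume control (kernel-space, not runnable in userspace; names are placeholders):

```c
static int bela_volume;  /* backing store for the hypothetical control */

/* get: copy the current value out to user space */
static int bela_ctl_get(struct snd_kcontrol *kctl,
                        struct snd_ctl_elem_value *uctl)
{
    uctl->value.integer.value[0] = bela_volume;
    return 0;
}

/* put: accept a value from user space; 1 = changed, 0 = unchanged */
static int bela_ctl_put(struct snd_kcontrol *kctl,
                        struct snd_ctl_elem_value *uctl)
{
    int v = uctl->value.integer.value[0];
    if (v < 0 || v > 100)
        return -EINVAL;    /* fatal error: negative error code */
    if (v == bela_volume)
        return 0;
    bela_volume = v;
    return 1;
}
```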


 * When the application calls snd_pcm_open or snd_pcm_readi/snd_pcm_writei, the PCM data is handled in a thread created when the user-space driver was initialised.

As ALSA only lets a plugin serve blocking snd_pcm_readi and snd_pcm_writei calls, rather than a callback-driven model, Bela's callback-based read/write functions have to be invoked from wrapping code.
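One way to bridge the two models is a ring buffer that the render callback fills and the blocking read path drains. A minimal single-threaded sketch follows (a real implementation would need Xenomai-aware synchronisation between the two threads):

```c
#include <stddef.h>

#define RING_CAPACITY 1024           /* samples; sized for illustration only */

typedef struct {
    float  data[RING_CAPACITY];
    size_t head;                     /* next write index  */
    size_t tail;                     /* next read index   */
    size_t fill;                     /* samples available */
} Ring;

/* Producer side: called from the render callback with a fresh block */
size_t ring_write(Ring *r, const float *src, size_t n)
{
    size_t written = 0;
    while (written < n && r->fill < RING_CAPACITY) {
        r->data[r->head] = src[written++];
        r->head = (r->head + 1) % RING_CAPACITY;
        r->fill++;
    }
    return written;                  /* may be short if the buffer is full */
}

/* Consumer side: services a blocking snd_pcm_readi-style request */
size_t ring_read(Ring *r, float *dst, size_t n)
{
    size_t read = 0;
    while (read < n && r->fill > 0) {
        dst[read++] = r->data[r->tail];
        r->tail = (r->tail + 1) % RING_CAPACITY;
        r->fill--;
    }
    return read;
}
```

In the real plugin the producer runs in Bela's Xenomai audio thread, so the head/tail/fill counters would need atomic or lock-free handling rather than plain variables.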

User space
To call the read and write functions without going through system calls each time, there has to be a generic user-space ALSA device, because Bela core cannot call into the Linux kernel. To write a user-space driver, the basic steps are: open the UIO device (so it is ready to use), get the size of the memory region, map the device registers, write the functions needed (select, read, write, mmap), and unmap on shutdown.
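The open/map/unmap portion can be sketched in plain POSIX C. The device path and region size below are placeholders (on a real system the size is read from the corresponding /sys/class/uio entry):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a UIO device's first memory region; returns NULL on failure.
 * path and size are assumptions, supplied by the caller. */
void *map_uio(const char *path, size_t size, int *fd_out)
{
    int fd = open(path, O_RDWR);             /* open the UIO device      */
    if (fd < 0)
        return NULL;
    void *regs = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);    /* map the device registers */
    if (regs == MAP_FAILED) {
        close(fd);
        return NULL;
    }
    *fd_out = fd;
    return regs;
}

void unmap_uio(void *regs, size_t size, int fd)
{
    munmap(regs, size);                      /* unmap on shutdown        */
    close(fd);
}
```

The select/read/write handlers would then operate on the mapped register block.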



To connect to the driver, Xenomai with the Cobalt core provides basic device I/O such as open, close and ioctl, wrapped in a more user-friendly API for user-space developers. So the plan is to use ioctl, defining read commands with _IOR and write commands with _IOW, as ioctl is the usual way for a device driver to expose configuration to applications. The steps involved in using ioctl are:
 * Create the IOCTL command in the driver
 * Write the IOCTL function in the driver
 * Create the same IOCTL command in the user-space application
 * Use the IOCTL system call from user space
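The command definitions shared between driver and application might look like this; the magic character 'b' and the command numbers are hypothetical. _IOR/_IOW encode the transfer direction, magic type, command number and argument size into a single integer:

```c
#include <sys/ioctl.h>

/* Hypothetical Bela ioctl commands shared by driver and application */
#define BELA_IOC_MAGIC  'b'
#define BELA_READ_BUF   _IOR(BELA_IOC_MAGIC, 1, int)  /* driver -> user */
#define BELA_WRITE_BUF  _IOW(BELA_IOC_MAGIC, 2, int)  /* user -> driver */

/* In the application the command is then issued as, e.g.:
 *   int frames;
 *   ioctl(fd, BELA_READ_BUF, &frames);
 */
```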

Bela Cape and ALSA interface
At present, there are separate audio drivers for SuperCollider and Csound that call into Bela's C API; referring to these will give an idea of how to write the ALSA plugin. The Bela system operates across the ARM CPU and the PRU unit. The PRU shuttles data between the hardware and a memory buffer and interrupts the ARM CPU when data is ready; the Xenomai audio task then processes the data. Bela is based on Linux with the Xenomai real-time kernel extensions.



The programs will interface with the virtual ALSA devices created by the plugins while still utilising the Xenomai threads for data transfers. To test, I'll first run a few examples of ALSA plugins using the ALSA API. The API provides initialisation and cleanup functions in which the programmer can allocate and free resources. Standard Linux tools such as aplay (to play audio) and espeak (to speak text as audio) can then be used to exercise the devices.

Experience and approach

 * I have done a few projects in embedded systems with the ESP32 and FreeRTOS. In my last project, my team and I designed a PCB for a maze-solving bot, using a BFS algorithm and dead-end replacement rules. I have decent experience in C, C++ and Python.
 * I am also acquainted with computer vision and web development.
 * The project idea also requires good knowledge of build systems; I'll do my best contributing to this project and will learn from the journey.

Contingency
Despite the limited resources available for libasound (the low-level ALSA user-space library), I will keep trying to track down and solve errors using what is available online. If I get stuck, or the project is not heading in a positive direction, I will get in touch with the mentors on the IRC channels and try other approaches.

Benefit
In order to make the process of adding Bela support to more programming languages easier in the future, we could add Bela support to some common audio backend libraries, so that a single effort can be reused across the several pieces of software that use the same library. Upon successful completion, the project will make it easier to bring more applications and programming languages to Bela.

The purpose of this project is to allow Bela to show up as a device that libasound can interact with, so that one does not need to adapt existing programs in order to run them on Bela (though some programs may still require additional changes). ~ Giulio Moro

Resources

 * ALSA        : https://www.linuxjournal.com/article/6735, https://www.volkerschatz.com/noise/alsa.html#config
 * PCM         : https://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html
 * ALSA drivers : https://www.kernel.org/doc/html/v4.15/sound/kernel-api/writing-an-alsa-driver.html
 * Bela        : https://learn.bela.io/
 * Bela github : https://github.com/BelaPlatform/Bela

Misc
Completed all the requirements listed on the ideas page and submitted the cross-compilation task through pull request #166.