Machine learning in Ada (an ONNX runtime binding)

As an Ada enthusiast, I’ve always wanted to try my hand at machine learning, but was discouraged by the fact that Python seemed to be the only language supported by most ML frameworks. However, I recently discovered the Open Neural Network Exchange (ONNX) format, a universal way of representing machine learning models, and ONNX Runtime, a library for executing them. One thing I love about ONNX is that there are many pre-trained models available for various applications.

I was excited to find out that I could use Ada to load ONNX models and run inference with them, so I created a binding for the ONNX Runtime library and wrote a couple of example programs: handwritten digit recognition and English speech synthesis. Unfortunately, the speech synthesis example requires converting text to phoneme sequences, which I haven’t implemented in Ada in order to keep the examples compact.
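To give a feel for the text-to-phoneme step that is still missing on the Ada side, here is a toy sketch in C++. Real systems use a pronunciation dictionary plus letter-to-sound rules; the two-entry table and the phoneme strings below are made up purely for illustration.

```cpp
#include <map>
#include <sstream>
#include <string>

// Looks up each whitespace-separated word in a tiny hard-coded table and
// joins the phoneme strings with " | "; unknown words become "<unk>".
std::string to_phonemes(const std::string& text) {
  static const std::map<std::string, std::string> dict = {
      {"hello", "HH AH L OW"}, {"world", "W ER L D"}};
  std::istringstream words(text);
  std::string word, result;
  while (words >> word) {
    auto it = dict.find(word);
    if (!result.empty()) result += " | ";
    result += (it != dict.end()) ? it->second : "<unk>";
  }
  return result;
}
```

A production version would need a full dictionary and fallback rules, which is exactly why it was left out of the compact examples.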

Overall, I’m thrilled to have been able to dive into the world of machine learning using my favorite programming language, and I hope that other Ada enthusiasts will find this information useful. If anyone is interested in trying it out for themselves, I’m happy to share my code and provide more information on how to get started with Ada and ONNX.


This looks great, I will try it at some point for sure!

Incidentally, in my search for TTS engines usable from Ada, I’ve found a way to run PyTorch models without Python, using libtorch (a C++ library) and a C++ wrapper function.

For example, Silero TTS can be run like this (after exporting the model from Python with model.model.save("model.pt")):

#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load("model.pt");
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    std::cerr << e.what();
    return -1;
  }
  std::vector<std::string> a1{"v nedrah tundry vydry v g+etrah t+yr9t v v1dra 9dra kedrov."};
  std::vector<std::string> a2{"v nedrah tundry vydry v getrah tyr9t v v1dra 9dra kedrov"};
  c10::optional<int64_t> o1;
  c10::List<c10::optional<int64_t>> a3{o1};
  c10::List<double> a4{1.0};
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::jit::IValue(a1));
  inputs.push_back(torch::jit::IValue(a2));
  inputs.push_back(torch::jit::IValue(a3));
  inputs.push_back(torch::jit::IValue(a4));
  inputs.push_back(torch::jit::IValue(a4));
  inputs.push_back(torch::ones(1, torch::TensorOptions().dtype(torch::kInt32)));
  auto output = module.forward(inputs);
  auto output1 = output.toTuple()->elements()[0].toTensor();
  auto output2 = output.toTuple()->elements()[1].toTensor();
  std::cout << output1.sizes() << " " << output2[0].item().toInt() << "\n";
  std::cout << "ok\n";
}
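To call something like the above from Ada, the C++ code needs to be wrapped behind a C interface. Here is a sketch of what that boundary could look like; the function names (tts_load, tts_say) are hypothetical, and the libtorch calls are stubbed out with placeholders so the sketch compiles without Torch installed.

```cpp
#include <cstring>
#include <string>

extern "C" {
  // Returns 0 on success. A real build would call torch::jit::load here.
  int tts_load(const char* model_path);
  // Fills caller-allocated `samples` (up to `max_len` elements) with audio
  // and stores the sample count in `out_len`. Returns 0 on success.
  int tts_say(const char* text, float* samples, long max_len, long* out_len);
}

namespace {
  std::string g_model;  // stands in for torch::jit::script::Module
}

int tts_load(const char* model_path) {
  g_model = model_path;  // real code: module = torch::jit::load(model_path);
  return g_model.empty() ? -1 : 0;
}

int tts_say(const char* text, float* samples, long max_len, long* out_len) {
  if (g_model.empty() || text == nullptr) return -1;
  // Real code would build the IValue inputs, call module.forward(inputs),
  // and copy the output tensor's data into `samples`.
  long n = static_cast<long>(std::strlen(text));  // placeholder length
  if (n > max_len) n = max_len;
  for (long i = 0; i < n; ++i) samples[i] = 0.0f;  // placeholder silence
  *out_len = n;
  return 0;
}
```

On the Ada side these could then be imported in the usual way with pragma Import (C, Tts_Say, "tts_say"), passing Interfaces.C types for the arguments.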

Hi Max,
Thx for exploring this route.
Can you give me more info and help me get started with Ada and ONNX, please?
Thx in advance.
Kind regards,
Kristof Bouckaert

Sure! Suppose you are using Linux, then to get started:

  • Download ONNX Runtime and unpack it into some folder.
  • Set LIBRARY_PATH and LD_LIBRARY_PATH to <folder>/lib
  • Fetch my ONNX Runtime binding
    git clone https://github.com/reznikmm/onnx_runtime.git
    
  • Build examples. Start with MNIST:
    cd onnx_runtime/examples
    alr build
    curl -LO https://github.com/microsoft/onnxruntime-inference-examples/raw/main/c_cxx/MNIST/mnist.onnx
    ./bin/mnist
    
    It will recognize a handwritten 7, hardcoded in the demo as an array, and print Result: 7.
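For intuition on where that printed digit comes from: an MNIST classifier returns ten scores, one per digit, and the prediction is the index of the largest score. A minimal sketch of that post-processing step, with made-up scores:

```cpp
#include <algorithm>

// Returns the index of the largest of `n` scores, i.e. the predicted digit.
int predicted_digit(const float* scores, int n) {
  return static_cast<int>(std::max_element(scores, scores + n) - scores);
}
```

For example, if index 7 holds the largest score, predicted_digit returns 7, which matches the demo's output.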

Don’t hesitate to ask any questions.