Toro kernel

A dedicated kernel for multi-threading applications.

Friday, April 20, 2018

New Fat driver for the Fat qemu interface

Hello folks! I have just committed the first version of a FAT driver. Together with the vfat interface of QEMU, this driver greatly eases sharing files between the guest and the host. The feature relies on QEMU's ability to present a directory of the host to the guest as a FAT partition. It is enabled by passing "-drive file=fat:rw:ToroFiles", where ToroFiles is the path of a directory on the host machine. By doing so, QEMU presents to the guest a new block device containing a FAT partition with the whole file structure of the ToroFiles directory. Depending on some flags, the partition can be either FAT32 or FAT16. From QEMU's source code, it seems FAT32 is not tested enough, so I decided to support FAT16 only. The main benefit of this mechanism is to ease sharing files between the guest and the host. The main drawback is that you should not modify the directory while the guest is running, because QEMU may get confused. For the moment, the driver allows only read operations; I expect to have write operations soon.
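As a sketch, this is how such an invocation could look from the command line. Only the -drive file=fat:rw:ToroFiles part comes from this post; the image name ToroFat.img and the memory size are hypothetical examples:

```shell
# Boot a (hypothetical) Toro image and expose the host directory ToroFiles
# to the guest as a second block device holding a FAT16 partition.
qemu-system-x86_64 -m 256 \
  -drive format=raw,file=ToroFat.img \
  -drive file=fat:rw:ToroFiles
```

Note that the fat: drive is read-write here (rw), but as said above, the in-guest driver currently only reads from it.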


Saturday, February 10, 2018

Docker image to compile Toro on Linux, Part II

In the first part of this post (here), I explained how to use a docker image to compile Toro. I have worked a bit on this procedure and modified it to drive the container from a script. To compile Toro this way, you first need to install docker and then follow these steps:
1. Pull the docker image from Dockerhub:
docker pull torokernel/ubuntu-for-toro
2. Clone torokernel git repo
3. Go to torokernel/examples and run:
./ ToroHello 
If everything goes well, you will get ToroHello.img in torokernel/examples. In addition, if you have KVM installed, you will get an instance of a Toro guest running ToroHello.


Monday, February 05, 2018

Docker image to compile Toro

Hello folks! I just created a docker image to compile the Toro kernel examples. You can find the image on Dockerhub. To try it, follow these steps:

1. Install docker. You can find a good tutorial online.

2. Once installed, in a command line run:

 docker pull torokernel/ubuntu-for-toro

3. Clone the ToroKernel repository, which will provide the code to be compiled:

git clone

and then change the current directory to ./torokernel

4. In a command line, run:

sudo docker run -it -v $(pwd):/home/torokernel torokernel/ubuntu-for-toro bash 

This command returns a bash shell in which the current directory, i.e., the torokernel directory, is mounted at /home/torokernel. So now we can just go to /home/torokernel/examples and run:

wine c:/lazarus/lazbuild.exe ToroHello.lpi

This will compile and build ToroHello.img. When we are done in the container, we can exit.
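The interactive session above can also be collapsed into a single non-interactive run. This is a sketch, assuming the same image and mount point as in the steps above:

```shell
# One-shot build: mount the repo, cd into examples inside the container,
# and invoke lazbuild directly instead of opening an interactive bash.
cd torokernel
sudo docker run -v $(pwd):/home/torokernel torokernel/ubuntu-for-toro \
  sh -c "cd /home/torokernel/examples && wine c:/lazarus/lazbuild.exe ToroHello.lpi"
```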


Thursday, January 25, 2018

Toro supports Virtio network drivers!

Hello folks! For the last three weeks, I have been working on adding support for virtio network drivers in Toro (see VirtIONet.pas). In a virtualised environment, virtio drivers have many benefits:
- they perform better than e1000 or other emulated network cards.
- they abstract away the hardware of the host, thus enabling the same driver to work on different hardware.
- they are a standard way to talk to network cards which is supported by many hypervisors like KVM, QEMU or VirtualBox.
The way virtio network cards work is quite simple. They are based on the notion of a virtqueue. In the case of networking, the cards mainly have two queues: the reception queue and the transmission queue. Roughly speaking, each queue has two rings of buffers: the available buffers and the used buffers. To provide a buffer to the device, the driver puts it in the available ring; the device then consumes it and puts it in the used ring. For example, in the case of the reception queue, the driver feeds the device by putting buffers in the available ring. Then, when a packet arrives, the device takes a buffer from the available ring, writes the content into it and puts it in the used ring. You can find plenty of documentation on the internet; however, I would recommend this post, which also presents the C code of a driver.

I think testing is the hardest part. I found different behaviours depending on where you are testing, e.g., KVM or QEMU. For example, in KVM, if you don't set model=virtio-net, the driver just does not work. To test, I basically have my own version of QEMU which prints all the logs straight to stdout. Also, Wireshark helps to trace the traffic and find duplicate packets or other kinds of misbehaviour. The good part: when you are done with one virtio network driver, you can easily use it as a template, because all virtio drivers are very similar. I have not had time yet to compare with e1000, but I am expecting good numbers :)

Cheers, Matias.