First commit, Vystem v0.1

2026-03-31 22:15:00 +02:00
commit e15daed8c0
462 changed files with 134655 additions and 0 deletions


@@ -0,0 +1,45 @@
# Kernel heap manager
## Introduction
To allocate small objects and medium-sized buffers (roughly between one and one hundred pages), the lower half of the address space has been designed to host the kernel heap. This is an early design, subject to change in coming updates for performance and/or robustness reasons. It's defined inside `shelter/lib/include/memory/heap.h` and implemented inside `shelter/lib/src/memory/heap.c`.
## Overview
The kernel heap manager is based on a dual allocation strategy:
- small objects, less than or equal to 1024 bytes, go into slab allocators
- medium buffers, greater than 1024 bytes, are allocated through the Pez allocator, which manages the physical and virtual planes
The heap metadata is stored inside the `sh_heap_KERNEL_HEAP` structure. The `alloc_size_tree` field is a radix tree mapping the virtual page offset of every page allocation to its size in pages. This allows quick retrieval of any page allocation on the heap.
## Slab allocators
These are described in the [generic slab allocator documentation](slabs.md#generic-slab-allocators). Each one is set up with a PBA covering 12 terabytes of virtual memory.
The following table describes the virtual page ranges of all allocators:
Object size range (bytes) | Allocator level | Virtual pages range start address
--- | --- | ---
1-8 | 0 | `0x0000200000000000`
9-16 | 1 | `0x00002C0000000000`
17-32 | 2 | `0x0000380000000000`
33-64 | 3 | `0x0000440000000000`
65-128 | 4 | `0x0000500000000000`
129-256 | 5 | `0x00005C0000000000`
257-512 | 6 | `0x0000680000000000`
513-1024 | 7 | `0x0000740000000000`
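As an illustrative sketch (not the kernel's actual code), the level selection implied by the table above can be expressed like this — level 0 covers objects of 1 to 8 bytes, and each further level doubles the object size:

```c
#include <assert.h>

/* Hypothetical helper: map an object size (1-1024 bytes) to the
 * generic slab allocator level from the table above.
 * Returns -1 for sizes outside the slab range. */
static int slab_level_for_size(unsigned size_bytes) {
    if (size_bytes == 0 || size_bytes > 1024)
        return -1;               /* out of slab range: page allocation path */
    int level = 0;
    unsigned capacity = 8;       /* level 0 objects are up to 8 bytes */
    while (size_bytes > capacity) {
        capacity <<= 1;          /* each level doubles the object size */
        level++;
    }
    return level;
}
```

The function name and signature are illustrative; only the size-to-level boundaries come from the table.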
## Pages allocations
If the requested amount is greater than 1024 bytes, pages are allocated on the heap. The number of pages allocated is the smallest amount that can satisfy the request. The pages are allocated inside a virtual page range starting at `0x0000100000000000` and covering 16 terabytes of virtual memory.
## API
The heap manager provides the following API, which should only be called through the abstraction provided by `sh_malloc`, except for the first three functions:
- `sh_heap_KERNEL_HEAP *sh_heap_get_default_heap()`: return a pointer to the kernel heap structure
- `void sh_heap_load_default_heap(sh_heap_KERNEL_HEAP *heap)`: set the pointer to the kernel heap structure
- `SH_STATUS sh_heap_init_heap(sh_pez_PHYSICAL_PLANE *phys_plane,sh_pez_VIRTUAL_PLANE *virt_plane,sh_page_PAGE_TABLE_POOL *kernel_ptp,sh_heap_KERNEL_HEAP *kernel_heap)`: initialize the heap structure; the caller needs to set up and place the generic slab allocator pointers manually inside the returned structure
- `SH_STATUS sh_heap_allocate_pages(sh_uint32 pages_count,sh_page_VIRTUAL_ADDRESS *address)`: allocate a given number of pages, provided by the Pez physical backend and mapped into the heap page allocation range
- `SH_STATUS sh_heap_free_pages(sh_page_VIRTUAL_ADDRESS va)`: free an allocated region of pages from the heap and return the pages to the Pez physical backend
- `SH_STATUS sh_heap_allocate_object(sh_uint32 size_bytes,sh_page_VIRTUAL_ADDRESS *address)`: allocate an object based on its size, automatically using the best-suited generic slab allocator. Refuses any size greater than 1024 bytes
- `SH_STATUS sh_heap_free_object(sh_page_VIRTUAL_ADDRESS va)`: free an object by providing its VA, automatically freeing it from the slab allocator it came from. Refuses any VA outside the object allocation range
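The dual strategy an `sh_malloc`-style wrapper would follow on top of this API can be sketched as below. This is an assumption-laden illustration, not the kernel's implementation; the names are hypothetical and only the 1024-byte threshold and the page rounding rule come from the text above.

```c
#include <assert.h>

#define SH_PAGE_SIZE 4096u

/* Hypothetical dispatch sketch: sizes up to 1024 bytes take the slab
 * path, larger requests are rounded up to the smallest whole number
 * of pages that satisfies them. */
typedef enum { ALLOC_SLAB, ALLOC_PAGES } alloc_path;

static alloc_path choose_path(unsigned size_bytes, unsigned *pages_out) {
    if (size_bytes <= 1024) {
        *pages_out = 0;              /* slab allocators handle this size */
        return ALLOC_SLAB;
    }
    /* smallest page count satisfying the request */
    *pages_out = (size_bytes + SH_PAGE_SIZE - 1) / SH_PAGE_SIZE;
    return ALLOC_PAGES;
}
```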


@@ -0,0 +1,16 @@
# Memory subsystem
## Introduction
The memory subsystem is responsible for handling tasks like physical page allocation, virtual memory management, initial memory map analysis, page mapping and unmapping, and kernel heap management.
## Summary
1) [Page subsystem](page.md)
2) [Virtual memory layout](vmemlayout.md)
3) [Ring buffer](ring.md)
4) [Pages block allocator](pba.md)
5) [Slabs allocator](slabs.md)
6) [Radix trees subsystem](radix.md)
7) [Pez plane manager](pez.md)
8) [Kernel heap manager](heap.md)


@@ -0,0 +1,90 @@
# Page subsystem
## Introduction
The Page subsystem is the part of the memory subsystem responsible for the initial memory map analysis, the physical pages bitmap, pages table pool management, page mapping and unmapping, memory allocations before Pez initialization, and memory statistics. It's defined inside `shelter/lib/include/memory/page.h` and implemented inside `shelter/lib/src/memory/page.c`.
## Macros and types
The Page header defines a few useful macros, such as:
- the start and end of the kernel big-allocations virtual range
- the size of a page
- the max memory count in bytes as well as the max physical page count
- the standard size of a pages table pool page range
- all the EFI memory types
- the null value of physical and virtual addresses
- the flags for page table entries
The Page header also defines the following types:
- `sh_page_MEMORY_TYPE`: a wrapper of `sh_uint32`
- `sh_page_PHYSICAL_ADDRESS` and `sh_page_VIRTUAL_ADDRESS`: wrappers of `sh_uint64`
## Structures
The Page header defines the following structures:
- `sh_page_MEMORY_MAP_ENTRY` and `sh_page_MEMORY_MAP_HEADER`: internal structures used for memory map analysis
- `sh_page_PAGE_TABLE_POOL`: structure for a pages table pool, containing the physical and virtual addresses of the pool as well as the bitmap for its internal allocator
- `sh_page_MEM_STATS`: a structure filled by a dedicated function to get statistics on physical memory using the physical pages bitmap
## Pages table pool
It's recommended to read [pages table pool documentation](../ptp.md) before reading this.
The Page subsystem provides the following functions regarding PTP management:
- `SH_STATUS sh_page_load_boot_ptp_va(sh_page_VIRTUAL_ADDRESS pt_pool_va)`: load the virtual address of the boot PTP provided by the bootloader. This VA is the address of the root PML4, not the address of the PTP structure. Should only be used once
- `sh_page_VIRTUAL_ADDRESS sh_page_get_boot_ptp_va()`: return the pointer to the root PML4 stored inside the boot PTP
- `SH_STATUS sh_page_init_ptp(sh_page_PHYSICAL_ADDRESS ptp_pa,sh_page_VIRTUAL_ADDRESS ptp_va,sh_uint64 initial_fill_level,sh_page_PAGE_TABLE_POOL *page_table_pool)`: initialize the boot PTP structure with the fill level of the PTP, provided by the bootloader. Doesn't allocate pages for a new PTP; it uses the pages already allocated for the boot PTP. Should only be used once, for the boot PTP
- `SH_STATUS sh_page_dump_ptp_bitmap(sh_page_PAGE_TABLE_POOL *ptp)`: dump the PTP bitmap to the log output, intended for debugging
- `sh_page_PHYSICAL_ADDRESS sh_page_ptp_alloc_one_page(sh_page_PAGE_TABLE_POOL *pt_pool)`: allocate one page from the PTP internal bitmap allocator
- `static inline sh_uint64 *sh_page_ptp_pa_to_va(sh_page_PAGE_TABLE_POOL *ptp,sh_uint64 pa)`: convert a PA from the PTP page range into a usable VA using the current PTP
- `SH_STATUS sh_page_ptp_va_to_pa(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_page_PHYSICAL_ADDRESS *pa)`: return the equivalent physical address by searching inside the provided PTP
## Memory map analysis
The Page subsystem is responsible for the initial memory map analysis. It provides the following functions:
- `SH_STATUS sh_page_copy_memory_map()`: copy the memory map from its original VA to a dedicated buffer
- `SH_STATUS sh_page_check_memory_map()`: check the memory map signature and detect eventual buffer overflows
- `SH_STATUS sh_page_analyse_memory_map(sh_page_PAGE_TABLE_POOL *ptp)`: analyse the memory map and initialize the physical pages bitmap by allocating it in a free area of the physical memory layout. This allows dynamic sizing of the physical pages bitmap.
All three functions should be run in order and only once.
## Physical pages bitmap
In order to ensure a reliable source of truth and to support early allocations, the Page subsystem manages a physical pages bitmap where one set bit means one occupied page.
The Page subsystem provides the following functions regarding the physical pages bitmap:
- `SH_STATUS sh_page_set_pages_range_bitmap(sh_uint8 *bitmap,sh_uint64 page_count_in_bitmap,sh_uint64 page_index,sh_uint64 page_count,sh_bool state)`: allows manipulation of any bitmap, but should only be used on the physical pages bitmap
- `static inline sh_bool sh_page_is_allocated(sh_uint8 *bitmap,sh_uint64 page_index)`: return the allocation status of a specific page inside the bitmap
- `sh_page_VIRTUAL_ADDRESS sh_page_get_physical_bitmap_ptr()`: return a pointer to the physical pages bitmap
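The one-bit-per-page convention can be sketched as follows. This is an illustration, not the kernel's code; in particular, the bit order within a byte (least significant bit first) is an assumption.

```c
#include <assert.h>

/* Hypothetical sketch of the bitmap convention described above:
 * one bit per physical page, a set bit meaning the page is occupied.
 * Bit order within a byte (LSB first) is an assumption. */
static void bitmap_set_page(unsigned char *bitmap, unsigned long page_index, int occupied) {
    if (occupied)
        bitmap[page_index / 8] |= (unsigned char)(1u << (page_index % 8));
    else
        bitmap[page_index / 8] &= (unsigned char)~(1u << (page_index % 8));
}

static int bitmap_is_allocated(const unsigned char *bitmap, unsigned long page_index) {
    return (bitmap[page_index / 8] >> (page_index % 8)) & 1;
}
```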
## Pages mapping and unmapping
The Page subsystem provides the following functions regarding page mapping:
- `SH_STATUS sh_page_map_one_page_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_page_PHYSICAL_ADDRESS pa,sh_uint64 flags)`: map one page inside the page tables contained in the provided PTP
- `SH_STATUS sh_page_is_va_mapped_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va)`: return `SH_STATUS_VA_NOT_MAPPED` or `SH_STATUS_VA_MAPPED`, which aren't errors, according to the page mapping state
- `SH_STATUS sh_page_is_va_range_mapped_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_uint64 size_bytes)`: return `SH_STATUS_VA_NOT_MAPPED`, `SH_STATUS_VA_FULLY_MAPPED` or `SH_STATUS_VA_PARTIALLY_MAPPED` according to the page range mapping state
- `SH_STATUS sh_page_map_contiguous_pages_range_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_page_PHYSICAL_ADDRESS pa,sh_uint64 flags,sh_uint64 size_bytes)`: map a page range that has to be contiguous both virtually and physically. VA availability is checked; physical page availability is not
- `SH_STATUS sh_page_unmap_one_page_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va)`: unmap one page from the page tables contained inside the provided PTP
- `SH_STATUS sh_page_unmap_contiguous_pages_range_ptp(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_uint64 size_bytes)`: unmap a page range from the page tables inside the provided PTP. Both physical and virtual ranges have to be contiguous. The virtual page range is checked; physical range occupancy is not
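Since the root of each PTP is a PML4, the mapping functions above walk the standard x86-64 4-level page tables. As a reminder (standard architecture behaviour, not Shelter-specific code), a virtual address splits into 9-bit indexes per level plus a 12-bit page offset:

```c
#include <assert.h>
#include <stdint.h>

/* Standard x86-64 4-level paging index extraction: 9 bits per level,
 * 12 bits of page offset. Helper names are illustrative. */
static unsigned pml4_index(uint64_t va) { return (unsigned)((va >> 39) & 0x1FF); }
static unsigned pdpt_index(uint64_t va) { return (unsigned)((va >> 30) & 0x1FF); }
static unsigned pd_index(uint64_t va)   { return (unsigned)((va >> 21) & 0x1FF); }
static unsigned pt_index(uint64_t va)   { return (unsigned)((va >> 12) & 0x1FF); }
```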
## Basic memory allocations
The Page subsystem implements basic memory allocation, available before Pez initialization, using a first-fit algorithm.
The Page subsystem provides the following functions regarding basic memory allocations:
- `sh_uint64 sh_page_get_one_page_na()`: return the index of the first available page inside the physical pages bitmap
- `SH_STATUS sh_page_search_available_va_range(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS range_base,sh_page_VIRTUAL_ADDRESS range_size_bytes,sh_uint64 size_bytes,sh_page_VIRTUAL_ADDRESS *address_found)`: search for a free region inside the PTP using the provided size and search area boundaries
- `SH_STATUS sh_page_search_physical_contiguous_block_na(sh_uint64 pages_needed,sh_page_PHYSICAL_ADDRESS *pa)`: search for a free region of the provided size inside the physical pages bitmap
- `SH_STATUS sh_page_alloc_contiguous(sh_page_PAGE_TABLE_POOL *ptp,sh_uint64 size_bytes,sh_page_VIRTUAL_ADDRESS* va)`: allocate the specified amount of pages and return the start VA of the allocated region
- `SH_STATUS sh_page_alloc_contiguous_extended(sh_page_PAGE_TABLE_POOL *ptp,sh_uint64 size_bytes,sh_page_VIRTUAL_ADDRESS* va,DEFAULT sh_uint64 flags,DEFAULT sh_page_VIRTUAL_ADDRESS va_range_start,DEFAULT sh_uint64 va_range_size_bytes)`: allow extensive allocation parameters for selecting the mapping flags and the VA search range
- `SH_STATUS sh_page_unalloc_one_page(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va)`: free one page. Checks beforehand that the page is mapped. The PA is found by searching the PTP
- `SH_STATUS sh_page_unalloc_contiguous(sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS va,sh_uint64 size_bytes)`: free a contiguous memory region. Checks beforehand that the entire region is allocated
This memory allocation role is handed over once Pez is initialized.
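A first-fit search over a page bitmap, as used by the functions above, can be sketched like this (illustrative code with assumed names, not the kernel's implementation):

```c
#include <assert.h>

/* Hypothetical first-fit sketch: scan a page bitmap (one bit per page,
 * set = occupied, LSB-first bit order assumed) for the first run of
 * `count` consecutive free pages. Returns the start page index, or -1
 * if no such run exists. */
static long first_fit(const unsigned char *bitmap, unsigned long total_pages, unsigned long count) {
    unsigned long run = 0;
    for (unsigned long i = 0; i < total_pages; i++) {
        int occupied = (bitmap[i / 8] >> (i % 8)) & 1;
        run = occupied ? 0 : run + 1;   /* an occupied page resets the run */
        if (run == count)
            return (long)(i - count + 1);
    }
    return -1;
}
```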
## Memory statistics
The Page subsystem provides the following functions regarding memory statistics:
- `sh_uint64 sh_page_get_physical_memory_amount_pages()`: return the amount of physical memory installed, in pages
- `sh_uint64 sh_page_get_physical_memory_amount_bytes()`: return the amount of physical memory installed, in bytes
- `SH_STATUS sh_page_get_memory_stats(sh_page_MEM_STATS *mem_stats)`: provide an extensive set of statistics on physical memory.
For a more human-readable output, the function `sh_log_mem_stats` can be used.


@@ -0,0 +1,19 @@
# Pages block allocator
## Introduction
The pages block allocator is a simple, configurable bump allocator designed to be the page source of the slab allocators inside the kernel. It's defined inside `shelter/lib/include/memory/pba.h` and implemented inside `shelter/lib/src/memory/pba.c`.
## Overview
A pages block allocator is configurable with various parameters.
First, it needs to know the size of each block it allocates. This is specified in the `block_pages` field of the PBA structure, of type `sh_uint64`.
The `start_va` field, a `sh_page_VIRTUAL_ADDRESS`, specifies the starting VA of the virtual page range dedicated to the allocator. It needs to be aligned on a `block_pages`-page boundary. The `total_pages` field, a `sh_uint64`, specifies the size of the dedicated virtual page range, in pages. It needs to be a multiple of `block_pages`.
Finally, the `block_count` field, a `sh_uint64`, stores the number of blocks already allocated. The `max_blocks` field, a `sh_uint64`, is computed at initialization and allows knowing instantly whether the PBA is full, which happens when the dedicated virtual page range is fully mapped.
All these fields are stored inside the `sh_pba_PAGE_BLOCK_ALLOCATOR` structure, which can be manipulated through the following functions:
- `SH_STATUS sh_pba_init(sh_pba_PAGE_BLOCK_ALLOCATOR *pba,sh_page_VIRTUAL_ADDRESS start_va,sh_uint64 area_pages_amount,sh_uint64 block_pages)`: initialize a PBA with the provided parameters, ensuring the constraints specified above are met
- `SH_STATUS sh_pba_alloc(sh_pba_PAGE_BLOCK_ALLOCATOR *pba,sh_page_PAGE_TABLE_POOL *ptp,sh_page_VIRTUAL_ADDRESS *ptr)`: allocate a block from the PBA. Takes care of the physical region search (using the Page subsystem, or Pez once it's initialized), the VA computation (using the provided virtual page range) and the page mapping, as well as the physical pages bitmap update when Pez isn't initialized.
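The bookkeeping described above can be sketched as follows. This is a minimal illustration of the bump logic only (no mapping, no physical backend); the field names mirror the documented structure but the functions are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Hypothetical sketch of the PBA bookkeeping: a bump allocator that
 * hands out fixed-size blocks from a dedicated virtual range and
 * never frees them. */
typedef struct {
    uint64_t start_va;     /* start of the dedicated virtual range */
    uint64_t total_pages;  /* size of the range, multiple of block_pages */
    uint64_t block_pages;  /* pages per block */
    uint64_t block_count;  /* blocks handed out so far */
    uint64_t max_blocks;   /* computed at init: total_pages / block_pages */
} pba_sketch;

static int pba_sketch_init(pba_sketch *p, uint64_t start_va, uint64_t total_pages, uint64_t block_pages) {
    uint64_t block_bytes = block_pages * PAGE_SIZE;
    /* enforce the documented constraints */
    if (block_pages == 0 || total_pages % block_pages != 0 || start_va % block_bytes != 0)
        return -1;
    p->start_va = start_va;
    p->total_pages = total_pages;
    p->block_pages = block_pages;
    p->block_count = 0;
    p->max_blocks = total_pages / block_pages;
    return 0;
}

static int pba_sketch_alloc(pba_sketch *p, uint64_t *va_out) {
    if (p->block_count == p->max_blocks)
        return -1;                 /* dedicated range fully used */
    *va_out = p->start_va + p->block_count * p->block_pages * PAGE_SIZE;
    p->block_count++;
    return 0;
}
```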


@@ -0,0 +1,86 @@
# Pez plane manager
## Introduction
Pez is the allocator that manages physical page allocation and virtual page range allocation. It's a custom design, (presumably) never seen before. It's defined inside `shelter/lib/include/memory/pez/pez.h` and implemented inside `shelter/lib/src/memory/pez/pez.c`.
## Planes
Pez manages memory using planes. A plane can be as granular as a 4KB page. Since Pez uses `sh_uint32` to represent page indexes and region sizes, a plane can technically manage up to 16 terabytes of memory.
In the implementation, Pez separates the physical plane from the virtual planes. They share the same logic and algorithms but use different sources of truth, and a virtual plane can apply an offset to each address it allocates.
The physical plane uses the physical pages bitmap maintained by the Page subsystem as its source of truth, while the source of truth for a virtual plane is the page mappings inside the virtual page range it manages. For the moment, only empty (not allocated at all) virtual planes can be created.
## Metadata
### Regions objects
The Pez allocator uses region objects allocated from dedicated (physical or virtual) slab allocators.
The physical version is:
``` C
#pragma pack(1)
typedef struct {
sh_uint32 start_page_index;
sh_uint32 region_size_pages;
sh_uint8 next_region_index[3];
sh_uint8 flags;
} sh_pez_REGION_PHYSICAL_OBJECT;
#pragma pack()
```
The virtual version is:
``` C
#pragma pack(1)
typedef struct {
sh_uint32 start_page_index;
sh_uint32 region_size_pages;
sh_uint32 next_region_index;
} sh_pez_REGION_VIRTUAL_OBJECT;
#pragma pack()
```
The `start_page_index` field is the index of the first page of the region, counted from the start of the plane. The `region_size_pages` field is the size of the region in pages. The `next_region_index` field is the index of the next region in the size list; it's one byte longer in the virtual region object to account for the 29-bit index. The `flags` field, while not used in this implementation, can store flags.
These region objects represent free regions that are ready to be allocated.
### Size lists radix tree
Each plane maintains lists of free regions keyed by size, called size lists. Regions are linked through the `next_region_index` field, using their object index inside their respective slab allocator. Each list only contains regions of exactly one size. To signal the end of a size list, the last element has its `next_region_index` field set to 0. This prevents the first slot of any slab inside the slab allocator from ever being allocated.
Each plane maintains a radix tree linking sizes of free regions (in page count) to the index of the first region in the corresponding size list. This radix tree is called the size lists radix tree and is 8 levels deep.
This architecture allows for two allocation paths that are strictly or nearly time-constant.
Each time the allocator needs to allocate a region, it searches for the requested size in the corresponding size list. There are two possible outcomes:
- if a free region of exactly this size is found, it's popped off the size list and allocated
- if not, the backtracking algorithm searches for the next free region larger than the requested size. That region is popped off its size list and split in two: the first part is allocated and the second part is inserted into the size list corresponding to its remaining size.
This allocation scheme allows for zero internal fragmentation at the granularity of 4KB pages.
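The split step above can be sketched with plain arithmetic on a region object (field names from the structures above; the helper itself is illustrative, not the kernel's code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the split step: a free region larger than
 * the request is cut in two. The first part is handed out; the
 * remainder keeps the tail of the original region and goes back to
 * the size list matching its new size. */
typedef struct {
    uint32_t start_page_index;
    uint32_t region_size_pages;
} region_sketch;

static region_sketch split_region(region_sketch *found, uint32_t requested_pages) {
    region_sketch allocated = { found->start_page_index, requested_pages };
    found->start_page_index += requested_pages;   /* remainder starts after the cut */
    found->region_size_pages -= requested_pages;  /* remainder shrinks accordingly */
    return allocated;
}
```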
### Boundary radix tree
Each plane maintains what we call a boundary radix tree. This radix tree maps the start and end page indexes, within the plane, of each free region to a boundary entry. A one-page free region has only one boundary. The two boundaries of a free region larger than one page are strictly identical.
A boundary is structured as follows:
- the lower 32 bits of the boundary are the region object index of the region the boundary belongs to
- the upper 32 bits of the boundary are the object index of the previous region, in the size list, relative to the region the boundary belongs to
This radix tree allows for two things:
- constant-time passive fusion of free regions with deallocated occupied regions, at each free of an occupied region: external fragmentation is repaired as soon as the allocated region that caused it is deallocated
- instant removal of any free region from any size list
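The boundary layout described above amounts to packing two 32-bit indexes into one 64-bit value (illustrative helpers; only the bit layout comes from the text):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the boundary entry layout: the region's own
 * object index in the lower 32 bits, the previous region's index in
 * the size list in the upper 32 bits. */
static uint64_t boundary_pack(uint32_t region_index, uint32_t prev_in_size_list) {
    return ((uint64_t)prev_in_size_list << 32) | region_index;
}

static uint32_t boundary_region(uint64_t boundary) { return (uint32_t)boundary; }
static uint32_t boundary_prev(uint64_t boundary)   { return (uint32_t)(boundary >> 32); }
```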
## API
The Pez allocator implementation inside the Shelter kernel provides the following functions:
- `void sh_pez_set_available()`: executed once to signal that Pez has taken over Page's role of allocating pages
- `sh_bool sh_pez_is_available()`: return Pez's current availability status
- `sh_pez_PHYSICAL_PLANE* sh_pez_get_reference_phys_plane()`: return the reference physical plane used to manage physical memory
- `SH_STATUS sh_pez_init_physical_plane(sh_uint8 *physical_bitmap,sh_uint64 physical_page_count,sh_slab_reg_phys_SLAB_ALLOCATOR *slab_reg_phys,struct sh_slab_radix_node_SLAB_ALLOCATOR *slab_radix_node,sh_page_PAGE_TABLE_POOL *kernel_ptp,sh_pez_PHYSICAL_PLANE *phys_plane)`: initialize the physical plane by scanning the physical pages bitmap
- `SH_STATUS sh_pez_alloc_physical_pages(sh_pez_PHYSICAL_PLANE *phys_plane,sh_uint32 pages_count,sh_page_PHYSICAL_ADDRESS *address)`: allocate a page range from the physical plane. Doesn't map the pages at all
- `SH_STATUS sh_pez_free_physical_pages(sh_pez_PHYSICAL_PLANE *phys_plane,sh_page_PHYSICAL_ADDRESS *address,sh_uint32 pages_count)`: free a page range and return it to the physical plane. Doesn't unmap the pages at all
- `SH_STATUS sh_pez_debug_physical(sh_pez_PHYSICAL_PLANE *phys_plane)`: print debugging information about the Pez physical plane, intended for debugging
- `SH_STATUS sh_pez_init_virtual_plane(sh_page_VIRTUAL_ADDRESS plane_offset,sh_slab_reg_virt_SLAB_ALLOCATOR *slab_reg_virt,struct sh_slab_radix_node_SLAB_ALLOCATOR *slab_radix_node,sh_page_PAGE_TABLE_POOL *kernel_ptp,sh_page_PAGE_TABLE_POOL *reference_ptp,sh_pez_VIRTUAL_PLANE *virt_plane)`: initialize an empty virtual plane
- `SH_STATUS sh_pez_alloc_virtual_pages(sh_pez_VIRTUAL_PLANE *virt_plane,sh_uint32 pages_count,sh_page_VIRTUAL_ADDRESS *address)`: allocate a virtual page range from the provided virtual plane
- `SH_STATUS sh_pez_free_virtual_pages(sh_pez_VIRTUAL_PLANE *virt_plane,sh_page_VIRTUAL_ADDRESS *address,sh_uint32 pages_count)`: free a virtual page range and return it to the provided virtual plane


@@ -0,0 +1,37 @@
# Radix trees subsystem
## Introduction
The memory subsystem requires capable radix trees to unleash the full power of the Pez allocator. They are defined inside `shelter/lib/include/memory/pez/radix.h` and implemented inside `shelter/lib/src/memory/pez/radix.c`.
## Overview
In Shelter, radix nodes are 128 bytes wide, storing 16 `sh_uint64` values. This means all radix trees have a fanout of 16: they filter 4 bits per level and require 16 levels for 64-bit keys and 8 levels for 32-bit keys.
A node is defined by this structure:
``` C
typedef struct {
sh_page_VIRTUAL_ADDRESS ptr[16];
} sh_radix_NODE;
```
A radix tree is defined by the following structure:
``` C
typedef struct {
sh_radix_NODE *root_node;
sh_uint8 depth;
} sh_radix_TREE;
```
The API for modifying nodes is as follows:
- `SH_STATUS sh_radix_node_read_value(sh_radix_NODE *node,sh_uint8 index,sh_page_VIRTUAL_ADDRESS* value)`: read the value stored in the node at the provided index
- `SH_STATUS sh_radix_node_set_value(struct sh_slab_radix_node_SLAB_ALLOCATOR *alloc,sh_radix_NODE *node,sh_uint8 index,sh_page_VIRTUAL_ADDRESS value)`: set the value inside the node at the provided index
The API for manipulating radix trees is as follows:
- `SH_STATUS sh_radix_tree_init(struct sh_slab_radix_node_SLAB_ALLOCATOR *alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_radix_TREE *tree,sh_uint8 depth)`: initialize a radix tree; fails if the depth is greater than 16. Allocates the root node
- `SH_STATUS sh_radix_tree_get_value(sh_radix_TREE *tree,sh_uint64 key,sh_page_VIRTUAL_ADDRESS *value)`: walk straight down the tree to retrieve the value associated with a key. Stops and returns `SH_STATUS_NOT_FOUND` as soon as it hits an empty pointer where there should be one
- `SH_STATUS sh_radix_tree_insert_value(struct sh_slab_radix_node_SLAB_ALLOCATOR *alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_radix_TREE *tree,sh_uint64 key,sh_page_VIRTUAL_ADDRESS value)`: insert a value associated with a key into the tree. Can allocate new nodes and overwrite a previously set value
- `SH_STATUS sh_radix_tree_delete_value(struct sh_slab_radix_node_SLAB_ALLOCATOR *alloc,sh_radix_TREE *tree,sh_uint64 key)`: delete a value and deallocate every node (including the leaf) along the path to the leaf that the deletion leaves empty
- `SH_STATUS sh_radix_tree_search_smallest_min_bound(struct sh_slab_radix_node_SLAB_ALLOCATOR *alloc,sh_radix_TREE *tree,sh_uint64 lower_bound_key,sh_page_VIRTUAL_ADDRESS *value)`: backtracking algorithm used to quickly find the value whose key is the smallest key equal to or greater than the provided key. Can't allocate new nodes
The radix trees subsystem uses the same slab allocator for all radix trees.


@@ -0,0 +1,11 @@
# Ring buffer
## Introduction
The memory subsystem provides a simple ring buffer API, mainly used by the log API. This implementation doesn't allow automatic structure initialization with allocation: the structure has to be created manually for it to work. This ring buffer implementation is defined inside `shelter/lib/include/memory/ring.h` and implemented inside `shelter/lib/src/memory/ring.c`.
## Overview
The main structure of a ring buffer is `sh_ring_RING_BUFFER_HEADER`. The functions provided by the API are deliberately very simple for the moment:
- `SH_STATUS sh_ring_write_byte(sh_ring_RING_BUFFER_HEADER *ring_buffer,sh_uint8 byte)`: write a byte into the provided ring buffer
- `SH_STATUS sh_ring_write_string(sh_ring_RING_BUFFER_HEADER *ring_buffer,char *string)`: write a null-terminated string into the provided ring buffer
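A minimal, self-contained sketch of such a write-only ring buffer follows. The structure name, fields and wrap behaviour are illustrative assumptions, not the layout of `sh_ring_RING_BUFFER_HEADER`:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical minimal ring buffer: a fixed-size byte buffer with a
 * wrapping write cursor, as a simple log sink would use. */
typedef struct {
    uint8_t *data;
    uint64_t capacity;
    uint64_t write_pos;   /* next byte to write, wraps at capacity */
} ring_sketch;

static void ring_write_byte(ring_sketch *rb, uint8_t byte) {
    rb->data[rb->write_pos] = byte;
    rb->write_pos = (rb->write_pos + 1) % rb->capacity;   /* oldest data is overwritten */
}

static void ring_write_string(ring_sketch *rb, const char *s) {
    while (*s)
        ring_write_byte(rb, (uint8_t)*s++);
}
```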


@@ -0,0 +1,91 @@
# Slabs allocator
## Introduction
The memory subsystem provides 4 independent but similar slab allocators dedicated to various purposes.
## Regions objects slabs allocator
These slab allocators are dedicated to the storage of region objects, which are 12 bytes in size. Two slab allocators store region objects:
- `slab_reg_phys`: the slab allocator for physical region objects. It uses 24-bit indexes, making it able to store up to 16 million objects. Defined in `shelter/lib/include/memory/slabs/slab_reg_phys.h` and implemented in `shelter/lib/src/memory/slabs/slab_reg_phys.c`
- `slab_reg_virt`: the slab allocator for virtual region objects. It uses 29-bit indexes, making it able to store up to 536 million objects. Defined in `shelter/lib/include/memory/slabs/slab_reg_virt.h` and implemented in `shelter/lib/src/memory/slabs/slab_reg_virt.c`
They use 3 pages per slab, storing 1024 objects per slab. The slab headers are stored outside the slabs, in a dedicated, pre-allocated area. Slabs are allocated and reused, but never deallocated.
The slab allocators use indexes to identify object slots. These indexes are what the user is given upon allocation and need to be provided back in order to free the slots. An index is composed of two parts:
- index within the slab: the 10 lower bits
- slab index: the 14 or 19 upper bits
The structures `sh_slab_reg_phys_SLAB_STRUCT` and `sh_slab_reg_virt_SLAB_STRUCT` define the headers of each slab. The structures `sh_slab_reg_phys_SLAB_ALLOCATOR` and `sh_slab_reg_virt_SLAB_ALLOCATOR` store the allocators' metadata. `sh_slab_reg_phys_OBJECT_INDEX` and `sh_slab_reg_virt_OBJECT_INDEX` are both wrappers of `sh_uint32`.
The only differences between the two are the number of slabs they can store and their dedicated virtual page ranges.
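The two-part index layout can be sketched as follows (helper names are illustrative; only the 10-bit slot / upper-bits slab split comes from the text above):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the object index layout: the 10 lower bits
 * locate the slot inside a slab (1024 slots per slab), the remaining
 * upper bits select the slab. */
static uint32_t make_index(uint32_t slab_index, uint32_t slot_in_slab) {
    return (slab_index << 10) | (slot_in_slab & 0x3FF);
}

static uint32_t index_slot(uint32_t index) { return index & 0x3FF; }
static uint32_t index_slab(uint32_t index) { return index >> 10; }
```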
The API for the physical region objects slab allocator is as follows:
- `SH_STATUS sh_slab_reg_phys_alloc_init(sh_slab_reg_phys_SLAB_ALLOCATOR* slab_alloc,sh_page_PAGE_TABLE_POOL *ptp)`: initialize the allocator and allocate all the headers, but don't add the first slab
- `SH_STATUS sh_slab_reg_phys_add_slab(sh_slab_reg_phys_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_reg_phys_SLAB_STRUCT** out_slab)`: add a new slab. Can allocate from the Page or Pez subsystem and does all the mapping itself
- `void* sh_slab_reg_phys_ref_to_ptr(sh_slab_reg_phys_SLAB_ALLOCATOR* alloc,sh_slab_reg_phys_OBJECT_INDEX ref)`: transform an index into a pointer to the referenced object
- `SH_STATUS sh_slab_reg_phys_alloc(sh_slab_reg_phys_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_reg_phys_OBJECT_INDEX* out_index)`: allocate a new object and return its index. Can allocate new slabs
- `SH_STATUS sh_slab_reg_phys_dealloc(sh_slab_reg_phys_SLAB_ALLOCATOR* alloc,sh_slab_reg_phys_OBJECT_INDEX index)`: deallocate an object using the provided index. Doesn't delete slabs left empty by object deallocation
The API for the virtual region objects slab allocator is as follows:
- `SH_STATUS sh_slab_reg_virt_alloc_init(sh_slab_reg_virt_SLAB_ALLOCATOR* slab_alloc,sh_page_PAGE_TABLE_POOL *ptp)`: initialize the allocator and allocate all the headers, but don't add the first slab
- `SH_STATUS sh_slab_reg_virt_add_slab(sh_slab_reg_virt_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_reg_virt_SLAB_STRUCT** out_slab)`: add a new slab. Can allocate from the Page or Pez subsystem and does all the mapping itself
- `void* sh_slab_reg_virt_ref_to_ptr(sh_slab_reg_virt_SLAB_ALLOCATOR* alloc,sh_slab_reg_virt_OBJECT_INDEX ref)`: transform an index into a pointer to the referenced object
- `SH_STATUS sh_slab_reg_virt_alloc(sh_slab_reg_virt_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_reg_virt_OBJECT_INDEX* out_index)`: allocate a new object and return its index. Can allocate new slabs
- `SH_STATUS sh_slab_reg_virt_dealloc(sh_slab_reg_virt_SLAB_ALLOCATOR* alloc,sh_slab_reg_virt_OBJECT_INDEX index)`: deallocate an object using the provided index. Doesn't delete slabs left empty by object deallocation
## PBA-based slab allocators
These slab allocators rely on a PBA to automatically allocate and map slab pages inside their dedicated virtual page ranges. They don't create their own PBA: while this responsibility falls on the user, it allows the user to set a personalized virtual page range, even though these allocators aren't intended to have multiple live instances at the same time.
These slab allocators also store the header of each slab inside the slab itself, making it possible to allocate slab headers on the fly. The slab header contains pointers that can form a linked list of partial slabs.
Finally, these slab allocators provide unlimited slabs, without any software limit. They reuse but don't deallocate slabs. They also use direct pointers instead of indexes.
### Radix node slab allocator
This allocator uses 32-page slabs and manages 128-byte objects. A slab nominally contains 1024 object slots, but due to the space taken by the header, each slab only holds 1006 objects.
Inside the slab header, the slab allocator also maintains a 16-bit extra value per radix node, which serves as a bitmap for the radix nodes.
The `sh_slab_radix_node_SLAB` structure describes the header of each slab as well as each object slot. The `sh_slab_radix_node_SLAB_ALLOCATOR` structure contains the allocator's metadata. The header is 256 bytes wide.
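The 1006 figure is consistent with the following back-of-the-envelope layout, assuming the 256-byte header plus the per-node 16-bit bitmap word are carved out of the slab before the 128-byte nodes (the exact in-slab layout is an assumption, not documented behaviour):

```c
#include <assert.h>

/* Hypothetical capacity check: a 32-page slab holds 131072 bytes;
 * each node costs 128 bytes of storage plus 2 bytes of bitmap in the
 * header area, after the fixed 256-byte header. */
static unsigned radix_nodes_per_slab(void) {
    unsigned slab_bytes = 32u * 4096u;
    unsigned header_bytes = 256u;
    return (slab_bytes - header_bytes) / (128u + 2u);
}
```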
The API for the radix nodes slab allocator is as follows:
- `SH_STATUS sh_slab_radix_node_alloc_init(struct sh_slab_radix_node_SLAB_ALLOCATOR* slab_alloc,sh_pba_PAGE_BLOCK_ALLOCATOR *pba)`: initialize the slab allocator; doesn't allocate any slab
- `SH_STATUS sh_slab_radix_node_add_slab(struct sh_slab_radix_node_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_radix_node_SLAB** out_slab)`: add a new slab, allocated from the PBA
- `SH_STATUS sh_slab_radix_node_alloc(struct sh_slab_radix_node_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_radix_NODE** out_obj)`: allocate a new object and return its pointer. Can allocate new slabs
- `SH_STATUS sh_slab_radix_node_dealloc(struct sh_slab_radix_node_SLAB_ALLOCATOR* alloc,sh_radix_NODE *object_ptr)`: deallocate an object using the provided pointer. Doesn't delete slabs left empty by object deallocation
- `sh_uint16 *sh_slab_radix_node_get_node_bitmap(struct sh_slab_radix_node_SLAB_ALLOCATOR* alloc,sh_radix_NODE *object_ptr)`: return a pointer to the bitmap of the node
It's defined inside `shelter/lib/include/memory/slabs/slab_radix_node.h` and implemented inside `shelter/lib/src/memory/slabs/slab_radix_node.c`.
### Generic slab allocators
These slab allocators are used to store objects allocated on the heap. The header is always 128 bytes. Each slab nominally contains 512 objects, but the actual number per slab depends on the level of the allocator:
Level | Object size in bytes | Amount of pages per slab | Amount of actual objects per slab
--- | --- | --- | ---
0 | 8 | 1 | 496
1 | 16 | 2 | 504
2 | 32 | 4 | 508
3 | 64 | 8 | 510
4 | 128 | 16 | 511
5 | 256 | 32 | 511
6 | 512 | 64 | 511
7 | 1024 | 128 | 511
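The last column of the table can be reconstructed by assuming that only the 128-byte header eats into the object area of each slab (a layout assumption, but one that reproduces every row):

```c
#include <assert.h>

#define GENERIC_SLAB_HEADER_BYTES 128u
#define PAGE_BYTES 4096u

/* Hypothetical reconstruction of the "actual objects per slab"
 * column: at most 512 objects, minus whatever the header displaces.
 * Level 0 uses 8-byte objects in 1 page; both double per level. */
static unsigned objects_per_slab(unsigned level) {
    unsigned object_bytes = 8u << level;
    unsigned slab_pages = 1u << level;
    unsigned usable = slab_pages * PAGE_BYTES - GENERIC_SLAB_HEADER_BYTES;
    unsigned count = usable / object_bytes;
    return count > 512 ? 512 : count;
}
```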
The `sh_slab_generic_SLAB` structure describes the header of each slab as well as each object slot. The `sh_slab_generic_SLAB_ALLOCATOR` structure contains the allocator's metadata.
The API for generic slab allocators is as follows:
- `SH_STATUS sh_slab_generic_alloc_init(sh_uint8 level,struct sh_slab_generic_SLAB_ALLOCATOR* slab_alloc,sh_pba_PAGE_BLOCK_ALLOCATOR *pba)`: initialize the slab allocator; doesn't allocate any slab
- `SH_STATUS sh_slab_generic_add_slab(struct sh_slab_generic_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,sh_slab_generic_SLAB** out_slab)`: add a new slab, allocated from the PBA
- `SH_STATUS sh_slab_generic_alloc(struct sh_slab_generic_SLAB_ALLOCATOR* alloc,sh_page_PAGE_TABLE_POOL *ptp,void** out_obj)`: allocate a new object and return its pointer. Can allocate new slabs
- `SH_STATUS sh_slab_generic_dealloc(struct sh_slab_generic_SLAB_ALLOCATOR* alloc,void *object_ptr)`: deallocate an object using the provided pointer. Doesn't delete slabs left empty by object deallocation
It's defined inside `shelter/lib/include/memory/slabs/slab_generic.h` and implemented inside `shelter/lib/src/memory/slabs/slab_generic.c`.
## Notes
The current implementation of all slab allocators uses a bitmap stored inside the headers to find the first available object slot. Testing a freelist approach is planned for the next update; if it proves conclusive, it will become the default approach.
It's also planned that all slab allocators will use PBAs in the next update. This switch will not change the index-based approach of the region objects slab allocators.


@@ -0,0 +1,9 @@
# Virtual memory layout
## Introduction
The virtual memory layout is a file, `shelter/lib/include/memory/vmem_layout.h`, defining which areas of virtual memory serve which purposes. It's automatically checked for overlaps by a Python script.
## Overview
Virtual memory regions are defined with macros. Any macro ending with `_VA` creates a new virtual region for the script. The script then looks for the size of this virtual region in another macro that starts with the same prefix and ends with `_SIZE_BYTES`. The rules are:
- if a macro ending with `_VA` doesn't have a corresponding macro ending with `_SIZE_BYTES`, the script triggers an error and the kernel compilation fails
- if a macro ending with `_SIZE_BYTES` doesn't have a corresponding macro ending with `_VA`, the script ignores it
- the start of each virtual region must be aligned to 4096 bytes and the size must be provided in bytes
- any overlapping virtual regions trigger a compilation error
- any macro that doesn't end with `_VA` or `_SIZE_BYTES`, or that doesn't follow the behaviour described above, is ignored
Consider this file the source of truth for everything related to static virtual regions.
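The naming convention the script checks can be illustrated with a hypothetical region (the region name and values below are examples, not macros from the actual file):

```c
#include <assert.h>

/* Hypothetical example of the `_VA` / `_SIZE_BYTES` pairing checked
 * by the script. `SH_EXAMPLE_REGION` is an illustrative name. */
#define SH_EXAMPLE_REGION_VA         0x0000700000000000ULL
#define SH_EXAMPLE_REGION_SIZE_BYTES 0x0000000000100000ULL  /* 1 MiB */
```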