# RFSoC4x2 Setup
You need to prepare an SD card from which the rfsoc4x2 can boot Linux. The compressed image is hosted by BYU here: https://casper.groups.et.byu.net/rfsoc4x2/rfsoc4x2_casper.img.tar.gz

The uncompressed image is 16 GB, so choose an SD card with enough capacity.

Uncompress the image with

$ tar -xzf rfsoc4x2_casper.img.tar.gz

Flash the SD card with the dd utility:

$ dd if=rfsoc4x2_casper.img of=/dev/<name of device> bs=32MB status=progress
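
To identify the correct device name before flashing, lsblk lists the block devices (the SD card typically appears as something like /dev/sdX or /dev/mmcblk0, but this varies by system):

```
lsblk
```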
!!! Warning: make sure /dev/... corresponds to your SD card; writing to the wrong device could break your system.

Insert the SD card into the rfsoc4x2 and set the adjacent boot-mode switch to 'SD' so the board boots from the SD card.

# Getting the IP address of the rfsoc4x2
If you already know the IP address, you can skip this step.

With a micro-USB cable, establish a serial connection to the rfsoc4x2.
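
For example, if the board's USB serial adapter shows up as /dev/ttyUSB0 (the device name and 115200 baud rate here are assumptions; check dmesg after plugging in the cable), the console can be opened with screen:

```
sudo screen /dev/ttyUSB0 115200
```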
Power on the board; the boot log should appear over the serial connection. After boot completes, log in with user 'casper' and password 'casper'. Run

$ ip addr

and get the IP address of the board from the output.

Power off the rfsoc4x2. Connect it to a network with an Ethernet cable and power it back on. Try to SSH into the rfsoc4x2 as casper@<ip address> with password 'casper'. If this works, you can move on to the next step.
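
For example:

```
ssh casper@<ip address>
```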
# Orin Board Setup and Installation
The following steps set up the Orin board with the proper drivers and packages.
## System Flashing
Using an Ubuntu 22.04 host machine, install the NVIDIA SDK Manager and flash the Orin board with the JetPack 6.0 SDK image. It is important to clarify that the flash must be done on a host machine natively running Ubuntu; we have tried VMs and Docker, but both are unable to complete the flash process. For what we are doing, only the CUDA drivers and runtime/utilities were installed, which is the minimal set of packages in the flash. Pay attention to which version is being installed, as later steps depend on it. The following websites were used as instructions for the SDK Manager, and most if not all steps follow them.

You will need a USB-C to USB-A cable to complete the flashing process. The instructions on how to set this up are also found in the provided links.

https://docs.nvidia.com/sdk-manager/index.html

https://developer.nvidia.com/embedded/learn/jetson-agx-orin-devkit-user-guide/two_ways_to_set_up_software.html#how-to-install-sdk-manager

NOTE: For flashing you will need to put the Orin board into recovery mode, which is also covered in the how-to-install link above.
## Post Flash
After the flash you will need to confirm that CUDA and its driver stack are loaded, up to date, and able to run. The current version ships CUDA 12.2 and its runtime, cudart. You can check this by downloading the CUDA samples from NVIDIA, or by going to where CUDA was installed, and running the deviceQuery executable included with the install.

NOTE: Make sure you download the cuda-samples repository that matches the CUDA version you are running. This is done by checking out the Git tag for the correct version.
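
As a sketch of this check (the repository URL and tag name are assumptions; pick the tag that matches your CUDA version):

```
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples
git checkout v12.2                      # tag matching CUDA 12.2
cd Samples/1_Utilities/deviceQuery
make
./deviceQuery                           # should finish with "Result = PASS"
```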
## Mellanox Drivers - MLNX OFED
For the Mellanox PCIe card you will need to install its drivers, specific to the aarch64 architecture and the Tegra kernel. All of the information found at https://gitlab.ras.byu.edu/alpaca/wiki/-/wikis/Unix-Networking under "Installing MLNX OFED" still applies, with some minor adjustments to the commands. The driver ISO file can be found at https://network.nvidia.com/products/infiniband-drivers/linux/mlnx_ofed/; for the university node you will need to install version 23.10-3.2.2.0 of the driver, which should appear in the drop-down menu. As for the change in commands, with the NIC card installed, run the following as sudo:

```
mount -o ro,loop <.iso> /mnt
cd /mnt/
./mlnxofedinstall --without-dkms --add-kernel-support --kernel <kernel version> --without-fw-update --force --enable-gds
```
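Here `<kernel version>` should match the kernel you are running, which you can get with:

```
uname -r
```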
If at any point this command fails, you will have to debug why it did not complete. It will most likely be due to conflicting system packages. The --force flag should resolve this, but if not, take caution when removing other packages, as CUDA might depend on some of them.

After the command states that it has successfully installed the drivers and the MLNX_OFED dependencies, reboot the system.
## Post Driver Install
After the reboot, check that everything started correctly and that the MLNX NIC is recognized along with its ports. This can be done by running the following command-line tools:

```
sudo dpkg -l | grep -i mlnx
sudo lsmod | grep mlx
sudo ibv_devinfo
```
These commands will check:

1. If the Mellanox packages were installed
2. If the kernel modules are loaded
3. If the InfiniBand library/devices are working

If the ibv_devinfo command returns no devices while the kernel modules are installed and running, you might need to reseat the NIC in the PCIe slot and reboot.
## Network and Boot Configuration
From this point the NIC should be addressable on a network. To set the device up for the university node and data streaming, several system services need to be installed and configured.
### 'network-config' service
This service configures the NIC for our specific needs. The configuration can be changed to fit other requirements when necessary; instructions for doing so, however, are not provided in this document.

To create the service, run the following:

```sudo touch /etc/systemd/system/network-config.service```

Then, with your favorite text editor, enter the following into the file you just created:

```
[Unit]
Description=Network Configuration Service
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/bash /usr/local/bin/network-config.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```
Next, create the network-config.sh script that will be run by the service:

`sudo touch /usr/local/bin/network-config.sh`

Again with your favorite text editor, write the following:

```
#!/bin/bash

# Static address and jumbo frames for the data NIC
ip address add 192.168.2.100/24 dev eth0
ip link set eth0 mtu 9000

# Reserve 32 x 1 GiB huge pages on NUMA node 0
echo 32 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

# Offload, ring-buffer, and interrupt-coalescing settings for high-rate UDP capture
ethtool -K eth0 tso off
ethtool -K eth0 gro on
ethtool -G eth0 rx 8192
ethtool -K eth0 lro on
ethtool -C eth0 rx-usecs 0 rx-frames 128
ethtool -K eth0 receive-hashing off
ethtool -K eth0 rx-udp-gro-forwarding on

# Raise the maximum socket write-buffer size to 64 MiB
sysctl -w net.core.wmem_max=67108864
```
Then finally run the following commands in order:

```
sudo chmod +x /usr/local/bin/network-config.sh
sudo systemctl daemon-reload
sudo systemctl enable network-config.service
sudo systemctl start network-config.service
```
After all of the above has been completed, the NIC will be configured, and it will be reconfigured automatically every time the system boots.
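
To confirm the service ran and the address took effect, the standard systemd and iproute2 checks suffice:

```
systemctl status network-config.service
ip addr show eth0
```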
### Boot configurations
For the data pipeline to run, we need isolated CPUs and HugeTLB pages for correct memory allocation, which requires changing the boot parameters of the system.

With your text editor, open the file `/boot/extlinux/extlinux.conf` and follow the instructions given in its comments for creating a backup.
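
In addition to the backup entry described in the comments, you may also want a plain copy of the original file (path taken from above):

```
sudo cp /boot/extlinux/extlinux.conf /boot/extlinux/extlinux.conf.backup
```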
Then create a new entry above the comments containing the following:

```
LABEL primary
MENU LABEL primary kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} root=PARTUUID=6d40c020-58ba-48ec-ac31-606f57963602 rw rootwait rootfstype=ext4 mminit_loglevel=4 console=ttyTCU0,115200 console=ttyAMA0,115200 firmware_class.path=/etc/firmware fbcon=map:0 net.ifnames=0 nospectre_bhb video=efifb:off console=tty0 nv-auto-config isolcpus=0 default_hugepagesz=1G hugepagesz=1G iommu=on
```
Save, then reboot the system. After the reboot, check whether the change took effect by looking at the kernel command line that was actually used at boot:

`cat /proc/cmdline`

If this does not return what was placed in the APPEND key above, then one of the previous steps was not done correctly. If it does match, this process is complete.
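
You can also confirm the parameters took effect; these are standard kernel interfaces, not project-specific:

```
grep Huge /proc/meminfo                  # Hugepagesize should read 1048576 kB
cat /sys/devices/system/cpu/isolated     # should list CPU 0
```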
## Installing Python and casperfpga
The Python module casperfpga will allow you to communicate with the rfsoc4x2, including programming the FPGA.

The following instructions explain how to set up a Python environment using Miniconda, a lightweight version of Anaconda. Before proceeding, download and install Miniconda on your system by running the installer script:

$ sh ./Miniconda3-latest-Linux-x86_64.sh
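
If you do not already have the installer, it can be downloaded from the Miniconda repository beforehand (URL assumed current; on the Orin itself you would want the aarch64 installer instead):

```
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
```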
We are going to create an environment that uses a specific version of Python. Do so by running

$ conda create -n casper python=3.8.20

Activate this environment by running

$ conda activate casper

Remember to activate the environment at the start of every new session.

We will now install casperfpga. The casperfpga repository has been included; navigate to its directory and run the following:

$ pip install -r requirements.txt

$ pip install .

casperfpga is now installed! Test this by opening an ipython session:

$ ipython

In the ipython session, import casperfpga:

import casperfpga

If no errors are thrown, then you installed casperfpga correctly.
# Programming the rfsoc4x2
This needs to be done every time the rfsoc4x2 boots up. We use casperfpga for the programming. Once again, enter an ipython session:

$ ipython

Create an object that connects to the board:

import casperfpga
fpga = casperfpga.CasperFpga('<ip address>')

Upload the provided .fpg file:

fpga.upload_to_ram_and_program('<.fpg path>')

Program the clocks:

rfdc = fpga.adcs['rfdc']
rfdc.init()
rfdc.progpll('lmk', 'rfsoc4x2_lmk_PL_125M_SYSREF_5M_LMXREF_250M.txt')
rfdc.progpll('lmx', 'rfsoc4x2_lmx_inputref_250M_outputref_250M.txt')

At this point the rfsoc4x2 is ready to go! The next section describes how to run the data acquisition pipeline on the Orin.
# Hashpipe
Hashpipe is the pipeline software running on the Orin that performs the data acquisition. For the university node, the shared libraries and executables are provided. The following section shows how to start Hashpipe for data acquisition.
### Starting Hashpipe
To run Hashpipe you must be superuser. You will also need to set the LD_LIBRARY_PATH environment variable so that the shared libraries required by the executable can be found.

As superuser, change to the bin/ directory and run the following on the command line:

```
export LD_LIBRARY_PATH=/<hashpipe_root_dir>/lib/aarch64-linux-gnu
```
To ensure that nothing goes wrong, it is recommended to always use an absolute path.

Once the environment variable is set you can run Hashpipe. The only command that needs to be run is the following:

```
./hashpipe -p libtest_rx_hashpipe.so -I 0 -o IBVPKTSZ=8298 -o IBVIFACE=eth0 -o IBVSNIFF=1 -o DATADIR=<path_for_data> -o FILENAM=<file_name>.bin -c 0 ibvpkt_thread -c 1 disk_async_t
```
The only things that should be changed in this command are the values inside '<>': DATADIR is the path where the data will be written, and FILENAM is the name of the file that will be written.
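
For example, with hypothetical values filled in (both paths are illustrative only):

```
./hashpipe -p libtest_rx_hashpipe.so -I 0 -o IBVPKTSZ=8298 -o IBVIFACE=eth0 -o IBVSNIFF=1 -o DATADIR=/data/captures -o FILENAM=test_capture.bin -c 0 ibvpkt_thread -c 1 disk_async_t
```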
### Monitoring Hashpipe
To see what Hashpipe is doing, it is useful to check specific control values that are updated by the pipeline, such as the data rate or the number of disk writes that have occurred. To monitor Hashpipe, open a separate terminal as superuser, change to the same bin/ directory with the LD_LIBRARY_PATH variable set, and enter the following command:

```
watch -n 1 "./hashpipe_check_status -I 0 -v | fold -w 80"
```
### Hashpipe Outputs
Hashpipe writes a binary file containing all of the ADC samples and packet metadata. The provided file parse_bits.m can be used to parse that binary file into time samples from each ADC; it also serves as a reference for how the network packets are formatted. Each "chunk", as described in parse_bits.m, consists of 1024 time samples from each ADC, and you can specify how many chunks are read at a time by changing the num_chunks variable.