[<< Home](/home#7-interfaces-between-alpaca-and-gbo-infrastructure-panel-charge-6)

[<< Section 7.2](./7.2)

## 7.3 Monitor and Control Integration

In order to be a user instrument, the ALPACA hardware and its corresponding
monitor and control software must integrate into the existing GBO facility
infrastructure. It is our understanding that the GBO Monitor and Control System
(M&C) is a modular, networked hierarchy of software "managers" that implement
the command and control of the hardware used for observations. This covers both
the hardware infrastructure of the GBT, such as dish steering, switching
receivers, and command to and status from receiver electronics, as well as the
configuration and coordination of the digital back end processors.

To satisfy this requirement, the ALPACA hardware will present itself over the
network as a server to a GBO-provided manager, which acts as the client that
initiates the communication to configure the operation of the hardware and to
request status information. The following high-level block diagram shows this
interface relationship:

<div align="center">
<img src="../img/dbe/sw/alpaca-gbo-sw-interface.png" width="700">
</div>

The red dashed line depicts the interface boundary between the ALPACA hardware
and the GBO M&C systems. The ALPACA front end cryostat and digital beamformer
back end will each have hardware with its own control server process. The GBO
M&C system will incorporate a new `ALPACA Manager`. Here it is depicted as a
container with two separate sub-managers, the `Beamformer Manager` and the
`FE (front end) Manager`. This distinction only illustrates that GBO M&C will
need to communicate with two separate processes; the functionality could
feasibly live in a single manager. That implementation detail is left to the
GBO software teams to decide what best fits their manager framework.

### 7.3.1 Digital Back End Controller

The ALPACA control servers will define a port used for communication. When a
connection request arrives on that port, the machine will start an instance of
the control server, which opens a TCP socket and begins listening for incoming
messages. The proposed message format is a fixed-length 4 byte (network order)
message header followed by the message body; the header gives the length in
bytes of the message body, which contains ASCII characters. The ALPACA
controller will react to a set of commands using a key/value configuration
scheme.
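
A minimal sketch of this framing in Python follows, with a JSON-formatted body
matching the examples in section 7.3.4. The helper names and the exact encoding
are illustrative assumptions, not a finalized specification:

```python
import json
import socket
import struct

def send_msg(sock: socket.socket, body: dict) -> None:
    # Encode the key/value command as ASCII and prefix it with the
    # fixed-length 4 byte (network order) header holding the body length.
    payload = json.dumps(body).encode("ascii")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> dict:
    # Read the 4 byte header, then exactly that many body bytes.
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return json.loads(recv_exactly(sock, length).decode("ascii"))

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    # TCP delivers a byte stream, so a single recv() may return a partial
    # message; loop until the full n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf
```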

The following state diagram shows an example of the underlying state machine
for the digital back end controller:

<div align="center">
<img src="../img/dbe/sw/alpaca-control-server-state-diagram.png" width="600">
</div>

When a network connection appears on the specified port, the server starts up
and initializes the control and processing environment (status checks, testing
and preparing communication with the RFSoC control server, etc.). When
complete, the server pushes back a ready response indicating that it is moving
to the `idle` state. In this state, the back end responds to any valid
`configure` message by preparing the processing pipeline and starting the
underlying observation mode. The response is the earliest available start time
at which the back end will be ready to begin acquiring data. The server then
moves to the `prepared` state and waits for a `start` command. Upon receiving
`start`, the server transitions to the `processing` state and begins to
monitor the status and progress of data moving through the acquisition
pipeline. After a period of time (either coordinated through a `scan_len`
parameter given as part of a `configure` message, or ended by a `done` message
from the manager) the server moves to the `cleanup` state, preparing the
pipeline either for the next scan or for a new configuration event.

At any time the server can be interrupted and sent back to the `idle` state
with a `cancel` command. Additionally, the control server will always respond
to a valid `info` message requesting status information. The server can be
terminated by sending a `configure` message indicating that it should go to
standby and close.
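
A minimal sketch of how the back end controller might encode these transitions
is shown below. The state names follow the diagram and the command names follow
the message examples in section 7.3.4; the dispatch structure, the `DONE`
handling, and the placeholder methods are illustrative assumptions:

```python
import time
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    PREPARED = auto()
    PROCESSING = auto()
    CLEANUP = auto()

class BackEndController:
    """Illustrative dispatcher for the state diagram above."""

    def __init__(self) -> None:
        self.state = State.IDLE

    def handle(self, msg: dict) -> dict:
        cmd = msg.get("CMD")
        if cmd == "INFO":
            # Valid in any state: report the requested status keys.
            return {"RESP": "OK", "keys": msg["keys"],
                    "values": self.lookup(msg["keys"])}
        if cmd == "CANCEL":
            # Valid at any time: interrupt and return to idle.
            self.state = State.IDLE
            return {"RESP": "OK"}
        if cmd == "CONF" and self.state == State.IDLE:
            # Start the underlying observation mode; reply with the
            # earliest time the back end could begin acquiring data.
            self.state = State.PREPARED
            return {"RESP": "OK", "start_time": self.earliest_start()}
        if cmd == "START" and self.state == State.PREPARED:
            self.state = State.PROCESSING
            return {"RESP": "OK"}
        if cmd == "DONE" and self.state == State.PROCESSING:
            # A scan may also end on its own after `scan_len` seconds.
            self.state = State.CLEANUP
            return {"RESP": "OK"}
        return {"RESP": "ERR", "reason": f"{cmd} not valid in {self.state.name}"}

    def lookup(self, keys: list) -> list:
        # Placeholder for collecting real status values.
        return [None] * len(keys)

    def earliest_start(self) -> float:
        # Placeholder: the real server computes the earliest achievable start.
        return time.time() + 5.0
```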

There are aspects of this design that are yet to be established, because
integrating into a host observatory's environment inherently requires
collaboration. As the GBO software teams receive approval to work with ALPACA
following the design review, further coordination and collaboration can proceed
to better understand the `Manager` communication framework and to define all
event-driven sequences, error handling and logging, and the complete set of
telescope metadata that is both available to the digital back end and required
in a typical user-instrument observation setting on the GBT.

### 7.3.2 Front End Controller

The primary function of the front end controller is to serve as a monitor point
for the cryostat hardware electronics. This server will work similarly to the
back end controller in that it will define a port for communication, open a TCP
socket, and listen for properly formatted ASCII messages. In this case, the
cryostat mainly needs to respond to `info` commands sent when the `FE Manager`
requests status information, such as a temperature sensor readout or the value
of an LNA current sensor; it will also accept an `update` command, for example
to adjust the bias voltage of an LNA.

<div align="center">
<img src="../img/dbe/sw/cryostat-sequence-diagram.png" width="300">
</div>
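
Under this scheme, exchanges with the front end controller might look like the
following, reusing the `send_msg`/`recv_msg` helpers sketched in section 7.3.1.
The host, port, monitor-point names, and `update` parameters are hypothetical
placeholders for values the detailed design will define:

```python
import socket

# Connect to the front end control server (placeholder host/port).
fe_sock = socket.create_connection(("alpaca-fe", 5000))

# Request cryostat monitor points such as a temperature sensor readout
# and an LNA current sensor value (key names are placeholders).
send_msg(fe_sock, {"CMD": "INFO", "keys": ["temp_stage1", "lna_042_drain_mA"]})
reply = recv_msg(fe_sock)

# Adjust the bias voltage of an LNA (parameter names are placeholders).
send_msg(fe_sock, {"CMD": "UPDATE", "lna": 42, "bias_mV": 600})
```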

### 7.3.3 Back End Development and BYU Site Integration

A lightweight engineering client will be implemented for controlling the
digital back end and front end during development and during the integration
tests to be done at BYU. In these scenarios the client will function similarly
to the GBO M&C `ALPACA Manager`, providing ALPACA-specific configuration
commands while supplying only simulated values for the metadata that would
accompany a real observation. This lightweight engineering client will be made
available to GBO software development staff for use as an engineering control
interface and as a model of ALPACA command functionality and protocols as they
develop their ALPACA Manager software as integrated components of the GBO M&C
infrastructure.
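
A session with this engineering client might reduce to a short script like the
one below, built on the framing helpers from section 7.3.1. The host and port
are placeholders, and the metadata values are simulated rather than supplied by
GBO M&C:

```python
import socket
import time

# Connect to the digital back end control server (placeholder host/port).
dbe = socket.create_connection(("alpaca-dbe", 5000))

# Configure a coarse channel scan; observation metadata (proj_id, obs, ...)
# is simulated rather than supplied by GBO M&C.
send_msg(dbe, {"CMD": "CONF", "mode": "COAR", "chan_sel": 25, "freq": 1510.0,
               "weight_file": "/home/alpaca/weights/beams.wx", "int_len": 0.1,
               "proj_id": "a1paca", "obs": "mcb"})
ready = recv_msg(dbe)  # earliest available start time

# Start a 30 second scan a few seconds in the future.
send_msg(dbe, {"CMD": "START", "scan_len": 30.0,
               "start_time": int(time.time()) + 10})
```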

### 7.3.4 Example Message Definition

As described in [Digital Back End Controller](#731-digital-back-end-controller),
communication with the back end will be over a TCP socket with the message
scheme `<msgHdr><msgBdy>`, where `msgHdr` is the fixed-size 4 byte (network
order) length of the data in `msgBdy`. After the design review the ALPACA and
GBO software teams will collaborate to define a complete specification that
includes all the required telescope metadata along with the ALPACA-required
parameters. The following are examples of the ASCII messages for different
commands:

Configure the back end to perform beamformer calibration:

```json
{"CMD": "CONF", "mode": "CAL", "pattern": "grid", "weight_file": "/home/alpaca/weights/beams.wx",
"alg": "max-snr", "int_len": 0.1, "antenna_fits": "/home/alpaca/antenna/antenna1.fits",
"freq": 1510.0, "proj_id": "a1paca", "obs": "mcb",
/*other calibration configuration parameters*/}
```

Configure the back end to start the coarse channel spectrometer mode:

```json
{"CMD": "CONF", "mode": "COAR", "chan_sel": 25, "freq": 1510.0, "weight_file":
"/home/alpaca/weights/beams.wx", "int_len": 0.1, "proj_id": "a1paca", "obs": "mcb",
/*other coarse mode configuration parameters*/}
```

Configure the back end to start the fine channel spectrometer mode:

```json
{"CMD": "CONF", "mode": "FINE", "chan_sel": 25, "freq": 1510.0, "weight_file":
"/home/alpaca/weights/beams.wx", "int_len": 0.1, "proj_id": "a1paca", "obs": "mcb",
/*other fine mode configuration parameters*/}
```

Query back end status and parameters,

```json
{"CMD": "INFO", "keys": ["freq", "atten", "adc_status"]}
```

with the following response:

```json
{"RESP": "OK", "keys": ["freq", "atten", "adc_status"], "values": [1510.0,
 {"tile0": 3, "tile1": 3, "tile2": 3, "tile3": 4},
 {"tile0": {"state": 15, "pll": 1}, "tile1": {"state": 15, "pll": 1},
  "tile2": {"state": 7, "pll": 0}, "tile3": {"state": 7, "pll": 0}}]}
```

Start a scan of the specified duration at the indicated time (seconds since
the Unix epoch, 1970-01-01 UTC):

```json
{"CMD": "START", "scan_len": 30.0, "start_time": 1630861102}
```

Abort the current scan:

```json
{"CMD": "CANCEL"}
```

Indicate that the server should go to standby and wait for a new client
connection:

```json
{"CMD": "CONF", "mode": "STDBY"}
```

### 7.3.5 F-engine Control

Command and control of the F-engine will be brokered by the digital beamformer
back end, with one of the GPU nodes dedicated as the main controller. This
controller will communicate with a server running on the Zynq UltraScale+
MPSoC A53 processor of the RFSoC. The `ALPACA Manager` will therefore not have
direct access to the RFSoCs. Rather, relevant user information will be passed
along or generated by the beamformer back end controller as part of commands
given by the manager.

The underlying control in many CASPER instruments has been done with the remote
client software `casperfpga` (historically `corr`), which wraps the transport
protocol `katcp` for communication with an instance of `tcpborphserver` running
on the platform's processor. The ALPACA team has [ported the RFSoC to the
CASPER tools][alpaca-casper-fork], including the use of `tcpborphserver` as the
control server, by developing the modifications and driver code needed for use
with the Zynq A53, the RFDC, and the fabric of the FPGA.

`tcpborphserver` is then capable of the required F-engine monitor and control
tasks, such as power-up initialization, programming PLLs, status checks on the
RFDC, performing MTS/MCS, arming the F-engine in preparation for a scan,
pipeline resets and diagnostics, etc. The following example shows how the
digital beamformer back end will control the ZCU216 to prepare the F-engine
for operation by programming the on-board PLLs and checking the status of the
RFDC.

```python
import casperfpga

alpaca1 = casperfpga.CasperFpga('192.168.2.101')
alpaca1.upload_to_ram_and_program('/home/mcb/alpaca/fengine.fpg')

# initialize rfdc driver
rfdc = alpaca1.adcs['rfdc']

# program the onboard PLLs, res contains `True` on success else `False`
res = rfdc.progpll('lmk', '250M_PL_125M_SYSREF_10M.txt')

# check status of the rfdc
rfdc_status = rfdc.status()

"""
with `rfdc_status` containing the following deserialized response showing nominal
operation of the RFDC

ADC0: Enabled 1, State: 15 PLL: 1
ADC1: Enabled 1, State: 15 PLL: 1
ADC2: Enabled 1, State: 15 PLL: 1
ADC3: Enabled 1, State: 15 PLL: 1
"""
```

[Section 7.4 >>](./7.4)

[alpaca-casper-fork]: https://gitlab.ras.byu.edu/alpaca/casper
[alpaca-rfsoc-mlib-devel]: https://gitlab.ras.byu.edu/alpaca/casper/mlib_devel/-/tree/rfsocs/devel
[alpaca-rfsoc-casperfpga]: https://gitlab.ras.byu.edu/alpaca/casper/casperfpga/-/tree/rfsocs/rfdc
[alpaca-rfsoc-katcp]: https://gitlab.ras.byu.edu/alpaca/casper/katcp/-/blob/rfsoc/rfdc/tcpborphserver3/rfsoc.c