1. 03 Jun, 2022 1 commit
    • change weight mechanics · 8afbb82a
      Mitch Burnett authored
      Previously, the user would point the rtbf context at a weight array
      and then call `update_weights`. That mechanism has been removed.
      
      Also, initialization of the beamformer previously left the device
      weight memory in an indeterminate state. Init now sets the weights to
      all ones [CMPLXF(1.0, 0)], and the user then calls `update_weights`
      to change them. This way the beamformer at least starts out in a
      usable state instead of being completely broken.
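      A minimal sketch of that all-ones default, staged on the host before
      the copy to device; the buffer name and length are illustrative, and
      cuComplex stands in for the CMPLXF literal:

      ```c
      #include <cuComplex.h>
      #include <stddef.h>

      /* Fill a host staging buffer with unit weights (the CMPLXF(1.0, 0)
       * default described above) so the device weight memory never starts
       * out indeterminate. */
      static void init_weights_ones(cuFloatComplex *weights, size_t nweights) {
          for (size_t i = 0; i < nweights; i++) {
              weights[i] = make_cuFloatComplex(1.0f, 0.0f);
          }
          /* ...then copy to the device, e.g. with cudaMemcpy. */
      }
      ```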
      
      The update mechanics now have `update_weights` accepting an array of
      float values to load onto the device. The idea is that the user
      creates some weights, calls `update_weights` to load them, and
      remains responsible for freeing that memory afterwards.
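      A sketch of that ownership pattern; the `update_weights` prototype
      here is an assumption, not the exact signature in the headers:

      ```c
      #include <stdlib.h>

      void update_weights(float *weights); /* hypothetical prototype */

      void load_user_weights(size_t nweights) {
          /* The caller allocates and fills the weight array... */
          float *weights = malloc(nweights * sizeof(float));
          for (size_t i = 0; i < nweights; i++) {
              weights[i] = 1.0f; /* user-computed values go here */
          }

          update_weights(weights); /* library loads these to the device */

          /* ...and the caller is still responsible for freeing it. */
          free(weights);
      }
      ```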
      
      Simplified and improved the logic in `update_weights` to combine the
      conjugate transpose into one pass instead of spreading it across
      multiple temporary arrays. Two versions are left here; a follow-up
      commit will remove the second, because I favor the first
      implementation as it is more descriptive in its memory access.
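      A sketch of that combined pass, assuming row-major weights with `in`
      shaped [nbeams][nelements] and `out` shaped [nelements][nbeams]; the
      names and layout are illustrative:

      ```c
      #include <cuComplex.h>

      /* Conjugate and transpose together in a single pass, with no
       * intermediate arrays. */
      static void conj_transpose(const cuFloatComplex *in, cuFloatComplex *out,
                                 int nbeams, int nelements) {
          for (int b = 0; b < nbeams; b++) {
              for (int e = 0; e < nelements; e++) {
                  out[e * nbeams + b] = cuConjf(in[b * nelements + e]);
              }
          }
      }
      ```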
  2. 02 Jun, 2022 2 commits
  3. 01 Jun, 2022 1 commit
  4. 31 May, 2022 1 commit
    • rename compile parameters, work on registering host memory · a26238a2
      Mitch Burnett authored
      Compile parameter names have changed to be more descriptive. Still
      working on some of the size parameters, and on a struct that holds
      the compiled configuration info.
      
      Started working on registering host memory, but stopped to get the
      parameters renamed and to have the compiled info available for
      computing sizes.
      
      This also has the start of detecting whether pinned host memory
      regions overlap, which seems to happen for small beamform sizes.
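      A sketch of what that detection could look like; the overlap test and
      names are illustrative, and only cudaHostRegister itself is a real
      API call:

      ```c
      #include <cuda_runtime.h>
      #include <stdio.h>

      /* Two host regions overlap iff each starts before the other ends. */
      static int regions_overlap(const void *a, size_t a_len,
                                 const void *b, size_t b_len) {
          const char *pa = (const char *)a, *pb = (const char *)b;
          return (pa < pb + b_len) && (pb < pa + a_len);
      }

      /* Pin an existing host allocation for fast DMA transfers. */
      static int pin_region(void *buf, size_t len) {
          cudaError_t err = cudaHostRegister(buf, len, cudaHostRegisterDefault);
          if (err != cudaSuccess) {
              fprintf(stderr, "cudaHostRegister: %s\n", cudaGetErrorString(err));
              return -1;
          }
          return 0;
      }
      ```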
  5. 29 May, 2022 3 commits
  6. 28 May, 2022 6 commits
  7. 27 May, 2022 5 commits
    • remove references to "transpose" kernel · dc981ff5
      Mitch Burnett authored
      Not sure why that implementation was still in the beamformer.
      
      That version assumes the data is ordered as received at the network
      (grouped by f-engine packet). cublasGemmBatched batches over
      frequency bins and uses the frequency bin as the slowest-moving
      dimension. The code is still useful elsewhere in the system, so it
      was copied out to a temporary file to be moved somewhere more
      permanent (not yet decided), but I could not figure out why it had
      been kept around here. The thought came up for looking at beamformed
      spectra data, but that data still needs to be beamformed first, which
      for this implementation means being grouped by batch.
      
      (Just a thought, would need to think about it.) Transposing the data
      in that specific way may not be necessary: as long as the data is
      contiguous, the pointer-to-pointer interface just needs to point to
      the start of each batch (frequency bin).
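      A sketch of that idea, assuming one contiguous device buffer laid out
      as [nbins][batch_len] complex samples; the names are illustrative:

      ```c
      #include <cuComplex.h>

      /* Build the pointer-to-pointer array for gemmBatched: one pointer
       * per frequency bin, each aimed at the start of that bin's batch
       * inside a single contiguous device allocation, so no transpose is
       * required. h_ptrs is then copied to the device-side array that
       * cublasCgemmBatched consumes. */
      static void make_batch_pointers(cuComplex *d_data, size_t batch_len,
                                      int nbins, cuComplex **h_ptrs) {
          for (int f = 0; f < nbins; f++) {
              h_ptrs[f] = d_data + (size_t)f * batch_len;
          }
      }
      ```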
    • look at transpose code · ef3dba92
      Mitch Burnett authored
      Looking over the transpose code, since it is still probably faster to
      transpose on the GPU than on the CPU, and we will continue to do it
      there.
      
      But because the input assumes network-ordered data, xgpu needs to
      have a transpose in it. The real goal would be a GPU pipeline
      implementation that gets data onto the device once and just passes it
      between kernels.
      
      Also, these transposes could probably be sped up with a LUT.
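      For reference, a minimal naive device transpose of the kind being
      discussed (a shared-memory tile, or the LUT idea above, would be
      faster); the dimensions and layout are assumptions:

      ```c
      #include <cuComplex.h>

      /* Naive element-wise transpose: out[j][i] = in[i][j]. */
      __global__ void transpose_naive(const cuComplex *in, cuComplex *out,
                                      int rows, int cols) {
          int i = blockIdx.y * blockDim.y + threadIdx.y; /* input row */
          int j = blockIdx.x * blockDim.x + threadIdx.x; /* input col */
          if (i < rows && j < cols) {
              out[(size_t)j * rows + i] = in[(size_t)i * cols + j];
          }
      }
      ```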
    • fix size cudaMalloc'd in pointer-to-pointer interface · c4b24461
      Mitch Burnett authored
      gemmBatched uses a pointer-to-pointer interface where each input
      array passed to gemmBatched is an array of the memory addresses of
      the start of each batch. The `d_arr_*` arrays are these interface
      arrays. They were being malloc'd with a size equal to the data sizes,
      but these arrays only hold the start address of each batch, so the
      allocation only needs to be batchCount pointers.
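      A sketch of the fix, with illustrative names:

      ```c
      #include <cuComplex.h>
      #include <cuda_runtime.h>

      /* The interface array holds one device pointer per batch, so it is
       * sized in pointers, not in data elements. */
      static void alloc_batch_pointer_array(cuComplex ***d_arr, int batchCount) {
          /* before (wrong): sized as if it held the batch data itself,
           * e.g. batchCount * batch_len * sizeof(cuComplex) */
          cudaMalloc((void **)d_arr, batchCount * sizeof(cuComplex *));
      }
      ```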
  8. 26 May, 2022 2 commits
  9. 23 May, 2022 2 commits
  10. 11 May, 2022 2 commits
  11. 09 May, 2022 5 commits
  12. 08 May, 2022 3 commits
    • new testbench and example of working with beamformer · c3fa924c
      Mitch Burnett authored
      The header downsizes things for a smaller example, and the beamformer
      adjusts the magic numbers that set up dimensions to support it.
      
      The init method now sets up the weights instead of them only being
      loaded from a file. This makes `cublas_main` unusable, but it sets
      things up to verify output and be able to start making adjustments.
      
      `testbeam` was meant to be `multibeam`. The new testbench sets up
      data using the time dimension as the scanning angle in order to plot
      beam patterns, so each time sample corresponds to one angle. The
      beamformer weights are for a ULA, and the chosen angles are split
      evenly across the number of beams. The same data is in every channel.
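      A sketch of generating ULA weights with the beam angles split evenly;
      the half-wavelength element spacing and all names here are
      assumptions:

      ```c
      #include <cuComplex.h>
      #include <math.h>

      /* One steering vector per beam, angles split evenly across
       * [-90, 90) degrees, assuming d/lambda = 0.5 element spacing. */
      static void ula_weights(cuFloatComplex *w, int nbeams, int nelements) {
          for (int b = 0; b < nbeams; b++) {
              double theta = (-90.0 + 180.0 * b / nbeams) * M_PI / 180.0;
              for (int e = 0; e < nelements; e++) {
                  /* phase = 2*pi*(d/lambda)*e*sin(theta) */
                  double phase = M_PI * e * sin(theta);
                  w[b * nelements + e] =
                      make_cuFloatComplex((float)cos(phase), (float)-sin(phase));
              }
          }
      }
      ```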
    • fix for compile with cuda11 · 778e49c7
      Mitch Burnett authored
      Couldn't compile for CUDA 11 using complex.h; needed to move to
      cuComplex.
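      Illustrative before/after for the complex.h to cuComplex move; these
      call sites are examples, not lines from this diff:

      ```c
      #include <cuComplex.h>

      static cuFloatComplex demo(void) {
          /* was: float complex w = CMPLXF(1.0f, 0.0f); */
          cuFloatComplex w = make_cuFloatComplex(1.0f, 0.0f);
          /* was: conjf(w) */
          cuFloatComplex wc = cuConjf(w);
          /* was: w * conjf(w) */
          return cuCmulf(w, wc);
      }
      ```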
      
      change formatting of code
      
      Started to adjust the header definitions for alpaca, but ended up
      mostly with TODOs and notes, trying to understand what to do in
      moving from flag to onr to alpaca and how to be more
      flexible/standalone.
    • initial commit bringing over flag gpu beamformer · 996a6783
      Mitch Burnett authored
      The flag beamformer library has always been part of a larger project;
      this strips it out into a separate repo and removes much of the other
      baggage.
      
      Additionally, a lot of changes had been made to adjust flag to onr,
      but we now need the beamformer to look more like flag again, and
      there was no tag or real commit to revert back to.