Project Specifics Quantified

By Christian Brown and Mustafa Momin

Because I can define my own data types, I can input, process, and output data of any bit length. Furthermore, I am assuming that all values are positive and unsigned, so they can be zero-padded to any given length and then processed in the FPGA according to whatever processing is requested. For now, you can assume I am using 16-bit unsigned integers for all data types. I would prefer to keep the data at 12 bits, which saves memory and avoids unnecessary multiplications and additions on the extra 4 bits of padding; if the padding must exist, it will be an easy fix. I will define a single data-type constant that sets the data length, so that the data width and data stride automatically adjust throughout the entire program simply by changing that one value.

Explanation of Block Diagram Blocks

Hyperspectral imager block:

Each pixel requires 15 FLOPS per channel, with a possible 2x for sensor angular field-of-view correction, another possible 2x, and the addition of up to 67 FLOPS per pixel per channel [1] from Resonon, so we will say the hyperspectral imager takes 15-60 FLOPS per pixel per channel. Assuming we are using the Pika NIR [2], we have 145 spectral channels, which comes to 2175-8700 FLOPS per pixel; with 320 spatial channels, this comes to 0.696-2.784 MFLOPS per frame. Upon further analysis, we realized that all of these corrections will now be done in the FPGA fabric, so we will go with a value of 696 kFLOPS.

Store Data from ARM Processor to Cyclone V FPGA: 9.28 MOPS = 100 fps * 320 spatial channels * 145 spectral channels * 2 instructions (load and store)

Process Data Vectors in FPGA: impossible to calculate (the number of operations to be done on each pixel is unknown)

Output Processed Data from FPGA to ARM Processor: 4.64 MOPS = 100 fps * 320 spatial channels * 145 spectral channels * 1 instruction (load, synchronized with the ARM processor)

ARM Processor to Flash Memory (DRAM): 0.29 MOPS = 100 fps * 320 spatial channels * 145 spectral channels / 16 (there are four 64-bit read/write ports, so sixteen 16-bit values can be transferred at a time)

ARM Processor to HDMI Output: 74.24 MOPS = 100 fps * 320 spatial channels * 145 spectral channels * 16-bit data (12 bits padded to 16 bits, and HDMI has a serial interface)

Data movement

Hyperspectral Imager to ARM Processor: 55.68 Mbits/sec = 100 fps * 320 spatial channels * 145 spectral channels * 12-bit depth

ARM Processor to FPGA: 55.68 Mbits/sec = 100 fps * 320 spatial channels * 145 spectral channels * 12-bit depth

ARM Processor to DRAM: 74.24 Mbits/sec = 100 fps * 320 spatial channels * 145 spectral channels * 16-bit unsigned integers (12-bit depth padded to 16 bits)

FPGA to ARM Processor: 55.68 Mbits/sec = 100 fps * 320 spatial channels * 145 spectral channels * 12-bit depth

ARM Processor to HDMI Output: 74.24 Mbits/sec = 100 fps * 320 spatial channels * 145 spectral channels * 16-bit unsigned integers (12-bit depth padded to 16 bits); HDMI can support up to 340 Mbytes/sec.

[1] http://www.ll.mit.edu/publications/journal/pdf/vol14_no1/14_1compensation.pdf
[2] http://www.resonon.com/products_imagers_main.html