FPGA Compilation Fails When Using Deep BlockRAM DMA FIFOs in LabVIEW 2016

Updated Jan 16, 2018

Reported In


  • LabVIEW 2016 Full
  • LabVIEW 2016 FPGA Module
  • LabVIEW 2016 Professional
  • LabVIEW 2016 Base

Issue Details

I have an existing FPGA application that I upgraded to LabVIEW 2016, or a new FPGA application created in LabVIEW 2016. The FPGA code uses "deep" (very large) DMA FIFOs and fails compilation in LabVIEW 2016 because it does not meet timing constraints. Why does this happen in LabVIEW 2016, and how can I ensure my FPGA code compiles successfully?


Solution

This is caused by a change in LabVIEW 2016 to the default Xilinx compilation directives that LabVIEW applies; specifically, the BlockRAM (BRAM) power optimization directive.

To resolve this issue:
  1. Right-click on the FPGA Build Specification and choose Properties.
  2. Select the Xilinx Options category from the properties window.
  3. In the Implementation Strategy drop-down, choose the Custom option.
  4. Set the Design optimization directive to Disable BRAM power optimization. An example configuration can be seen in the screenshot below.
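For reference, LabVIEW's Custom implementation strategy maps these settings onto Xilinx Vivado implementation directives. A minimal sketch of the roughly equivalent Vivado Tcl, assuming a Vivado-based FPGA target and Vivado's default implementation run name (`impl_1`), might look like:

```tcl
# Hypothetical Vivado Tcl fragment (project mode): disable BRAM power
# optimization during the opt_design step, which is what the LabVIEW
# "Design optimization" setting controls. The run name impl_1 is the
# Vivado default and may differ in a given project.
set_property STEPS.OPT_DESIGN.ARGS.DIRECTIVE NoBramPowerOpt [get_runs impl_1]
```

This is only an illustration of what the build specification configures on your behalf; in LabVIEW FPGA the setting should be made through the build specification properties as described above rather than by editing the Vivado project directly.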

Additional Information

In past versions of LabVIEW FPGA, this directive was applied to all LabVIEW FPGA compilations by default, disabling the power optimization. As of LabVIEW 2016, the directive was removed, and the Xilinx default of enabling BRAM power optimization is used instead. In certain cases, such as deep FIFOs, the enable-gating logic that this power optimization adds to the FIFO's BlockRAM can cause a timing violation. Because power optimization was disabled in past versions, this change can cause previously working code to fail compilation after upgrading to LabVIEW FPGA 2016.
