What best practices should be observed when implementing HDL code?
What are the commonalities and differences when compared to more common software development fields?
The most popular hardware description languages are Verilog and VHDL. They are widely used in conjunction with FPGAs, programmable devices designed for implementing customized digital circuits.
VHDL is more verbose than Verilog, and it also has a non-C-like syntax. With VHDL you will usually end up writing more lines of code for the same design. On the other hand, VHDL can feel more natural to use at times; code written in it can seem to flow better.
Verilog, standardized as IEEE 1364, is a hardware description language (HDL) used to model electronic systems. It is most commonly used in the design and verification of digital circuits at the register-transfer level of abstraction.
What is VHDL? VHDL (VHSIC Hardware Description Language, where VHSIC stands for Very High Speed Integrated Circuit) is a language used to describe hardware. It is used in electronic design automation to express digital and mixed-signal systems such as ICs (integrated circuits) and FPGAs (field-programmable gate arrays).
Sort of an old thread, but wanted to put in my $0.02. This isn't really specific to Verilog/VHDL.. more on hardware design in general... specifically synthesizable design for custom ASICs.
This is my opinion, based on years of industry (as opposed to academic) design experience. The points are in no particular order.
My umbrella statement is: design for validation execution. In hardware design, validation is paramount. Bugs are far more expensive when found in actual silicon; you can't just re-compile. Therefore, pre-silicon validation is given much more focus.
Know the difference between control paths and data paths. This enables you to create much more elegant and maintainable code, and it also lets you save gates and minimize X propagation. For instance, data paths should never need resettable flops, while control paths always should.
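As a minimal Verilog sketch of that split (module and signal names are invented for illustration), the control-path state gets a reset so it always comes up in a known state, while the data-path register does not:

```verilog
// Hypothetical fragment illustrating the control-path / data-path split.
module ctrl_vs_data (
    input  wire        clk,
    input  wire        rst_n,
    input  wire        start,        // control
    input  wire [31:0] din,          // data
    output reg         busy,         // control-path output: reset required
    output reg  [31:0] dout          // data-path output: no reset needed
);
    // Control path: resettable flop so the handshake state is known at power-on.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            busy <= 1'b0;
        else
            busy <= start;
    end

    // Data path: plain flop. Its contents are qualified by the control path,
    // so an explicit reset would only cost gates and reset-tree routing.
    always @(posedge clk) begin
        dout <= din + 32'd1;
    end
endmodule
```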
Prove functionality before validation, either through a formal approach or through waveforms. This has many advantages; I will explain two. First, it will save you wasted time onion-peeling through issues. Unlike lots of application-level design (especially while learning) and most course work, the turn-around time for code changes is very large (anywhere from 10 minutes to days, depending on complexity). Every time you change code, you need to go through elaboration, lint checking, compiling, waveform bring-up, and finally actual simulation, which can itself take hours. Second, you are much less likely to have difficult-to-hit corner cases. Note this is with respect to pre-silicon validation: these will surely hit in post-silicon, costing you lots of $$$. Trust me, the up-front cost of proving functionality greatly minimizes risk and is well worth the effort. This is sometimes difficult to convince recent college grads of.
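One concrete form the formal approach can take is writing the intended behavior down as SystemVerilog properties before diving into full simulation. A hedged sketch, using a made-up request/acknowledge protocol:

```systemverilog
// Hypothetical request/acknowledge properties; in a real flow these would be
// bound to the design and proven formally or checked during simulation.
module req_ack_props (
    input wire clk,
    input wire rst_n,
    input wire req,
    input wire ack
);
    // Every request is acknowledged within 4 cycles.
    assert property (@(posedge clk) disable iff (!rst_n) req |-> ##[1:4] ack)
        else $error("req was not acknowledged within 4 cycles");

    // An acknowledge never shows up without a pending request.
    assert property (@(posedge clk) disable iff (!rst_n) ack |-> req)
        else $error("ack asserted without a pending request");
endmodule
```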
Have "chicken bits". Chicken bits are bits in MMIO set via the driver to disable a feature in silicon. It's intended to revert changes made in which confidence is not high (confidence is directly proportional to validation efforts). It is next to impossible to hit every possible state in pre-silicon. Confidence on your design cannot truly be met until it's proven in post-silicon. Even if there is only 1 state that is hit 0.000005% of the time that exposes the bug, it WILL HIT in post-silicon, but not necessarily in pre-silicon.
Avoid exceptions in the control path at all costs. Every new exception you have doubles your validation effort. This one is hard to explain. Let's say there is a DMA block that saves out data to memory for another block to use, and the data structure saved out depends on which function is being performed. If you design it so that the data structure differs between functions, you have just multiplied your validation effort by the number of DMA functions. If this rule is followed, the data structure saved out is instead a superset of all data available for every function, with the content locations hardcoded. Once the DMA save logic is validated for one function, it's validated for all functions.
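A hypothetical sketch of the superset idea: every DMA function writes the same fixed-layout record, fields a function doesn't produce are simply zeroed, and field positions never move, so the save logic has exactly one shape to validate.

```verilog
// Hypothetical superset descriptor: one fixed layout for every DMA function.
module dma_record_pack (
    input  wire [15:0] status,     // produced by all functions
    input  wire [15:0] length,     // produced by all functions
    input  wire [31:0] crc,        // only functions that compute a CRC
    input  wire [31:0] timestamp,  // only functions that log time
    input  wire        has_crc,
    input  wire        has_tstamp,
    output wire [95:0] dma_record
);
    // Field positions are hardcoded; fields a function does not produce are zeroed.
    assign dma_record[15:0]  = status;
    assign dma_record[31:16] = length;
    assign dma_record[63:32] = has_crc    ? crc       : 32'd0;
    assign dma_record[95:64] = has_tstamp ? timestamp : 32'd0;
endmodule
```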
Minimize interfaces (read: minimize control paths). This is related to minimizing exceptions. First, every new interface requires validation: new checkers/trackers, assertions, coverage points, and bus functional models in your testbench. Second, it can increase your validation effort exponentially! Let's say you have one interface for reading data from caches. Now let's say (for some odd reason) you decide you want another interface for reading main memory. You just quadrupled your validation effort: at any given time n you now need to validate no read, a cache read only, a memory read only, and a cache read and memory read at the same time.
Understand and communicate assumptions. Lacking this is the main reason for block-to-block communication issues. You could have a perfect, fully validated block; however, without all assumptions understood and communicated, your block will fail when it's connected.
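One way to make such assumptions explicit and machine-checkable is to write them into the RTL as SVA assumptions, so a formal tool constrains its inputs with them and the neighboring block's owner can read them. A hypothetical example (signal names invented):

```systemverilog
// Hypothetical interface assumption, documented in the RTL itself:
// the upstream block never issues a new request while this block is stalled.
module upstream_assumptions (
    input wire clk,
    input wire rst_n,
    input wire stall,
    input wire req_valid
);
    assume property (@(posedge clk) disable iff (!rst_n) stall |-> !req_valid)
        else $error("upstream violated the no-request-while-stalled assumption");
endmodule
```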
Minimize potential states. The fewer states (intended or unintended) a design has, the less effort is required to validate it. It's good practice to group like functions into one top-level function (like sequencers and arbiters). It is very difficult to identify and define this high-level function such that it encompasses as many smaller functions as possible, but in doing so you vastly reduce state and, in turn, the potential for bugs.
Always provide a strong signal leaving your block. Most of the time flopping it is the solution. You have no idea what the endpoint block(s) will do with it. You could run into timing issues which can have a direct impact on your perfect implementation.
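In practice, "flopping it" just means the last thing a signal sees before leaving the block is a register, so the downstream block starts its cycle from a clean timing point. A minimal sketch with invented names:

```verilog
// Hypothetical output stage: the result is registered before leaving the block,
// so the signal driven off-block starts the cycle at a flop output (a "strong"
// timing point) instead of at the end of a long combinational cone.
module result_stage (
    input  wire        clk,
    input  wire [31:0] a,
    input  wire [31:0] b,
    output reg  [31:0] result_out   // registered output
);
    always @(posedge clk) begin
        result_out <= a + b;        // combinational work stays inside the block
    end
endmodule
```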
Avoid Mealy-type FSMs unless performance would otherwise be negatively impacted. Mealy FSMs are more likely to produce timing issues than Moore FSMs.
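The timing difference is easiest to see side by side: a Moore output is a function of state only, while a Mealy output also folds in the current inputs, so any late-arriving input lands directly in the downstream timing path. A hedged sketch:

```verilog
// Hypothetical FSM fragment contrasting Moore and Mealy outputs.
module busy_fsm (
    input  wire clk,
    input  wire rst_n,
    input  wire start,
    input  wire done,
    output wire busy_moore,   // Moore: function of state only
    output wire busy_mealy    // Mealy: function of state AND current inputs
);
    localparam IDLE = 1'b0, RUN = 1'b1;
    reg state;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)                      state <= IDLE;
        else if (state == IDLE && start) state <= RUN;
        else if (state == RUN  && done)  state <= IDLE;
    end

    // Moore output: changes only with the state register, easy to meet timing on.
    assign busy_moore = (state == RUN);

    // Mealy output: reacts one cycle earlier, but a late-arriving 'start'
    // now sits directly in the path of every downstream consumer.
    assign busy_mealy = (state == RUN) || (state == IDLE && start);
endmodule
```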
.. and finally the one I dislike the most: "if it ain't broke, don't fix it." Because of the risk involved and the high cost of bugs, hacking around a problem is often the more practical solution. Others have alluded to this by mentioning the reuse of existing components.
As for comparing against more traditional software design:
Discrete event-driven programming is a completely different paradigm. People see Verilog syntax and think "oh, it's just like C"; however, this could not be further from the truth. Although the syntax is similar, one must think differently. For example, a traditional debugger is virtually meaningless on synthesizable RTL (testbench design is different); waveforms on paper are the best tool available. That being said, FSM design can at times mimic procedural programming, and people with a software background tend to go crazy with FSMs (I know I did at first).
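A small illustration of the "looks like C, isn't C" trap: the two always blocks below are concurrent hardware, and the nonblocking assignments sample both right-hand sides before either register updates, so the values swap every cycle, which is not what a sequential C-style reading would suggest. (Names are illustrative.)

```verilog
// Hypothetical fragment: both blocks are concurrent hardware, not sequential code.
module swap_regs (
    input  wire       clk,
    output reg  [7:0] a,
    output reg  [7:0] b
);
    // Both right-hand sides are sampled at the clock edge, then both
    // left-hand sides update: a and b exchange values every cycle.
    always @(posedge clk) a <= b;
    always @(posedge clk) b <= a;
endmodule
```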
SystemVerilog has lots and lots (and lots) of testbench-specific features. It is completely object oriented. As far as testbench design goes, it's very similar to traditional software design; however, it has one more dimension associated with it, that of time. Race conditions and protocol delays must be accounted for.
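A tiny, hedged sketch of what "object oriented, plus a time dimension" looks like in a SystemVerilog testbench (class and signal names are invented): the stimulus is an ordinary class object, but driving it onto the design has to be scheduled against the clock, which is where protocol delays and races come in.

```systemverilog
// Hypothetical class-based stimulus: object-oriented like ordinary software,
// but every interaction with the DUT is tied to simulation time.
class bus_txn;
    rand bit [31:0] addr;
    rand bit [31:0] data;
endclass

module tb;
    logic clk = 0;
    logic [31:0] addr, data;
    logic        valid = 0;

    always #5 clk = ~clk;            // the extra dimension: time

    initial begin
        bus_txn t = new();
        repeat (4) begin
            void'(t.randomize());
            @(posedge clk);          // schedule against the clock to avoid races
            addr  <= t.addr;         // nonblocking drive, as a clocked driver would
            data  <= t.data;
            valid <= 1'b1;
        end
        @(posedge clk);
        valid <= 1'b0;
        $finish;
    end
endmodule
```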
As for validation, it is also different (and the same). There are 3 main approaches;
... for completeness, I need to also discuss best test-bench design practices... but that's for another day
Sorry for the length.. I was in "The Zone" :)
The best book on this topic is Reuse Methodology Manual. It covers both VHDL and Verilog.
And in particular some issues that don't have an exact match in software:
Some that are the same include