How do I detect the timescale precision used in a simulation from the source code? Consider that I have a configuration parameter (cfg_delay_i) holding a delay value supplied by the user in time units of fs. If the user gives 1000, my code has to wait 1000 fs (1 ps) before executing further.
#(cfg_delay_i * 1fs); // will wait only if the timescale is 1ps/1fs
do_something();
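To make the failing case concrete, here is a minimal self-contained sketch (the module name, timescale, and value are only illustrative) showing how the fs-scaled delay collapses to zero when the precision is coarser than 1fs:

`timescale 1ps/1ps  // precision is 1ps, coarser than 1fs

module delay_demo;
  int unsigned cfg_delay_i = 1000;  // user intends 1000 fs = 1 ps

  initial begin
    #(cfg_delay_i * 1fs);                   // 1fs rounds to 0 at 1ps precision, so this is a zero delay
    $display("resumed at %t", $realtime);   // prints time 0
  end
endmodule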
If the timescale precision is 1fs there is no problem, but if the precision is coarser than that, the statement does not wait at all and behaves as a zero delay. So I want to write code that determines the timescale set by the user and applies the delay accordingly. My expected pseudo-code is below:
if (timeprecision == 1fs) #(cfg_delay_i * 1fs);
else if (timeprecision == 1ps) #(cfg_delay_i/1000 * 1ps);
Please help me with the logic to determine the timescale unit and precision internally.
You can write
if (int'(1fs) != 0) // the time precision is 1fs
and so on (a fuller sketch appears at the end of this answer). But there's no need to do this.
#(cfg_delay_i/1000.0 * 1ps)
The above works regardless of whether the precision is 1ps or finer. Note the use of the real literal 1000.0
to keep the division real. 1ps is already a real number, so the result of the entire expression will be real. You could also do
#(cfg_delay_i/1.0e6 * 1ns)
If the time precision at the point where this code is located is coarser than 1fs, the result gets rounded to the nearest precision unit. For example, if cfg_delay_i is 500 and the current precision is 1ps, this delay would get rounded to #1ps.
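To make that concrete, here is a small self-contained sketch (the module name and timescale are only illustrative) that uses the recommended real-valued expression at a 1ns/1ps timescale:

`timescale 1ns/1ps  // precision deliberately coarser than 1fs

module delay_scale_demo;
  int unsigned cfg_delay_i = 500;  // user value in fs: 500 fs = 0.5 ps

  initial begin
    // 500/1000.0 * 1ps = 0.5 ps, which the simulator rounds to the nearest
    // precision unit (1 ps here), as described above.
    #(cfg_delay_i/1000.0 * 1ps);
    $display("resumed at %t", $realtime);
  end
endmodule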
Do be aware that the user setting cfg_delay_i has to take the same care to make sure their value is set with the correct scaling/precision.
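For completeness, if you do want the explicit precision check mentioned at the top of this answer, a minimal sketch could look like the one below. The real-valued comparison of the 1fs literal and the task wrapper are my own illustration of the idea, not a required idiom: a time literal finer than the local precision rounds to 0, so testing it against 0 tells you whether the precision is at least that fine.

module delay_by_precision;
  // Sketch only: branch on whether the 1fs literal survives rounding to the
  // local time precision.
  task automatic wait_cfg_delay(int unsigned cfg_delay_i);
    if (1fs != 0.0)  // local precision is 1fs
      #(cfg_delay_i * 1fs);
    else             // coarser precision: fall back to the real-valued form
      #(cfg_delay_i/1000.0 * 1ps);
  endtask
endmodule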