//`timescale 10ps/1fs
module time_presion();
  timeunit 100ps/10ps; // If we change this to 100ns/10ps, it works fine
  parameter p = 11.49;
  int a;

  initial begin
    $monitor("%t ,My values Changes %d", $time, a);
    #p a = 10;
    #p a = 30;
    #p a = 40;
    //#100us;
    #p a = 50;
    #1 $finish(1);
  end
endmodule
When I run this code, I get this error:
file: time_prcision.sv
timeunit 100ps/10ps;
|
ncvlog: *E,TUSERR (time_prcision.sv,4|11): timeunit is smaller than the specified time precision [IEEE Std 1800-2009].
module worklib.time_presion:sv
errors: 1, warnings: 0
If I change the timeunit to 100ns/10ps, then the code runs properly. What is wrong with the above code?
From SystemVerilog LRM 1800-2012, section 3.14.2.2:
The time unit and precision can be declared by the timeunit and timeprecision keywords, respectively, and set to a time literal (see 5.8).
The line timeunit 100ps/10ps; defines the time unit locally, in the current module, program, package, or interface.
If specified, the timeunit and timeprecision declarations shall precede any other items in the current time scope.
The time unit determines what a delay such as `#1` means. If the time unit is 100ps, then `#1` results in a delay of 100 ps.
The time precision is the smallest delay step that can be represented for the given time unit; it specifies how many decimal places of a delay value are significant relative to the time unit. For example:
`timescale 100ps/10ps gives `#1` a delay of 100 ps, while the smallest delay you can give is `#0.1`, i.e. 10 ps.
`timescale 1ns/1ps gives `#1` a delay of 1 ns, and `#0.001`, i.e. 1 ps, is the smallest delay.
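This unit/precision arithmetic can be sketched outside the simulator. The following Python helper (`scaled_delay` is a name of my own, not a simulator API) illustrates how a delay literal is quantized to precision steps under a given unit/precision pair:

```python
# Sketch: how a delay literal maps to real time under a given
# unit/precision pair. Both unit and precision are given in femtoseconds.
def scaled_delay(literal, unit_fs, precision_fs):
    """Round `literal` (expressed in time units) to the nearest
    precision step and return the resulting delay in femtoseconds."""
    ticks = round(literal * unit_fs / precision_fs)  # whole precision steps
    return ticks * precision_fs

# `timescale 100ps/10ps: #1 -> 100 ps, and #0.1 is the smallest step (10 ps)
assert scaled_delay(1, 100_000, 10_000) == 100_000   # 100 ps, in fs
assert scaled_delay(0.1, 100_000, 10_000) == 10_000  # 10 ps, in fs

# `timescale 1ns/1ps: #1 -> 1 ns, and #0.001 -> 1 ps
assert scaled_delay(1, 1_000_000, 1_000) == 1_000_000  # 1 ns, in fs
assert scaled_delay(0.001, 1_000_000, 1_000) == 1_000  # 1 ps, in fs
```

Any delay finer than one precision step simply rounds to the nearest step, which is why the precision bounds the smallest usable delay.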
In your code, the commented-out `timescale 10ps/1fs would make `#1` a 10 ps delay, with `#0.0001` as the smallest measurable delay. Now, coming to the error:
timeunit is smaller than the specified time precision
Intuitively, the time unit must never be smaller than the time precision; that is what this error states.
A timescale of 100ps/10ps rounds every delay to one decimal place. A delay of 11.49 is therefore rounded to 11.5 time units, scaled by timeunit/timeprecision, and then displayed. In short, with `timescale 1ns/1ps, delays are interpreted in nanoseconds and any fraction is rounded to the nearest picosecond. I used $realtime in the display statement and the output is as follows. The displayed 115 comes from the default time scaling (time × timeunit / timeprecision) of $timeformat.
0 ,My values Changes 0
115 ,My values Changes 10
230 ,My values Changes 30
345 ,My values Changes 40
460 ,My values Changes 50
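As a cross-check, the printed timestamps can be reproduced with a few lines of Python (a sketch, not simulator code, assuming %t prints in precision steps by default):

```python
# Reproduce the $monitor timestamps under timeunit 100ps / timeprecision 10ps,
# where %t prints time in precision steps (10 ps each) by default.
p = 11.49                 # delay in time units (100 ps each)
unit_over_precision = 10  # 100 ps / 10 ps

# 11.49 units -> 114.9 precision steps -> rounded to 115 steps per delay
ticks_per_delay = round(p * unit_over_precision)
times = [ticks_per_delay * n for n in range(1, 5)]
print(times)  # → [115, 230, 345, 460]
```

These match the four non-zero timestamps in the output above.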
For more information, refer to the timeunit, difference between time unit and time precision, and time scale tutorial links.
I don't think there's anything wrong with your code. I think this is a problem with nc/irun/xrun; even the recent versions still give this error. On the other hand, vcs compiles it without any issues.
Update Sep 2021: Still broken in xrun!