
UVM Enhancement: uvm_root::set_timeout() should use uvm_tlm_time rather than time



The method uvm_root::set_timeout() currently uses time as its data type, which means that the timeout you actually get when specifying it in a test case depends on the time scales used in both UVM and your test case. Since UVM is most often precompiled, you do not really know what time scale was used there.
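A minimal sketch of the mismatch (illustrative only; it assumes the precompiled UVM library was built under a 1ns time unit):

`timescale 1ps/1ps
module tb;
  import uvm_pkg::*;
  initial begin
    // The literal 100ms is scaled to this scope's 1ps unit, so the raw
    // value passed is 100_000_000_000 ticks. Inside UVM that count is
    // reinterpreted in UVM's own (assumed) 1ns unit, giving a timeout
    // of 100s instead of the intended 100ms.
    uvm_top.set_timeout(100ms);
  end
endmodule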

Hopefully some day someone will wake up and understand that SystemVerilog needs a way to pass time values that is consistent regardless of time scales, but until then uvm_tlm_time is the best we have.


The value passed to set_timeout will use the timescale that the UVM package was compiled with - it does not matter what your testcase was compiled with, unless you used a time literal somewhere in the calculation of the timeout value. It would have helped if `uvm_delay multiplied its time argument by an agreed-upon time unit literal.
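To make the distinction concrete (again assuming, for the sake of the example, that the UVM package was compiled with a 1ns time unit):

`timescale 1ps/1ps
module tb;
  import uvm_pkg::*;
  initial begin
    // A plain integer is passed through unscaled: UVM sees 100 and
    // interprets it in its own compile-time unit, i.e. 100ns.
    uvm_top.set_timeout(100);
    // A time literal IS scaled by the caller's timescale first: 100ns
    // here becomes 100_000 ticks of 1ps, which UVM then reads as
    // 100_000ns - a 1000x error.
    uvm_top.set_timeout(100ns);
  end
endmodule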

This is another case (like random stability) where, once both the developers and the users of UVM understand the semantics of how time is managed in SystemVerilog, you can plan accordingly.


Well, the problem is that I do use a time literal to calculate the timeout value, and that is exactly what I want to do. In any case, I need to know which time unit was used when UVM was precompiled, which is wrong. If the uvm_tlm_time type were used, this would not be the case. We could still run into problems with precision, but that is less likely to matter for the intended use, where only large time values should be specified.

Having to know what time unit the final receiver of the time value you send was compiled with is a bad thing and should be avoided whenever possible.
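A hypothetical version of the requested enhancement (this signature does not exist in UVM; the class and names below are only an illustration):

// Illustration only: an alternative to uvm_root::set_timeout(time, bit)
// that accepts a uvm_tlm_time, making the call site independent of the
// time unit the UVM package itself was compiled with.
import uvm_pkg::*;
class uvm_root_sketch;
  protected uvm_tlm_time m_timeout;
  function void set_timeout(uvm_tlm_time timeout, bit overridable = 1);
    m_timeout = timeout;
    // run_test() would then convert it inside the UVM package's own
    // timescale scope, e.g. via m_timeout.get_realtime(1ns).
  endfunction
endclass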


If `uvm_delay were defined as

`define uvm_delay(time) #(time*1ns)

Then a timeout of 100ms would be

set_timeout(100ms/1ns)

It would not matter what the current timescale was, as long as its precision was at most 1ns. Also, if people started using the SystemVerilog timeunit construct, there would be no problems with time scales and compilation order.
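For reference, a sketch of the timeunit construct mentioned above; unlike a `timescale directive, it is local to the module, so it is immune to compilation order:

module tb;
  // Local declarations that apply to this module only, regardless of
  // any `timescale directive left in effect by earlier compilation units.
  timeunit 1ns;
  timeprecision 1ps;
  initial begin
    #10;    // unambiguously 10ns
    #1.5ns; // literals are still scaled, but to a known local unit
  end
endmodule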


Well, I was under the impression that the uvm_tlm_time class was created to take care of exactly this problem, and I therefore thought it would be a good idea to use it here. If it is not, I think we should try to create a new type that does take care of it, to make life easier for people writing test benches. The name uvm_tlm_time is not so good, since it is not only when using TLM that you run into problems with the stupid way SystemVerilog represents time.
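For context, a minimal sketch of how uvm_tlm_time is meant to sidestep the problem, assuming the UVM 1.x API (set_abstime/get_realtime):

module tb;
  timeunit 1ps; // a deliberately different local unit
  import uvm_pkg::*;
  initial begin
    uvm_tlm_time t = new("timeout");
    // Store 100ms as an absolute value: 100e-3 in units of 1 second.
    // No timescale is involved, so any scope reads back the same time.
    t.set_abstime(100e-3, 1);
    // get_realtime() takes a local time literal plus its size in
    // seconds; the literal is scaled by *this* scope's timescale,
    // which is how the class discovers the local unit and returns a
    // correct tick count.
    #(t.get_realtime(1ps, 1.0e-12));
  end
endmodule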

