ryz Posted February 20, 2012 The method uvm_root::set_timeout() now uses time as its data type, which means that the timeout you actually get when specifying it in a test case depends on the time scale used in both UVM and your test case. Since UVM is most often precompiled, you do not really know what time scale is used there. Hopefully some day someone will wake up and understand that SystemVerilog needs a way to pass time that is consistent regardless of time scales, but until then uvm_tlm_time is the best we have.
dave_59 Posted February 21, 2012 The value passed to set_timeout will use the timescale that the UVM package was compiled with; it does not matter what your testcase was compiled with, unless you used a time literal somewhere in the calculation of the timeout value. It would have helped if `uvm_delay had multiplied its time argument by an agreed-upon time unit literal. This is another case (like random stability) where, once both the developers and the users of UVM understand the semantics of how time is managed in SystemVerilog, you can plan accordingly.
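To make the hazard concrete, here is a hypothetical sketch (timescales chosen for illustration; this is not code from the thread) of how a time literal in a test ends up reinterpreted in uvm_pkg's timescale:

```systemverilog
// Assumed scenario: the test is compiled with `timescale 1ns/1ns while
// uvm_pkg was compiled with `timescale 1ps/1ps.
`timescale 1ns/1ns
module tb;
  import uvm_pkg::*;
  initial begin
    // The literal 1ms is scaled in *this* scope to 1_000_000 (1ns units).
    // UVM later consumes that bare number as a delay in its own scope,
    // so a 1ps-compiled uvm_pkg waits 1_000_000 ps = 1 us, a timeout
    // 1000x shorter than the 1 ms the test author intended.
    uvm_top.set_timeout(1ms);
  end
endmodule
```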
ryz Posted February 23, 2012 Author Well, the problem is that I do use a time literal to calculate the timeout value, and that is exactly what I want to do. In any case I need to know which time unit was used when UVM was precompiled, which is wrong. If the uvm_tlm_time type were used, this would not be the case. We could still run into problems with precision, but that is less likely to be an issue for the intended use, where only large values of time should be specified. Needing to know what time unit the final receiver of the time you send was compiled with is a bad thing and should be avoided whenever possible.
dave_59 Posted February 23, 2012 If uvm_delay was defined as `define uvm_delay(delay) #((delay)*1ns) then a timeout of 100ms would be set with set_timeout(100ms/1ns). It would not matter what the current timescale was, as long as its precision was at most 1ns. Also, if people started using the SystemVerilog timeunit construct, there would be no problems with time scales and compilation order.
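A sketch of how the proposed macro and the matching set_timeout convention would fit together (the macro name and behavior are dave_59's proposal above, not part of shipping UVM):

```systemverilog
// Proposal sketch: delays are always expressed as a count of nanoseconds.
`define uvm_delay(d) #((d)*1ns)

`timescale 1ps/1ps   // any local unit works if the precision is 1ns or finer
module tb;
  initial begin
    // 100ms/1ns evaluates to 1e8 in any scope, and the 1ns literal in
    // the macro self-scales, so the delay is 100 ms regardless of the
    // caller's timescale.
    `uvm_delay(100ms/1ns)
    $display("t=%t", $time);
  end
endmodule
```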
ryz Posted February 24, 2012 Author Well, I was under the impression that the uvm_tlm_time class was created to take care of exactly this problem, and I therefore thought it would be a good idea to use it here. If it is not, I think we should try to create a new type that does take care of this, to make life easier for people writing test benches. The name uvm_tlm_time is not so good, since it is not only when using TLM that you run into the stupid way SystemVerilog represents time.
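For reference, a minimal sketch of how uvm_tlm_time sidesteps the timescale problem, assuming the UVM 1.1 uvm_tlm_time API: set_abstime and get_abstime take the caller's local seconds-per-unit, so the stored value never depends on how any other scope was compiled (the 1e-9 factors below reflect this scope's 1ns timeunit):

```systemverilog
`timescale 1ns/1ns
module tb;
  import uvm_pkg::*;
  initial begin
    uvm_tlm_time t = new("timeout");
    // Store 100 of this scope's units (100 x 1e-9 s = 100 ns), held
    // internally in a scale-independent form.
    t.set_abstime(100, 1e-9);
    // Any other scope reads it back using its own seconds-per-unit.
    $display("%0f ns", t.get_abstime(1e-9));
  end
endmodule
```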