Everything posted by tudor.timi

  1. I think you're confused about what passing handles means. Passing a handle to a memory object does not mean that a new object is created; it still references the old object.

       class tb_env extends uvm_env;
         some_memory_class memory;

         function void build_phase(uvm_phase phase);
           // create your agents here
           // ...

           // create the memory object once
           memory = some_memory_class::type_id::create("memory");

           // pass down the same handle to both sequencers
           uvm_config_db #(some_memory_class)::set(this, "agent1.sequencer", "memory", memory);
           uvm_config_db #(some_memory_class)::set(this, "agent2.sequencer", "memory", memory);
         endfunction
       endclass
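     For reference, the retrieving side might look roughly like this (the sequencer class and field names below are illustrative, not from the post):

       class some_sequencer extends uvm_sequencer #(some_item);
         `uvm_component_utils(some_sequencer)

         some_memory_class memory;

         function new(string name, uvm_component parent);
           super.new(name, parent);
         endfunction

         function void build_phase(uvm_phase phase);
           super.build_phase(phase);
           // both sequencers end up holding a handle to the very same memory object
           if (!uvm_config_db #(some_memory_class)::get(this, "", "memory", memory))
             `uvm_fatal("NOMEM", "no memory handle configured")
         endfunction
       endclass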
  2. Another point: the way I interpret TLM (and someone please correct me if I'm wrong) is that a port is basically a stand-in for an actual object that implements the interface of that port. For example, an analysis port has only a write(...) method, which means it will connect (through exports and imps) to a component that implements a compatible write(...) function. By calling write(...) on the analysis port, you are effectively calling write(...) on the endpoint component. You couldn't have a get(...) inside your component do what you want to do, for two reasons. First, it's an export and you …
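     A small sketch of that point (class and item names invented): calling write() on the monitor's analysis port ends up executing the write() of whatever the port is connected to.

       class my_sub extends uvm_subscriber #(my_item);
         `uvm_component_utils(my_sub)

         function new(string name, uvm_component parent);
           super.new(name, parent);
         endfunction

         // this is the function that actually runs when the monitor calls
         // ap.write(item), once ap is connected to this component's analysis_export
         function void write(my_item t);
           `uvm_info("SUB", {"got ", t.convert2string()}, UVM_MEDIUM)
         endfunction
       endclass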
  3. While I don't use clocking blocks myself, we were considering moving to them for our VIPs. I would find it worrisome not to be able to model an asynchronous reset, so I tried out what you wanted to do (updating the interface signal asynchronously based on reset). It worked on EDA Playground. Example here: http://www.edaplayground.com/x/2nj In case you use a different tool vendor, I would find it surprising if they complained about any conflict, as the standard clearly states that a signal can appear in multiple clocking blocks (regardless of direction) - see 14.6, "Signals in multiple clocking blocks".
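     The pattern in question looks roughly like this (interface and signal names invented; whether mixing the asynchronous reset drive with clocking-block drives is accepted is exactly the tool-dependent part being discussed):

       interface my_if(input logic clk, input logic rst_n);
         logic valid;

         clocking drv_cb @(posedge clk);
           output valid;
         endclocking
       endinterface

       // driver side: drive synchronously through the clocking block, but
       // knock the signal down immediately when reset asserts
       task automatic drive(virtual my_if vif);
         fork
           forever @(negedge vif.rst_n) vif.valid        <= 1'b0; // asynchronous update
           forever @(vif.drv_cb)        vif.drv_cb.valid <= 1'b1; // synchronous drive
         join_none
       endtask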
  4. Hi everyone, I'm not sure if this is the right place to post this. I have a question regarding the usage of $past(...) and the other members of that family inside procedural code. The SV 2012 standard says the following: "The use of these functions is not limited to assertion features; they may be used as expressions in procedural code as well." I've tried using a call to $past(...) with an explicit clocking event inside a class task. One of our simulators complained that this is only allowed in assertions and procedural blocks. The other one we use allows it. The standard is rather vague …
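     For context, the kind of call being discussed looks roughly like this (interface and signal names invented); this is the construct one tool accepts and the other rejects:

       class past_checker;
         virtual some_if vif;

         task check_data();
           logic [7:0] prev;
           // the explicit clocking event is needed because a class task
           // has no inferred clock for $past to use
           prev = $past(vif.data, 1, , @(posedge vif.clk));
           if (prev !== vif.data)
             $display("data changed since the previous clock");
         endtask
       endclass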
  5. Hi everyone, I have a question that is more on the methodology side. Let me describe the issue. For our APB peripherals we have a small deviation from the standard protocol. The protocol says that PWDATA must already be stable during the setup phase (first cycle of a transfer), but in order to have a more area-optimized implementation of our AHB2APB bridge, our designers have declared that PWDATA is valid only during the access phase (second cycle), so that they can set a multicycle path on it. This is a requirement for our peripherals to sample PWDATA only on the second cycle …
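     To illustrate the relaxed sampling (the interface name and exact handshake below are assumptions), a monitor for such a peripheral would wait for the access phase before reading PWDATA:

       // sample PWDATA only when the access phase completes
       // (PSEL && PENABLE && PREADY), not during the setup phase
       task automatic collect_write_data(virtual apb_if vif, output logic [31:0] wdata);
         @(posedge vif.pclk iff (vif.psel && vif.penable && vif.pready));
         wdata = vif.pwdata;
       endtask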
  6. The problem with putting your checking code in your sequence is that if you want to create a new version of that sequence, you might end up duplicating your checking code as well. You could do it, but it might require tricky partitioning of code, which is extra effort. Another problem is that if you want to do vertical reuse of a block verification environment, you won't be starting those sequences anymore, because traffic will be provided by another RTL block. If you had a reference model that just monitors the DUT inputs, then it would work regardless of the source of those inputs (UVM TB or …)
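     A rough sketch of such a reference model (bus_item and the predict() function are placeholders): it is fed by the input monitor, so it keeps working no matter what drives the DUT inputs.

       class ref_model extends uvm_subscriber #(bus_item);
         `uvm_component_utils(ref_model)

         uvm_analysis_port #(bus_item) expected_ap;

         function new(string name, uvm_component parent);
           super.new(name, parent);
           expected_ap = new("expected_ap", this);
         endfunction

         // called by the input monitor's analysis port
         function void write(bus_item t);
           expected_ap.write(predict(t));
         endfunction

         // placeholder; a real model would compute the expected outputs here
         virtual function bus_item predict(bus_item t);
           return t;
         endfunction
       endclass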
  7. You have to do std::randomize(k) to randomize variables declared in the global scope. With respect to k's value, you just constrained it to be both '5' and '9' at the same time. You have to say "k inside { 5, 9 }".
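     In code, that would be roughly:

       module tb;
         int k;   // variable in the module scope, not a class member

         initial begin
           // constrain k to be either 5 or 9, not both values at once
           if (!std::randomize(k) with { k inside {5, 9}; })
             $error("randomize failed");
         end
       endmodule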
  8. I like your idea of creating a new interface that instantiates the original, but with a small twist. I would also add all of the extra signals into a separate interface and instantiate that inside the wrapper interface. This way multiple orthogonal protocol extensions could be supported (although I still have to think of how to do multiple orthogonal extensions in OOP). Not really sure what you mean by a UVM fix. This is more of a vanilla SystemVerilog problem with extending interfaces than a UVM one.
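     Roughly what that wrapper could look like (the base interface's port list and all names here are assumptions):

       // the extension signals live in their own small interface
       interface apb_prot_ext_if();
         logic [2:0] pprot;
       endinterface

       // the wrapper instantiates the unmodified base interface plus the extension
       interface apb_ext_if(input logic pclk, input logic presetn);
         apb_if          base(.pclk(pclk), .presetn(presetn));
         apb_prot_ext_if ext();
       endinterface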
  9. Hi everyone, I'm curious how you handle extensions to a protocol UVC. Let's say we have an APB UVC that implements the AMBA protocol. Let's also say that we have a DUT that, aside from the signals defined in the specification, also implements a few other signals that are related to the generic APB signals (they add support for protected accesses or whatever). On the class side it's pretty easy to handle: just create a new sequence item subclass with extra fields and do type overrides. Where it gets tricky is when working at the signal level. Our UVC already uses an SV interface that only …
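     The class side mentioned above would be roughly the following (apb_item and the field name are assumptions):

       // the extended item adds the protection-related field
       class apb_prot_item extends apb_item;
         rand bit [2:0] pprot;

         `uvm_object_utils(apb_prot_item)

         function new(string name = "apb_prot_item");
           super.new(name);
         endfunction
       endclass

       // in the test's build_phase: let the factory hand out the extended item everywhere
       function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         apb_item::type_id::set_type_override(apb_prot_item::get_type());
       endfunction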
  10. First, the obvious: change your constraints to not use paths: `uvm_do_with(item, { addr == addr; data == data; }); The "item.*" is implicit. Also make sure you use "==" and not "=", although two of the major simulators don't allow "=", so it might just be a copy-paste error. Also, something is fishy about your code: on one side you try to constrain item.addr and on the other side you assign to item.wr_adr. Are you sure you don't have multiple fields and are just constraining the wrong ones?
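     One caveat that goes beyond the post: if the sequence has fields with the same names as the item's, a plain "addr == addr" resolves both sides inside the item being randomized. SV 2012's local:: prefix forces the right-hand side to be looked up in the sequence instead:

       // constrain the item's fields against the sequence's same-named fields
       `uvm_do_with(item, { addr == local::addr; data == local::data; })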
  11. Hi, I've tried compiling UVM with QuestaSim and the switch "-pedanticerrors", but it complains in the file uvm_component.svh that virtual method calls are not allowed in the constructor because they can lead to unpredictable results. I know that in C++ the behavior of this scenario is clearly defined (it calls the method of the base class while inside the base class constructor and the method of the derived class while in the derived class constructor), but is the same also clearly described in the SV LRM? Thanks, Tudor
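     The pattern being warned about looks roughly like this (illustrative classes, not the actual UVM code):

       class base;
         function new();
           // a virtual call from a constructor: which override runs while
           // the derived part of the object is still being constructed?
           void'(get_name());
         endfunction

         virtual function string get_name();
           return "base";
         endfunction
       endclass

       class derived extends base;
         virtual function string get_name();
           return "derived";
         endfunction
       endclass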
  12. Hi Adrian, Since you want to have clk as a port of the interface, the only idea I have at the moment is to do something like this:

       interface Inter (inout logic clk);
         logic a;
         logic b;

         modport master( output clk, output a, input b );
         modport slave ( input  clk, input  a, output b );
       endinterface

       module A(Inter.master inter);
         bit clk;
         always #1 clk = ~clk;
         assign inter.clk = clk;
       endmodule

       module B(Inter.slave inter);
         always_ff @(posedge inter.clk) $display("foo");
       endmodule

       module top;
         Inter inter(clk);
         A a( .* );
         B b( .* );
       endmodule
  13. First of all, what I do is only cover transactions that I have monitored, not transactions that I have randomized. The reason is that you may create an item and randomize it, but just not send it as traffic. You could connect the monitors of both your agents to a coverage collector object and do the cross there. This way you have all the info available in one place. As Dave already mentioned on Verification Academy, what you have to be careful with is what you cross. Is there any special relationship between the traffic coming from agent1 and agent2 (say, agent1 and agent2 both send transactions …
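     A sketch of such a collector (bus_item, the addr field and the sampling condition are all assumptions; deciding when a pair of items is related enough to sample is the real question):

       `uvm_analysis_imp_decl(_agent1)
       `uvm_analysis_imp_decl(_agent2)

       class cross_cov_collector extends uvm_component;
         `uvm_component_utils(cross_cov_collector)

         uvm_analysis_imp_agent1 #(bus_item, cross_cov_collector) agent1_export;
         uvm_analysis_imp_agent2 #(bus_item, cross_cov_collector) agent2_export;

         bus_item item1, item2;

         covergroup cg;
           cp_addr1    : coverpoint item1.addr;
           cp_addr2    : coverpoint item2.addr;
           addr_x_addr : cross cp_addr1, cp_addr2;
         endgroup

         function new(string name, uvm_component parent);
           super.new(name, parent);
           cg = new();
           agent1_export = new("agent1_export", this);
           agent2_export = new("agent2_export", this);
         endfunction

         function void write_agent1(bus_item t);
           item1 = t;
           if (item2 != null) cg.sample();
         endfunction

         function void write_agent2(bus_item t);
           item2 = t;
           if (item1 != null) cg.sample();
         endfunction
       endclass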
  14. @David: We've basically been operating on the premise that if one thing in the TB goes wrong, there is no point in simulating any further, so we flagged a fatal error. Stuff with missing configuration values we just flagged as warnings, though errors are probably more appropriate, since some people might just ignore warnings. @ljepson: I do like the idea of filtering based on tag and (maybe, if required) body, especially for info messages. This is the kind of thing I usually do in Specman, where I can very easily set different verbosity levels on different hierarchies at run time. I haven't …
  15. I would rather have a clear separation of error messages by source. It's easier for someone new to a project to categorize a failure after doing some code changes. Seeing a UVM_FATAL means the testbench is not being used properly, while seeing a UVM_ERROR means you found a bug in the DUT. Depending on what you see, you know which person to talk to (TB developer or designer). Specman/e has a clear differentiation when flagging errors: you have dut_errors for checks, and asserts (not to be confused with SV asserts) for doing design-by-contract of your TB.
  16. There is the +UVM_MAX_QUIT_COUNT plusarg, which you can just set to 1. Leave your fatal errors for testbench-internal issues, not for DUT errors.
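     A sketch of both ways to cap the error count (the procedural variant assumes the UVM 1.1-era report server API):

       // on the simulator command line: +UVM_MAX_QUIT_COUNT=1
       // or procedurally, e.g. in the base test's end_of_elaboration_phase:
       function void end_of_elaboration_phase(uvm_phase phase);
         uvm_report_server svr = uvm_report_server::get_server();
         super.end_of_elaboration_phase(phase);
         svr.set_max_quit_count(1);
       endfunction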
  17. I had no idea that it's possible to cross points from two different covergroups. I've looked in the LRM and it says that in a cross, coverpoints or variables are allowed. The simulator seems to be treating the value of the coverpoint from an outer covergroup as a variable. Seeing as you probably have two instances of your sequence item with different names, they cannot at the same time sample both cg_for_agent1 and cg_for_agent2 (due to the if statement). You will effectively cross the one coverpoint that does get sampled with nothing from the other one. You are probably analyzing per_type coverage …
  18. So just to make it more clear to me, calling uvm_run_phase::get() will return the implementation node and the handle getting passed to run_phase() or the result of phase.find_by_name("run",0) is the node used for scheduling, right?
  19. Hi, I remember that back in the day, in OVM, it was possible to say uvm_test_done.set_drain_time(...). With UVM, when compiling with +UVM_NO_DEPRECATED, it's not possible to do this; phases have their own drain times now. The cool thing about the OVM approach was that I could set the drain time from the end_of_elaboration phase somewhere in my base test and be done with it. I dug around a bit in UVM and found that all phases have singletons. I tried to get a handle to the run phase singleton during the end_of_elaboration phase and set the drain time on that. It didn't work. All phase methods …
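     For reference, the usual workaround is to set the drain time on the phase handle passed into a task-based phase (the sketch assumes UVM 1.1's public phase_done objection):

       class base_test extends uvm_test;
         `uvm_component_utils(base_test)

         function new(string name, uvm_component parent);
           super.new(name, parent);
         endfunction

         task run_phase(uvm_phase phase);
           // 'phase' here is the schedule node, not the uvm_run_phase singleton
           phase.phase_done.set_drain_time(this, 100ns);
           phase.raise_objection(this);
           // ... stimulus ...
           phase.drop_objection(this);
         endtask
       endclass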