Everything posted by tudor.timi

  1. I think you're confused about what passing handles means. Passing a handle to a memory object does not mean that a new object is created. It still references the old object.

         class tb_env extends uvm_env;
           some_memory_class memory;

           function void build_phase(...);
             // create your agents here
             // ...

             // create the memory and pass its handle down to the sequencers
             memory = some_memory_class::type_id::create("memory");
             uvm_config_db #(some_memory_class)::set(this, "agent1.sequencer", "memory", memory);
             uvm_config_db #(some_memory_class)::set(this, "agent2.sequencer", "memory", memory);
           endfunction
         endclass

         class some_sequencer extends uvm_sequencer;
           some_memory_class memory;

           function void build_phase(...);
             if (!uvm_config_db #(some_memory_class)::get(this, "", "memory", memory))
               `uvm_fatal("NOMEM", "Could not get handle to memory")
           endfunction
         endclass

         // same for some_other_sequencer

     What I did in the code snippet (off the top of my head, not compilable) is create a memory in a central location inside the testbench env and pass it via the config_db to the sequencers. This way both of them see the same object. Previously, the old OVM set_config_object(...) functions had an extra parameter that could be used to clone an object, which is why you might be confused.
  2. Another point: the way I interpret TLM (and someone please correct me if I'm wrong) is that a port is basically a stand-in for an actual object that implements the interface of that port. For example, an analysis port has only a write(...) method; this means it will connect (through exports and imps) to a component that implements a compatible write(...) function. By calling write(...) on the analysis port, you are effectively calling write(...) on the end-point component.

     You couldn't have a get(...) inside your component do what you want, for two reasons. First, it's an export and you can't call methods on exports (as the flow of method calls in TLM1 is uni-directional). Second, even if you could, this would mean that you would be calling get(...) on the component connected on the other side.

     The way bhunter suggested it (using the analysis FIFO) is the way I've implemented it in the past as well and I would consider it pretty standard TLM. I guess it's not documented in UVM as an example because it's a more "exotic" use case.

     With respect to finding the question and not the answer on Google or on forums, I do think that there has been more participation lately w.r.t. how to do stuff in UVM, because it's gotten wider adoption. Don't forget to try StackOverflow as well when you have SV questions, as that site is also pretty active.
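     To illustrate the analysis-FIFO approach mentioned above, here is a minimal sketch (the component, item and port names are made up for illustration, not taken from the original thread):

         class my_scoreboard extends uvm_component;
           `uvm_component_utils(my_scoreboard)

           // the monitor's analysis port connects to this FIFO's analysis_export
           uvm_tlm_analysis_fifo #(my_item) item_fifo;

           function new(string name, uvm_component parent);
             super.new(name, parent);
             item_fifo = new("item_fifo", this);
           endfunction

           task run_phase(uvm_phase phase);
             my_item item;
             forever begin
               // blocking get(): consume items at the scoreboard's own pace
               item_fifo.get(item);
               // ... check item ...
             end
           endtask
         endclass

         // in the env's connect_phase:
         //   monitor.ap.connect(scoreboard.item_fifo.analysis_export);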
  3. While I don't use clocking blocks myself, we were considering moving to them for our VIPs. I would find it worrisome to not be able to model an asynchronous reset, so I tried out what you wanted to do (updating the interface signal asynchronously based on reset). It worked on EDA Playground. Example here: http://www.edaplayground.com/x/2nj

     In case you use a different tool vendor, I would find it surprising that they complain about any conflict, as the standard clearly states that a signal can appear in multiple clocking blocks (regardless of direction) - see 14.6, "Signals in multiple clocking blocks". Having a signal as an output in multiple clocking blocks would cause the same situation (an update from multiple sources).
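     For reference, a rough sketch of the kind of thing being described (this is not the actual EDA Playground code; the interface and signal names are placeholders):

         interface my_if (input bit clk, input bit rst_n);
           logic data;

           // synchronous drives go through the clocking block
           clocking cb @(posedge clk);
             output data;
           endclocking

           // asynchronous clear of the same signal, outside the clocking block
           always @(negedge rst_n)
             data <= 1'b0;
         endinterface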
  4. Hi everyone,

     I'm not sure if this is the right place to post this. I have a question regarding the usage of $past(...) and the other members of that family inside procedural code. The SV 2012 standard says the following: "The use of these functions is not limited to assertion features; they may be used as expressions in procedural code as well."

     I've tried using a call to $past(...) with an explicit clocking event inside a class task. One of our simulators complained that this is only allowed in assertions and procedural blocks. The other one we use allows it. The standard is rather vague here. What constitutes procedural code? Hopefully someone from the SV Committee hangs out in this forum.

     Here's the code I'm trying:

         module top;
           bit clk;
           bit some_signal;

           always #1 clk = ~clk;

           class some_class;
             task some_task();
               bit foo;
               foo = $past(some_signal,,,@(negedge clk));
               $display("some_signal was %b at the last negedge", foo);
             endtask
           endclass

           initial begin
             automatic some_class obj = new();
             @(posedge clk);
             some_signal <= 1;
             @(negedge clk);
             some_signal <= 0;
             @(posedge clk);
             obj.some_task();
             $finish();
           end
         endmodule
  5. Hi everyone,

     I have a question that is more on the methodology side. Let me describe the issue. For our APB peripherals we have a little deviation from the standard protocol. The protocol mentions that PWDATA must already be stable during the setup phase (first cycle of a transfer), but in order to have a more area-optimized implementation of our AHB2APB bridge, our designers have declared that PWDATA is valid only during the access phase (second cycle) so that they can set a multicycle path on it. This makes it a requirement for our peripherals to sample PWDATA only on the second cycle and not before.

     I'm not really sure what the best way to verify this would be, as it seems kind of like doing timing verification. What has been done up to now is to randomize the value of the control signals during the setup phase and then drive PENABLE and PSEL high together with PWDATA (basically a legal access phase). I find this approach clumsy and trying to be overly clever. My idea would have been to drive a legal protocol except for PWDATA, which I would drive to unknown during the first cycle. This way, if the DUT were to sample the signal during the setup phase, it would sample an unknown value and would trigger any assertions inside it or in the testbench in case of a readback. This is basically what would happen in a physical system, right?

     I would like to know if anybody else has encountered something similar and how they solved the problem.

     Thanks,
     Tudor
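     To make the X-injection idea concrete, here is a rough sketch of what the driver could do, assuming a typical APB driver with a virtual interface (the signal and task names are illustrative, not from an actual VIP):

         task drive_transfer(apb_item item);
           // setup phase: controls are valid, but PWDATA is deliberately unknown
           vif.psel    <= 1'b1;
           vif.penable <= 1'b0;
           vif.paddr   <= item.addr;
           vif.pwrite  <= item.write;
           vif.pwdata  <= 'x;  // DUT must not sample PWDATA in this cycle
           @(posedge vif.pclk);

           // access phase: PWDATA becomes valid, as per the relaxed requirement
           vif.penable <= 1'b1;
           vif.pwdata  <= item.data;
           do @(posedge vif.pclk); while (!vif.pready);

           vif.psel    <= 1'b0;
           vif.penable <= 1'b0;
         endtask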
  6. The problem with putting your checking code in your sequence is that if you want to create a new version of that sequence you might end up duplicating your checking code as well. You could do it, but it might require tricky partitioning of code, which is extra effort. Another problem is that if you want to do vertical reuse of a block verification environment, you won't be starting those sequences anymore because traffic will be provided by another RTL block. If you had a reference model that just monitors the DUT inputs, then it would work regardless of the source of those inputs (UVM TB or other RTL block).

     I've seen colleagues implement some checks in their sequences, but they were doing full-chip verification, so there was no need to consider vertical reuse.

     The sequence of commands you expect could very easily be implemented in a reference model. You should anyway have a monitor that can recognize when an OPEN AS CLIENT command is sent and inform the RM. After that it's just a matter of implementing the same sequence of operations you just described (wait for response, check that it's a SYN, etc.) - see the sketch below.
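     A hedged sketch of what such a reference-model check could look like (the command kinds, item class and field names are made up for illustration):

         class protocol_rm extends uvm_component;
           `uvm_component_utils(protocol_rm)

           // the monitor publishes every observed command here, regardless of
           // whether it came from the UVM TB or from another RTL block
           uvm_analysis_imp #(command_item, protocol_rm) cmd_imp;

           bit expect_syn;  // set after OPEN AS CLIENT was observed

           function new(string name, uvm_component parent);
             super.new(name, parent);
             cmd_imp = new("cmd_imp", this);
           endfunction

           function void write(command_item cmd);
             if (cmd.kind == OPEN_AS_CLIENT)
               expect_syn = 1;
             else if (expect_syn) begin
               if (cmd.kind != SYN)
                 `uvm_error("RM", $sformatf("Expected SYN after OPEN AS CLIENT, got %s",
                   cmd.kind.name()))
               expect_syn = 0;
             end
           endfunction
         endclass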
  7. You have to do std::randomize(k) to randomize variables declared in the global scope. With respect to k's value, you just constrained it to be both '5' and '9' at the same time. You have to say "k inside { 5, 9 }".
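     A minimal sketch of both points together (the module and variable names are just for illustration):

         module top;
           int k;  // declared outside any class, so std::randomize is needed

           initial begin
             if (!std::randomize(k) with { k inside {5, 9}; })
               $error("randomization failed");
             $display("k = %0d", k);
           end
         endmodule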
  8. I like your idea of creating a new interface that instantiates the original, but with a small twist. I would also add all of the extra signals into a separate interface and instantiate that inside the wrapper interface. This way multiple orthogonal protocol extensions could be supported (although I still have to think of how to do multiple orthogonal extensions in OOP). Not really sure what you mean by a UVM fix. This is more of a vanilla SystemVerilog problem with extending interfaces than a UVM one.
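     A rough sketch of that wrapper idea, assuming a hypothetical apb_if containing the plain APB signals (all names are made up):

         // extra signals for a hypothetical protection extension
         interface apb_prot_ext_if ();
           logic [2:0] pprot;
         endinterface

         // wrapper that bundles the unchanged base protocol with the extension
         interface apb_ext_if ();
           apb_if          base ();  // original APB interface, untouched
           apb_prot_ext_if prot ();
         endinterface

     A second, orthogonal extension would simply become another sub-interface instantiated next to prot, which is what makes this nicer than packing everything into one highly configurable interface.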
  9. Hi everyone,

     I'm curious how you handle extensions to a protocol UVC. Let's say we have an APB UVC that implements the AMBA protocol. Let's also say that we have a DUT that, aside from the signals defined in the specification, also implements a few other signals that are related to the generic APB signals (they add support for protected accesses or whatever).

     On the class side it's pretty easy to handle: just create a new sequence item subclass with extra fields and do type overrides. Where it gets tricky is when working at the signal level. Our UVC already uses an SV interface that only contains the APB signals and it's not possible to extend it in any way. How would we get these extra signals into the UVC to drive and monitor?

     What we have done up to now, since we use our own homegrown UVCs, is to just pack everything into the base UVC and have it highly configurable. I don't like this approach, as I don't feel it's properly encapsulated. It confuses the user with too many extra config parameters and it makes development a lot harder. I'm just wondering if anyone has a nicer solution to this.

     Thanks and best regards,
     Tudor
  10. First, the obvious: change your constraints to not use paths:

          `uvm_do_with(item, { addr == addr; data == data; });

      The "item.*" is implicit. Also make sure you use "==" and not "=", although two of the major simulators don't allow "=", so it might just be a copy-paste error.

      Also, something is fishy about your code. On the one side you try to constrain item.addr and on the other side you assign to item.wr_adr. Are you sure you don't have multiple fields and are just constraining the wrong ones?
  11. Hi,

      I've tried compiling UVM with QuestaSim and the switch "-pedanticerrors", but it complains in the file uvm_component.svh that virtual method calls are not allowed in the constructor because they can lead to unpredictable results. I know that in C++ the behavior of this scenario is clearly defined (it calls the method of the base class while inside the base class constructor and the method of the derived class while in the derived class constructor), but is the same also clearly described in the SV LRM?

      Thanks,
      Tudor
  12. Hi Adrian,

      Since you want to have clk as a port to the interface, the only idea I have at the moment is to do something like this:

          interface Inter (inout logic clk);
            logic a;
            logic b;

            modport master (output clk, output a, input b);
            modport slave  (input clk, input a, output b);
          endinterface

          module A (Inter.master inter);
            bit clk;
            always #1 clk = ~clk;
            assign inter.clk = clk;
          endmodule

          module B (Inter.slave inter);
            always_ff @(posedge inter.clk)
              $display("foo");
          endmodule

          module top;
            Inter inter(clk);
            A a (.*);
            B b (.*);
          endmodule

      You have to instantiate the interface on the top level to hold your bundle of wires and pass that to both your modules. We have to declare clk as inout to be able to both read and write from/to it. You declare modports to restrict access to the signals: you want to only be able to drive the clock from the master, but not from the slave; the same goes for the other signals that you don't define as ports (a is a master output/slave input and b is a slave output/master input).

      You can read more about modports here: http://www.asic-world.com/systemverilog/interface3.html. You can also find the code on EDA Playground: http://www.edaplayground.com/x/rA.
  13. First of all, what I do is only cover transactions that I have monitored and not transactions that I have randomized. The reason is that you may create an item and randomize it, but just not send it as traffic.

      You could connect the monitors of both your agents to a coverage collector object and do the cross there. This way you have all the info available in one place. As Dave already mentioned on Verification Academy, what you have to be careful with is what you cross. Is there any special relationship between the traffic coming from agent1 and agent2 (say, agent1 and agent2 both send transactions that are processed at the same time)? In this case it makes sense to do a cross, because you know when to do the sampling: when both transactions come in. If you don't have any such relationship, then you don't really need a cross and it probably makes more sense to just have a separate covergroup per agent.
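      A rough sketch of such a coverage collector (the item class, field names and the pairing assumption are made up for illustration):

          `uvm_analysis_imp_decl(_agent1)
          `uvm_analysis_imp_decl(_agent2)

          class cov_collector extends uvm_component;
            `uvm_component_utils(cov_collector)

            uvm_analysis_imp_agent1 #(my_item, cov_collector) agent1_imp;
            uvm_analysis_imp_agent2 #(my_item, cov_collector) agent2_imp;

            my_item item1, item2;

            covergroup cross_cg;
              cp1 : coverpoint item1.kind;
              cp2 : coverpoint item2.kind;
              kind_cross : cross cp1, cp2;
            endgroup

            function new(string name, uvm_component parent);
              super.new(name, parent);
              agent1_imp = new("agent1_imp", this);
              agent2_imp = new("agent2_imp", this);
              cross_cg = new();
            endfunction

            function void write_agent1(my_item t);
              item1 = t;
            endfunction

            // sample only when both related transactions have been seen,
            // assuming agent2's transaction always arrives second
            function void write_agent2(my_item t);
              item2 = t;
              if (item1 != null)
                cross_cg.sample();
            endfunction
          endclass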
  14. @David: We've basically been operating on the premise that if one thing in the TB goes wrong there is no point in simulating any further, so we flagged a fatal error. Stuff with missing configuration values we just flagged as warnings, though errors are probably more appropriate, since some people might just ignore warnings.

      @ljepson: I do like the idea of filtering based on tag and (maybe if required) body, especially for info messages. This is the kind of thing I usually do in Specman, where I can very easily set different verbosity levels on different hierarchies at run time. I haven't worked that much with UVM, but it seems the plusargs are more powerful than those of OVM w.r.t. setting the verbosity (or I just didn't use the OVM ones properly), and there is also vendor support for doing Tcl-based UVM sets on the simulator command line these days (something OVM didn't really have), so it's now definitely possible to do this easily in SV as well.

      What I also like to do for DUT checks is use an immediate assert that I can then tag in my vplan. This is probably a better way of separating TB errors from DUT errors, instead of severity.

      Thanks a lot for the great input, I'll definitely consider it for my next project!
  15. I would rather have a clear separation of error messages by source. It's easier for someone new to a project to categorize a fail after doing some code changes. Seeing a UVM_FATAL means the testbench is not being used properly, while seeing a UVM_ERROR means you found a bug in the DUT. Depending on what you see you know what person to talk to (TB developer or designer). Specman/e has a clear differentiation when flagging errors: you have dut_errors for checks and asserts (not to be confused with SV asserts) for doing design by contract of your TB.
  16. There is the +UVM_MAX_QUIT_COUNT plusarg, which you can just set to 1. Leave your fatal errors for testbench-internal issues and not for DUT errors.
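      For completeness, a minimal sketch of both ways to set it (the base test class name is made up):

          // on the simulator command line:
          //   +UVM_MAX_QUIT_COUNT=1

          // or procedurally, e.g. in a base test:
          function void my_base_test::start_of_simulation_phase(uvm_phase phase);
            uvm_report_server srv = uvm_report_server::get_server();
            srv.set_max_quit_count(1);  // stop after the first UVM_ERROR
          endfunction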
  17. I had no idea that it's possible to cross points from two different covergroups. I've looked in the LRM and it says that in a cross, coverpoints or variables are allowed. The simulator seems to be treating the value of the coverpoint from an outer covergroup as a variable.

      Seeing as how you probably have two instances of your sequence item with different names, they cannot at the same time sample both cg_for_agent1 and cg_for_agent2 (due to the if statement). You will effectively cross the one coverpoint that does get sampled with nothing from the other one. You are probably analyzing per-type coverage and wondering, but these crosses happen on a per-instance basis, so don't get tricked into thinking that the cross actually happens. If you want to do such a cross, you have to do it somewhere outside the sequence item, where you can get both instances.
  18. So just to make it more clear to me, calling uvm_run_phase::get() will return the implementation node and the handle getting passed to run_phase() or the result of phase.find_by_name("run",0) is the node used for scheduling, right?
  19. Hi,

      I remember back in the day in OVM it was possible to say uvm_test_done.set_drain_time(...). With UVM, when compiling with +UVM_NO_DEPRECATED, it's not possible to do this. Phases have their own drain times now. The cool thing about the OVM approach was that I could set the drain time from the end_of_elaboration phase somewhere in my base test and be done with it.

      I dug around a bit in UVM and found that all phases have singletons. I tried to get a handle to the run phase singleton during the end_of_elaboration phase and set the drain time on that. It didn't work. All phase methods get a "phase" argument passed to them. I would have expected that this parameter already contains a handle to the phase singleton. I made a small example on EDA Playground (http://www.edaplayground.com/x/2PL) where I get the run phase singleton and print it and also print the argument. They are different objects.

      Does anyone know what the phase argument getting passed to phase methods is and why it's not the singleton?

      Thanks,
      Tudor
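      As a reference, the drain time can still be set on the objection of the phase handle passed to run_phase; a minimal sketch, assuming a UVM 1.1-style phase_done member (newer releases expose the same objection via phase.get_objection()):

          task my_base_test::run_phase(uvm_phase phase);
            // set the drain time on the objection of the node we are actually running in
            phase.phase_done.set_drain_time(this, 100ns);
          endtask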