tudor.timi

Members
  • Content Count: 289
  • Joined
  • Last visited
  • Days Won: 33

Reputation Activity

  1. Like
    tudor.timi got a reaction from qwerty in How to do multiple writes using uvm_hdl_force   
    Use $sformatf("DUt.abc.pkt.reg_%0d.w[%0d]", i, j) to return your path string.
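    For example, a loop that forces each bit of each register through uvm_hdl_force could look roughly like this (NUM_REGS and NUM_BITS are illustrative names, not from the original question):
    for (int i = 0; i < NUM_REGS; i++) begin
        for (int j = 0; j < NUM_BITS; j++) begin
            if (!uvm_hdl_force($sformatf("DUt.abc.pkt.reg_%0d.w[%0d]", i, j), 1'b1))
                `uvm_error("FORCE", "uvm_hdl_force failed")
        end
    end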
  2. Like
    tudor.timi reacted to uwes in who can interpret use of soft constraint ?   
    hi,
     
asking for help but raising the question in some other forum isn't helpful.
     
    /uwe
  3. Like
    tudor.timi got a reaction from Attaluri in print internal variables of property in sva   
    Hi Pavan,
     
    You can just add a display in the first action block:
    property p_period;
      realtime current_time;
      disable iff (!nreset)
        ('1, current_time = $time, $display("current time is ", current_time)) |=> (clk_period == ($time - current_time));
    endproperty : p_period
    You are allowed to have as many actions there as you want.
  4. Like
    tudor.timi reacted to alangloria in "Mixin" classes - parametrizing the base class   
    Hi UVM and SystemVerilog users,
     
    I've stumbled upon a particular pattern of writing a "utility" class, which I have called "mixin".
     
    class derived_class#(type BASE=base_class) extends BASE;
        ....
    endclass : derived_class
     
    This pattern was inspired by some C++ Boost code I saw, where the base class is templated.  The reasoning was that under some compilers, multiple inheritance had higher overhead than chains of inheritance (specifically, an "empty" base class might be allocated the minimum size, so multiple inheritance would increase the size of the object, but if you used a chain of inheritance and a derived class in the chain was "empty" (i.e. did not add any data members), it would not increase the object size).  So instead of inheriting from multiple C++ classes, you'd typedef a class like foo<bar<nitz> > and derive from that.
     
    Since SystemVerilog has no multiple inheritance, I thought this pattern would be appropriate for use in SV, to at least ease some of the "oh no SV has no multiple inheritance oh no" pain.
     
    I've defined a simple utility class, utility::slave_sequence_item (utility is a package) defined like so:
     
    class slave_sequence_item#(type BASE=uvm_sequence_item) extends BASE;
        local uvm_barrier wait_barrier_;
        local uvm_barrier fin_barrier_;
        `uvm_object_param_utils_begin(utility::slave_sequence_item#(BASE))
            `uvm_field_object(wait_barrier_, UVM_ALL_ON)
            `uvm_field_object(fin_barrier_, UVM_ALL_ON)
        `uvm_object_utils_end
        function new(string name="");
            super.new(name);
            wait_barrier_ = new("wait_barrier_", 2);
            fin_barrier_ = new("fin_barrier_", 2);
        endfunction
     
        // to be called by sequence
        task wait_for_transaction;
            wait_barrier_.wait_for;
        endtask
        task finish_transaction;
            fin_barrier_.wait_for;
        endtask
        // to be called by driver
        task indicate_transaction;
            wait_barrier_.wait_for;
            fin_barrier_.wait_for;
        endtask
    endclass : slave_sequence_item
     
     
    Basically, this slave sequence item class adds three new methods.  By default, you just derive from utility::slave_sequence_item.  But if you already have an existing sequence item type derived from uvm_sequence_item, you just do typedef utility::slave_sequence_item#(my_sequence_item) my_slave_sequence_item; and the added methods and variables will get "mixed into" the my_slave_sequence_item type, as in the sketch below.
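    A minimal usage sketch, assuming a hypothetical existing item type my_sequence_item (not part of the original post):
    class my_sequence_item extends uvm_sequence_item;
        `uvm_object_utils(my_sequence_item)

        function new(string name = "");
            super.new(name);
        endfunction
    endclass

    // mix the slave behavior into the existing item without touching its source:
    typedef utility::slave_sequence_item #(my_sequence_item) my_slave_sequence_item;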
     
    What do you think?
     
    I've tested it on Cadence IUS10.20-s103, and it seems to work properly.  From my understanding of the IEEE standard, the above is not specifically disallowed (but then it might not be well supported on actual simulators).
     
  5. Like
    tudor.timi reacted to pratta in How to uvm_config_db to set parameter based interface down to my env?   
    I wrote a series of articles about the difficulties that SV parameters introduce when developing reusable verification components.  See the following:
     
    http://www.vip-central.org/2012/09/parameterized-interfaces-and-reusable-vip-part-1/
    http://www.vip-central.org/2012/09/parameterized-interfaces-and-reusable-vip-part-2/
    http://www.vip-central.org/2012/09/parameterized-interfaces-and-reusable-vip-part-3/
  6. Like
    tudor.timi got a reaction from David Black in override UVM phase?   
    I think you are a bit confused about overriding. If you want to override the default behavior of build_phase(...) which is a function, then you just declare your own implementation:
    class my_driver extends uvm_driver #(my_item);
        // factory registration stuff

        function void build_phase(uvm_phase phase);
            // super.build_phase means calling the build_phase function
            // as it was defined in the base class (in our case uvm_driver)
            super.build_phase(phase);
            // up to now our build_phase does the exact same thing that
            // it did in the base class

            // add your extra code here
            // - code that does more stuff that you need
        endfunction
    endclass
    Whenever you extend a class (create a subclass) and redefine a method, you are overriding that method (a method being a function or a task). Method overrides control what objects actually do. When you instantiate an object of type my_driver and call build_phase, the implementation we have defined above will get executed.
     
    Type and instance overrides are a different thing that applies to the factory. They control what types of objects actually get instantiated.
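    As a contrast, here is a minimal sketch of a factory type override (my_better_driver is an illustrative name, not from the original question):
    class my_better_driver extends my_driver;
        `uvm_component_utils(my_better_driver)

        function new(string name, uvm_component parent);
            super.new(name, parent);
        endfunction

        // redefine build_phase(...), run_phase(...), etc. here as needed
    endclass

    // typically done in the test's build_phase, before the env creates its drivers:
    my_driver::type_id::set_type_override(my_better_driver::get_type());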
  7. Like
    tudor.timi reacted to chiggs in Code under src/dpi should be improved   
    Hi Uwe,
     
    Thanks for the response.
     
    I appreciate that it's not an ideal world, however you are effectively saying that it's too difficult to achieve a reference implementation using the existing standards and I strongly disagree with this.
     
    It might not be possible to provide an implementation that is optimal for all vendors, however that is not the purpose of the reference implementation. Many vendors bundle a UVM implementation with their simulators so if they want to optimise to differentiate, that is the appropriate place to do so.  If they have failed to implement the required standards then their workarounds should go there.  By all means allow vendors to optimise, but that doesn't belong in the reference implementation.
    The reference implementation shouldn't rely on any non-standard, vendor-specific implementation details. This should definitely be feasible! As you note, it's possible to do a part select with VPI using bit-by-bit access (vpi_handle_by_index or vpi_handle_by_multi_index). It's also possible with DPI.
     
    I think the problem is that the motivation isn't present right now - it is possible to write cross-simulator compatible VPI code, it just takes a little more effort.  By allowing the vendors to duplicate implementations you have removed all motivation to even attempt to stay standards compliant.
     
    My biggest issue is really the message this sends - if the vendors themselves are unable to write a relatively simple bit of functionality conforming to the standards without resorting to proprietary hacks, what hope do we users have?
     
    You also have to appreciate the irony of the situation - a methodology promoting use of standards and re-use is unable to be implemented using the existing standards and contains copious amounts of copy'n'pasted code!
     
    If the reference implementation is to be put forward as part of the IEEE standard, then this needs to be addressed.  I believe that with the right incentives and processes in place it will be possible to create a generic reference implementation.  I also think that the vendors should work together to create such a generic implementation as a display of confidence in the existing IEEE standards and the quality of their own products.  If they are unwilling, my offer to assist in this effort with patches still stands.
     
    To answer your specific points:
     

     
     
    I think you've been successful with SV because that is the main strength of your contributors.  With a proper development and review process this wouldn't be so hard to maintain, and it should be possible to enforce a consistent standard of code.
     
     
     
    I disagree - the feature set of UVM should be consistent and only use standards.  The only conditional compilation in the C codebase would be for selecting which standards are available.  To clarify, the granularity is whether a standard is available (i.e. DPI, VHPI), not particular vendor tool/version capabilities.
     
    Thanks,
     
    Chris
  8. Like
    tudor.timi reacted to chiggs in Code under src/dpi should be improved   
    The C code under src/dpi has various issues which should be improved if the reference implementation forms part of the standard.
     
     
    Duplicate code
     
    There is significant duplication of code in the uvm_hdl_*.c files (almost 50% duplicate code).  This is clearly below the quality one would expect from an industry standard.  Since the interfaces to the simulators use IEEE standards (VPI, DPI and VHPI), it should be possible to have a single implementation that works on any standards-compliant simulator.
     
    If simulator specific workarounds are required they should be minimised.
     
     
    Inconsistent Features
     
    The features provided by the different implementations are not consistent.  For example, VCS doesn't provide a part-select, and Questa doesn't support VHPI. The part-select capability is a useful feature and possible using standards compliant calls.
     
    Overall, rather than splitting out (and duplicating) this code into separate files for different simulators, there should be a single reference implementation. Features that depend on standards should be conditionally compiled based on the interfaces available, not the simulator itself (for example #ifdef VHPI_AVAILABLE rather than #ifdef VCSMX).  Any simulator specific workarounds should be avoided if possible, with the aim of eliminating them completely.
     
     
    There are also various style/best-practice issues with the C code - for example, including a .cc file inside an extern "C" block (as mentioned previously).
     
     
    I'm happy to contribute patches to assist with any improvements.
     
    Thanks,
     
    Chris
  9. Like
    tudor.timi got a reaction from Attaluri in UVM_REG : override some addresses of old register model with another register model.   
    In the top address map, couldn't you just not declare the areas that are mapped to other internal modules? You could then add the other module level map as a submap to the top level one: http://www.vmmcentral.org/uvm_vmm_ik/files3/reg/uvm_reg_map-svh.html#uvm_reg_map.add_submap
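    A minimal sketch of the add_submap(...) idea (the names and the 'h1000 offset are just illustrative):
    // in the top-level register block's build():
    default_map = create_map("top_map", 'h0, 4, UVM_LITTLE_ENDIAN);
    // the internal module's registers show up in the top map at offset 'h1000:
    default_map.add_submap(sub_block.default_map, 'h1000);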
  10. Like
    tudor.timi reacted to Florin Oancea in Migrating to UVM 1.2   
    Introduction

    A couple of days ago I watched a presentation of UVM 1.2 by Tom Fitzpatrick, UVM evangelist at Mentor, and I thought I'd give it a try and port one of our environments based on UVM 1.1d. Below is the story I went through.

    1. Download UVM 1.2

    I started by cloning UVM1.2 on my local machine:

    git clone http://git.code.sf.net/p/uvm/code uvm-code

    This will make a clone of the UVM code on your computer from sourceforge.net. The clone is on branch master, so you have to switch to branch UVM_1_2.
     
    cd uvm-code
    git checkout UVM_1_2

    Or the short way:
    git clone http://git.code.sf.net/p/uvm/code uvm-code -b UVM_1_2

    2. VE

    I experimented with a VE that was complex enough to make the porting interesting; it features:
    • 3 different agent instances (passive and active) for 3 different protocols
    • configuration registers, including RC registers
    • virtual sequences
    • a regression with a fixed number of tests that I can run (or rerun with the same seeds) and that gets 100% coverage
    I compiled and ran individual tests using NCSIM, QUESTA and VCS, and I ran regressions using NCSIM and QUESTA.

    3. Using NCSIM
       
    The compilation stage was done by setting two flags in irun:
     
    -uvmhome=/path/to/new/uvm
    -uvmnoautocompile

    Individual test runs indicate a negligible difference:
    • the log has an identical size
    • the test duration is about the same (+/- 5 sec for a 2..7 min test)
    Regression results:
    • regression duration was about the same for a one hour regression (+/- 3 min); for both UVM versions I used the same regression. I also ran different regressions for each UVM version, but with the same number of tests (50, 100, 200, 300).
    • the collected coverage was about the same, but it is still worth mentioning that for UVM 1.2 it was slightly lower, by 1-2%. Negligible but recurrent.
    • the number of runs to reach 100% functional coverage is a bit larger in the case of UVM 1.2

    4. Using VCS

    The compilation stage was done by changing the UVM path:
     
    export VCS_UVM_HOME=/path/to/new/uvm

    Individual test runs indicate a negligible difference:
    • the test duration is about the same (+/- 5 sec for a 2..7 min test)
    • the log has the same size for both versions (I was getting a number of warnings in both UVM 1.1d and UVM 1.2 that I didn't bother to fix)

    5. Using QUESTA

    The compilation stage requires a little bit more effort, but it's not a chore. Basically, I had to precompile the package and build a shared object, "uvm_dpi64.so":

    export UVM_HOME=/uvm-code/distrib
    vlib work
    vlog +incdir+$UVM_HOME/src $UVM_HOME/src/uvm_pkg.sv
    cd $UVM_HOME/examples
    export MTI_HOME=/mentor/10.2a-64bit/questa_sim
    make LIBNAME=uvm_dpi64 BITS=64 -f Makefile.questa dpi_lib
    cd -

    To run a test in "run.sh", I used the shared object:
     
    vsim -sv_lib $UVM_HOME/lib/uvm_dpi64

    Individual test runs indicate a negligible difference for log size and test duration.

    6. Conclusions
    • Migrating to UVM 1.2 is easy and there are no compilation issues.
    • Results indicate pretty similar behavior for all 3 simulators when it comes to performance, log size or random stability. Results collected in the regressions were aligned with what I've seen in the individual runs.
    • Although there are new features added to UVM 1.2, they did not impact the VE in a visible way.
    • I think the new `uvm_info_begin/`uvm_info_end macro pair is a nice and quite useful addition: implementing messages is easier and they are more readable than before.
      task task_name();
          ...
          `uvm_info_begin("ID", "MSG", verbosity)
          ...
          `uvm_message_add_object(my_object)
          ...
          `uvm_message_add_int(my_int, UVM_DEC)
          ...
          `uvm_info_end
          ...
      endtask
    • UVM 1.2 signaled an error which UVM 1.1d didn't:
      set_report_severity_id_action_hier(UVM_INFO, "RTR_SCOREBOARD", UVM_NO_ACTION)
      UVM 1.1d did not signal that I had used UVM_HIGH instead of UVM_INFO, probably because in UVM 1.2 these are real enums and the type mismatch can be detected.
    • The VE does not use the phasing mechanism, thus some of the features of UVM 1.2 were not exercised.
    I also uploaded this guide on AMIQ's Blog page so you can check there for more updates as well.
  11. Like
    tudor.timi reacted to David Black in How to delay a sequence from a test   
    I disagree with the approach of putting code into the config object, on the basis that the config object is intended conceptually for configuration and not operation. Don't mix abstractions.
     
    In order to synchronize to a hardware signal event, I think it would be architecturally cleaner to add fields to the transaction type (or a derived type) and let the driver do the delay. Consider a transaction class:
    typedef enum { read_e, write_e } oper_t;
    class bus_trans;
        rand oper_t m_oper; // read_e, write_e
        rand addr_t m_addr;
        rand data_t m_data;
    endclass
    If you add an attribute m_delay, you can have the driver use this to create an arbitrary delay.
    class delayable_bus_trans_t extends bus_trans;
        int unsigned m_delay = 0; //< default: no delay
    endclass
    The driver could do something like:
    seq_item_port.get_next_item(req);
    if (req.m_delay == 0) begin
        // Normal transaction
    end
    else begin
        repeat (req.m_delay) @(vif.cb);
    end
    Notice that the driver is not obligated to use every attribute (field) of the transaction. In my example, the driver only does the delay. Of course this depends on the requirements of the interface.
     
    You could also have the driver fork off a process to wait on the delay and send a response when the delay has completed. This would allow other transactions from other sequences to be intermingled while waiting. The sequence would then be waiting on a response matching the delay request. This depends on whether the delay should be blocking or not. We describe this in the advanced sequences portion of our UVM adopters class.
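    A minimal sketch of that forked variant, assuming the delayable transaction above, a virtual interface vif with a clocking block cb, and a sequence that calls get_response() after finish_item() (all of this is illustrative, not from the original post):
    seq_item_port.get_next_item(req);
    fork
        begin
            automatic delayable_bus_trans_t tr = req;
            repeat (tr.m_delay) @(vif.cb);   // wait out the requested delay
            seq_item_port.put_response(tr);  // unblock the sequence's get_response() call
        end
    join_none
    seq_item_port.item_done();               // let the sequencer serve other sequences meanwhile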
     
    If you're unable to modify the driver, then I would suggest creating a "side" agent connected to the same interface with the sole purpose of monitoring clocks and other synchronization events. You can then have your sequence call a sub-sequence on the side agent's sequencer with a transaction to request the delay information. This may seem a bit contrary to the usual notion of a driver, but it keeps pin-level interface activity relegated to the driver rather than polluting configuration objects.
     
    A monitor could also provide the information, but then you would need to connect an analysis port or add a callback to the sequencer. Remember that the sequence wants to be reusable. Tying it too closely to the hardware violates the separation of concerns and makes it less reusable.
     
    This is just one of several approaches. You could also use uvm_event_pool to create a driver clock event, but I think it is less clean. Signal level tasks should be relegated to the driver (BFM) or monitor. 
  12. Like
    tudor.timi got a reaction from silverstream in Accessing Memory model from various sequences   
    I think you're confused about what passing handles means. Passing a handle to a memory object does not mean that a new object is created. It still references the old object.
    class tb_env extends uvm_env;
        some_memory_class memory;

        function void build_phase(...);
            // create your agents
            memory = some_memory_class::type_id::create("memory");
            // pass down a handle to the memory to the sequencers
            uvm_config_db #(some_memory_class)::set(this, "agent1.sequencer", "memory", memory);
            uvm_config_db #(some_memory_class)::set(this, "agent2.sequencer", "memory", memory);
        endfunction
    endclass

    class some_sequencer extends uvm_sequencer;
        some_memory_class memory;

        function void build_phase(...);
            if (!uvm_config_db #(some_memory_class)::get(this, "", "memory", memory))
                `uvm_fatal("NOMEM", "Could not get handle to memory")
        endfunction
    endclass

    // same for some_other_sequencer
    What I did in the code snippet (off the top of my head, not compilable) is create a memory in a central location inside the testbench env. I then pass it via the config_db to the sequencers. This way both of them see the same object. Previously, the old OVM set_config_object(...) functions had an extra parameter that could be used to clone an object, which is why you might be confused.
  13. Like
    tudor.timi reacted to dave_59 in assert(std::randomize(variable)) when assertions are turned off   
    Regardless of the direct answer to your question, I suggest that you not use an immediate assertion to check the result of randomize() and instead use a simple if/else statement. This is because assertions are included in the coverage statistics for the design, and this check does not belong with the design; it is part of the testbench. 
     
    If you do plan to turn off assertions, I suggest that you apply it to a specific DUT scope instead of globally to the entire simulation. You can also use the new $assertcontrol system task to only target concurrent assertions. Using both these suggestions will ensure that you do not lose the randomize functionality no matter what your tool decides to do.
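    A minimal sketch of the if/else style check suggested above (the variable name and the constraint are just illustrative):
    if (!std::randomize(my_var) with { my_var inside {[0:15]}; })
        `uvm_error("RNDFAIL", "std::randomize() failed")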
  14. Like
    tudor.timi reacted to bhunter1972 in uvm_analysis_imp#()::get would be nice   
    This isn't a solution to a glaring problem:  it's standard operating procedure. If you want your sequences to receive events or access other information from the component hierarchy, this is what you do.
     
    However, the way you wrote it isn't entirely clear. I tend to plop a uvm_tlm_analysis_fifo in the sequencer and have the sequence pull from that. Or, I'll implement the write in the sequencer and put the data item in something like a mailbox that the sequence can access.
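    A minimal sketch of the analysis-FIFO-in-the-sequencer approach (my_item, my_sequencer and my_sequence are illustrative names, not from the original post):
    class my_sequencer extends uvm_sequencer #(my_item);
        `uvm_component_utils(my_sequencer)

        // the agent's monitor analysis port connects to this FIFO's analysis_export
        uvm_tlm_analysis_fifo #(my_item) mon_fifo;

        function new(string name, uvm_component parent);
            super.new(name, parent);
            mon_fifo = new("mon_fifo", this);
        endfunction
    endclass

    class my_sequence extends uvm_sequence #(my_item);
        `uvm_object_utils(my_sequence)
        `uvm_declare_p_sequencer(my_sequencer)

        function new(string name = "my_sequence");
            super.new(name);
        endfunction

        task body();
            my_item observed;
            p_sequencer.mon_fifo.get(observed); // block until the monitor publishes an item
            // react to 'observed' here
        endtask
    endclass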
     
    The downside to this practice is that these sequences cannot just run on any ol' sequencer--once you declare the p_sequencer, that's the only one they can run on. It so happens that that is what I want most of the time, but some UVM purists may wince at the notion.
     
    Sequences can't have TLM imps or exports because they are not components. They can be created and destroyed throughout the life of the sim and so cannot have these quasi-static elements inside them. The declaration of the p_sequencer roots them into the component hierarchy.
  15. Like
    tudor.timi reacted to dave_59 in Regarding the UVM field automation macros   
    There have been some improvements to the performance of the field automation macros, but I still do not believe their benefit is worth the cost. You should be able to prove it to yourself by creating a simple testbench with and without the macros.
  16. Like
    tudor.timi got a reaction from shane in verifying protocol without reference model   
    The problem with putting your checking code in your sequence is that if you want to create a new version of that sequence you might end up doubling your checking code as well. You could do it, but it might require tricky partitioning of code, which is extra effort. Another problem is that if you want to do vertical reuse of a block verification environment, you won't be starting those sequences anymore because traffic will be provided by another RTL block. If you had a reference model that just monitors the DUT inputs, then it would work regardless of the source of those inputs (UVM TB or other RTL block). I've seen colleagues implement some checks in their sequences, but they were doing full chip verification, so there was no need to consider vertical reuse.
     
    The sequence of commands you expect could be very easily implemented in a reference model. You should anyway have a monitor that can recognize when an OPEN AS CLIENT command is sent and inform the RM. After that it's just a matter of implementing the same sequence of operations you just described (wait for the response, check that it's a SYN, etc.).
  17. Like
    tudor.timi reacted to David Black in Why you use systemC / should i learn systemC?   
    [WARNING: The following is philosophical]
     
    Assembly language programmers felt the same way about high-level languages like FORTRAN.
    Procedural programmers (e.g. in C) felt the same way about object-oriented languages.
    Schematics were the way of things for many engineers until RTL showed up.
    Verifying a design by ocular inspection of waveforms was replaced by self-checking testbenches, with some resistance.
    Verification using directed tests was mainstream, and constrained random with functional coverage was resisted for many years. The advantages of reuse and scalability of the new techniques have slowly changed many minds.
     
    Engineers and programmers facilitate change, but are themselves some of the most resistant to change when it affects their own world.
     
    I think it's inevitable. As complexity increases, we have to find new ways to deal with it, and abstracting upwards is the way of things.
     
    One way to deal with change is to decide not to let the new ways master you. Instead read books, take classes and become a master of the new technology.
     
    Sadly, many universities have not caught up with modern practices at the undergraduate level.
  18. Like
    tudor.timi reacted to dave_59 in Rnadomize a variable inside a function   
    The reason for the failure is the bad constraint. Since the call to randomize is outside of a class declaration, std::randomize() gets called implicitly.
  19. Like
    tudor.timi reacted to David Black in overriding a registered local variable in a sequence from test   
    Sequences are uvm_objects, whereas the driver is a uvm_component. The field automation macros' automatic configuration fetching works due to super.build_phase automation. You will need to do two things:
     
    1. Add the following in your sequence: `uvm_declare_p_sequencer(SEQR_TYPE)
    2. In the pre_start task add: uvm_config_db#(string)::get(p_sequencer, "", "file_name", file_name);
    Notice that the information is tied to the sequencer that the sequence is running on (see the sketch below).
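    A minimal sketch putting those two steps together (my_sequencer, my_item and file_name are illustrative names):
    class my_sequence extends uvm_sequence #(my_item);
        `uvm_object_utils(my_sequence)
        `uvm_declare_p_sequencer(my_sequencer)

        string file_name;

        function new(string name = "my_sequence");
            super.new(name);
        endfunction

        task pre_start();
            super.pre_start();
            if (!uvm_config_db#(string)::get(p_sequencer, "", "file_name", file_name))
                `uvm_warning("NOCFG", "file_name was not set for this sequencer")
        endtask
    endclass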
  20. Like
    tudor.timi reacted to David Black in fixed-size arrays : Do they not 'support' size()?   
    First, before I discuss the problems with SystemVerilog, I would like to point out that you are really missing a much simpler solution to your problem:
    module top;
        int farray[10]; // fixed array
        initial begin
            foreach (farray[jjj]) begin
                farray[jjj] = $urandom_range(121, 0);
            end
            $display("******************************");
            foreach (farray[jjj]) begin
                $display("%0d: %0d", jjj, farray[jjj]);
            end
        end
    endmodule : top
    With respect to "how many elements does my container have?", try the following. Note that you may need to comment out a few lines, since the EDA vendor simulators don't all agree on whether some of these should work, and, to be fair, the standard is not entirely clear...
    module top;
        byte   Fixed_array[3];
        byte   Dynamic_array[] = {10, 20, 30};
        string String = "abc";
        byte   Queue[$] = {40, 50, 60};
        byte   Assoc_array[string] = '{"a":70, "b":80, "c":90};

        initial $display("Fixed_array size     is %0d", $size(Fixed_array)   );
        initial $display("Dynamic_array size   is %0d", Dynamic_array.size() );
        initial $display("String size          is %0d", String.len()         );
        initial $display("Queue size           is %0d", Queue.size()         );
        initial $display("Assoc_array size     is %0d", Assoc_array.num()    );
        // Alternate approach
        initial $display("$size(Fixed_array  ) is %0d", $size(Fixed_array)   );
        initial $display("$size(Dynamic_array) is %0d", $size(Dynamic_array) );
        initial $display("$size(String size  ) is %0d", $size(String)        ); // May not be legal
        initial $display("$size(Queue size   ) is %0d", $size(Queue)         );
        initial $display("$size(Assoc_array  ) is %0d", $size(Assoc_array)   );
        // Yet another approach
        initial $display("$bits(Fixed_array  ) is %0d", $bits(Fixed_array)   );
        initial $display("$bits(Dynamic_array) is %0d", $bits(Dynamic_array) );
        initial $display("$bits(String size  ) is %0d", $bits(String)        );
        initial $display("$bits(Queue size   ) is %0d", $bits(Queue)         );
        initial $display("$bits(Assoc_array  ) is %0d", $bits(Assoc_array)   ); // Strange result
    endmodule
    The standard attempts to rationalize away this inconsistency at the bottom of page 45 (85 in the PDF) of the IEEE 1800-2012 standard.
    I for one don't entirely agree with this rationalization. The concepts are all closely related and should have been unified to make the coders job easier.
     
    Actually, $size() appears to work with most of them.
  21. Like
    tudor.timi got a reaction from karandeep963 in Cross coverage of two covergroups   
    First of all, what I do is only cover transactions that I have monitored, not transactions that I have randomized. The reason is that you may create an item and randomize it, but just not send it as traffic. You could connect the monitors of both your agents to a coverage collector object and do the cross there (see the sketch at the end of this post). This way you have all the info available in one place.
     
    As Dave already mentioned on Verification Academy what you have to be careful with is what you cross. Is there any special relationship between the traffic coming from agent1 and agent2 (say, agent1 and agent2 both send transactions that are processed at the same time)? In this case it makes sense to do a cross because you know when to do the sampling, when both transactions come in. If you don't have any such relationship, then you don't really need a cross and it probably makes more sense to just have a separate covergroup per agent.
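    A minimal sketch of such a shared coverage collector (item_t, the kind field and the component names are illustrative, not from the original posts):
    `uvm_analysis_imp_decl(_agt1)
    `uvm_analysis_imp_decl(_agt2)

    class cov_collector extends uvm_component;
        `uvm_component_utils(cov_collector)

        uvm_analysis_imp_agt1 #(item_t, cov_collector) agt1_export; // agent1's monitor connects here
        uvm_analysis_imp_agt2 #(item_t, cov_collector) agt2_export; // agent2's monitor connects here

        item_t last1, last2;

        covergroup cross_cg;
            cp1 : coverpoint last1.kind;
            cp2 : coverpoint last2.kind;
            x12 : cross cp1, cp2;
        endgroup

        function new(string name, uvm_component parent);
            super.new(name, parent);
            cross_cg = new();
            agt1_export = new("agt1_export", this);
            agt2_export = new("agt2_export", this);
        endfunction

        // sample the cross only once items from both agents have been seen
        function void write_agt1(item_t t);
            last1 = t;
            if (last2 != null) cross_cg.sample();
        endfunction

        function void write_agt2(item_t t);
            last2 = t;
            if (last1 != null) cross_cg.sample();
        endfunction
    endclass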
  22. Like
    tudor.timi got a reaction from karandeep963 in Cross coverage of two covergroups   
    I had no idea that it's possible to cross points from two different covergroups. I've looked in the LRM and it says that in a cross, coverpoints or variables are allowed. The simulator seems to be treating the value of the coverpoint from the other covergroup as a variable. Seeing as how you probably have 2 instances of your sequence item with different names, they cannot sample both cg_for_agent1 and cg_for_agent2 at the same time (due to the if statement). You will effectively cross the one coverpoint that does get sampled with nothing from the other one. You are probably analyzing per-type coverage and wondering, but these crosses happen on a per-instance basis, so don't get tricked into thinking that the cross actually happens.
     
    If you want to do such a cross you have to do it somewhere outside the sequence item, where you can get both instances.
  23. Like
    tudor.timi got a reaction from getvictor in UVM phase singletons   
    Hi,
     
    I remember back in the day in OVM it was possible to say uvm_test_done.set_drain_time(...). With UVM, when compiling with +UVM_NO_DEPRECATED, it's not possible to do this. Phases have their own drain times now. The cool thing about the OVM approach was that I could set the drain time from the end_of_elaboration phase somewhere in my base test and be done with it. I dug around a bit in UVM and found that all phases have singletons. I tried to get a handle to the run phase singleton during the end_of_elaboration phase and set the drain time on that. It didn't work.
     
    All phase methods get a "phase" argument passed to them. I would have expected that this parameter already contains a handle to the phase singleton. I made a small example on EDA Playground (http://www.edaplayground.com/x/2PL) where I get the run phase singleton and print it, and also print the argument. They are different objects. Does anyone know what the phase argument passed to the phase methods is and why it's not the singleton?
     
    Thanks,
    Tudor